[Unrecoverable binary content removed: a tar archive of the Zuul CI output directory var/home/core/zuul-output/ containing logs/kubelet.log.gz, a gzip-compressed kubelet log. Only the tar headers (directory and file names) are readable; the compressed log payload cannot be rendered as text.]
n,v21cp|d:͍"48aEÑRP*qaSHxZ%2E*rDWJ&qjJ¤4 ͉N:FtkXa;+&΁vVo_-v(j>B;!&c<4HtN&9~f"8I((s)""Ud 4ǯE@}Nʒ#͔J& D07K,@"jM":z,uζT~;OjIK NSIBJٻ6U5sz0@`q,n[悀}Q$d;|clamqȌ.UH֏,FKXcF+7 $-%!E_W(D'H^A& hq =PO&䒓Fay"fe1֝Gi䦆BOE:c XYqd$%aAK2\G1, !3Rţ;X hw!ױt֍5ps[nҴM^@Z Q }l)`k yG>($,ژxTAr#PdI,JQ<{H !)R@CTrY&Y $0\׊9U)IQu,{`;@(v*؈Сv $EI9{/%tcC9]?{0N팜85/7,d4e~`˃,FPh0Tِ1EP9hqZ:h[;mxecY*%ǫ:x&4'6٦XP"A6]WYG&9$IW.)IEڱD(HDAW3Fr%- C+]7lW8?h*tB]Y^p7,8dt m=)ʻBĒXtXT[ZhGߍIƴ R9@Hե=٤]LHo"b~Q W٧$ziG~!k}Vmqgr*pJEYΣӳz{/|U0.n9C6{ &8J{y+NڃG} ^n9Rz*ZF!ZcE1ƤH,ɥFG(\q‰0Q(D},*Nu kCJtO̴qH[>Ou/!IҘlA7ܾ 癠$fbg\.()54le!?쌾G=\=ջsu&w-E/j>^/$<@/ FoLVsL0 0Qq;T43δ|~H?j66,Zt<(-F[+)/4 yL*5P *`El0E`B Ǣmle#F+7/xd j![XXՁ <Ύ&8,ԁ]X*`Zzυ2ܵN-lQz QλY"BFճW~̋Jּ=$E4NY]+,Ye) b5M,nE[A-p2fdɰ>Q ୱm[ߋtӧEY96qOb htܺU"2Df*Ezn̾] 75V֏.O/,[w65%VYyjn 9U fm?ږ?:)wRa~O촣ٚ2;yi Zz!+6uc vQ~b#YRGlC,k1?g6'yp\q| gs_fML)AKò]Y.MǠ?v߳"w btƵ3We̳@T o*[;wG_ΎV?jXߣTmܫ}퍿d5?fY{0m37i=?oedbWՒ>()E]]Qދ~XfS,P|ERo}9۸gkl 7VmnyyѴZeXr]|揇w>kǸ_[f1_m:m=v<ŰY0H_W6yi 5At=^k$lPoEΜ}#pnl ;?ײlֵj6/B'먲eq))U.hn}ttSP+x2gI$& _s8aBӓh;SR^A:dJyVuΣ$(9CJ:#ΧߜfS:.Sjvn)%8e\%,Y]$TZ`dB6)BҞŁWfqp?&rKu?HnW^FůA>"", }`tJ#8]mcT||K+Ѱ R*M-+H1cBҝ#ne]|z@$WzX$.ZH{/eGYHE@o1 &+|)B,&4A:J|z1veIaju|ȃ +)`m4w\7-lm1<m@4| n>Ȧ}B^F߇IEAm_+`mn;W6aml{LH !=j^ :Ok)*^iiJ#Ik6 8/kH 8=,z2*[s!}*$((3dI,XH&A [ݺ(y}|4YdEM" L QxK@9Y]:vuFn\/{&]ļ2k>ȕb8~B=|y6_|;'t^i%m@f^WALW z! (_ T:R*WjQzEJ+ʿŠʣsŐe|br2 @TlP[ mhcG[ppV= U:M* KA  !Z '*jmPDg 2@V˙r(/|sN)R(|UDJ%I*P`kiЎ3kɲon@m0iBm̳3xX{]G-}^!i*;1>>fwt=߅z|v ?\TٗȭM;ih2g?}dXI9nu;ں=Oheݕ0f!le367zj󹱲G+-dGNmvm["龑ݷwm6OSӒFO "l)εC%Zn+y\Z6o-pmQn{rŀ24^X@&R(ǿe4gO{:.Y"I7X\ d 4J, Ȯ^6{DxdW?|4!ut;ۯ6aj੒iص[mo<}#%`'G%⪰ȵu\3XqD5vݧ@кplgTAh+DgV!yψmAYl4>WP HBmWEhBW=#(Fgr҇PE*lL^g铰Qxblr(6l~]8w圈np1ELmYt/?9JjX,κXz"xҜl2e3FS (M >ڌRZD5h"y GK&R&ZM,T% xW[q˩ǓeS&+_ضmQf,ɲR+|AA`/&!_J~VC¯T* |tg Fr Ur_ZRE^2잀¾ue< ;tuTJ-{u ՕHU%QW'@ޑzyR1ՕB ?$G _ fJm$o+~h2jh^9(709kbLJWla[SI<;(񷥋5kBQz˫<>8 ݯ(ߠ0&+a\9gŢ80+u6>)+Rde0j6ћg'xRzp>/|rgג0|7pUWv$ 6% Q-5PѮ$}+ 2$|M+;c'9>(J()$QT"0"#aQȂ5>f+r4>(1M"*Y/y!Ӱ=p}xvh=om8w!?<]fBSV9yFAE avB&Dae!\9z({+gP Ҙ\s% Asd5ّA9R9?dQX94NߍB ?@GN^NǓ=䌏kwn;fj FU<;@}(IZUcQh~ Ql"(D4Dߎxr~zQG%7Y$& \2i묑=fƻ~ܒݶ,mXa5.* wkwkJ6oz['5Ƚ x-\lˋc? 
,Gw.7x0Jɻ8ǏuZ*i8N6G]*J<M8C)~//sQsKD7ĝ: a{__uXtp6=ʥd)ՠ,)u s ++uуVr8m*- T-A80×( ͪCQ&vGޓJWgM/^o0fBRezi;F0;CnCI"UHYs"}7:5^#-gxןw͠f@E]suǔͦ"yzqmotXx%.#ôٱ1S}k+4XsG^,j>2HME{ ڨaᘰ63fy;ɤ邒͡ڰgbN?3ugK3w< ;)2षhFQYI&H6F Q72 5`O`-ы[bga4t} <_m0\OmH-D;ady4u5{_2t)J Y*źS3y"-ӇjOPm_h#|+%Q3JPl "6Y+80$Y1Roʳ>(+P)9:O>iPp*ͩ?ὁseYlx BKπ|gf݊[3[SU*R<w4Ώzᓬ'ʇrM|fAZstij1)G.34aIu2$;<)%cy:!m gY,׌3"!GdzB``LW޳$ &QEN,7p6䳊‡{'MX gƀg x@ZFT]EиB/dܥs,' \gTb2jRsp!A% d4*B"&z,sTT-?줃ds)R2\q$Ž N %Sjz(>Ap"8j7j~]_(H;*4%ғ^Xiu$(@"R -(ܶGaObcI}aHd[S=3"vGF`J#ԙzUKijQ ,갖 4 U=XǗWQȶܡ{i>X7!7'!e5d 4ץ("N `g 6enGդUU0xZoֺ]]!'^׿gЀ5'^DgXR/n׭8{ZxLxyO8"ĥ JA,/M$@"DNy-j?{(8tԒE7LGf) A'4S8CJLKHɹOBߓ J<\3@g&UٺhSB}«Wfr*=N'܊\j7%yt#_S'q7p~ovys w9Z]3 2E8BVuѣ:VYꃧW'FFA>Pxth+ơU{l5Vor[ZSgKm@_ɆF]hjW>"̊zE5l0Z`zߙ'i-|O/ˆ&#+paY@j,=I(;^R[+RVQ,*b -ZaL% Ju* V)j)`=NpTE'NgNiCa)Ň켘*԰IZKbda7q,${OcIԶP,Fnun'V}:3[yvnhU|j}JOpWtRm]nz.O1/Ml0HeS; nqkq1#Mh~(ƣyel'leN;HM.i] }]8Y0b } NøX2⣲iF:!ʳpoo, Եg !zp(V |UL/UY4|xp>S7O̕goN_(;W&Y)8 %h)Jo 2V\> q"iN sW RdgB0K:`n!vX) #--#@dgNGc(%t^rRT|_{J3ԁR0H%tk@{PqH GuSԶq.4?I'/"Q#@Qk+ 7RF&bb|AI'%ACV ZK:ߧ8vbώ;в24 RNIfE҆2P\B;T4*1(Xi(u=WmkQ8m^ܸȊd@HFtb!*.3hX'OCij8I'άCc-|)FdbB4s=5kb߿tF:EǃR&Hr"*i0`,ZD+47Q Fꗑ)WJ1)7o!(3ǨfH*aj_8t('Q)Q_VnobV}"GZz{rRxď3h>ag|]櫣d2:`uh$9Gt"+N^ݕO7E\5uuPQ%[-k_;{ۼ;jq27ϫW4Q]0MAc$?'M%r qGbI_5q>Y;{Y8 O#; T0,Qu_; R ~n$oC _l8Q~ cP4e&"~K66xQ(T:W+o&$i <(~oI/)+f_Uև?X|X.0Hobz>Qo@ZIES+oVتZs]B>v._imq*EGw[*+TP2-7UXj§ .n^ǽ" o+P Y넉Rzf\qJ4LUw5( HhPo|)]"jI Iq0;3w-wi&C@I[B,ՔqEU7VC҉)Cs)vd\Uh3ieQzO+*!hy (:_8Wy\\{7| j'%GUk޿Ͳ)V"NU ~ l4îSA.':$xǗ 0a FI8}OY;r-_$.ݮ!q}=Aao5Q_"pתwWJ=Ȭ6f;PkVp,7I8Tإ"{r &$jlP)YˍLE-; &K PE[%Y&pT€FFFZ(~J(F"D2XpAX!CYl;KFbP<|o8[5ryjmory0ck=ktw+BN8 ݅B9\Oޟ-C]U`j嵭gW%*zW͇h⃦^ 5TD B%.75Tj;5U푚BM ,8-H:?{WGi``gξ YcJ]%̪TR|-gR$< )wZRH /ѥP.RDIДڜCr‘>D 5fͪ8 $ G2iLS6U}lj3~ӎNGO!xms"V[3t tVRWݸ20(}˾Nj+?F5mZ~Xw&Ok~tӷlr{tǥYM7TzUf)}\ݼn_{›+٢%x;oǣ~kٴٯn8|> P>[5(Yrȗ[.A4A:P)6v,5WϓP8AgIuH)埲4DOٸϕ;u)<[R 5FLR{WЀ U+fQ#/DNTIrIٜ)cd|XxMeXl[4ׅϟMWPۋA %I=MLFyb%%[#nd[dFs6q9{;$ՏMHH8s`s.%U~;7{o0_{ s:||'}% ±X:蠊>ȅX` Ύ&qX|/I<ɒKaQkE9f b"\cCI `9U$F .Gؔ?,]iso<#k΋w_َm7[Z^:*Q/|p9"))vK.FpǤ:gSi819Ja@ԃXQ&{)A!1|,\AZF;U"4fl{y+˩˓evս_غcRb,Yz|1(f //;$ȟ3}7phyw+_km>4߹?ƀ;4O2{~yJšYo˗ąj%OyC9ypqpɜݿ`Yvy\|͌C|>]V9Wϋ|fR'4vTO&0:v^@G,zƘZc0I'[& eR$BɁҠY] Fn{u;X;g}҂fC[Cd 4zi4T*GϾO8I!EUǶCGN`WWɐ :wP2qrS2*&Y$2ySV.dN@Ą6I":jQSzCҰ0Jd,N(O)e@$ViqFZbcGmffG'^i)[r5Y?yd)-=:m6NmE;\\9pT޼X>jh$Px`)⠉d@@%B$ \*ZMœJ!'W65H TkdlffdlUaaX,4 XS,\Tij[VMMV gg/gϯ̈-RVQ`)1JGiYL4SNUɶjӪ¦6|yɏ- RG+s6#6GFy(ݬw<mcԶj vmVڈ((Afok LY Qu ɬixX$pPGQgX ?Eđ1/L"rPmh} 9"kn@Fn^Dzv^G 핕Nž?8fD.gL|*rbCuBM g-}1VdA3(:&iuv1r`%zuVl!!sSt!x|Hszcq7E?​kO )%8PLI&]$H>{&)r1>cZWH\lz#E[<`<5 "_:Mp}~B]s"}Guof"|}r0/%[k=l>v˦glVX3tyzV]L(1T_ {;6˛(f|6>q`c8ᨆH_#x+_B?F*N{mKu% ABg5j[dQ!~30[~S^{Qx|O/stYh]`Ͽ>g 7Yl|%S\0PGQ"%p@qlA-!0/AR{?!ZU7zo32wGO1Q7rfapewIBB)q @Lh1sYs5Z=`#LfUp~(ìF((c^VO)ݼ׻Y~nE#yU:Aw /5TUA˃)V/گM0iC>Rx=%&)\0Iۘ"JKO<^ehz,s oCgRz*U 6ۀbӁJZ?tzqYw;M _F"V z)tAPGT' !8!֓vss>v)Ӆ[Z^des-(2ǽHyulIXcL*e[7mz MW3/.7+~975ũNB{:t 8P=QWBcK[lHyeygXQ>ocJ*9X*" \D1x&vEJ}FXۊ*P1]ef-\wZRe<&TNg-\E9[N'N(b2?2?Ni.ʭ%xžxIhUޢ/~r|ݚIKUƨR5>`OH`c4:AD|3Xʚ `!Fv/ϻ 0$b)8e:dm0h0ۀ\Bq2*YO 5!AP!YJI§lVΩHZ9EJb멲flD !Ί\UK6)3ac\%WS}E|?`K>x%Q ~S;mNXlU=߷zElI=hqmPip5\7V$_" 11|t6UlT%׺-╿7 vD妺K}:,ToWeS-Z)4snhu<֠z`#8jrƾw]dOi:(ʝ[0k(ܻ|99ڡ`2GJ^aG&)q~bU6ʠNxB1.D wo44s*vv.js"݂n7dݐjjY!//_zvH;EuC0eQJ7FnZo$%eϼTP!sQ6׈rZaEg;`PlQgŜGP̳~0 sO sD:δѲ]$tBI. &*P46sj #*%"r @aNYÖq}$ڑvq JB#zib"# FT8GbSQ 6Pd4mYZ#gK9+@f{k5 s<-8ː Bg4;%htԎNMI6EsJQoO`$A^aERDz9kƼT g<(tD*8 tVk;?^#>@>FE՜b#QPf V5 -blqF`xߴƝhEq^k<aNno" 0E$ZinD '3H/f)C@=`#MG!O$0sXd_- ,v~ O0T(cabDp).&r ĈE'\5W:d!M8-/A(2sD ;$t&' 6шe=s=! 
6wI( VrJM"<"aLv$!^634kױ۰.P0vY4 ^ wE 8b Td cV'盐G >M6di񆆲Ǜ0&P.-"PXIlmB »zԗsv0|4]ngn GTmx0+E~r$TW0+mC6B[U͋mgiҊ, U?TdnI1@(*mSp@z]we$o*r."͌FTn8z:`.NCt#WW0j>.3s0*JA. @Ya}YYF,3;08]ջ?U P%CZ`鰠^ЪM\\WA˟f8<[Zv~]fIUee3750x LT@otdRj/K3i8I @/jw'@2JZw # x2z(g[yl $ t,1ǖ{p̍:22Jt9_Lyvڅjaw|A]AF0{)U|F7l7R:0jre1sA-"ro0CJf/hl5r/gcrw?sjY 9=^ n) iΫ_oѷ\?竳rŲSu#sO^lj9{+to!L{)c.gQ0d"jov<0) C5 RF;y ,0H& XMQIM"s (W` A2YA[Fe)A@=/,x8zw[åk+qY,c`JWg0?ra)5n x,5HSEә2G 2e3M s{y97O 4J4oSM7b.>JIX[QnÇrRsCdﶫEL0ZeC1gn%Kx%t5vL- ||ɧM?#sQȮӾgڟdX?fw=`y0(&1(DP$]?.Q !ϊĩqcJ}OR {y~AkLRtmU6<-MM޿n]qA [zZ&Kx< շWqV@Ijw*E$%e efTWճ:QJ5jQ׋2gebjj 5|Jw!zi{k.P+ybgc}huD,2/>e\ya(W2˲3BFzʆˏEՋpmfU\ARP#BcX[ 7Ӊ>+ ^jc֠}a 8~Xgk`k9j!~|۟Zc0gUK,Ug3sj}q̦KyG*}x02@4W_eZRB( %8& ꅔJsgL5vʅ=cFDHG:ozR! QAuV.-LSb'[O:H KcBm'>T}MX?l4=ބA6rɚ C>˖߆A$lԗsv0|4 ,$_d:[jc\svw{s-KڥOgzq_14%R"v7 [/LYLbRrb;)RVU Į%%GR)t,NZ\>4]ω)!$ӫ5SQ &FDIbMm u.q{&jc.g1p͎rgSad.IR5[ D8Vf@Z~Nz־F+-kO7,o~zODLY/oHozڛק:4P =N M-X>rC6I-6f8~r YYh}TXT)Ruh)#$bQ[|/?AlM̹ٔ&I̹Dz{ dq?u.>}=W>B%|(&9ϲ㓰\x)0ʜ:_+-U|>h -C:~H!?3 -gR7,BB [xP1QBQC>CRHf(2zE4Vfά$kt@hdf8 a-Xg,)r35/ܰE^b_`gNO!%-bLz/o9T]V=S5iɕL6K ccP'{X8g;l:לz٦ai?v>_uMϿ&.z6pVC<=oPJ1Od)wjAC0O-oeڔD! Kn J.QuԒM}/jjJ&<'xҁPKo{!|ڻZ% 4vfPStf;űK%64] TV(Uk7&U2Jւ}OU'9UfRUwv~|t.Dm:{)Uέ~vkb[Nީ*ǃ}mfp1sȎ|ɂ>&b9!WM ǐ ARJk<[鮤i؜ϩ}ҲpkY|C ZL%U#ҞiI^LpyP|x=N^ɳnw]v8ļ$&I̯`G5ԍ$ͿIQ4MV;3eҙ5fI,FI{~R! g [;$-lpwצtΩtTODQ`gj|NEbo"?U6a62%Jz$BS=Nֱ] JbHfGD ޳ʩ@uI 16W xr"\+3z Zqπq_Z?'xn?/C1??x~_jخ f= iky8VwTqJlhjY0Īnq;lljKr[؎$[e(hR̓!q,\UW4+ ͣo *EU!$bQ{:|ElM̹ٔ3ʜVOjP]N&?3LrŤA#ǬZB0evY֦譎Q܏Mb jmQ; F0`E~6!f ;@͐j]owd5"׋sJF]0seROc[D3"Έ( @A\9KE2⌋"?@\<[!x@Ǧf_z9G޻x\xYFvܠn~|GkHGL&ZX$OVPL@(hqG_]=?tb|gn;ZSUS$ch,Nq VWCl1wШI*BZ[(Q߄wB@GKsUH%1F D0@%ы٠I1tj-+xIkOtCOs4'_{k?^D'3ӓ*lqImvY4bΆEgkq+ECӪ9z[Ή.cKX}LVNgJ4 !VlahEӢ Y/ԑ}/~WcZ\C|ζuen\k`Su&g0ΖqgfWjVw8>IX-ZsmLJ0:PnQdd]jUQVC`d. pW|y+B]|]UPX&ʾJ3\shѦg5j9O[oJ맣PJ.*:+$ O 8(1G&3p Exv_ ĖۆnZ|8o]O%L VX(5^C H&/C1KuUeq+"jۻFa.m >d(mQ,91Jq@֌/J)^zE0*F@E9<ƅG2Wyȶ1, "EXs<]_//jrź+QDס\$&OTl(b[<>$4GT&&4Mb*K"$eXynKRpJRp.IYs"-rf.BkXHBSHUjő,b-1؍l` .DA\DVje6ʒt8a}b-wSAwuI+=cɒGъWRjib͢ o& Ěș\4SvĖc`)5t9rr/jD+ L%">0̜ }NA)R#F ǤQQ\}М?% ƌ*H'Rlůy>0]A:zzLj"Nff3\ҙPmlAD&>dAU _EY[鮮a7' a oRbPEf}r)eyDZB_ )xEg_kҶo3|*wdV)} ǐ|W:V $`Jv Gӣʷ'oqTtQs ާ Ax ͹[w?(haRjQP6W-"/2dQ,R@g>8S[vtRy^Վ5xK^ƒI m0Y6»eZ7]t5*$OjR!*;6eC-f:)Uק:?߼ϗ|'Lj77 PysT\A0Nmn+ǯ v)׃ AQkuKOH;+;<;ףt|XاnȺ=F/aN(EKf-DC;xp_}/Q9RrQTA(qZ Ipybh ݋/]) _kʉC%U!%U/Ѷdme͵R{"Kk'>^9>[Pz4Ըs\gWҦF1墵zb2A ^!tjEIq wxi " $UT$N$Br /G0`Cvj"Sf1u2c\ 4hRI8h!/V,R<$R<(nb΂wDðqmZg`YoI=M}}x\>!hr޲I -VLw0M9Z2PiLF,5jJiRZ o<Wi35 x<Fs<ߚ䱒̋+A0A#Š[j#(E&D 18 yZ< 'ϞB$(- 3y%R|QS(B-׷{7LuQhM6 AJQۖ]Ǯ&?Ȗ6xbtVþ o\K{Z[ڡTFi'D)Fg:Ɣ̨;! .ƷI R6ӔL0n֑"1!Z0TDvLtxVNEnK:8v.R|;| q\hܨ?Ma,!z /5\$k&AݐIPϊFt4}U 6K_KL"L¢;4bعR'J{W7o<0 3xL^9`k8+ʋL &d F \VADX# r`uYUݾ`9h>oLi6'/vO6o۽ܠ{I6$ڸcPo2͑b>ucGrB%ͿR\W=oÀZEM}+eC ͺl"~Bk~J*1*Jcw5}f}R6bXԌ‡h|q/u9 0< I;NME³/^(x0j/~d&$%Lz(x~MEgkR3Ή7ꦿ 'Hzҗ&EN|Ck< Egn5n&WЕjY5=7rEsmM"f QϩiO,? $Ҹ;-u+~LʳC#}\u7*q2h"^wVUZ@`]| 77 .N4Jӭ-T,99+~ws}4aVJ~: 2PȬ@?wR|8 s9LAM %? {_PlyVwAɕe.ݞq%.N${^{{^y;I^⯯_UؗcM;i.y#yDqޡYo6Z[&Tvz ['5A=&Q8D#5+x`svsD}[{;ǫ7!a}um3TI).!UJ``JXIsIVE0NV?{詿@ϖZzgkd)c#R4o%釸Da#lPX*L$H ,(Aa^+B/Q^j6,H2s$Y [ ^VqrBV''c($\(=>,SA<[|z,+0 f% 1oSB f&ʨ&9۽kLL|RqXeo~hX6eXh'ҭ>(pEGs F0z"1kc:x}@^E">itL*"+_`u)͸J0B'L7cUb.B/Tl4o3F +1JڨB4L+$ 9E*tLS1'k7Y Ȓ@a{p~ ([H(J+FRh-<M+.#mmm ݠ+R b5L?D/݄[?3]&_[2oQ"=q~lwb$)LMת/0nvl8cQ%{pWޏhX#UF' 䃺qMGwEfS 2#VWae#h6ȔÇ.ůɾv蛖ZzO&4MFbkͫp{;h9MkNYmMUɹ6kX͛Az"MKcN/.^ Gۜ&]c zaPc-WYuedR!8U]Iu%ՕT- *Xگ(@R$B48( 2>86. 
7 Ř5K(:hVnN{S(A; b$h%{gqoR0#4'(xAcn٤I@i< sボ{,Ƨа/×U_K}ş}0*WJyWbaZ1:Į^(%^ +Y6`2//tI0Ev7loDl4sSV~cIw(K7/6̾h1TH --% {L\\V6=ʷ虵Wr=ZE.X+\GePK#F(ȥ U' 2q߯2lˈ6f=3Y5u!۵KV9;c)jƤF8)`j/|KtTY Nԁy[aZ{'ޥgߨg F|gbhap6kA]Z'o@4 kR(YI;Ͻ yق#yRVJbPRT OBLS gGZ{u+iZ|"^Dh 72")%+%؂8 UH!qRCR$b(SqZwvDq0dz#槃5dKhٽeZ9SPf/׳ GXM4(:wLqf/Ö 0sldNcF1ZleF͝QFGR*:dZYKo3G)ݳ寗FM-v|*R)!e] }z_;£Uʯ?8hɎ; D{ta_dD4Gɕ䁗t}G6N*UmW66\dqhtіkn[ŻUH$Jͮveätt^'ŻQJXTowdBM5|3zJś/q!H2k$Ԏ( !@ /9X:cvܞCg&|"  g*9QWب 0P$ZQOmVRǬj"C 3VF52;xN(@q1Yæ-K7b}-blô" :A :z6Z-+꫷>j屌=f'+yɶngRI9PuՁrVђrlMư7U_uR6:X0)]vQ]vN& @\ PDC*,`jsRZYsz_N8.=&lx=|6;Q]JُO񀰺HφD ><-?v%< ? 7ÓvJHw3P)%/Vޅ%Hx!Cf+#~=BwLD.uRu.*Tb$ bC֖IJSj..A-PߗJ-1%## yͰ6Rm/,=͋)5`.-|ι^/=P_?tH,pU88z!d]e[ES# \Ssii4xYo\+@QDG7$D`UtdkYu qlYY4-hPAVG'Ȋ {1rB>8*(8XφM@=[՟/@ւz.AÏK'Kڢ^zZ5xצ$kΩx+ISje*T*P`X{"\k(̥sRA&\fB;ERPkgO~G=3bNF49WBbQ8st/cX뷗ǒ0L&`ilAE\*mJNMzĵG_<IƉPRS*Lmѹ]=a"r_\Ri%2`t!F) КȐ!"XZwbW.BQ1l4B*MQ5_&!ՎkAзx;wPOY֧|~sN[6MHqO@duh7_Wggj jRRACm!q -ňR$!@TGSloI~lM.%Jl`r5}Hff+\ҕPxm 6jz-qQN{fyoZ ƲpjtL:.IekIJJk^rTzH- \2C%"E׫J:vD:vUso&3SM/з m|peqdZO \\\/trf)^Ckz$^/%/ΗwL)||{}zf]y%]Q(^&ͷ7k-'伾?8N_˧B?U6uzj] O]:Yvzym"čm*&_hY}^P_TLS)8Z$x\ QvoaǫDaD59`)#eq &-)&ߓS 3v."^w6FRF=  w)lTTSB.\lValK^z܅;o_V*fD\Vg2S45ƇO 51 \;SpEUWYZ[t&fq5 ੡EMŚ;nDU9:V.OTKF"ẠCeæ0#?;h[)o(Fx݇hkۡg=L3ңwsn2^qb,Ɛz~J>, @ix7egZfy {^}!7ˆJJ>ND,5E6Q F9-zg]_U&[[g9[wemHTew\?=莝혞u8plSl;o((PlAX 2_p+Xg&H8R2\iI2 $jJ'U/&{遼7B2MG :J[v@lDK%xݹc1ξr5PG'-0'x o`Lk`"ɡ*&Yjj|3N ^:Sk%)j'hS19J\&G<Ҟhto ‘trOIT\@3 覆9m^ƣX5a2F]VwZh>GZYыsy%ɬ_=ޮ^qn^rnw-T[-;t i@\"N9LG-/Vx}fGgc7Pjt"zc,=Qί=: s3ks)C$!ߣOqKi_ڊ`ɫ|p)OdzAe;'ocd?eWӋI| q}/R;0*:}4YxYޤqʙȹ‰~o[~dAC#E؉. .>v }~*evf{tU D.X |X¦3A }N,8 5e =M6Y{ky)x`wu B\|&D|{m<7gًn22| 5}y]bU?Q5vgZ6.r7hݛ_FGϣL]7~|`dMM߹ZަyXۍZc-Kʣ`>z]\j7ڏƣLFU/R9\yy6g#sJ+%mu //bNy5 ĐF}lRcWi(/MʝBxZ7]Oflntul ntHR g^y`NE}0PRtJhQFGυV()5hXC#w&@()Q P)1ZF^ ̗7[g##K8_[Klmd?MOd7iCxyF&)lۤiBi" 3ڈr9wrJA|*ZUf l e"h$A&)6:6MOD ]/i诠~bOqjOմoitܗb|GeWv*29-n`ȓ F] OݓRLV^c)Du'b20JiH+FՙiP.fZs$;UHC┡^{▌cƑN8AR-lB%UZ3#gf,Ub.u<'J7ȸ'CGyhăF\i bԆ{ш%0E5c %jDD:F&bH44 685I*ʃqy QFI{(nc( kb׈?Djzq:O}OwŸd[h EsЋ2vRBD!Q!pFJb& $0}чŸcK}(HY}wo}G[9#>ձuh7p;x?>#UW?qNU< |*Gx=A9'/ ޕs_ CgRl'N &݆2# T"=&b4y&QfK U&)j(`s*E `<$2bFXp\ h\zV||&y{g ,7dVM:0Ob08+n+,vYÓ6#M\PkR Z'kbH4Q'Dlϣ߻zأz({onёk"Ҕ 63E,E<:1D}rZU.ߓdJI |㒂*q+f(-x0$" K1eoh$+OFjUF/p(~PUhBsDHPy0Ssxj:G-mVZlVʭazz9hdKg` P"P'pA^P-<‚ x^EAHrOqr@~O!kt&W_|2pDׁ\ Cw_xp>7ge Ñ(ּK\c5IMpĕdNJ/xGO}R #̈gSgB=:H0R)= +`Y"k 8T]9FhsV[;Bf9S&mǬ:E>ng^&mխNSr a5B$$"@Wp3ᢣ[>j mӴCYNY |1@XOUMuO\S0* .tuy#tM.+AmD8|gmYnH[)adhuO+i vzS *jL.ūD.jy;U7efth̆h!x)K¯E5ڦOzwhM~IKʎiP$ lcL75JD:ɝ)EbgdOLνךCiѷ6;{v ?NfkRi>ΦnogW#hA(%Kc%;I@vLJVȝPYv2/ߌ =(bxTB\YT(*(e"*+7Y&fЪR ^*%ڛKP%!J 40IL?{OƑ_iS>ر3d )Id;}U$ȖD- 1ɮzUΪwqR"!@wK9 Q u}8[[oNKvxnG7SfRSw-I<-Ljt2M9O;}G(4F e5Qky' >Jc)V=' +q̜㋧z c jEt8hLc^y2cT*Ģ baDzΨ 3-4х␔M@=H)ڲBô#} &nt xPN2^8hC` #SOR@ tʐ #wLGCHwIUBv j I*a)3 CEagM#8s%U9>U9o>4r>n#7GU% >| Fq<&ӣ A0MnGѸx9αTVF+/55$aHdgO_g㫫'xT 6ë _O: sݜϦM-dΧ/U乳T5L[ :?} |܀ h[L 3ʉkT~yŪ|^Ea|cmH8ɛ|{U|K%Wo5k  kH^r tǭ%H$oζ.rZװhuַbNXt0߮6Z͵&>n*k 5WLUNv99,+ g+Qph%.ҥ$0Ir-=*^]<>{fF%ZNz~C`k]^[`ΉuOtH= r*]OC1iMfQ;˕{תu]x!͒jQ_E o NW"'>K?yz!ׂ! qaAS;ligWcz9 2]wYnYkN+J~?igY{QmӭWᔱ)C\Z0RLI@6(,&`X4 JeX=% t񼜆աS 7v-.;..G&ƒU_p[Rʓw7A#r Z"yUDžʒ*k@dS's6Wqy潠j Y2,F̕ӂO\ ~@x_%[,bYE1, M4QMpQ[O sD:δѲ,HZ鄒1\*1"V@iРZ&D4U3ˌ3ϙ Nuf Cơ:e F%=pPY^Q*q! 
UFbŰr`1ą)Mp N#Xt))3=QpOgGp+Q5GʍULKD&c&v )9!h [MhG-hP4TJF9KKs>9*"it8[Y?^ j e9:p) A [p< !A0݅$mvKhtԎNMI'.Iͨό$A^aERDy8kƼT g<(tD*8 Vg {;>V| 9GH5 -blqF`xߴ=q}kg8}+a޷Zoրr)a%FIFO<8f^bR%pӁz.@[ (zjD$ȂP&_g(+񉎉oаA's*1SU1"j8v؉gl̀kfFbĢ'cn|P3 ٕ:d!Ci 금A27YN~eij"| 3yi@vxRGTRJmw#Jm|,d_f:_; f\^DZKj=t?.҅v5*Cz )ނ#A2 <@[@UU|hjlTާ -|no/Bi aPTE3haJC֒^۰ș »zW{r48jQeYv:BWx?l mno;\t~'w/T~5j6"oڬW ^z.MKSBl0P\L&;ƦףTP34b>QWڽf֊-+#giX540~Kbz9ȉjo)ަ '[{hԼ|.4msvc05/Ξa ~SY]e,:uy/b)o 10 .fdrH8+kU<|ˏp۸b H(򥖜N)bxЄ;R3B:?ɆQYՁ|JgKy0E(Ė{GDF{ Ta%tHՁ>P'u@NPD@APAha$ql:5KHF' cSeqc4EĖPƽuH ÌМ_6v6e2_~%fG+ri<^S m]F'ЗIr9U:j*1idwtyx+L`Q(%]\J E6!-"Kd`zEqY~,Rda(>b8*JkEY"Hx(Zo"Cɹ( >nc?uRhNf+*7\tI?9GŴk%R BYl`' `VZz cYku/|Wm.<(RZcVpMTQ*K? "(X6o:0n1y (ށm@4:&̰4N+!H&6[C1#Tr(c__%mR ֒S*DBK[H@i a#dcZbO%J%xp> #^ieH qJ3,$3`L=0 `qU)5<>fI.\.Yˢxn܊[`H/`߯7] [eN$B:p/Ur˿otaϒ<6X$̙:yZUmt~(&-̿S+ _ݷ58Js&B l~$q^a{. U3&dBm+6c@4}K-&i1t Gwcj{]k5JSOVY/=J-HV.ɇ'MmY=jŕzƓNtz&'߾uUN M R$՝Yzݩ,_&*Ph3o ?#T`Q~9jىƅBvw7CȇO'辍jI|1qٴ%]b{~׺oY.. F!,0@z2]'~]Rx9z||z֯^ l+2Rq!^˹Ӌq|Pi 1kP#E =p .쬨:]I֊["KHp[>hZ#]F 䋺qgmWwDfS`TqȠxةuV.-lSb'[O:H)i5Y'dJ3x!w,ޚtaFR!lѽ7?~]D2DBB RbJƒO>il n3.̺Q)1M#&7u\E$"OPe''#b;*]zݷy*l}ta:bA네1N 9ƥjK vDtT6Y& ꉷ⭵C]E=|lt!9_538ݘ}tE]?z~7d02XzgjI97&[g$H/?yU&'~_~0_#,Ea< G1&S?{AK$\h{AIE&YPŒr'a9j[<[8 ZQl,@P3HP%8:ɭA͊;e&#QHY5`b0k5f,`2b=6@!5[!-qg6r:Zn~ɗ*%L4HEɗť3vOrC gq%UeݙPrمw=U} /pw%r+itAZ^Iojzd.&zB Kh]?.ۻMwthnͪChYB˲n{=5ys ֡祖!nw܍ σ-,鎎e-8eF%#iϛv|j rcjvCe/Ynm49شg$r|.A*@e'} R^#aOFOFFODiè$ԕ~ҽw1FjḮQRv49ŵ^VCo2wN_|<_,IQFP>2SJʩcD-o0ysg lh*ZaYSX{ϊ%3isM!cr}:zbZxS꺌+w,J@{TK"h,FAVL.amuP/x. Ϻ^xɥ;|֣Sݧ}*qJܧ}*qJܧ}*qJX44oXo``rWWQer &N PgxraT?*i;5*<-'W .fޖF\p6 YiT*ja4&J]ew7Ti0I+z73|* u_jVt ` %/5Ө$Nj+~sW>&s@K܋_U~G=qU2jY#7qMaZ\{8ɱ7,sjV`oKŧ7rrD"^|_ 0}u^y+zs0SYFض3׮11yaN,8gn_=&ْ޻Z1|])B} pZx).*|#u^w:_q"53 Xblt?3DP~?t8QXulNB_'I$u: }NB_'{,Fuz˱I$u: }NB_'I$u: Of+B8ZwAɕ.!֮r3Te: *CA ƙARLzӽO1z#d6+5v 4.D-Z0g b>( 4RC!fH Y@SJx0lf49uBv ͱ֑1:Lr'\7}NU|3ci<:s9)_$طot|3q_0sM9@e$"_J 3I$Ҋ]T]z(r 7jjgKa+e6flz4-O1K;i4ֆY<u+LeTvih}5xnR6qRԋ=ނɷrs_$Wz~h[zm>~oG˿nk|VAA! [3¶ƃün;gՂ\SBRK ںi A9@uL0SQ?*dRqXt&.3`*C be,!]Zn5@,NcnjLX]esY&&VQR`*BC3@cĜE Nu*z}YWw}OZxGөr }T^3m(C'zXqMC=?xT'p{J9) 2:M&q\5gh֨R(bfFFF)IzcdLD[&gsL(QmGstt82A[R.k%K಑l4t_Hh:^_kl]#ࣇO1)R9a\[ROcɤZ"S*X&A–a(F2ݣί3SC܋cNY9ANwvzSS\]WG8-'.ymY>?'wk"1ʴHh. EDBBIf!&G쥍pU3 %Z*R>RL@qoW>.XP1g&ʭ[3*ta6W̺Pp&!{3_I CZ+q6LM995:J1+Tt!)'N؇='쑫cS{TgXRvwqӭЏv\ ޶_S˿N(W[?C_y |쵲2aDD&p TQ͍Qm`\:vF|Yq:~oKΖ6@ۛ0OeHM„ n_$+`VL]ȻT4bZJg7Fo{;YX2ׯr*;~հCQ1?|T9 >Ak%j{qWkȎK_ӂ*] _A_nR[2E%` mw1Y|cmB@e+^wVJuΫԥpͫZq`*`meZ;.vL7 VuPb}:96Ocxwo%0u ?m*6Y"ۼ.&(gp9 !t|#w<|sn4 }ⲳ?j>)XJ ^wEU?{^[iqfI(@L%ZZt$ŷ)>z-ss{V7_ 0}u^KvE$?ه9^%&f#glZd'Pݟ2tW7lTR겦=]tMfF;/?@6g-+5"ƌ8P""2gKu Q,M"罔8 a M)Ú0jj2RʃDc`P/5^#@6r:f"݇ $)+AKWĔi(" \ 1UY ZtLu!)6K ʹ mCki0SݦL}kP>־x9E<؜_(q9>{vN&:KT  n{DЧ:9_%V,XcixѪL|"-5Gh쨲ދG:L~nfƠ{K93CKreD,x MD/%QNqݦYz'&g|Q' |z3\KAzg֯ 9X]m66}6?N??^|cR, !(N+xj9RPYm;'C5L.^JFm%1&+(M *\~KI;U TĕuaP@8()iʝQ)b%+]/uFI=xOKY|FWSDlǧ{tXMvj<`KZgtUΊM_QF ܸdqkp$O*X ~yW<(,UieTD{^X@"tBp$"8G;l RBPdct %(Ddn0rMnSO9{CX4:ř)! (8!\"bFhUT9%3(ʽJXWEt2DVkn6 oAdGx %it PNSxj\24HDmɁ' `N3F0lM:IH/ Yz5Oi*{Uݾ9. ű!Q"[~epC`CPe:x[5I~{IQ~I/v:#y3DX .haenq i#tt]"⸚f-\pyOԥ7ukw1(p|N;YL8TLf/TB%L<bcEQNX9]g튜 4}Jg:Ie4}GI7IVQ*gG^($FPkDV6%A0Ԅ%3óIp/gJs\Zgh‚6*evh)%ce:!m g{$Yg EF(1$5g)ǹdbF)9;ȹUr F͡{d:m e-2}LL f&hyOx%.? y/]KFQAO?1#HP`ZKg) D?Yb) BF"$b2l:ZbIY S$d 9! 
H{A N %Sjz(^oW('D[{-s 7#;EɜP1I30 ]`Z!?腕VMA2C ȃTVa^" XX_g,r_S `]2TreΤDSr].8"Qţ;񘃱ҏϿtt!ݱt׸z9͆A#lRVMQϿFQ$r, 6y`*;>t#D)9.qS%ؐhSLH"c~Q:4>ٻ+hdu.eZn  [h.jwAx'n"θVtmլ\C'gqqfw)\ナ/gqT\`l sʅ+(VA {ݨM}=7}Z`WSn'}s+F߆IǑs6%`Vۆ*GKUƶGZOFsUi=TkfĤL*& CJd@\t&ІxjT_qWu̞9ڿ8fGq̦8 +QE}LF%6DOGV ep 7A[݆4^1HkEj$J1.28O*Ġxx%3rS/ nZzk{xKׯB^H[Yϋ'|ζY4~)pux ch-9VЎb ]hv2//ש&Xh{  +Jo@VdCqh2GFg#\8MaD9 ^]I9sw !n;ruM=;U[5VϺU:5=^(d8%(2.ʲH8WFRơdK""܆, BSDe^0ovQ)Oo}']fiEpkGiULx”\] 4Ƹ D@OӪtFN![G\j#7xVp DYǫH/#R|AQk+ 7RF&bDdZ1ied4$=h%߭AUU& y*iAY(s$}>үs)QYGZ)(B|sBXlr< mJ*H`4%"ZqcPy*2Et rޒrrjK&&+]DEDSh6 (.Q67?:=?iS u̓R&1hSDYe8 Vh!{AVs9\ dgR=hnT(Rx2Y8?HjY[XU=?T"OA:U0~3-bY^ =<[0Ш i.Q7ʊkȨfůB\-Z;>q@CFdze=?6N8ZFbWLujh!32y[c6Q~bISGWCkObqge0gT7?bOdyX GY" fJiIm(/Xb8Q ͵I}̩^k26* {xXl~*QeʛZ}_W}DM>]x jvI7I\XM/?鯗 "k?'_D0Za`gT9ɅGߟaԢGSjgFR,X-EEKcSU+9 9p?С7uhכMP=Q h+MnЦjdS{y>,og ^[AXF-PT](u͕]-Ϋxd4!>xח*/nmQMw]X|EolNd|zyR4_uG/(F*,ԳQW\)**Se^JfI/ <u]!;ǂWWJ%zu ՕR`@?#u|>`&W=u5{&5+- jimH#X=UܰrlKqd׿2Kvn>n}bj >mɐNsgHbٲ8$3ݧn{nwsM .dhܡjguGz_i :Q'7~|1#`GU2}iFKϾf^ M@D{DWlb ]1U%PBW/R:]voᦽ+uQ) tQ >`sT Dv>huG%QzQ=>+tבt2{DWl֛,DWs+FKo䓻rwM\6;&rg}1Lf{W2.n7}p?69^!Dܬ? m;#x'hف[s|k>opcSƸ*;;zn)zcx6y~}+6fk/i{i*@|[%>b٠|F}"jww̻23 g51}Ig?fg07W.GjǻJPfkkmN7S/Ub UgKUkt?]n!,m=Wg4:̎/7qms4\Ⱥ*h\"URvִY9K%H71>LɅtVT.ҫ֪ܔ8B.ڬ3z4j>c\gkg,鴺?2hCwD*S dK6hFKah;ΣdsrŇb,Ztm)Y@o޼=1pJk: kg 끒rWj9вb>;Lf 0kD3$p'-Ռ bzL-x L uGfYRo3M- m*F)>1F(SE4P&]PafC|R֎Ƭr61b5)x~QPV% "OpO+ЈK*#IM>_m!Mu-NQ^dc)P)}z8oB<A޴\s9 Sf0j$7.(]cpOp_c;S\knws=i:wK9jsi g16c@FUymBeLHnIKxojkT 9>$]^Y hjA 1Ȏ ўHudV@N`Nڨ/ ~ E*Us`#L$X EA)1أԡti~G>83 AL7Xu]E b:±e6P6Z?+.t-a6tGV1- YZec`T" Rg'XP. )-ٰQF *ҕOWm {oȦ9.H1Ű$؁~:SYe; R5rwktĬeBGkl}'I%G1jTPTZDJg=l4[-P^f$߈&#̂6B@F=5(!Ȯd܁E7BnTzC{- ȸCLAAf󭋠@zIB€(!2] 4HAjTy>:LAZ 3(RAUvNZ'$̿2.`xÀ}nl:@G6~*MT$];-6*d`fPDd Ù.̓1ZyO"}dH֨Vx$ ;< c]`QՅE,TG7 Xżs jJz%B4D~ ]{ #Aˈ|uC.ͷAyh"! 5y]%@_!80f;@]Ii(ʠv0F/%\)䌊 Y)֎ a<Qy@1 bu5;XՌ.#8Xx;:B5~ 3 ~(يG+~n<bDuN M~#}CE7 F-)~Ca EHKCH4pY/xp`\Sclci0Ih j hN\76xk+fnQ4 Chփ*H]6 |t~g҃d &S@rZx tmKzg> G_Zi.!@6(=<VAPkMyQ6.5("eiq#0EY45PzB%aPJ↑$5󛎷4JCy1[.ՂʠҘ"&b9R Y 5LBER!6SJ,tR@ǵ1[ Y\*y׹]OvZ>XQ>J+e`j?&lz2h>f)n9=+La. 4O4 -{UcvTϧzz*A W&v}04*xOnVnk>iַKжg8>.~|o'ǿl}ۋ~g\oٚ;k!Uv} #7ݽ{Jӏ+(jzʬ׏݊/v+Fs[%&ح^hJVbحn%v+[JVbحn%v+[JVbحn%v+[JVbحn%v+[JVbحn%v+[JVbحn%v+[=bwtN{d2[YV@s~ح^ E[JVbحn%v+[JVbحn%v+[JVbحn%v+[JVbحn%v+[JVbحn%v+[JVbحn%vOh(\>٭lDj7v+5t+QV/n4حn%v+[JVbحn%v+[JVbحn%v+[JVbحn%v+[JVbحn%v+[JVbحn%v+[JVb+[.[snމ%ح>Ӎ%`I:w~el~ċ/Ϗ _Wæ,.zE}Kn}NC\⧨PjF?.iM]Wnx2}}-|ٯnTޛ/oWLB~_=n'r9nWr8N~<ԑa$1t\u8y7qBt}|uoOVZTxq~蒃xGe\Y/ ^՛b+"CXm2=+.}+FnRIWK7:R[zwkm(O1{zM-ȦG7f2RCeo6bu~zhCocpq֓+E&y嵎Qo?ݑG4+F Mhݳf(= MDNiu {)oOx'¾yƱI`ʫ(xrzqyTov?t>9vUqvME?m0MW?60EWF%'5̧O=`2zK v ~~%SI;ogǖ>Q}so'bu]p5K;De聯o:H-8]Kf"ڔh7y ' w&Ȳ !&km('?lѪ! OY㱌FYzXL]Ә4W3/c%SBOeSF!z8DN}ZǀwmZ_ lX|? 6{7@ n{6Ag,{aّlɞX3#P9?g \ۚP@XgBr7~:Cl=C (F-o:3oMOOwˏNW6tu\KWGPZ0uE+}Z "V5-thl:]Qe]=AbVH%ZDWX1BM+DiyGWO8nd[DWش$ "گ<O֢5tp4%S+EEtl ]!c2ÅIҕ3b4DڴmT;S툒"]INec@e%8bgL°&@[zurh~Wfiky[hj%Di'iiU{T%mϜ_{At(;᱇50tuZs=CbMO"T++[CWX{i{vt(tŨL0#5tpm ]!Zxu()ҕ!b  `q9~>l49B`~VP*c\\z}b.O1s3}}bXo7_m䶭=ǿ #0:6c{nR8iQ.|:+ޥ^Ίf9l4IAEWJx  T P_V*+:Zj2l*}Z[F9nyEFT28nUhUƮVŊ&)5Zi* -K6!A߫cł964xuކ]V--tn:i7Sk-oTޞ0+l[ځjtaOQ h"e-thi:]!JM=ELlkq`!ڣn{m4'HW)ci [fZCWf0hl:]!JS+ͬm B$QWXZRtsWO$7D5M,9cSpˆ&Vo7欣JG::9mYa"mYa Cݹ'>1T9갖Cҍ7FnYc f7WKu5|]Wurn,)s⊩$@I2Fcz脲<`n8hB+ٞqZ:h57MA\(ަ`_]!\MBW֨^p-W,NWtuZE{xʦQ+}2 mn]V?=OnVaxfOzn҃=mO{Ww|n}\J,,<ӆ'IGa+#1ˠ-N5q<g=)CO ay4߽߾?[zrUh{_x察NLN*>o 'Zhj9dڦB SfDNq&DM'ir[2[Izp*꓂]>0U?) 
Zx~r|pA95.kMߌh5qʦp͏7KVh(~zlT= wVPe˞ :*'^NwgՕy?V(bT܆qKaI* D0G V1L^|:.m.ͪ>V o>6=WXo]EjŧrvG$ScUu5Us}%TϦ~ eyҦ l, ~9:ɆݝWjY~#;loݝu5m;uFoYQyd45@ Dlv)(GG$tQkHѬ5Vl- I t:jj$( 6MIe =mf"u>U3+㜍M΄;I *Z3\&V8аXAh,g:=b xGmr_5{~)v z?ܞ$)$2ZIUcDx̮k'_F/=,LdT `QBO&)v:`,_ɟFRcG`W*3 l<LY-8F8;L1`e:ΊسK4b/l^4+.ݠ%J:U˶hP?)3FqFJ)j6& Mek If.e,edJbIIY/BRj o!)A!) IٮB,*a:1YDius'YN2G.y,k썱.lmt@4̫`Cm`䘒1TT'F(\&.ś֩[zM2ۤW }pi]J19nzgˤbb }ԖhIus M=X!8L,x 0%<Fr:(c~S@Fh6) 6 RFITHIm| 3z8=! ] >Eb9][y5 2wz&W&ɘKYC8fSF/E½j:kᩄi)c_nD.sLuӁGԠxӁFYQg?9?HA#يPVNJf /#γd CpЕvL ]@2i:'x$fNc9DIuG'%V6K [*ZZJb$$7y,utБhBhorq))H:tɂ,BFcBfkaZi3y*68w˩, $vւSve{vxB}__/*zi2}6?EDCc{'\Fڡݸ_B41zP)nv vHڥrXn06b̼ z,ȵ*s$wN•CiN|]{{>Wi]_y1V]Ί?Vs5l|; ij[]QmrA0J%*Мxx˜)g 2(mH %Q^qo odyY̩1̣*e 8w%gx&lb֬ktĻP'NIXOcE*B>K8[fg}k]%XqI)It 'Q"H0wOxX@DSch0#C TD]#%LIK ~$䐐4$O"s"PH# $ $$8'ZS)R e' uOnbA ,)|8;n{nw@۳AoߍR /K\hc~ģr3*Ɲl$g#TtLVRa%tS&؉1Ϸ<^ˤ ׼qE >3ID>DK91O-̃r݅xGzd( +r:pg&j\Ĝ3$#+Iuy}#(^;|v_Qp@1tDQ# $?dMzG^&ھ?tLo5qQjCysLȘ1A(,ޤĭ fXG}ϨA?C+?.h3Ʀ&f'6·hUt,.B\SQ g}gK q1/N #"[pr.8`U AuƳ|'T"O|.reF WSOTgU4t6T}/ kk]=0 oU=BUO2]q\d;0k\\?rۃ0*^ rӼ@>o!e%~Sl!gʃmK},YX4(,Lo,8-jWq^wN7ZR7>X2 ΁ sm|ސb&?8hΊ?ެ}>o*tt./p1a*mmǼ* 4^ fYNHN}]5ѾUv:ظ.I+B~m'|ɢ,fq{- 1"KdbwKlaevH"uSl6*~YcK]%q}8A2J35[Go6wii+Ҟi⦟Kw{ B%Hz2"XmS:z6xK3q״ 7NF u.ݛ&[ nm7)KjKmll*2q;mk,̴&8(4z.ޭ9-wgjASXLژ V;dk%9Byn HHW zPK4at"m48I_I7}&& ͬ~mJ\wzn{r,vɽMd8ĝ~ N[ss+/!֠ٷRW7m#slSoX1-No!kND'ipTg6ܳw?f_imK|<ƭ-һh̺v94)yWSl8mfm3GU:vŰ>(3A<|8OR {p⥊K aR_JɇswϛJ5\1}f۟Ǔ?88Nm{U#ZMS[p~f/ EJ BFuRjNkU\/ i@xW7YQL~.x|Z,2hWJ`Vp\Y] 84ʪpycGM*.lVr">}Tum/ŕ;^-5i3R3ߏ&.|>f41|~->>? N UKǡ}W~`gfW< !O_7#nH!$9k6 >[IkyRg&Qf={=QŪZ|Ȟۆٔx.c.$t2(MyoL^لu6'i&[|Hu[KV7YYqYڭ|;0*QIy/.L(}KF7ҝnH_NJL^Qݿ׮3-tID2wNio..>^P>VVl:Q hd;&g"TxUx*`Er= U{dX{dY@6&'4g2xN{&nP0!H 20/ 2O4"OZeoutTˋճΩ-Nj 1#X!Z+v"1)=Ib5.`d@T# jͲZT~K:#[U: Rz^jΜoM4͟ZFjP0ω4O֎t6 m.se3|M'ϻO3 we@ bf)Ou0\ ; ϏGjvwpZߦߜOFF'R7 q0fr|>J;3#<`_--!nZ}|f \ |Wi΄Oɪo=o^eyЃf%D*ȥŬ4$W6[eh3qPl2"+AMP҉~Ȃ{ZwIgb8j{U䱉T|o`lDt0ZXŕ#@"ɭ= )dA o'ے}oSo1Z,xEZ|hb,_ovmhƵ2 6qxTŋQ40rRBec0ϞVʀLʞ"vWߋ3"VE C4Fg23JJ#"!J<0n(gC)C SP>/ܐ!?ڼVTl8%R}'!Κ2[g'J} ^sn(H,L2kFg%F"i:AFS6 Q\eY,'cwIdUplA`N# Z:X;kf gx|WE{7Z)jfW򉘥t$|H5Uo0ʂa+mW,31G3G]1G: D&z$;O ̴Ĥ9@SiRr)i2-ab\$.2%"9r Jg 19[2F7óesE&1rwzzdFmI-*ܐ ]vPu o_zmenڦMk:u&_pAC.okU,SK)ztC2ӯͺYfvn_QU͕f~[ԼPr|(ou}7uy9OLcWjO7TrkgqZkΧgE:r8qsvsqsDlgV*mCY8rUN**U^TH.zޛ>Y+s:A*RF9o5rsg1CZ,7gX? 
1j6Zgm7 P8ɏgtfqq#k'fJQJ䴓;6EZWY$Qd(>:ՃZtlQ7F~PWV{*^i 2!1 NtT5*([/Lr*FJr+o b)oRo3ܜX>I7pKT!:eF SA1 I1Ld,ZIk{M63 ջ[פGJX6G?ߖS,n[Dy[={"'@kѴɠR KjBq2><VEDTKz{^{5j(X`>9S38rAX/qNd :kJ}RN:=[Jq!rAZ"Jrc3p]sT,p-",7Wm]G9VICWK5Ӫ{ˣӓtLhZ_e>Y=|" \(c2ㆵ}s2s10W!PQPt}LKz"(ԩX \r$nxm(Q.=O>e02!8lާ {r{1|=z>xt|*E΂(3ĢJI,{BQ;ՈرF^#q-s% ]I<ZRKy= Yrlt4q-rx`׉#Uo㷓bh~^;:a&R)w$9PS<qݨp`&Ғ:L /Ԩ[_tg )$"N9;"t &0{\{ |}MfpX%$"IQ\( :6$1&uɲv[{6 }w7^ 欙wsӷ4J>9ut8fҐ|[iذ{9`mv{W&ye>DB N 7O]\=#|m,f#L} }Q( ٱ'S ;E>{9"OgUo5y!3xqVPIح(Qfl$0!h B:("׶BҎV?%G`7] 6q]B)jٝCFR ||øe02!{m%P:ZB>)(ٯCd$WjCj6{N7jko/N>-A~\5x}մk)'-³qVquz*OiSO8&3[)x=cCS,S|%FdIOF5Y^l+j'=h~9;?_vUx_wA^"?.@Z1vr]< l: :OOMa;wUQ{) 9 >oͧ| `Gsf^:SuSyxێu8_vzaVWƚϪu7& Φz_5jj{}˕pvUz=ꔵx9bMw>05ww>9M~ad!w4ß}YT_\~EMjx=i:_i[Xo`=ol.p]XYC<,u7Dw={4?UWENƖF%n/8^J0LgOiYuυlb@.dg_fX,e@ioK,*UO/.62\nhv{v611ҕ__c= *W-GݭwǿǤ݌w9c]g 5q&45>h/OOqoiRY -cY[tSכWeww kgH1Qqt"lgĄVGr)lK|` A^5xgOkvp>t}_O IY!;iTQ$i<A U 帛˫jmQ[>ptcpXunJugqW3F+ V=UU\w0UQYZ%WU)*M^P*$kqIRGW)ml$r  P^d ,g^@3hUd\[3ө <3:%ba؀YgIɂBReP(R<ٶ 6g-1L e3qHRʪ $9HEi}㘹8wC(Tia7Ǝה5ߚy|kxՌyIVsn6Ugvi5]_'Dx zA(&*[|ƋD uIT3&rYK4^ $!jMF*RJJ!ۚ (*μ(^[!!Y3 ʰ0)%MTZ+.g Y O5gG;ꏯV(jwk +xg01&s ^y0O"dT:(ˮM Z ;3?; ("LN:m-ZSr2!&LO^x)`v&S.G[ V3W+^q'] )#R)NMMc&p*$ AbRz/"9GN<-A]=HXŮha(D'#'>BfPH3G;vƺ8;8h!:KL^٫厡'!]?zS喯K7zGVL8&6v _m%,1uT@xݹs'Hdʆ:)) Oa))0RR6T1:eR !V{T%cIu*G"u,9jb;BɃQXmkDZҖ1IQR v$s~j3qSռ6ZTf7_KȰS^r*}𶐊ā Z)'^>NKs@-bHqф)}R)4Z%Jǂ Zc6)+hvC$K"uLJC| R[ -[T0 2X12/WrD02KkG4;|5 /nPWe{VPGBfF)-pWº[qEI8l-='_0U; hXR>خzq\dcʨ0޲b٠ Ԝ&iSD0Y_TUvk[G{~ȱnZ=簵>D&o$t3IOV{|:ؗ* )-t2`+yJԑ-``P64S`7lt2K#lل1/ٽ,^F!KǽZI6XKcSE(Ca$)2(SHS*ѥqj8q+w|UdznoS-Bu Uk|,'^ԐnM^o81<* ekު{btw~槹@;(t‘ T xWQsі. /Re?:qdq,98 GȦU6Α 4+S$Z%v^CDF]2}4_C9ޯK۵/ 3*8` Xq* rxe l(@}=!hn}] ^}^?Ԙ_P-C6i$yE! $zt33eRx:Gٶ8K@$t#Zχ52/ՆrcFŋィ$ EcA儮yb&1IQBb0dgAMwhgGP)۷t<.V$g+%/3l!dZ0@s{4/Zƺ8Z2J- k Z\wlD(볶F DugaQãN97R줱d P%yAB-hHm !e'x^[%>FRD?fQaqZ&GGAJ+7!3QvB!ޝ 6g 8SZ=.-,ZדY$Ÿf0]rcwA7vc^S֘&դdٽUK)EJ*%VEVdD-Z,Ӽ5IhޞWʄ5h^ago?0_.99IRVM{7ee]^z!phlHϮv?٫)kA0" j^%S^dZz3퐞Zi~mc(O9C~ Bg!kV׿G4'qpꞛ`Vm"K%y8*w-c ?$7.@ŐNQ[Q1~IH޵VOɿ/EGWnz/yY1ilj)^/J⸺ŗ`X.WI@ kdalHE 5=TQh{8lJ;{=8NjQǩ}Z:;qW&ZZ5 9F$]T'  F8P(YZ{AzHev`ۏoxyC4Gk,[\B,`mlPok]u}7>Ʒ5&ch_am_o%͜%hMYɽW.pҬBגXmz`_]l{u"]_XJpg`;*|l4vTrK$P(fI9m5἗[G )r7hQH!ъ)JJD .m\$Q[;b)XYs $62¾\~AܯvYmZ'JVA şacH;IbkRȓ$80]Ar   NAD@4SP:/B2upp@ڨT1HL45P )K X(Oڄ[#TɆugcp+n~j|td\7=mfZi;^[?h7D5 Չ"}g2ȭUx[{MgS4G.W+;6luG==CקF.tEUz9GGn:/K5bKhޝyfaK%7lMEhil;Cqm,vN78χ:5 sv2Oii v ۽bT_zDЊ.2,>7Jg]1@Ǔ{ ~`k^yU)E޼?NI M8I'ӛI< Oؾ?ahsUC;s; xD&q)=~9Jc|(~Mm3~M#򞋶> ZVDx߫F773-䆚:X}T~:;n~^_ixsq9=`Q2~f2V{g"KqS,{B؜ p4o&sWgɠУ!<D$G M. 
Z^8l!)a F~ݺv <5k~7-;Z"PntĬ&fAςvPh+5ri#'N շVZsg|V~063\00ɃJ2#>[fv*e'䭍6׵)}Ľ_.tQ_4{EŮ?om(@y'Cq=j$)ҩ޹" $F (̊hX9cq/lAA^a) Îf\DJ)e1LNjc$᣼h@S ZD OL3(sSwR}^$hJ]Y6JP %&5d_ȝ "锨ăCx j& <פ'# G/տgy7n |XW7۲bމ\ƥ^M_øs@8i淧"  (7hj#J^a*0- SY0@\IRRF&\2zRT AN coXq.@XeL* O5Zks 2%7y +2t$PE?y׷Wz_'a])⣗MJLRb"Q&>Yv*c偪ʘ|P2BSe%U`z2 6)X{T^X: W4'5EDAp$4˯ryt>V BԌ{GPvdh=[Nv:;#\MWx* " ƘNIm#7 *P (ӞdF'j{,E4e'b\pRs"8 O<>QE>WLP~ yH'{!YMi"6oa6 3{#\i9+?~42Du'b20JiH+`s4Ѡ\+H(J!$HvZ)C^{-ǂA'|!K Iifl ؜Vi rl eυG•G7Y9R]2`n@arz: Y.Dœ9UA&v.ըb"$ǃ̛ղ\ س8˴%>m^GcvDD05&D~ۍGal k76:em0k{iK$wG"A I)Q8Yd-=<(&!8(ʐZўG\3K|#r~4@<Q٦7q>,ILmaǾfD3bψ-7C}p9IԆ{ш%0E5c g=tL2QEŔhljD*@8 $( X<*h;2P* >hE!/Ng>۴(ٗMühz^yqc.+d0wlltɇ4ˇv#PX;Y{?n~y?2{,Y|דK?>r Gw^B ' M(w{<ۣ:O}|{ԇ3wU|0.;~[>컅%#֖&Aۛ0'2a8-,B;/n"&WVDTu@DᢶYS[4} Ӧ VkPυ ހ1병h\mV95ʹ[<5h/O g{2ϖ<‘?WLW7K.Ltm2MRO.=Ii%IcNR<N .MZ&Nn7\+tpv=JH!B8(;CW.U]UF HWLakDu2\FBW-mr9|~OW/8=#:DWXpp3*tQ.e0(`K s;CW.tF]eB2J]DI`hU QWUF+[՞QEҕTZ!ʀ;{W1Ѫ=]@RҶŲ'Z4P.jߝ%Ǎiv`TU#9of0_WE;gZ5䜱sF3j޿ޚH Xt+yWh:v(i%Ҵ1trX2)}0_掅3l1v0ŵKi_\  WUf)7q+G+ݽ1X"zYOR,Jߕ^ӯ#9|3~T;Ήao?|whŘD ι+YR16A={'I~1?6nHONLqdX!Azr5L?{f=hm=d6Ѷ rPw%;PN6A CNBakͬ(f؟ %wFcbg;ϜQj֛[v8j7v+;-]':n(Yvt%zڷ)B\w2\IBWm+DtQ8]eRu2\MBWm+Dɉg)!BvG]eLv2ZnNW%K+`\.]!`ٻpEgUF+tQ* +$]203trhE*TK+ɴbCtpMW*=ՑQ^ ])NR+)aٻpOh7{W/%76@A΁Q(fl 덕K0t3\ٙ=Vށ#4ip%}Pʀ;DWnw*UtQ^U>[v<#@>`ʼnj7p=О n܁dOWv=EJt`3tzn72wB鰧g\{ʑ3W#@v"Y d>v/ uFly%y2Ւ-݇tK[< ym@`{ͦ 4 [:D +W`:T׈W\|yJhFm ʛ۫jM{nkZ9qqËşߴ? OހVno[6$Gț|"_6#hđOr{AC^|⣯xq?~Z&!>4UU=ӆ3?&^.62SȀnon9/=;Õ닆j//n%%[͠FYwN h'4}Cۿ=oN#<;vow ޚlkMfD{s~IK#q$f)'>M8fVMTsz4qٻ{;~.4~gE%1J@ݪwύ:#T*;P';*j LCzL\fNztLʥm(b`#0=)4Q)E-/(T^ov1D;R,8NJ\n鸊W_ xϡ3TpwW:)irԸijN1LSI [ Wz(2+"; Dw*q%{(qeU 3DpPJ2+uJ-WRWG+= 88 DWօJT./+I; qAKì]A_~]:F\ylT\y͇p$ZQp%j?w4|:DWG+VF*E0ap%r0 jKǕqʹw5?U\g$۞;F2idZ "-zIq[-}i ^r[lYX-3l]8xLd޺x3idhA^1 8+"(K soK 2WB4n 7O8+"יQp%j[:De\O|={}z7 Dm\|1u슫W߿7W9vϠ|B;T9;ۓDw M<u-?xËA!ߝo~%hwuwl(q-P7ã |IW%ݦ|5~Jo6?m8o藛=bH+rqnQu*~yfke׏W}dž ->2U>ɽ1{OZs~F qh6 {~A>*f0Nn1 8/|F|>3O>1o>c/mFwY\ Pnim?wU^LgpS2Οؒ!\qoڳj9"t^}\_ F=NjVJ"/hUmKɛtQ*k=覠rLΚz>;CQg}>)fU9htNZ^g\H9WS5U}*s]hЁV Pcn5K wɤY;j͉!fZae~N"԰:V1J6ׂifs6E}Te\^| jW,\XSZ@t Y;\ ]ӔR=FIE$T01Ͱ76{ptpj{))jODDk3E2{Vw4Y׀1t:p= )(K"LEt(ιcHsڧOH9Ƭr69{H<]Q~u fcQOD+`E~ݾLɊMq-Xnj(ϠdyΙbk`P(}ܟ;%hUEr'oj* R $]1ڪȪ{s2J$P )%@$@mT/  elk6b`jTv|^YbX&UCb]ЇdTWuO݆F)Y3*]c@mluXBs+}ɻb"n,4Dž%̆=ƺ2?sͅ.Q";\@R7f=a=g%{`UhS P;wMAQ U6#DRO]bb0؎~]Ze R5X9I@  #I9VFE2R}S:n3w%ezVXΨl&!-@+lWNc+!7êVrEp78c)(|Da+O3gBPDdEIN]+Fe{].dԭ9 XB?C7a( ` *ALF( S!JAT:7ud&/0K* ACM TtgvJQ"Hq;ŢA(gI;"JPw/uZ BRFAugDQv`J ׭Td,f^nO ,$2%FjeEtP)6 Z#b $2IҰ[jE}skȠd\|" ʵ_nݫ}ŌTUEY1,eBnƏokWЭ3?8˦ z` o9ZI|txK0upn@.¥ϛБUV%H9ҕkdREbX|LE#'b]@EE ʃVs$rAV 2gC!`$b)`倬4/@R@e`YY'kx$ۑ< chcb幹:Yة׏oTC}^شXżs ¶VA;[0O??yȓ碹",'K 6 XB;KRnw 9(P"QPG݅Z 1sP#L϶ASQ>"a5K s[bH -hg[R+yQyH01^aj%ĴqtzFF,Zz&`dY#ĕ۰[,A Hd4,?=(FČGEDޙU69뮒BiY&RM9QbTZȽ LGjY!gѝMC^6O`a: `dP:=9Hz 6 "qp,kJH8AOJ斃3?kx u5Z;Y|lU(Y. tav ԀMS#I"f!zRKh`[c"*(߃jY߮dڀEN[M2U6իA4,e*nT&7 L3$*'x|1xV|%3߯|:p߃4DS.dpG2 6_0ATVގJ8k gߞ0Io/7~ծZH ώ+9=^[ϛ%/g (Wpp{WfG?ԴF朷ɊB)CM?/Ț˸MPtz3`ܜS.&0atRK&q|^ݸIQwQ!l_'M!ܳmT⒊޿R{ztvk@Q1TK)A ŗV-ciԓ6ޛa Oлu!?z;Lu{7[ߜ-W*? Gi?N.ū?(x<{VeV~\`/1ԇ$ϐ nzV W ϫ;F.*HYT|H×sL.g+ֺ­l0VЛ|o|䷋^F3Q款7P"DzJ"D+&IM#EP:ţ%}f6E'd! 
͜HAqFI^eWigd) 7L=^:[v[2K\&F 6D2̑sk%;t5EU&(kBpvJ{V9W0z*!UZa}=Jp{\qBi+O\:J+ܱ*thszvN l+k뚫I똫'Ii(?.sŞ`Cdg}B {_9 f&+nHu ~m6hR9&UpIIIV8!=iĨ-7`g蹮<Mahte;7\_>ŗ?#ryOA]bY7Õ?+֨mLDkc ,@rpװVtV;c6{ _ܭmB(A^:>P9kSߜР $+Kˍ ШUF(x&X˕\܍G܆ݩ̤ n 18S[3\ݢHLP\SFuY@Բ03l}_|S&98xWc7RF.3Ʉ]=Y>G kd+gBjJ4)xN1aP; %M!-Xmc M[R3p/֫-ۇI|mBmoc/hCGAh'7uF}|sXw6#, 1-${$Ch.A:zᬉpX{ o"b}.;$fZ]b뢸גdm7%[G +:*klczcrZ^BU]lӡؙ8q>w>#a澸Z!vSYp!5u-똈^YEփ9|2z)Wu,(QIOC|sdAjs lAu:8ܙ8GZ?XF2Yk';f$?]P9q}QV۫;m7xˮ]nnzsCWn)rK\+EB[,gl?:SLE<#uv"z.Bh1r|QcZ"G}(+IJfg\~AtY6YFOErSt;;;gqf7Q/}C?gךq4چ7Q"k}yMkI6{YvQt; _fڔ Ik{ U(~*^t-/-@nw7wۋNvg{$3wںۚ7.Mڣ;C 67}QgY}t"J`x(]K|V`ok`K5՟ߺ{l_ox:=_3v}fP}%rbse=%8R mR'Ui-=vڤJ!m&}/L9gg3!٫V1eQ)'@z4sQc[aTxVFMLhr+%YdP'S)x<6ovqeϓ&`o A2-6 )S,$]bd6Sڪs%ӭ/+({>q*@81Gc;¥ҁcݙ8-v Jy,V38jA6* M# β35|QP{pY`h1fpO^*|УF s{#+ ;̫ե $ehyS >m܅'gCR{fJrZx*2N) —Qࢷ =^oIN\\^>wHFٽ#m-0Xj~ST ?(d4ŸH0-D4x,>Jl= Zd1*{ yϬMEtNꘋ30?;-cg7 !YZnOb,x/e,Rub~M"EɩL geYq$îm=B{{|eqji{`N{ft] 99wJޑy0&_8H0oǏd2 RFIANKF ̉4y$rp^'fɫםşNAENaAżM~Ѧޝ3aֽ=_V?wW(Aj zȽԻ=oC#7v~M}uC={&<:~Ix PKLܩS)TR /R阌yN|[M V)KʑЁOJf0,DFHPjR&߈OQN׻JgQ(TL^-Sɺy%2RE$+eZkmI_E_a2`<3ع`0{ \QQ$$g _5IlӱbӀmlzJ8+sp=qV'==m.\p ߗew|u 6Q~za zzi8lBTpߑȪ3 *GGvUJJM8(φ3-0%傰rN^Fltr y&К )<&eM⦬*`ιE~5wȹym4tuAC?:et˺*C/)غAW1 bJkX IST5]Pd$JHc+ +x)v̖ȴ]ba1wNH0.0%cpN+J %ZW:CH9I3!D2jl7 %ϗJqq(%B@,@۠UPXƾFIȺ/㌝X:ٔʐmy0eWM}X96R6ߨvICA^f[w96lg I rd.hg Inuy/דAJz09q (Ne" #K$k<(sʙgK+0812ЗDLVCkI)CΘy=Yo(g\}~bV3d/!KUBa"B JE qzLΤa!YFG!CT IT) z=_r.NZQ8N:R:<) d0E~ZA ~'w 9' K'%)+FƈYс$977>(^g7c>M'!x:Y (:da ƒd9(3@<#LZioFh;#P@MXs DFkid`ɘ4P HlIT*27IژNG/E$HdХ0c: ǐg즉.UL Ӂ5t`d734s eN u*ntYv/*Ov`C5> %`*ᜬg >e FCvd_+lZZk5p]AfC,h&ƘyOo Z d |/^-3O7R/bA6sxS{$O>jך1}\QAT91F|e%ٗgLS}Dur}y>xf ¼DQf.X`B@k6Ic3h^{@_-xH9`QZ[Tē`% A"I M)^n S"{}{7rvR' `H5K 2N:j.A-ߦH{7Etj奦:g*q1qWD\9 ܠxQCC7u+PԷg77H K߂ȞRJptVY|'ƝF.s1HJRq"Ez Xcޡ'%v5~t?&D#slJeNh!'!SZ$e׬ͺde:*^h۽<7:7A16%l\o=v{JRDr+\g߸}rQ֧48O#[)}xOټpxCP63z QI'+ж/֗h%l&Z,gtOj͕*ھ.Y.S~BZ6)mU!k}/UxrK#)ZR>o{s>hoNg޺=njd^)u8.֕db|Wo@FElߛgQc Gum-bUx٘mA{ߒ6,`]Nq^so& Gm^ozR%XJƥ:P2#<6'c~)]н]Jb '\6M䱓$d{]4?M6}_ :6!g4@ع+%r+X[ݴ .=n]P-.FNͦyýN]ﷶ 8\-Զ׻nI o{Ǣ E}N-Z[ܚwp } 3 wi{neJ`i P7 .5DTٕ<`gdUN瘴\a& ŞEӋi%gxh!3'˂z~L gN9k3,qaI č/ :#WYJW80jw izą{8[R۳E]rʮ;PǏJOQg!~𮔑UR3[)r΅*(T"ϸF߃L{~뽨^i:Ɏ|{sx Ur9$de׎!:Q(DH>,"~y5SnTlkN:[6n f*Pe$їLZm1aǛ⛓+˹<5HxDuk|@ΖHjE27{Am>bZpO?Cc KlWற"HfTA`֣F$"dōl߿晧7:H\|NKT Jd$Bv/2 ' AW2 ZmddcT-J Ì7NȀY:bLpcxaTh4IWbzrNM? '} '-,ѿ+*ѓDz=I4&%saLUGv%hϖo\XMڪ$2X8;#T/q+G4Eצu4=o䑬,ǿo/S-V!B0}e]$Stvqcn*źSYSCkCr8:))l>7M2ؠad<E_[s8/yʔ,,mxT̗zݢpjߥyJjOV=<9ݍun>xuff닎G G4U-'dMr~vo~FwEJڬ~?r2vRJF7_VڭfcA&W}sӄu?Md6; >͠+խ;ةیg:Ƿ 8ۈ0ʺpyemwQAIqQ⛶j.S }xx|Xc}ܐ8ͪ" d#I`Go۳e>-OPJ^cED=~w]1{ 3$~>3]9X)$GF:ց!U%."Ajpxh8/a9Xm!X]z<\Hk& B[6 x.Z;,=W1ac„7&쮥r4j߅!q'(Vj}=\ǤLB);yOe+?w٥nEt-#KQU=bTCq d?{Wܶe Jޛ2V&v&ITZ$)q\sRB $\%ƽO7#A2 ^aq x qQ\6 5nN}l~>0$] n}>7s%)5jׅ{0Bo jS:1f|1-:ʨɧ&zG8-*[WYuedR!*>T,ųR-UE}*Y/&%D)u!FXbDTQ ajw[.+7b~sF[B7zf!% 3Bs4-ך8$jϸXcwq?7 W)% ˪*(y Lz{:kt|*%ǪZR2E=h#Eĕ`L(^*'cTI0}"y="|?X*=taeVb9JMnjW;Ikb7)I`ȒD{-%;s5qKy1vԫG/u. p q &#B׊VTs 8wbƆy9}CUYQuޫϼ l1Χ;pZo.PLHyҟUJ$ Xd>9.@O䪤dpyRd?lyWI` \%iwJR*3+!҄z@p`*ءUV}+ 3+5RWpX6fR.Ί2*ushW a[=0=~QL xф~.1/acX)H##~@0``:;L')i%Q$@* ,`JIU#ճ+-)ks?\Ξ]C$-{JR6A;:t"k>zrd<9\m&~jH\NfҲ'zŰ pE6+նKD{+Q/LI-d(E\^edJ_3)JIDB)3e"{h8]vY%ء,fWS;4wՁo^?{T 9WGQpG==R۠)VR,J(3FhaZ* sF$t7qwru2N=нƋAwÐg=G0iGUՇq\8H~iT: <_} o(A0vH:FK1-<)=+w,E;R{Sjs0hyzYNsQE1}[[uŝ.<*TNrb11uΪ; hZԾ95lT9v|eumc׆UO1yP? 
$BR;OR*ՑHu:`9 IO1- ) Xc/aN4GƒEmCuIsvJF(U6)w;k?60c!a(*ւpG9(0jD3Ik5r 8MhQjEw9lV#Vv,^kw)E .o-r 9'<Yb)NѶwoUkdcXIWJzYv2=.S !Q=MP i 翈` 8/@#&"B bqLX Pp#aZt e"LoYxn]bv%n&Ofo?^v xe$2B`*< pHR ,LedIk5'y&Zk2^J#A!!'8B NCUCWAA$M[>S`~SMDݽep-Ζ-#OrИ*2XjՔF9N捃C"|:1SX#:ixF ߑ!׆ٵEqQws"D@_ryZ/LϯN[Yg6Έ?z?N/\/a:M\fw ؕHǃ[/K-Y_> Z/-g\f/t'?{ooL^? &{3#UM}?}n8Ah]{@<1׻qguurٗ"aǼx7u!Ȋ4qp,ޟ) yB4 qx>v47nb|3K ;Ǔȍʿn dpIji8?&(FcvL(1F{g_ңBkxal*pB 3ϡE`Q|6SZ{[2Uɬ4bTJ`͐Ղ "m+^&YU]kX/9??m[a[ռRMygAoM+PEO};ON"fI;TO`}Ur+˴ABnb3bDbԽC>y;+xд[[u !,&+nuz/ 1״R5]ҁBuݝmn3~gT7գ$kQyMV&AFxtK.jMVލN T}N-yB + / SFhJ臸Da#lPX*L$`Xi ʰ;kb;_{;mƵZfTjooOmycWrC| K~D/2%J~蓍Rsv/潠ڕ Y2BF̕ӂ,Oti Ȟ r@dYgFө%)P"[,bYE1`y`m1j#'J9"Hghn2$tB:8/ LD+T4hY {斂4cJ|f`#0r1D5M[@9&pC pr<\cGdTac 30j8dG(KD~I]e(km#I__wȍݏjFxE.b.F?e)RKJA{gHD%S]Uzz-i?2&I oSr6$P2X\jNGiB _6yp)ǗI % G~!k}VmpGr݁%=f^U_Pq⪢MEl#ٮHjS,lYOe1?uF0]y.1Y'-}UW]u{'7,RGǓ"X9$_~!]^GVZ#й؅%\ҵKdY{F0ok:LWm%I>(D'jGvΆڊvJt߻E^,_ V=n7Y<}Q]nٕ_gG 5:8i/bgѴSt?¬VCiIcC]9޼7&@'㯤'Wwdl#V%6R1ӀADxP"Tơ FNNY.dv:8?:<8r:㕃Mx2>"q}B|޹$[Vs=ɮNjȏ!qNh!2T* H ֱ/l eX!X}Sh~LJolʌ[E6[mk%VNE.<_2!R&Ok,ֹ|P/ ,u DxD0mo[ ,AHm $-UZTh~n+]$Q1?n^zA{%G3@Fxt* f̸$ VD̕Uaː/peXIaL;Ghy魠#LK{R%G(}w&Xo]j^VP6?xrY$ #ev&%}4,&{^p4J'N؋sg>GM90q~?#TV|8۶F|nBȟڑJ}nv}&dq'wN>WV}ZZB`?dKmS+ \ݷ |Η8>eMUyn8P 9EN}fO]9^\[yvrjF+nU\c]~bʉrnʑmH!$r59Fm Z|!Lp zS}"GNaEgoSF(iKlX]:7\HfS(dQ"36,4R9֘SWO{UϾOe61x>SF0 d* %ѯj}14סE`4 aluO ?ƕNZz0X6-r5.붾ۛ5~_a3ί ǒ3gΒqх;!FpR7-k u\]sv"=z$%,%\BZ.|&)-<*o.}]:x>2*g-%4F8@uZVdRpkGlCö{Ys wbэ*}}PqjÔbvk΍If͈1}Vt# Is)p:XB^U`d9\[K&Cg'C/cƔZ:X}ȹYY$wtYOӼuړ]}6{a-G,;Ȕ>>7:C42vĮm9s}3,x)wr} O֔n3B썁 ώO=>9m\.yg_e']瑞rIQS*D4a*(]Iz!&霹,#e\+i-c$e_g]&=@+ѽu.޸5d}Aze[Fy[={$g^jo- <\O%c֚}oHT}KؒНeG7QʸKA8qSfhr{Vb4b̒ R=)`@l.Etsb8W2)ښ97kzX.wՅ.Խ.ܩ.\~tzV4[ߧaV4n8\csdĄGU9uQ *e/cP"hbY2-C1\LqR)5ݟl^[m3v m`X5ekEۏGq];Memz#MU x[FlOE+F:ec0<<$,YzR)>JH8LȐYA&C$2h9Mt}@rUdd\ȹYF<(}ш+kD5b7ZnK7VȈe QL@$!L2¸B۳8Jp-I6ke4 r3>tȄ!(Hg8UY#V#g(m'8W'_g5.^b7B8*AF !sq7a wUa>V#޿ %Q\jV|xU3vǶjG "ؾ((sKkZymS11~k(h{ ?{ȍ俊0%嘏{7d0Fr${<[/zXjY&HG%!Zre`Zak e"ˠ?INw$#HfCAA`FeԢ=C(HyqF&`LXg d dO҂tO[:icD˂* *l5g x0h]4N{晳X(-&gseZ-eTOXWW*W7).x0 KM#V&yo=Bq=*ϥ8zzM4l"Ow$aQo!/~-pa,oo?nky=7jb ~v|/x;oAꝀj{7<6\ |#4=`t|\?>2@>52 m_]^__D]FOPeA/FwbS`%s[څOw| P/v½}m-Eަd7)D*[LΞEy홮 ͔!<g.f©.0δ<$8$WPy*[ Z͏}kUy[ ;RW:Ar- ]]*ՋQWbåo)SFppZ]=\~k1OVèQ Gv,D]z,U!XQW9:ruUURuU+mAϖ?(Ϋ0&Mڍƅ-aE.]}1?,ws/2G^ |iEr%b8:4MWl4茿[1w" +ӕ}~I7wo&1!z`C40#(,*LrLOx?ާZ0oo%Ü{{MJy$D`V2n769{ʹ3 ^d[z$@XԖbHG< jfbăDgS#لykb.vHq4U"I&49Xyl-*zkH-$4H8btJ1m ej䍭aژ)Q*WmĮJ.AZ-wANYh,ƛC,^eYݢvITD6(`%7Tؘ k#r22) W\%2wY\#sx? 
;.)I4_i*UB.bW X.)[KhQuӧRgqf=?~lnU67;OL|gw12;hq-g*?~LTzE-J-Lʆ w) z7خ;1U^o8ɱָo-}6 û+R?vOW/FP\ uMkYmM9C<yZ3' ŧVT {[RA P\za夕ѷʔGm``2eD5q6Zbumh`F'q>آflO+[~5DHٱdbvU̅9|(ZSabFK{`s a# jd2XT#ɪ<2:#8j4E%r˧AZ/S5JPHswd2xqbY۸۳Gd=ӌ)-܋L睘^^ yr7lu `V ȶ߆#Ҟ@2qR81AgLxr#GV#+ձ9YylL{#Flb;6AΫ.\ 欄VYHKD %LAfHY0,.u4сVqYHDP8mّm y z4Z$y}uYM*H,/[עz;Saxw?K0箿) 1^AJf˵rv@FI7]6i0ٽϛnu G˜@4w@I+BtgOW̎<}ViW]1Gq22Lh zYVAT!:a L ҇d5|YNG_",f)ylbnbw 5)Win(Z$EYF|R`T(i49x٪d;eLYKIVg>1M&'Ǵ\% UԵ,^^^h82 M|띰l \!!KCZC}||V/_Z@1D͂*JTtqOQufl_Qr"q֖Nilc"W6.QaU6rFuZ]^-B.>H2 _Zq:5N۬)tWf!$ؒ4ܺyAϳ;oR*l\P&OWoƼ #,,/@57Y~n:l~ lN擵4է.?H)qL zE p!5r՟=rT 1|̭N.RIV+>y9\^νLzF"%=\Wmxis>9>m{Gk9D9D˕1[ S+yMAS gb|&9qv>r?T$Tl YV99 T&7z"@i#N&Lʅ6@%8H02ixhȼFQ's̘hYR`AFli_t5o/IϐXk߷-XBfyN 0mmn`;dz d Ӻ$a'?F-q̻b$J *I[\B׮<]]#ӥV;@}vY1**-iG_/1Lg*+$t)zDHɸakmXeO]&`,:{M#B?e8AVIEZ6%KpfzYVʗ%h%~" `1dhD]絚87BT*;@m]q'_dϐ,Hj}Ѽ2)Z0!﹘KF]F˘ڞaAi!VJcڷZC!3o@Dk(8A~@}N"H $ҐLzmK ]OK}`ddBu65~C!u=?N4T, FF$UhIKBDP* rK=hB2 |*`$=B DԼv@-qn6t2ĉRuϢ'wu;7Φ|N[ٲR^g^OX2屹 RHi oBf'p'|LN*Ƀc&L"AVqc&OZgM]ן_VRڕAkLB$KD[,'iWKbjOVg@'9X s_9m̈́)CKc~m6μՒ^QLm멽}K*VvGBK۸nDePiT.$2^bVX dJGeKA;p"JD0igMo,xXpogh9 BXKPΣKy ԎFJŹ"[/1s8T3Dz -RDF_I,嘌LxV(fLɠ> 欨lg:Y]J?-cFr"-౬rICLJzGhQ!;mIqs"%(zigEW@Na/_[" c9pTTJ '@*N63Ldǽw=ժFbItnFksɎV 01K!Y4*xe 7p_BUcCY6\6!w73བU4yNT4n*F>TbT3mI\ID{k7#B:kELR5UQ}&2"PBs6 cL8V\\<[+qXǬd$to㜌T6s5 uT)Nh{0v똜HH8u@BlM樫tqXf( o^)$EfBآ幕̉K-,٤Wj "($E!)[2ReQCMN8YFeM&.8sxN*GD<kRBm݅N ý=4VeɈcJX=NPL\WWSGTݙNFa`iU=bϔ6Hn5bfyg#\g" ^ز'`!'\yQ rE'v), iIz!T1^D}4r,P&Ξ> d)[ dR a",hZhȵI)22H lRl746kPM鐘S}i$$w%/92hsd)"YF:,)omWbЏFdcB{4%gC%C>)5D0!G£4^%Y :C=7Mmc[eodP?e%C\CtQ2+筣_VƪFh#V:[gP2PJNq⼬O"gqn}, `f)(Ŵۺ\r/R`A@yY\rM9w'"Fs8{N'μ t6C7(eQ}~KUKֻ>n,91v{C2w>b'AEtIk%,]x,JQEN;Ix-Y&`G{AˇW0 T*CnD0%1D+qӔ+5yR2Jm FfC$H"!rVVK" 8q;ƃ)Bj c!!9 lIeg|6'&P;9)lvOS}D]]`M϶]?rh>;tpZŬRg#@tt0hVDb <ъ YG̵a{!d9[aɬ9JJf^+0W<2bġ$RT۳!+ ^ xua.-9麈1DHXJ $8K1foS?{T?WEUe(uUd+qdEW<\,9}|)43C1YC o.h}x9דtG6].:iUEHLIGR_/`MӝlOa\OmְZi=r~%@Kkċ^(coqտW?VK?||;q/nJVW7%C;|w8?h2;yn8l'\*<"fpϦT{WI; 8MaY/wn8sBtqEPu [?ۛO'H71gʠ I%٧b&rb"Li`10OMΆK;`b Mys:!qӒ2Y2(Oa<?o+YT9Es|z:͋Z}+VzZ´Jɐ?Ѡt֩Wz9Dm- [v7_s ,mm-)koF"q^RnV\`f򘘲;]<H,Ҷ(aW?[Zz)(+WA~L>rn87+up K]/>8x[?[fndhQL}Y..β?[\-b~2@ !=8a |Z&}䃯gVY[ꨛ'c~ƻ:cK5e n܇Z vdݩ;!jv9Y=IgyPvs-YUp lHڌH5I3&Y:. 
T9DeDC.2U&Ed_Q Ox:PЏ)9CFGOBAsCZPst2^OڃlPM7nY>z3-Fhy3yc^~Q*'_*{Tӫ0epyC YM7Y/<~x, ]ymxQ iwF~{M^zlM\ ;I_i w.tgN2] 44vDwv ٗ9O,G2lL~*6t&BA\: ]z9x!?ֳ ֫5h^-7u~\.[|ztz2/~Qʹ]qOg|toZLWu_;{~Ԑ66&sQYс{_ \Q="qP?* u =v-*/eR$'J!$?{WƑ ?0vTwW ȇf5~$8AUIYEeχ89S3SS]U.K(DA]0 TpRr[p0GmbY&<ɬ%Hmi@BZ钶?um<ɼ*x*H~OO!jVݫ<԰;RFcvxn۫_w-Q*][5Ӳ̷ַ5~=Kqm>CMs|_V>thxwx(UJEuWrO%cðCxq³T/MR;L5fG1+Ғpr% 5+6?ijK΢P7_FD 4DR1RW[IHZ7>8an McղJ;_Ż-~¹+1(#QLV>O؈'ZT>UcR" A0Sw:\Olu>O=,.pT6tG ڻќd{N$u:NAI6"II%W+Q(*TY6De&A2*:.g) s uHч,UڕduANr3qzOk^A^~[i!":H",$gMN'@iqIS¨K!fl-Q"?Y?U1C`#qJNp0)8'066fo^4a<]ߟʶfzԷN/6(Yvb蝃0n=3[~RO?SSTr4P’qJP%%LD6RMm( MѢ+)DhzrĈlnm]NVZ4 Tkdl&؎4fq(Xh,RQ@eͅ%֑V܏a6M\SAfq(jƨF&k+e(L1`gb` 9LO*+bOO<%5bN{ %uv5l1Z&Qi1IDv16~0^Q{$ ڏ =7fݠK>M<i>]mA CLі!8iU>V#> -Sm[tǭq̧\jt4FP^Đ/LIE*ʃgI("蓫flK?\8reunַQ:/gBadDZ LE5j2>8ڗ;Ѧ@cjw#Wנ'ۓEh\PG^vdґGBF 'yHպk%CLZAYJ^"&Q",O]Azr!Zƒn{(eR&%,:d?@SX&2AZoh%eHTC^f-tkC:3un{A 3X'~.aݛv-vv׉q`GĎ'Ǝؿ#f"Hoe]R>bt 6 y466ݯ-=%IKMNi9q( >f~[ڶNʝJO# shWdB:}tUӅԁdO283ȊDuJ\詨Pd!pb"Gc Vji{h#1~R:woc2[R͸ϙ<(ΡO%`˯/|{ͤF8 tP.FD Y 0Xj7 Q`,ɵ_\.9筱&;o=͹AYedQn^ktFA6&$f)I&%Ib2lZY].H9EcF#*R8%p'-lj<ҷKf:.b}j%;njvLv=/x,: 1^Z=ҺA*H>W6yt>D2P  @>Bπ<_gld (# ؇PK -Iυ5^Ey*]qRAGF'o(e`ZQٜH!h BڡbȵҲofb7ݗ  ''֞8/ lėZx5O,_8KD#bLAefPP% 牁-913f㇔sinYŜI=y:ﻚN~jq+^Y/TCȡϬpBm0يk H|1*ϖG5gQ?6iI6J#WSm[]{xٕpۮmk[bwwk^"~틚EՋѷl_F?lCŷf>O4wgo*|g/~xx6[|z:Qͩ/)3i2 Ų$Kgg0[[g71MkIc|='Ye/f8+#_ta1V8OVO?ގ7♒QHh}i"?~<ߖ)}mk/_}7_ -nc# WgU\-[.Ǩ,:kwziwsss;(|1~{>,ҺߖuzSZd<&g3ֹaR\Ŀq{`O֝K/ל*DՎa5՜̜ٟxV[)5:%`"RYf+}Ύ2FSb2NG 24Y\_X}yYDBi 9Ckr@E?a\P(:!t(=VbkzUcyRͼnKܬ:~O:Oۛ&ϗwNW'g,3$^cϲhHN.:ILu 7Xn3==,xZw7X!jM6> *b(edK*M̋SZY9ѻ, S@ mcT ,CkC“nlgz 0E)12'[XS,eM3 cHNcbT(B6Je*Z/E@N?3IVsdL2C䅗2h39U~ZͨV ~']i*+R)&KXc&@! k4IKGo%翈MB{tО=3ིL-<S!PO&Gƻ$AXbN2[kk"²].rWsIE/Oȡgf> IIiE) dRFFmͶ+_9x^A"`㊌uq6*LC:׸zkOC~UH@QzwJO$j&!aH6yd͓Mn %,ژ!ҁs'HdʖA34Kol=%-%e!de\V^{Y $C0]׊ٹ7dS6:#0FX" lGQ(y" 6gzH; Lc$~A &1>ѿfS[Li]R &x,yxKY&%"=q$I_nA>"_HnϞՌn7I糬|BЍ ~UOi1UYΪd7e9؀dz&qltPn(B3)c$pPkF@V;E!2h]P`HuY3/md(^94B`H-x0$8D(˥:& $b&>e8Uc GbS3= O8'aAmzb iEJ&nA1]UQnr24ҙ(䷈)))$( \&5 \£b_X0ϫhlhl}$砵_k`C2i+V:s3QASs`;K^/!ޯtU*-\d W}WrG;xp)z?`xy)pNAƒJ"Oqj]hHd2F)\+:9p&ߣTֲڒư ?F.ڊ7⻻(i:o[޶BQ/dY<5x8d^ŇQQyq?`QE:G5=w~*|JX$d{91:u_ 67xO@!_{|hŭt=*-SY68i#V ȒiKĕ:V;b#w2Dnt!Y5f#h#Tp0$(*j( α\bVy,Oxʅ,xn,*UI/T&H@xF*`j;A}@\U3Zҁp ]AkmUu_,4FuWn+ŧE:p1)yylܸܲE.T|*B- h) JY k"2%RLO;ZJZ;'xK^vf,ٖr5T/sQx v47Ҋ@#KrkN2L:jNSujP uɃLU5h5 IhQ# $(@|ccm>DZk'u}vb]w-I(Zc Qܕ9NḏJj㈲+<ޔHow/B` )Q&zҐ\l "@rQh 8ȓ$v.!mpU)r<K#0H+5Am=1DIhN/Lɒ*L rlyn/$hY9-pM1|`x^Y8EeTUpF&R?:~ĥu!va.hW'櫣pm]E8_7 ˊ/ӣ*ONuqf3$dڲ}n/xW-Vgo29IzUy?[16Ă181\Q98uGan2Yğ{ vtj_em@S 60/C ŧ({y6Q>~({]k=J,6`QPfw߯4̍>"'?;ݙ*R"8Ԍ6`xi7] fr;(ª #J͔7U'?o{n͸i?>G;ιxFM}=Z— *B9.`v{?"wmY$g!6ez,/bnE. 2n첚9 dҬQEי_7Լ>)j5ӃRmUhʱsq=s@N.ຟ|Piw +(,P8E f|3wS~wx\f!>$K@R"whr4T9EUNU9(GE}I%T6»dˍ^S.F & T5$ᖘH-:X(x+HV$JuWO\=Tjp;Ia:IR9꙼P,dkѡg=?:?WīxY k; eq櫆ypk'؜*ơqo,e?P5;k!gO3% Sѽ@d&7Ap9QE{–P%oh:VjϽ_|d"]fϬX 9+5bpџpäPśp8+v=?Ԭc?.֨j'ZkJ[Ihc:[̰çEljT `>σE"_+*s2##\|]|\o>Fg⪮Q)R;:!ğ7d=}rTޞ@De&d&RFs lL )x NȨ1fY,(ChbF/Kfܤs.HFjlGz\Vӌ}PWBaAZŒפ4ܱ,, 7? 
_{/ ;p>&&U3Q>*+=b*e/cP"`4u2=($()5O>/Pm&ߎY0)RG-q#XPvڦ2j;JR|2!Oh<2R{,Z1I$IĪx6.q !L,2hMđ0fY :W;+W5qa/ `<D"C7znC%f( 2p+D&K wMI(CL2¸P7hP+[7N $Ǭ рg|BIs# |GF:!-_8;dU{ٲH1uVӒ}q+"v 39X Q2s\)2؄ O,e1ǶFQfu:x,xXM;Cz2()~|G4Z7&sNǃ=*{BRoaC*?˚q+UlH-]Da ]ZSZ(ϜB縵yJǀChh|F E;4DzA܁nĽsZ-mR,}2 SQ3e &&7AgwrYiTDQoa xKJڄrKf@b}f65۵b[ǎ=;=;w6GL*Ihoybcz4hΧa (BQdDņFh p#IO#\(qǛз< Ƀ.ni|W $Kf"~]+5:q((Q*iŕ(c۾#!/IrkajPkTd9NS~Jx8LG81I #+<L)3a>`Y J1X7`,g1M7#iÁ_h0#}/m0y;xN@W6},!_~={wHKYYHZ\,#KJeH3>Ie fltpD(7C BTryg*#COOĎ|yxf u)C0}AD3"2k#쐧 °^A7#ǣ`HsO"Б9 m4L{/%Ĝ3Jcd-,T2-s⦳jVib35zZ]G P6>~԰<_ŀn^N4qxΟ8[Ӧ \kC-%p#P!3 FC8lrY#o>\!i;sQ}ׯ/*oFyxSB]ʏVT,nQYrZ ?0VҒAo>%+~,3emL>*v~O^cmY_LERRvp^$&ߥylo@%}/_׷kuo-3+Kp@A?3 4rc0rZ@HV ^ϚvXWӍ^ɶ./'BY3&w" Z D-etTT1yNMd'wժF:?x{[YJ-QInbZ, ^!V(9wӜ7DG8j:ҞK{N3TI<QlZT># IT 3pbK[_<Ǘ&/#t/2=&O?I`)/WM\|_}ݤU.MJIc$h%e_zKϧ@(Y;R_2omƳ<_tN Kg"ɵGnE%L:C|,YnsLi(ذJ{y_/{/μ^ F/jy7iOl}˯za߽%{%x5IFKQ'{io_!/BdR(rf,aU]]h{tEЭɀ ADC$9CThE%I(L^Ȕ2do-֍ ,ctY`Fu@GެAiГ"g9d-#gtYU.`;V/2 5Ǚ#N'2C4w9̔d2N 4<{%|@rW} f*]إą`Q[;묗R% &GLn&d ?{۸_!)3~0v \.2 -X#y${ UfHğ@]ȕ} "~[;Ld>2&dFj+ ~"$JZ_nw˷J-HF./~iww3*.+wWN*3]I1ͮ۷^lx1*5AR7 6_;!MZ%U m RoƓ;}$QJ=jo/eΪ/ىӥaE۔&dñ}jJ|1L.qAڂ ߮{3= 8X2N`Y7h2 rEc,vtN9:zǣd\ NymFp |< z~w65.Q9#7KOniZl7*9rZ8zo?fg:/|L;zEird U9͘  BJQK ;B`#"Xr#g*6bX{ةuV.-DlԩA` Ez멂`2pJB*eAF&@}ҋA7Q+^73yYނМanz~\Uk҅@%(dFRk">4hi-n6G{KH[xuoMv~\yG=ԅ5*Cz Y@SJx- (lsSy8FCH&#y E2‡0.] o`G! bmd_5nompb<H4 qv,Ys[I^ݺѴ[גV 쥍6KXm/i= ƝKZ%gjuƍ3Ἃk[A"4RȤB+ٓ>;k&.\ D)u!FXbDTQ4a[+7b$Quc({T_xedé":ͱ֒d6Z\&`to}&Ifޅ7zryl-D H7r-hL+5]^/dкyuY^=Lx@MbRvle -˚Z7vy뒏r==L'5͟%cބ'㸙na{WuN=55. i·?odK+WyJ%]͢JzJ:[!)˚[?nΉ+i-?N`9R!J --% }Þ{Ɋ(|r=ZE.1)ч\GePK#F(ȥ ,ݪ/7%2lvrȰaη{f.jmF.pe=`AD1N 9%+1\yXDGe2E8Rd!dE%=rKNJVW{^y&FdV4hI(971[{f"H/U]S_Ώf:T ';L _&@]OnRO-Gv:b%(.2d"%_)lp : 8wY9Tw>7 ?Hv8LVhtUjo[`&l;A 5[kPRoWCAp<Qe~ \]OdzśI"oiU"%q)UX|\OP"|yZrߎo w0&oݯ fecRJX[Ke:( 2{r+3U|.XU 72")%+ G ɔEEk;Tpv[/txESc alfhTͭjQ=|n GXMe)iQ L9uBIⰚ[&\JJ坮`1VYE 62( F刊`)AZwXT! WrW`zN-R]_Aud] acfzI8)VWam$)}b)]5Jw@|-%N "y@pXK, x g!o0n( ``W/ %F7@T}@szQy2,Lvoqu`&gHWEݺW.[Lg&3 }f>3Aࠌ"Z) }}5e+JV13co':M ؿLuঘC[9@]5|,Y 3sI9=1Xy5ŝƥxqi0l2 6խRK-c13!1zaN7kc40\"_J |W.pU+osE aXy+J,/Ad6D B$Nl'݂6X1v\bҥVpb#V=kcyMiY X dRKJ1FꪋRW+98Mܳ v>ХUXcy[RvGz2 Fʑ,ѱTJ2n0kSő =jqDvBW8Tc+Œy0@͂+ļNq $l!";#RF%5im[k$B;!lJ hD{L,hc#YADP"c6=/ؗ|i8 PEQXW&o-AX)VF@ 扔{ҬuXi T*阦"lĜhܓ:M^Ki\LI2,/Nh Z,h Yy{GW|+>VM#~dva-ؕODbTz;w| dL^fV4J'/(ՎQ&+wWN*]I1ͮەKT\iVΆgR$՝}`SOj@ͼZjxrs|;JG-W kK5"mE[FpçctG_&Kkc`Lbgkm}8: V̫w,cF1XK;`:y uu=Kb2. '<ƶ~b# 8|rߝM zT0H'1;gzȑ_H vqawlhHI o%E۔%9GVnǯȪX־<*k _idW;ޛ^ QӹLQ˸Um^U/hwEɉ=zrܓ# BHk%rEު5` )|flVyr/$Ai1.@XCѷO-!7BO&hKǘ lp*9>J$spfF3c*;O4?sLՐjcjy-Dܙߎ>iq'N≒:#!7ݹpt?ؤ .:PpVJH]2Աk[XzbE eڻ>.,罟Hn=/]f&X~E=-ťUK;'9 |Dlb,}2 SF0 d* %ѯjֹ14븰/i%3.xze=.ԻW&+bj Yw!Ū݉~ݫFSвaܝK.rV5,|,`0~C߮ Y-uV&[UO: *5]rWRU*5]JMWRUt=KͥCӹ~~\s!Lh,@ڦTT:`9sԝOl_> : D&z$;O yiI{s0;&hUiRr)i2-qb\$.2#d.s.)*2֝=96Ix>1Iq_7SZq4]59LmBBS:5m4,{ӣmG8s'7vmZ 6+M .~u\8" ٛ9ni-}i ҙ ,FnzZ9y{RX;rP-n^Թ]~S/e @0T R @#U\P7L،^)sAo]cg(S.a\=0~`n4-{K~7o,@li&A&$,6(hp*YNu~$|dnDǶm/UƦ"^ NLelTQQ*57{T -T[qeU3xmPy@"~8ʩș&q f% I]y?N^-rIQS*D4q*(IIz!&霹,#e\+i-cdzꈻqq-Wqͭ_琮eƭf9:~,LoVY3|GD~NNߴ/ JimRǑx JPC㑩&+,ru<ӏcT?DQ8|vYs֖bm傸^✼25UD+I9R*BJ傴.Ek3p:*C'6L;պ}ЯDADnǘ.jx sy|t@f".YJGs/ch?T``y1 ܡ ؎u~ړTxX֩X \r$nxm(V.]O>e02!Ӟ=. 
t_/N?>+#PY,2I,ģ6&A&.yLJe.0Ȝ-  sM@(TFI/br"!D$Oͫa*jl7tQ RwT"18ybbjJ'o>^Iw+GS δUFpoBf"eǬ!gMMG3gpUEF&ǘg%B#, IeEt)b5qeX;ۑWFSPWBa^,05y);*)ONâ4~Bՠ?r1p>&&tȄ!(Hg8CQug^'\.KZgQT\qv"P%:(Az9d0dO,emcВQiupXx"*V=a6׵)~|?wOFΞ&B۹0ra N1uXv2Kw%?JĥqJuSi/BIRJ8rcbTkPeH.a+0s1Z1X!ekȝ$wEtsϝ5Ǣg@\hzܝ% 7szYNlkzAߕrKGfKx>Vaǭ;g@|?3m܀K40{R&V7Q6oa~z8GKQ"ҔH%J2e9X-Q$Xb &`"3:9񺁝߯]z"gkoi7S( kJmL·Ă;S:cL2S>ue _5*Bct#cDzljIK3:ުSgě,8 8O2椞% ]eEl"LSfge !iRL{- wѡCtLBKZsֻ;t\ [)YOD+0>U[f:3>Ď|y^xfY0}A(Z3"%9F!O)t݅Awivt89N2sO"#s b4L{/%Ĝd-X*\{鬾,wFXoS56+A ֩Gڞ]o X#:Y/b7A(|LWhZee@ so$OXJb=]褅VVJ:rZT{,}mfk_wwջ6qxixs۳¬}]"\Ͽ}|__F.aH?GWeyW7n3.F7x90Vt3ݐw /z?އĄmjgѹ@O)!k{&AՙӿGz{h&aIlY4zӏqȼ]x2hP4;y`1.4?0(gg38Rw+bJg;$7l?rwI8+Kdϩ"wsGU轿ǀFXl HӠ"OYprvD9˟֜Fn#y`Ge`pM^es\&FA+kBC .5jAFē>% Idkt2_YA닑"7hWiLxƽYU͜D?m%gN9k"3J[Dž#7@%g3Y`ߗMq,$ fo=_8zVe@VPOWO?zZoPvH}4".b4B)JxUSsG*s:baGx1Ҳ}QPm&w-kwrB+^rk]#6TT l_{$^kϣp."2h찥nhdK/SkL]L>OV$>*单\a]]J Q-^rZ-tzJSޏޙ[?jM8x(^}K\lnZlˇODD]rfQ2[wŴ8>YeTBZF"Le;X܁zϨ>ne0VAJSTx{18Fjo0՞Pþ.pYD U] p@XSP^{-Y IFm[6\F\`C/3];WwIdDyQVW Ͷ) @ya=xRpc㌕Xz`k;$͂^z"%ESRUADCp&u[.z^::J-S%R`ZK9v`5D8 JK[GЂP+"guL9^>Ί9e5)l?u09O%蓜,$ v<tk}m "2z)L|Ṓ@$>UZ18h OSB(VSr(λJJLL{4U,ct 2&S`PJJ=BX뀽GQFRթ(+J9SK ċ,?(桮K-ifBb^[WnfWHH_0U)2'Ǟu^]~K1ṛx4$-BޢԵBH)5缉҆4w:^DUr؈U tZ;D6or$tҁVIj'DerVw"t2|MYbU1Q7G[/~M?&}b޷.Wk3ե^wѳJþj 4{?~ u+MAS/yvvT;"ZhK_( 5+m`AuIU-}Pʖ[qs (="Z˅mxߟw_YdLj ;RIKU^'r5n+CMIRAWgu\src[j;ul&69z˯\8Y*GCh偬ܲfC&t!Bwzse? ݅Nan:{}.w}xpĻ\cV(~ܽ~nNڸϼC6[|fT#T .._zJϥ\C h]/<*F TOXc fme۷߮*0[sPK!Iٖև-Au3jjx&vc. 1 oqHxj .kBd\/Ό8ٞơ&յ8J"a==|b3ȇ|LF} > w|v>wE^Јs?QM@l&\L Ҏ}rl0 tFBJlpEru6"v`Hefg\MWi&#\`zK Hn>S/Q|"̸ v ;wEru6"^|\JgOWorhIڼgeˤV O(*>򄵒6o8D鎕@/wo&OgLiRGT1@ SΎ|Jא\WV27v\DWqYyW|rv$\pj}T9grpe:69 +m]ݡ\ U?Owez̸ڵaJv:2\\sUb"Z͸ $Ȍpr HT*>jR*f2 ,\\r܍Wθ4ksP6\\irUHq5E\.ɻB d+X6 u|BڈWĕ W$ؙlpebF+R+G]J3&+#LkUf/ iO9\{7߸ (nOϚe 7u}^JˍC@4Ͽ0Bn7e>P(GdgG};]OtUC9ʲm7M\c^_N//oGdoֵZֻn '4懿~W>ThJ+S:VRHCGL;(G Ltkl;Ͷ1Ncxb-g[lmʂ=L]r~2Yfdwhv*Nr'( ɵdBQ:~*S L-W W$d Hwk$H3/Wcc><^3k:,z@E.qkslUz f++\."WҘWVfKz Re+{"Z5v\ٻ$ԜAF"e+kunBWĕ W$lpȅ\pEj;H%WĕV\[PQj HwE*g\MWF)rJ`mI\{i3cǕQi7jZS|rW$ױ\pj=H3+#uV2W08Qs̤U޺};v)%XK(-+LZ{T٫$AKuFr/e3 l^* 'p\M6V~GCVaRt=pf\\rvk HZ9v\Jf\MWr _|O*>v\J-f\MWhlFB{ <`6v\Jf\MWJlW$׊\pjtc4nqEU2 6xW$ת\pdq*)Xn; -M 5v\J5ڧ+kk;XQ2]0O;}zlHI'&6W 1*91=ILF#߾}9ƷO7Fs1r܀k-DX*5mjtEhњbFjd۹GVmŢsˎqswfMM>IչؤClTiZY{:!68Ag+,2zLrm6/nH}FT< :6=3g;|}^{]S{"T6hW6=4W(sl4HV:5v\J匫 Js+L>u<\Z`|"͸ $s,#\`U6"+RkaBWĕfBpEf+y6BP%0j2LLd'I {qE*q5A\9)]na OeHj뒮>)g'gGr]BZl&j~0ELRpuF9;+y#JjAK?jٽ\٩;f`~8=þVfGOz\ޕ`qxo?>.~!޿÷WZ7%R^Fw7vmIi Vns&:?[v:D3qP.u*ߕeB8ӟ26s?5,MS|b>c̮`lO:f~F ~s#]|"q'Z{";|F}"[ݏ薙Y<t7f5o{4^c ڛMon?u5tyU/rr'*JEFngU]}0˖i:}gfuN)4j̞Ѕ1*UeV[rT{U]* & iI}Wjeߺ1:RޛqSKI2L[-hsbtE1ҁݒh}k5D*@Q2 F$Fͨw2R9EgfѢk*@.o]AQR֖q[EwhEa AQ! 
A u(6vT7YR!]AK{qA1#3X2d]șcq|}as9 Ƭ*U˝ֽS $mM ;rҊJ u})%Dk(5)hK si_#'֘&a@zQimDeivIVCJنЗ3lm6m'r5ʳ`1ȑY4/S O 5S&8_c!T!%>{(R.Y hϒB4BFI׆R3uiLZ%|*/V́ ]d0`g,a{PQAQxi0mLWçAĹU X.:@2hAgC3cMm̭Lt%ρPsP\j܈\l(vu\s \/ *eRhIi٘uȆոPPB ׎]SPP|hI']X_lG?5m54FRĊ ̆veBClܸSS\, |\R}S WPNj*$_tB2K&@@7-V(!Ȯh,a@L!ՠPwVrE 2nP(SP|k(!- @HhPED&TD;-Z!3֜A΂ŜE' sG YoS/exQA e3ˁ Ukq*lL&vr/ōbUڛk((Sѝ*E4GRFyўeAQ 2#}Wm ()g^\F*^Y EUDI)bҸc`3/ucHH/gNBj2+-%KMȲFo8uWH:мGwU+czh2& ԙy r3/hO8V:G$'7**WeH;L'"Bm~0`SY45t}9p?Ԃ'UЍ3[˾tڦL[oT^Fe:UL!Jrd@h2BCTU_TBN5~d}tU;k@6TF] #Hb"a|o65 y'KS ʘ`Yg̾ ]{ #FK|uI>h@fy:E,ѡB-1=n;H Hј j3`ZӢYf}JhB;ڱ", kR([|Ps`y&$˽eg:|lQ1% (YCnf%lmՎ6<cs^y>.˩Jiv{j'u2Iv`0GwH7ۭ6@'eJac`s˿_Nd4QGmCǵ"^DHY=yh4vM .bѰMʌؓXAIE!9)lz@'3U8(YtR"˕,)TP=`() e 3i PAzπz]` {XDqmburӐpr%,RDyŰ*aSW(eQApnlQGb$abRuI` tB)Ѧi.1Q`2j hNm;c fnV<HkѬU3| ۸f<%4T FٌP-ŸC69yYgy~5k!|ƅ7[ApWlЫ8Ay E'Xr̦i/zBLz\Rčt4[JFDZ5SQzB%a9#t!oh ` Ec6ՔWDL0r( ;fAj\ BE!&SJ,tR@Ւ0[ Y\ yFAy58FQJL0H5[6?^Vܼ[{@9bJj*b=Q;W=<1>pSQ`׿|sr 2v?^$ajSʻhqvIl6?MzuE7emWW1npo;nwxC8翎mwHmǙvn.|Ūxqw1Mhv?}ۭ~NKq֫jS[ {RGYI_h`'x*B>(=#"C@)_"}@C~$> H|@$> H|@$> H|@$> H|@$> H|@$> H|@S|@p6>ҹZe=PF#>W Y|@$> H|@$> H|@$> H|@$> H|@$> H|@$>t? f\gH{Ԯz> C$> H|@$> H|@$> H|@$> H|@$> H|@$> TPqس}@~L$> H|@$> H|@$> H|@$> H|@$> H|@$> H|@z)>7CݻV|5V7׷[?~.al]`?lKǶƜm h9<{P `[{﵋gDWlL<p@s+drBW/bT|;.c܍jjPW<|wU_xco~mQAͺ>>Fxj:lfcSFI +e74?KE-xɎƘ;r$št2n_!8k<:Jt,F; 8ᄆL85>qQqBǒ[xR:!t|ҷ(SMW:7OjA0O-"ɎHcKOuN[b=+Osn\j'NM1> Yhf J܄< rRLF)zS A'pT=i62OpJ.MOcT6֫hu ?>ÖoU׿J퓎u?n?41bX/V`_=t!PdIJ k.Y¿÷php1؉hzEtЫSEnvRVBX{>K mt}i(ֲb\Y`gCWwBW_ JCOTN} 7_muepctEhJי sL_@W$tءH :΅6銢^ ]hgDWgCWs6NWedHW&km#GcGg۽/3^c!d;A|,],;e*.Q*u<r;ʇuVAVQOxb(#lI9!UٝG}OGOS ȓ Bz6Ε::a^N솃?SlpqzTP2_?-uFi U6a2Ɨb;/K= UB$6K)-Yrei9&YeǭZ =ńMS8Twp*NPiB.%. =8mСR%PNh,ѲBQs^#V6ꇓeQXι [҂qGObyZ;e*ۆ˩Uֿ/2ZA6xsNOlޚ)Dp ܠ0"{EDyDq )HYI,1"Lu18V Wei_YTYHTB``hbމ@ ɜ7Kx`Zij,QY&GkYc6 +\]_>p+ιs 眯Z͖,H'!KRKW>iT ]V@v!CTF:_# fY*` +hxz0rtoW@җRJcZNvÚsn0-}l<7WT ώZ[':P߾}j闏C3DWO ]CYgn'/AOj<|Vr&1fյf>%egHSFW͔ ~i ?Jn|;wΦ%p\u}y˳/CgKOЏk|M;7w:ձ"7./@7_1Y,ܳ/٣?|ṔR2$'+/4"CߒUqsxߦitM6irl!=c>{a@5;*x̅^{=_B./UPKYiAV"eUY]%4iv&ey,+ m4F%VN<7+H1HpDÚ185 q؎Y #yO'rv/q&b|79]e$ͦnl dofPQ4VSOm45R:ܥ[?]>::"="68DٴR @ב*Uj ?L،^1!H]6]y|`q7kxد'ɠ2}yVgͩaRzA!>. +4 t ܃&{Gv|#;\[K&CgOg sr!TQʁkpcpvguÚdzG|u[S*OQ;-w'ryǍ9*{7r׀s!L,PAڦ JVg{JQ k9G heE"d=g Eɝ' GefZb^I ڨ4)z4.} {5$ kEc6ssc:|6i0T~0rԛۥgkwOomCɣ{d|~ӯY ؝cE<@^4;^g=ySHt>z$kuڞu3N  hg8RR_RW۳9tfYgH)+JY-Jn{xw>=r^K<9A{ާ̓oOwd.OP_d,-WEsyˤ;< 4oqoP[5{ys26w@C0P)i8rUz**U^TH.z=p^u#F˃T: :lrj0E":chwN"rXVI;O} ڨF:&& EƳ4:Qcj8?;{kz5oe7~a凂3I9{>񝇕/Qxdb7tb`ӻ٨Cl}U"5֦wq$oZRuLôSutd~ >SJT\*qf+% IDSm,XUWRtKT!:eFISAɖb:3WeDk%E#su u(mң߼x]DZ6,Gۢm]RDRO<7%hzQNR%NqdUQ)Qj<2U`EI^I-n(X`>9S&і9\K+3YZE{S.FHR Ke|邶.OT,p-",7WmM/CAG Ls/c= VY|\#m`-^|D3a]7c]ƴP` y 1+ ܡ ^u Mu*yPE2%GL -ϲ Eʥi Ff9$d [$C}_Y0eFXTGkmL$$yLJe.0Ȝ-fJ%ZОkIe]-&' y^Ke3y;6 ,aMb]M~ $472a PPkmH_ؿ;"`vd'AĘ&$% ݯzPմ(ig=5uUaX-*EjzqxHkŸdW"tzӋ+G(9H*@;HTJ'7"Hw$&!FiCчŸcG}(IY}p*`2' nuGB?JЏ˹Y[dON*{zU'+ƥ& 7*uyXGDw~[ fz_C;/m·z5S f> KqR =9&=e Dp\ \zxkB >]p?wHF#Mm]yvseS-aNg5uV5DQdh \`D$aĽ`Aݽ$eB胲9`rD#hiPNX9ϗs#gC3%*9B2H0&t8lWl絯9d[ME$i55 Z RF, -I.F(I9In1ɭF;9uJ 9G Nڨ!ۡI5ꄴ)$ѢrfI\3Q]$& Dp=KbQ\/,gRjYͨM,m#Z0d𐡪(tٻ Z铅^DɸK,"p\ӏЊb2jR䙡A ,!Q1dssZ~>.NZ agp)RNArC>0hJ:*ԂVQh8(YOX'vD>2Q]=2hmCW4iKL+'" $qDy U𡓶r+"H4⯋ '`,c]pLBF`J#ԙzURl)}XK@KN<ʉ;L8dW(|闸z>ˆaylRM层'g|3v$2e H&61J*O#'S}:KS>^}:S)hW}Vl} ajLhx8j᫖t"ĥ *A,L"@"DN!芡%,(a:2-MI :i Ve \XDJ}r@Y' xfNyj9((I!Fcu8򮮴1\+<8.F9=糜x25{HEǣ۬Z^0@gۦYyJm9S؀qNNrCк~rK$vlcY$Yc\F"Cƨ 4 tNbly rH"^@sD҂Cƈ}SδbmF IAgơxZ6[>.9H|$VjɌOj Q|3yI.y8oDSvGE(h콥H|dr6JXFq)ÁkchxvW 'yݱƼ8Ɵ~? 
YaR)(Aq1X8( $x*`[51 "94$;j((gcxi!VX\ V A`mvXFfI@% HqZmIBqQq\B+![ZPrLҨV(L&ik̈́"e -3P1N2c`jXa̓~cơ<&f`4%'#"!{|c召H퇑~D Ya!#N4ny9ogXoRdXք"biͅcȉ»=gڳ<ǥg5sϗڡGYG1dfxoBN7O˔bU [#=>D̸uۿ19 8 q75>nJηKb ]^z^ߝ`KSGK,jhEvׇq6'8ΑsrSz覜%iJ;y|cGNdny9'Gĥh$D%g*/hQA ',Jwλl܊6n6!?7n&Sh&ePWWH&‡d0ڹ!FsHf}wr.6>$1]8K]Ӂt` DE?$ጨDrC4P )Y.m-G.rд{8GCT2:OwGDC`Z L#Aq)TEd3mx( +y!E~b*o ªm0Ts'#P5 XBTSc.\Ʊ&k{Iy{` //270[~wojF4OrmqQŝ21_kCv}_CZpN)[B{,=< ۿ'{kmfe/Eѳ2p ^Α-Wi~]^I+r/MZyp.B=j<\QB=*#7bǀ ?>Avn,.泙%,(멯^.R{$GQEeڭAi?c7`j19Ɣe vQ LB?Mgq4}U񇌋kEn2|3Uqh.-/.?G$\}i\]x9LMrxo@<;ME2BJ)`HRo$ IBDʤ ^NP_MFfKW7Ī34"hg1Yai y}ȭŊ \8D@g3GRYS}C,qx}x*0}wP~a)7<#as3sj d]{ڭ>,DFZFIPqއm>2LRt> o"JÞ(D8źS{( ʶvJi.P b$f" ),)(2Ɉ"8Nb<1jigW8(W Td 0)cSDEI*d*T&i8UuJ)A|Z9VkN&-OSX\q/:WT,\í&T!:^#TwW맃tU}FMe" yEEo4[L~ ?|\ U$iX`ٚVy3#&{Q)Dw>;)\JVK8H:`HTgPǁa-raoq Vrl W%|~s]m|[^W* !?>6uq\0#W WQUoH-D= \O7BIFFFe_A2B&$SL,S1d8"0ŌX(bT Ը_$(GCm*1%VĂ]bRf |.ںSۅV'iOuM.fp nB"828Q%.Q)Rc3@Q`&@ &UFDD3c0|<+uxgYJmm ׫¯p>w$W$ `_B*c/;z601ǘ6qL x2!bG"#,0rb lnV/Vz4$_6F|Oz8L6!507./k|ĿiƠaec7b$ʕM31++ź,iն}Izj7lB͍ds9+]@jG+u/2ƙDϳ)W*12QTB_B*+Z7:!4c5ۃ>[%S%i PtSNv:ɿ }{3ǻǻ8ZE:^ǩO=P$:Y)E!a"[igxVpTz/0+ ԧSKvz .E1vV>ЮIj^Vw XY!ͩ"}dG|Hw* {䥚=pA*S}ӿO-3@j3aRъQ&M=΄MBY >, > Thm[{Ayuvf P-87dK+7$DV՝n[ꊧ[z4xUgO:g=!'1jHw<5fi)]ا'$ЁQP(h!s s\`Z.H4VA>Ym#\$ѳ] ߍtwN1V9qZ̵O5fޥ9@!cSu3 SseKWR].Rs"KLV߫"iو[^oz2^,&랖 W|eւ{<=?-c) GVp*w겵֖A':~$ wi=%M8>u:}Q9Yc7pQ6eX_zyMQ֋^梽5arUt :ۛ-Uo5#]Wę{*7 +ͿCGuPң1"5k\ᢡna\%|m{}nSr#Auq4Ѵ qq]P\Ȕ]=]ws]?gc/r=r=v9B3vC|C}%vu'rE1Z  qоDګa1Lev\sXq,!J4$0V a!Jp$]+o-%lFTܛ ̀(H3ZRgZo;nAJ{shsv"(낺BsYCFfyHvv CѝL\um۞19-Z!6XL[9lxnVuRcn[Vʱ_ޔo4X d7t:oH6'qDƬ'&EaЫt:EP Ppp ^-R3LyLQAGڽ?%Hc;m8H{ wG!òaU(h}Iҋ2G$ǽˤ> c礲|ܢAJ%D^l?Fi!OZ|ݺ7懩u\Yb:I0^͵DSP|+fblqyXA2BEI)Jq&ie |i%(R#RGVbe|4\|EN7{*05JVcjnZT>^-1n>Ik~3#]?ܖbp6EE1™R.wٳ䗔![* JYl6\eQ@3@HcT sd*&"J$m/I"Eq=r ϒ)ƙJEq $#OIń`@闌E2FPdHRjYmpU"^_"/δZ\_tlcqehq>|3aj4;ټg?.#u*īyO N@k'Fr` W|yNWA:2y 2~Ч#I]-d8x~iR,3bŇ3emE9bb5phAPZhsWc+|/瑩}Qd t~:HWũ@7_|@rM7 Ayuy뗉^nQ?  _@˛ myi tg߯Sޯ/0gbal`4@KN ϟ "RPEag6V|PS[k\PQc+m#%3!9(QX8b P,Y8Jqcm,l_=7/3@nR y*#ID'i0uB؎ #)2.u/;҇-%U3 4 |S-Fa~%w]SF~_R^2+ʷ,3m<ћen ?d :0WK66'jNLw;ey921 DWCqjuXoЪ`"f{;4Xb!#lj{w$s93×)iyL%sǶatjra7)|vϢV23>M u;=ěIuB23-ssʯ18*F,lw>ōUpлP1ks :YW|g&_r X^Jȯ'I~GTU3i>R?'7T]a_|' X*_GWcڋˋT (y6E̷X t}7vD%O:\#fǎa1Hn"QY>qFDeW+Tx`{CA[ٵGBzcGKR'y=5ͩrٚ ,D@78ԝଥ'ߏyx<(x?]}<C74I+d$<6ZEsPoS1n' "HsKkHTʐ}]Uz(qhًC+aV0h0'U?:83W<#J=S۞`<غܙAZT[%joQ~`^ӭY=%Pw}Q2r޿47kGF8L,͇'zP~-S >:u4KHwqۺmRtM6:l%{WE6L`o ea!{ AF~ՄȄTNw0qJʁj\*CA>e45[j7QMoׁ V(5W/Eo fZ"߼V-h|z׋{Z*@*=+<%+4x@(3gel0%ؼ@ǵ#JѡN,RnGN6$Pd.(%$ KvY/qӉb@eI:f61F_#z1+5+*?ߕ-ͪ0%߼X[~ӢjZ2)/w97J01HPFk,e }67~[/.7(EbV.[x`P e2mR #];lhg"f( J<֠в#p-,4xw37eMT4Ԫ|8݀<|rP*A{A5p\畞M6ļr_S'0oVX>ac? 
ވ` [`XC;H`J@$}5B$9C4JP\N\`5xGX 'dRC32mSK+Ojm @&B\}IGf  񒓡 "5Bn4Ν0KW ƥԐ5ZL'z~Os ^jnsL?W|>t=kH7iѴ wj'$=e޵q+E[#?IZ\ܞiΧ^$ږ+JW/W5FHp~pșAો( e3F7VWCY g(ܨEzt3Gyh pB^LJ'c9cd5E(J/IYqT)O;(S/G(~; |s(ꑍ{˿sKxPsO+S=ܾ?<;9ƮE9OXI^X~vaʬNil1靖5)f[]q3TY~c/d~ɏcf(ygls~ZL_&{ZD[S,bU+n _Uj|Sϕzj^’ 5W{Gq?-ԫ(82.ծw Fb*%GFոWsdzY͠NBҥ3&HI׻kIfAdN-Vrx"Εc4ԵZWF ʁUɛ⽶Lu &'8rc+[̍&#9ɑ5JknO!ЎrN%k/v#Tp"(cldb~`@0&"8OȜ{i1mG9`ҍTm~6>n=ar-rNl֙jNER dX4(eA)uRCAZ* pAT5a=DQqΣbp<9ǜl!<>we:쮌y|קVRT0bj0@cf9dL(6Azǭ+TTh7\SJznnmH7.I27ڍQhTĈNjuۀ)(O%4V!!߸nTdrR{{5ȥcͥ%ȯ|m'eaP!` p xB|L=vxyK nHQ#+1eZ*Fppԉ: 1qWj% }7_ֽ~#$.X{qyR D~*ի~9<iyWbj'2tS3-ew|?mZ,:uUsfpt ㎦Je $ZodNQl4RV9 *ʭf!66'4o=5rޓTFkarȹ%XB]U)wXȝ n07$jab SdJIɈ(#<^ &@ij1 E$BeA6\MD2T3mCd[CnFơBx[I٘_cQÖľ^'U?x駻B`Keh67$R~J] a0ުjeU hJ`YS֗ؒ@T`E?]H$ 'mBqEHZy@lP$̒V1rLIưh ʫUHBy9RR<**=ב1D:5ߊt |Zd\VDZf2}<)?hw5fހum{1`gRhQ>*7Φ^EF5>,5υ>X8o]1gqʚe{TDQ/3վF }B0eͮFߒK3:ґVXC֎sOkEk̹ 9 M0-7(J.1b(Y&48CBuV5f8aX73?1V]xVt1^f.gt9^@8Tn6:U3ʎ/z6*{Zʏ|D^ΧKn#?4u뱠 g2#!8%U(_ԯFL%z49à7Yg}/ P Ai}Ѕ@AѭwVН#&?+QW^NLF'KÓݶWϟ(BC®]M7mGBBNABϹGܮ[CY*W\nK_LwͶ/ RO!wvpu#wÙ.UMr$xN';61DwR""{$RA"K:'{sD'ֹKV:2ZeL{nP;jAϙPKl?Xt׻|dw$p{c}?eoȅj"~1@AXTc?FXЇDЇf Ar_}SX.ر\Lv!?J%- Qoddt* UGVy4|xBv#QCsw,p0Cn1,, F\ޑC@wwy `j1 x!~px)l<]}ǿ~n4kzGZJ@!ߺw`\9(qBp0EFcԔ)MqHOUtqmMz),@-"c5XxM.U?ڙQLu*oA.*'10)m.HB$[msLy@L@)&$2VdHm:CP6z nS 1c(ʃ:})<䢐 [)$shsY1o6= V)U4yw c'^PkrJ~+j*qtD)yzX7N)\S&;-j,Hx)M<)M:%4i=Y蝲t'Sѣw7)/I[Uz2Lz5 WNHң-s)sZ[ANr:$O]R!<_[MPAɓ#eD`$uĸvd6NpS{~Qൗͯ[)Mbj!og7/2nny濅Y3?gnԢq^'يy<V @ `*'%aJٻ6$W,vg0Y vghgBƺ|FTI+.Kj-Xq_}9knQ %ĚZ2Yfo:Z]J m^4_/?٪*|!RЇgnƾG<ya KHpx|/\uI|5Ww}Mےrg:_̎ 9a'h5\bE3U:R[ҘYiGrӓ,o4r/a*"G5HK0  QVBkB߭|EQ{}c}[ӗX& &{iE7TNAUk9RM s#13g^pITyE19d.&@Ƣ&0j-gib1Y(aNtLx 8m|Kyv WDE%t(j|P #*^oF8\sCaJfGsgN \f/ 7eSEy`YiTeA,E$oDRk`g$)s@|Nmi"/w$\'qOot1mpjLO9Z϶e|z[wyh`>5:yq!p: "8,Vs3N3cwTIIw!k[z0Pi#lu6&\s[ 猼3?V%y>$'y&G6f}G-FlJ}8l#,) Q d_-oi{FNe&@uW! 
TfՏ(g3փ( HJ*k.h.9 3榈̝?\GN5]ʔ>XIEpo\/ 4n0Vy AG9]~ "'mP!k ^^{~^1JkEn8|um^اM )5H֔hP@E*&NWhPu&#3Ƶ󎚣88%,Wլ)`yRƓeFpv Oylwʬո?mvfi[Bp[o0)FFmso'fe]7lyx_ۡפ⇛6߷nkn~zp)FGmwHT¦ ~de#Jf%_ LawiL9'rKjH67a:E|"qubη ~k1r.d44wؐ9 3gHL`ݥN ^i)/G0e=a3yL7NiXmJs'ҽ˯+#k]n}iչ5xͧnLpy[ApnZv{{]mHJDູ=BZJdlkMs |={d{X+`;R~X66eU{Ƹ gLz=ƽڟY0{g5s2gsηkl{l'pYS |ΐ'DLق,3ƔA ygAǞ1 )Hf)j3L-#>eo:eϘG}I<䰶8W56xBW \8u~JV^Mt1RHKkwx9o@}}D)0u+;~zp)}al8ߵeKt({#Hr5$~S-UH4\1DDx8 'F'Qm$+=ETE5xP u,{k9/EO2h_๯V]UZ{ 9?<1@)8J&|H*#`mT\=8pNu8EF2M5lR,aѢ,=E,"-~|zb' ?< ãA$M#apD^`Kvf(P,52q&qi!GLLTas#o[Gfp+j <)`bb6Z7yZGmG!1xdN#lͱ]u̎X.G؃!bŵ)\Q6:Goz$sFK}2Ryz)ds zOHBX>ͯ(%`B}2A1d[}2o춽ګA{rKf E4Lq)o;HL,:'mbz~?YG?R u1n˥L/a]A=|=Y,'2 m ەȧbqoϷ kAޥ_Us؃?cmhkYR YeMu!O\EtViqmtF#[)rTt&6bl8CbBsѭ y*ZS)Gi|S ;﫲޾D{.%^{uTd{S'PjbuG̼> &Y)s,V>w4ˏ?= }*}xժN #> ]펉K9ox'> dk>'y>U&V׊= ٴu7{aY˪tXVò~Xsi#ħgI%,w HSN H # Gy=1Q^/l 5fꠜ%tPfmNQMq ln5^jCX^X~@֥9g-}Yˍ:=/x}3CO;fӦiw.6O5$D4F~{v?봇ywx9ҚW'A!{K͛?M 2JZ+m.jI& )i2'w20:7_Li) Oj%y •7Q5?{lun$Ek;Iw]0U$Q?`l%1u<@8 5 _AZ޶*̝VmZ%&v@t c-K+Enh@JQFz 9m% mWoޗP5UiN¶H(thSUzuҡMQwE{ G"mp&c.AؘF RJ51I&^a1ϫR?kbn(]Z4Ӂ֋&X$Z!][]*Rʊi)ݑ409 :͸>Bvi#0zdP@xZ\rV$C׋œ{{:) ٰN}1kX'Hgfv.wO#q{XZx& TTqJ'xLÅ'|=Ã]H.W_[0\Q!kmvd8S-w/N+?-5$7%%C*J/i}@au圵 T]%zO^ gW)bǢQ^Hw9Up`RR 82?H@+ hAXBC:ˮx҆'= Z'kf0Q!}@j' |I(' <d7b`b3W ^!-&ZM~w/$3ƒbtO-2毧,A㱈kehRHݭ z޸7Ƶ>{Z^~><պc?S)p:³#\"$%<Й$z~wY^a30(N d\Ƕm$Q˙yr?ILD2qmu#`.)cSؒvcó뗱u  4 lte}GUfE6@ G4*ҁB(̅^1_TJ[={yҳIϞ7lS:6z`2s&1m)3peM@k\:P4.C4S4Z3|/ДUw݊I6,Hʜ,㨝f ij\Jv { /0a uaNRGgEiٴ:iʂafb0@ARR"ArZf7[UۀpDNPK2ŠD;o& M4Xb%TN"AGvuIX" %y>Xv8F6GֺK׆RB?-؜A,=%oAts5,3.xmwKK &*O2sI0~8vƆOs:[ /ߜbю1^=(XŜ$!#ȫs 0B or h::چ"p5a0I J;ܠWrhҩyDoMQc)# F"ec&yo*u##@.[K*7d2˜$~n$HAR;wJq^zVhxF"bw$紂@svۦ,)FkZJ d"՘V@VGo_2n Ɔ*n$2/ҽYjԈ*i[( b1_` )OT"LF"I$Axd :"ưS HמEPQw)ֻ:~zb47ňfbHC8zŗrf ,уB7`Ӗ "W@0d+AK} Xܜk<]m*Ai`m|#zzͬg _antFaŅHczTagOCGW軺Q!\5 /??QԊ/1D/[SWb&?|ٟCS`t2/+ePsIP FL^7Y^uB3T,"U.C߹v;!\ͽ+)lHnͼMXgIasUEC29((KD*Z!-ҙ^La 98="M`"Iic50?wI.ʠ*Ј V Ml| 7iY Fsb!N;acHڈbe@L0` f`}t\(h1Jk,!-T ˞=Gt6KS̮KDT`,| vKNI/A|ΙK[y/|d7>bP_| vwDᭀƦ/3%  s(@՚zv{e, Nh  p6-"[L6mTh3H`Lrmš`2%0*X9z 4 gLAط]x$k@4 DSު, G2[LDP@a2[&&rXO7!i@~w &">ߛݽKԊ+$%y1xKɓWɶI8M %O,4[$O0P'̷mf\W0h ZbL±zgmnL^HjZԋj+8=kף~FSi Q89`_0>*TnF16rꄒ1 ^-|S-eDՐh|r`S w!l‚#]B@.ZZ5KbB@AbQcyj7< BA?Zw:=>Ck;̥bt^юx9h%ޛ0J%e#JT #dbYuڧxz+)j:@?}[OYjXذXhMsgȐr&K+s%gޡ\j5;go 2>;d}vQ |+Is( ||Go+mAVxz(>ԖS"IqZ{NR POR㱓VIǞORq~Ijgm쾴:I->ޠZfR-o$9+$>QNJHBF9ʔvQf=FpZl o<H %J]b4OGT?~͆ZR\Oyd,Tʢ+?.faS]Kܜ``҈ A 0io[=[_[K;lZgTo:c6_FM泫H3o+A#MeE*xgPPJ\bQzh:/'_Vϣ?d{gn E7+_tuʹcMI^`r?Oc=>#\:|UTo֝źmQ=kx<_T,jV[>ǻGy,L74v~0P>= YI[X` CTHOL9W3]%I8;jhg:bA]G|fPK }bJmm;{* ұ;[fY7,:n'ZJGߺ=C_ayD B'{E[!r^0Bׂp}ʥmIzmfR,"cR&0ÄF{FQ+ᨇ9 6]D6/x !k@,Ք!j l\:Cr9e 5ex2jnPZ"]mdB@'R݈v#OFww~;9}WaR)6 Kc(iWK1Xo$'”é$Dh5L0uQM?M-{GglKpks5rxYj^$)'&yK? 
q3`F溮Ns|5}wNܝ;+7sfpUmۂIU՝VUKyy |r֖3bJQ+ڠH-(^{!7Ls-+΢DH9E"aPq( HjxgoԖ݆zL-C'S 5>@h뫞r驽w4nPykȟ喅-q]Jorj8m4$?K2`~,)fg51 sZpޖr2F38N/K쥧YqJ~c*̌k0fa@bA nɦZa-{oo 6pi]/c',_ב/2iq` dGynODdo읭Iyg:JׂzuFgwSmcNmϼ֯E`jL9&ʻb g :I( T0d:`f2l[ڪ9| nRU#x;9l㑽-e9MXd)7čfd.XҾȜcKoU/y}eݝ`$d>iZwG:P Vpojfwh^ 0ukXZNW55%ƉQZ*\3~ $f+v0 @d>ܳL`{-b-zZN1jSJ6!ZGfе^оzz6i,fJ{K&5ONOvQ'ŋٻ6$;Aݽ`{rOtHJv6~5$%(>f3Cd~UB^I쟦LNF^u՞k7׷{&3e6|3}vG 3_rA<w2~)`mA|% <1e HXŚu!0$8ć_e3w=[cou9ѱN8]9+by+qN Q"$ )4@#f%Ł1[:V`4`k$˽OA\sL F-Ay80^ jf봺._}np-,"RX"cATWŁ1RoȨ81V-pP c1(H^qFQkj- s_6'F0Q\vG%"zzQ%#!uCQ&QC=oxȠN,q% &@m?瓟'qַ{ljjnԇhnWoևߝ=M1ecSiwr?O{ XJ!I@Vmx&k)pkar"cu~P7z)Moqۃ[LQ=s&)iG+CʄF b']t40P[z򶌜[=Ϲ:6Ɣ!X?mcV;sS>`KSF+*Z0zI@=N!r$ !WgCdBNajyd5T|0Uwlk@(8~ v~`Q/ُpf 1KRv#֘6a?B*y_c{vOSuH`?&u%{3FeC3xI2;q-P=< y3S D1sJ2*Ic|!.0y}}2 EbTMS fYQ){onL$!YUrI2:=g2 YYO#T0C)3ʈ=F4x&SCr@ܠ0 CY@5fra#T5JKvBvդBdA+[˵jT-BJHUYLIkRXs_^"u cN״ 61Mϟ?BR}Y~8biݲ͸[) `XB] U1/?Uz T%R^+)$]liESY EӡaROʵ-^~tQer>n6=]!Uvd(":fQNmfyeְLIeǔ_cMM;V}gJF`:ylv:U|8=iGoZ}4g U=X,ۦv~TtU3M.-r)yo/2)-%jSgL dG[W%1cMҵ);P1]_4Ex˷ŲݶS:4?4;}_2M&&8pQKʦK ʷ_^~t=Vm(Ec\,'0{iS53V9n{-Uļ(+랤PdO*k]zR9D9w6rVc?\_4jt^<.kg}})bɡm(wn< ܋Q3j..AS)R ߲NKB3A?9;`]SZXLW@zV_4Rg1Om"([\ v̠TӦ{%t4#3c#;gZf1jS~nɞųVgѿ8-=/0`z:?(*M:HFJXH4RKLVH=U*8'l`9c;yDmerSw󻅃%zvROg'+3.enAl~~;/ЌTB г{D\8ۿ&Weڑug뢕Yg;R+PI3Tc`bFIÏ /ٸIÏ /2JÏ /٬JO_8~6Y~I7f/oWKzu\ޅ"zxnr[L9~ `ϺcRl #&?m#Y_vsvA*="%:NJ: ,PeOHI)Y "eem$f>R % #GMUcoyίkAו$\ ˻< mqq)C۔{t!nM+H 0!.30RLI=ʀ1Ra8Cf&  m*X3&f"M6dbuJ˸vsXbEUw(nݭ~躆%Ų&1l4?ymyQH-3Ue{t~׋A2/{ ͨ$t~H}wZAjAWi⇟~kVUVM[;S1ජ:jN5e::$N5Xx;؍h`;R35FCݩֈVۊh0A5,ΣrFŋnf5Hʟg0e ̈bj> -U_/G]M`9;YJ>}\>UͣTwu)(rw"zG5S !OTKzg̲1.wgdp>I^BAyct瓳ims btah7|C[w,): wS?/?F`@Ȕ2X~P̵ '/'_ ~EEKߐt\EtJӃ_lz7]i!ӢwAՉ}6ޭ]{n ޭ 鸊FBMQľc^d[|JwkBC:Q:7 vA1[aւAuڨ4ݢ>b4*Sȇnm!xXQwlK*Li/ yޭ 鸊F%nmP-zXQwlK>r7_Yӻ5!W( ֽBգaZn<:ƻWdG_]ӻ5!W(ջfhӻ1z-uT'x@|Y5[q)pӻ{ľcb(ȔyU5[q).yûa"7uT'xz-~eMքt\Ect"qljAY_׵7j cJAu5} 6:űmb}l_/ۨ%]Q=ZB_[مJf-)y_5k ZckB%uڑ5[}6:Ŏ PXt't>@ }-Hvf{N 0L.SKs+Ϗ&̑{9F-r̜ڢ1w" dǗcJ>s  T=Ve9NGr>9>ܨ%(/,1V}17i st|9f)s}QK\_Y=fs'r}|9f%EcsZbGY$scnUm5G19F-u|9f]7scL+s01ks}QKໞtbYsMTcs̍Z޵g9f-}17i hvt9f-g1w! :嘁{<1w! :ksZQc@YsFO}AMF+܍8Klprkf' hLh2Ez`/?R};t{鬓J,gSfN}ʸL NO3.Qݕn{(JľNo}sWY0Al>[zJs3B0U,70!PEY$FsʈR˜ GA=7iD1>ǹ%XZI&cg9"`B9D7_(0 3rf4f>|)/!4#a.>rNJy 9,%L*Wv˱)~,f˵7F S[Չ[L1Nva*Q7?=LҸ[A.T.f˯~R.ـlINGtXYݯ `7+i]ǰ <8-&ͼ[;ֽgbϋ>A[Ӓ 0(LÒOrA$Ma~{6n>SJ!7TM7I<24_ϳHˁ5AxX`݉ #"\UC>z&D:E!Jp5x˒ү{2~3€kcN'CꌿNRqC8髒|c_ _'7#S=K~(U3L͢8|(nNON0Z\y3.Bҍ8)M29y82ifS,WًQ-wwLgm@g;lfٗD<,SI Jd*2Ӄ3UJG&'"g?_e$3؇hd!.:Ɲ.&Q|nD@CbR=v3`/|5|TB݇44_zR}6Bͤy185CN4 F`-ҰU'B&wZnƏ?8S pcdxRf)xOg[L O5%ԃ?Vߖc?B֔~?g#[Lòk)+_m[?o E@p`h{%~w@Q;v!Z-H xLZdfN˔gPK2`b-TBIЃP.97xK13+g)Z^JEDSE@+Ln]TdZ!*AP"ӫ3X(W.fH77ٿ_܌ S{%0vK9pARd8E&5JL?1$+b4.=8Mf;N,&P0L~˪<? 
ub3{޻_zq$Zk1,U@TMo!XJQ- %lzLҧG788Xפn6^ C1+8cY3|Vn޹"7@ f1uQ1£8% z~d>}p_S7p3d*/u"Pk.UiPy {H `&fd/IՓ3E!jW}s7lwXQL^kވE{ݳ$%z%IxjR/{Q]9B/$8;z&=lXzݧ5,R|k$"BH%*tt#XqĿ(7*iVpC s<\ƴD nH:7f"R* X Q 8Q/%yg,?8No`C=DY*#6Ô\+3Bc 2\X{ vٸ&+xd̻+S'WنچRN*gvLי"̙ߎ냵, bR K!NSŝMgD;m;<`:/yVqGAYpY釐H/_ ӏ^897*F~ްX*=n`I(#{ >Tz jﰶӗk{}zuҺu¿l\STjӮ[h B%:Fu ^AɖnW1^!tR˖>axf%gH; y42heU#W4%\K#Le[3SrvӰ\XlGI4Ct$Ch ]@b ^vgTm<pΑO7f%LQYv=!?HV3=Ĕh-aLDZ<6%8JIFZz-B GaN[ts2Ċ#ڦEb}޵6n+s \9|m.K 6&qj;8E;X嘲v^,7 9Ñ~b4i-% ebxE @Im{w8W[BMqX,p*(yt%2lD/\ּYT|@j.thݺ]:x*n^ҽ .T8WfG^u|~k#:6.qHd?5L.@D{AyA!%_%GvU\R=X8dxdf*1ϼy^qdˍ -iuE=h&CN 2FvOrQ#-4JCDqIcuiQx=($MSS*ۙOW]'7s0hb XCaDӼy8"!'q$Td !9bSSp`S]$M3cS4-ϊvqNשE9PyP)dX& 8T2Uv 6Bv O~7;M7W_uUX%GqL4ΠC&R+X H2YfYQkL*%Rx4t(+Sf6~譶,E/X7:!G8߶5.wpUcYV]1߭^;:|DP.j\.ЮFK0 #L/ᬘh^d$>Oo,v 7\I)V@GLEZ`Kȱ~'r pCwCw<~iaY@Ta$53hTFa\"\a'aF25b]CZ[`/paxw.VRy 0wS XBt[ѼZ(5 QE.~9< u3tK{wcU(7]˟24c;ʿ &Ӈ~].%P3 rx1^$ˡ/$b9vY[ =҆hU.Oo,BIeIHnY:RY8'm!Sɚf< RkŒep!S$?SL-3ceeJ4bJ8R鏭=jk;Д>rdHт-z>tܻaW6rؿ$3л.\r=|P" .{U]7b<'jrBj LёcbeMDDdiWL10XLsvJ(A2Z% mH0DgqD8aю1L/}Ta c,-ud6Ɨ_03v}p_>v傳2H(v<|6u;[_̍:T!q'ʃ'/vQ.R?BL;fbo! !j/=r;خfFͬ\ F((_ZzpG|$nhpyŰTK+%?gdw;^/J+4AD" +_[ԚνiwC/&25QAP#~C,Gް3". -#~v-\#sv)!ns><;\11w6^>0J9ՌӐRRyu= ۮV\=]烔pSܹG">djV:;?t)?SAq>߄fw_̀C,[6ڒ-ݧEcZ?뼆;Q\tMVx%W5"6˥x!-Gh+DR%+\L*$8fDnUL&PͥY+fC^@hw5BdT7^'p"Zlq!({e~TZgXe"JEWp^Jםx(KV3ďh{ęx>[{->A|1g3?:E5\/E8ҫ2Wƣݨ*峦ϔ%[j; 3O 3$Z&YBt `=8XX^:^ r+fh?9a f r&U83F%Jz;E{"IOAgׯWTWc:V[gCr 9kUk;g6w ~Q9u N::8uy\8ra:;nrG2q[x29tJό(;@rL/"Y6=̤v#hi S,.Il$QdgB#2Cѱ7F{ct\4FV\&L4sU3"PqsK)ęRZKMxp0DӁC4pA6L3iqIj,EN*c/ŕPgJhLN҄g/Ŧ5ZM?X9PHwa;6B5*BǤ!7qwР@cH)Iu`%F ,N(UrX%Q1̺ eHn T5ɦKX1{ MCC4ǴNIp%TRnhMY*-IUhUn2p4CP1x6 A6qX=>}C{e;X&7M9[>=Xb8SBO߿]Q }z]Y=6,<7z!mwÊCZ-C?b*]m<{s3d ۏ''\qHt1@)5\o߹Dwղ@)b#A 31Wq3Q.Fg3+)Y2%O0j!-M4KфZuBүfwhjj=jHM>('*  ,[d-$jM!)%Fw $J7A5:!r vCr!Y9΀veJCp,Xc z!e}Ȉ<\sd,׌E w1b9_cNAЈJ4 ŒB6 z2Kɜn\4ւuPŠC+z 1 ,ͩPPiAhAu$hʍJ:$՗'lnrK@#\B#D0XQcKR s\j-Y&?dqjKef,@5ݍm2f Gʧ|KA .eM(je,FP.ObmeYC-S Fde,VSPJq`4#nKK3+r*XW&c͠ZtsB|Mo_z%_1Z!׌xaύЍтv*''\}U? )3'l> &RņU?lLBaل{Y߹S&>o]b|=움2 ?}3KF#4)ubփU@s*;&..Ъč4T8NF#5QƉtrgӔ۔$.34 #_En'C_ORwA:Z7>Bj>›8UBOF3Q8gz! ݊yO:ʅ( sfzyA_ e)V6c@v< i)$,3 Mb͕(ͨaHFH+4˚Yb {mބjE _J$2E*DŽj!_.d=͠ZvrA8?SBYgPo5).<9c4;rH "CܤHU9I|3t1O.U< vl7we"*i &v1UIofKw2H+}I}j ۲wmٷMQhUN<# ׆Oj2*msT5Gڃ{WqJ* Sddэȍ *zfO1B^[ N0sr7^o7kfPJrcN[p?8=v opt),rCNqX@{tcF2IѢ!kQBtW?uJ6|; oY~^ ^^ߺYz|LE|pL.#anЋ8%ܗy@-6)b plhJH20DR,l@ˤI`?e~>7) B2]mo6+Ft.˞nE${k;E%7Y*e'QԱM  ֗O>ӱgte.kKP@2ō߫[sȇ[?l65wNq#~l ɞ,Tfzm3ٱc,6T_M9'>;ka]~(33IOL>牾i]N>~'G_Y1b\>M$|*}qQI&M6\3V ].v@b'}'}X,N_ބa]MփWvء۫]  Ot)_"E>zEBfϽtD*=v;Zv #8zz<|AmW_H  C+)!\L2!+E9EĘd,GD@T jC$Ku&-+F,-֢oJ5?ۘ64Aszp(L eqGTsr L#|42@6e6 d6 P)/m7b )) eV0(V"%*bL2QdGVv=(fVvcWr Ȏ%l+fh|ʜdYF-2 ݤ$[V̥!5:,,< [`)ʇJ!YjBD3|2Zd']NMQ*eF$CK?A MlgeDЇ%J'tvD]%=8m!OFKh4fCQpb[ovWԶ\ xa_ջ]oӏD\ujDc27 i=VӤWs}e!|:rw?_lO;of(ǬZM$F/s7ƇQí2q̾|r7O}^h.o'0pW=ye)9LE{w7AJ0ӡ#@LFWt2_utp#Sz 7l/H!p;- WHb>llOd7 | i+ iS,ɗOņbz?ocΑx kq1&" Odih_5mJT5nԭY/IMƾnbI:M #YChx .PP xL}BخVbAxn&anu{H 3^RH@4f "a֫ͨ<}fJ`E$e10#zg$"\0R_AQ* t,LdDVuSz0D֕ hb.QmhhI0*Pw(=cY`k\F (hO6ËQ D9~ٞͱ҅a:.uE{'^ty~XAhP,kp6v*j2llh_h_>ܢTeAi& GZ)@P Tq S6D?*ϋtz_J[wMt1fS߰Osͼx`~cJ< Fß-BrkC- 3ҿsC ^i-orKRSZ-v4 MMf5YrQjj]-,6[^G}!mܪ健9J@X^bm@[m4 S7Ksƈ*k06 jkrX !l3J"gdeg3hf3I2wgNUkGa`;&W.zq>.9??dZ4Z~z>t*V#C(pH߿$߮?Ŕ$Ί!;=5ɻ3'NpR>]r?̙PJ,LJؙ}q^eNsr on u~*Rgq:,p0T)ipNUJa~6_r;7֞ h rҊvC; pCK`(*8+FGYb wVqѤ)3&nf1s6 EMݏJ5VvmLrUJLa=)5ۭ2﻾_iA1-$DWZa]yáJS'\i2u0eGJË WWCIJVOi o . 
$F(ċ->7"[Z]E5gCJQGӅzdY7KtOs`cRy eHh>2FJQq0e,%I-2]P+ kG%w28;L߫EDPTr+G_>CX:\!Bp|j z1vۦnetw|zėZRfh@M@#B-Pˈ`Yj`3g8ʻ$_TINs u?ݬ,w^tT6?gÀ|0mCglQ_r.l~Bzzѵ\$!o\Dɔb'ow[SNgnGOxL5ڐ7.d8w,E֎ݚ Dt>v;.Rڭ@ֆq}L)-Eh}V9ؤ^z&4\~pώj^wgx/}C A峫灓C3r1yb1e䣚UHmșyw:"Z4s3Us֖٬gnܚٺ&{uknwZ=b:n#Ma@萘/> `>n 8$ϟHJW O#P(?&(L# 9_P .J#w5+ <-Y ҬTi֩R>րl)1 ^ǵء ʥ7;'Xr6{vt}: U*M2V8W,YUi@,"-WȖ"B\UI݊OoB3ʀ.V6 Vӫ+*7 u6U%4~\df:BE*NէImv;.c7 hܒX`oٚE &`U,6O sNΥƆ0Jm2EyJɸW)K[._)7)dýZr$e-w賻 G)<3#fJR (3E/N%Onr#eNt+rI1:&{_p,2aZ czL~n^,Y\>+ q?"JjmFw׈5yJJ$?'$)+x%H<3>J4P|1 <^40u l.@g3KeLߟ-UGÁg#~|cd4<@8ƭ?eQ# uoc #2t$wj}ķ1-菹h_qflYh_k-SOf^$/9\hv&$f?vgezbz-,E;U3X?"Ql* FQtg AQoALoGso|5`C&\I!%RQ2˪ߍɍuu8\Y]5BR K}骐v.qUK;nx-c,G8K mOۇMnG[$c22P}إtscet@qL4^ ݔtXGѷqgzS jKBPo1C]zKv ~ei(]a-|)r0UdJZێTukڈrݸ+e]D_k2oU5dHkK򶣌HxJka~ٌvPhWc}};`a!Oos"5_kI(>Te:L`@0&P~ɧ! !*cSlzSec[7pel97\_]]ƣܬ_&7r%W^,^yY*r|:Oc@IMC,9Vq(`$ySln9fh"nS6C bSTt*q>h;C;ŋ7."S1ҍlAf\>$p , R 1A)11.o]{.x+>1OED $x*5U j|Ҵ,33aR7 D8t8Wf*M 9*:)uU˔3/gژ3awm~98k8q[9aCjrj0#LTB98&b<f"$r*E!xccn0{|`MA$x;= 9#/Z&+ d_&Q9쉠Q(K,:@(eJ)+wZO? <8 =DbAP0qGFruei9/N$V.-7G[}!};{QfFh~Q|pW\$=vGy2/dz~GGkk]2N?r7)=_&inZƴ,m(4E1oa0&#-c+ǎKFzF(BN&y$d6*:dk2ϵ!) +"c[aMUtV!TœNZ`z stԫq(9Y"di2h2/dδ:CK^"@>&=JծH&*(z+QB ~Edrn-75m+'+KZ$>-L pd:O3"H9 *\~H0i4'::"'W4ϓ<ܞ>Ma~7Kz! < ӏ{x7~jȥ[bg!{4?jdc#eww -kny99QbI}f5I/VkBœeQ.>lRi4H6@(_ZJY)'/dOͳQc&KXe26KK&M$CmXO\@K(&'}ck8}U& 7:$X2l%T[2W!:(v/,D"5(T2eoɛ녦Ljnwe L65uǻL9ۈ zR6,#:8 nǍ-[gJ\|捖pd0Rdpj2c8 dY TԘЖ\H1Ř90A/>O2~%42e3D-3%bϘlQ&$!Y$U$X+vf .WKK;-G|<"C'#@UE0GHN'!ӮhK`'Ť 0|m<Q0El̳$ÐĨ RFΜ5&"iMIxB Ut2C0C+^!f,)#L8"s& Ў#i/iE}Dpl fW'G߈%EMҧSQbZtD^U\'$zz݈odXlO?M1>Jftb^5[2pMy7Moa߆,J0fQpVed+3cŠ*|b֛$H/ ^Xؖ͝ۯ"9h򀷥|]_nK;5ؑ^(H`L=קlּ OKkh ԁf Rje1m̼ ~\Z&+4cJ\d@YN*4y hfiyΓc|<,kɋ0k̇t3po4Z?tLRhp VCB4mbpj!ӡےl7I@VKn/[dr¢ f'^:Y[LWUNWGbaM>h )2d1Y]/y >b/v>C_5Z}bV99ب[#d@+ a;u,[Nv˶u{rXپ־h+mw| m5uAZ2|s_,Y݃FWg?{4Xf8װ,sZd϶t I'Ip[yHjDN7:G9!9ƽAW }CD8u~pQCGU?bGYƙ|r6K%Peq_wN'h3&'ݩvI?_O>soMu7AcU]OQ:p2؁cPLhk^Nw[Q:-tZD鴈i[ ^ҤH-w[ T<@JkDJnbA)4|_yuw/WS17I1\\)K3&AOMJIoҭ}n23HCg%|VsͤPV h#gBDd/g!JP8_:%<;A_=IUA@H6D .":IEe U*BK梂{"laVHwSpEV4Y;:+B'нVj߰Y:\g_JkE];9b=/I_BIŽ,!`B HzJ~"zyr,yK^tc2&d>_րVTgź:"" eTg%jC 4%"3Bզ:/h)cu8o/4(;:ǴLrfz4d,=0-%{3HevLJxPo%Ffor"4`3]xВC&ԌhăH}nn#{̒o͇!H|3OJu+U<dzTh1ڷOw~COnOQR,F?13s)+1-R^i3Y~H'GO?SYXH2rvӗ|h Ձ^C{#_J@k^畣nֻ^2d_a1 Yyow>7.||(Nf]Oȩ|򫛝O'EUN^]|eY.շXcInzǽ9R+=@̧%Tۦ/w>앇},e*X Lf7(>y;gAhz"J HαԎL*p퍀 L׌qS) ~洕)8 됭u^'r0'2|vӃ@dbenE, }? 'g r^/g_ToU, ~%}%:h]HYj;DLFQ{ խbW/_Kõ m묳O@[ɰnr.E5R&q/ҋCLP,d'@  RWuU\{o3h⼖?WB߱scɘTPRMC,c'ok`DmYFţCSzeTw@qHuoWYW [9 )qGHD%p9G/Ȝ4$s6b*+$хE$4cOZ'l _e5Qk-w.}N-W]~ߺ#@ C/PK|@ɡ;lڣ ^"藽~O5[s 쯝齓p4vNE c-Ce]ռuGl>ju p4`\:҈*Fs~E|\[ctnQi~/u^h;;ەiHкP$h^<|yOA=o*g]u(å=e=F[bA^h}c={qj:b"g.@đYWf#&4* r'Vf/NJ'qߨ90557|VQqTn_Fo*/ucuX[W ӦYF#be) AMw.b$Ju& eJknzNǷjD*ZQy=nw-Z^x3{Vޘ,obL XkXWb>|l螪Rb}Q~Wg]v,^FOe)J~g߆gqsÿ@jy9w*D!"QjA]DD11#;3g’HiLMl3J(!fumA )8uE ѵr+M/FBHI}2٪%r#h!g։E S:۫$cdOK 1n5(tWp(UaN"4"b'6_V~"+5A5 k[5cE$Bc5 17eHjLHoF[g #_mLXe}xgmfs)M9L82DԀՍ5amTj0(1^JՀἱxͣ4:%oKrdw^4#C ]wqK 1AC8 5Lp ̡)xceZS助 Y#0T8R\2ؖ#[2b>ETF-~&F4 1SmD Ai$Wަ!aމw Y{)bn5bDK ;2.caR: uMTBlׯp=oJ~6@qJo} 3.E{+}a~u>,T_Ўxw})һ mFkT!XX!D Tw/G4,:68Mxr0,sݽ ^;E#0p9X߈)DNH_-fd*w_:x)UCUhB#翵US:Ջh}G& az&}M8M.y 2;W(C>Q}l%jhLvRBlNZuv~B6yq釪9b&{u(r*{{յfb5I\|C8Gy&Ã5dJa}I א5nȴ,^FO R}罺u?o0qϣ[Ҟƶ,6YO.B"ڬ^7Om.\~1ny4.ulV6:>c=sr{<ٻq{sqon=ym)[L[ e~`ZuZT.2=i\EstݰM r7Ohre:Hn' qkc(l ""[;W,jX4іu? FurbNΈK[HևUNJYf(UpB^gJ s{AO) G B9=uRze^P-YIHa>g^M?3/̣h#3g.& ՆG02Ϸw:|0V4A%+ϚUO}M$% j TT{hzbh"W2 V#lݡQ~g#20=gjh#ՇC@ڃ(Q8kU~gxp&aR,Z:Vdm4JP܊J[G|I6JiyJhI!.N7A%!>2ȉ9¼Upc=z=d+պ;'4]'D% 'pf9r jʽ-\h^jq '0w*G U:]]>? 
/H露N$ c]#G?:*R-AM3C*0DW5 n=X)oKKcpD{)H"^ x`t(T醲Fq0Az+Ӝ-cpz tskȨkHڹ[C;ZO!U3&X% p(0j5vXQ%FmK8BQN .Sy; PY"* )kf=xeOFތyu8\4^;U+V)@\?mse˳Li9pt0~z/z^0o|A;t )\^_/#FF ms+|/WO0n+VU8Hiy+Cޘw' ɽ|+Qm}+kXZ!+%,뵥`=XS/K8$N ]3LJuu0;ppbsv)Rsh˙9!*u(g?tR(Z:rJq/`9V0UhOGz(e|ᐅ 6ܭdd݁6hQ΀ U uvKȞO~ʺkaS,**Q1jDa\{W{JqnEۧr(ߖ9]rtSSLP!KOS3sFJw[>cQuLH{+\`1C19!0`_<%US^)WTC] ^v {ZSFC(-!A1t9J#;[F9PMKqCcC@0럜pm0 S)\LgY 11_1րM r1ܫ`ٖlW %dw:iaT.3*=W*>r,+h9b'p{.G+Ů +8E.=PdF`'iӓN2vEx{|*6c2Jz2輂`bj6xUxA%9[MLQ[L!bU+MxdZ|}wPFϛzys(&ݟd`ޑD07IH2$;{?T @|<]i(WSYf&f$۹Gr 3KٹGHk B3ٶ )żƞ0pg'}ק )x oqcKm,ZUr0IV%)E,ˑ"B5hKE$EuRilKj]^Mn% ;҄+ /dAThUuē9V0GDm%b(/,/hbΒhܭxLJj<`&jٞalp*Vf1 V5 ܪASBTYxrO{*pULoKGt{> D0†ik2N(6ԞИy~VyW낢2.0)rAyTry yf-U\4m3L+4KG9xe뺧"+Xғ„X 4^uI(ZW,Ϻf]uvMֵ, >8W+lִnɋ,QRU5=VJ)1dBGxki7?Q`Ne.N?WbCk^feԘ WZ\4W h ,$FeA驰 Hlu ˚kUB̵*՜?ōRe- 6HB=tUsACӅu;j3F)NBp"]PXXfuH!2`y @[H9L`Gx;Q\PHk!SYSP%SZS4U`AxJJ>*}RE͵ͩ׭N`&ӠIʴԤ F3PY 3ctQ , V<&,RXA4H <KyUBԼ*՜!x-c'Yܳ+ȅ #ğ=A!0dSpX6H8E=g4vš("93-":Ƃ$V1VwU Us,C.pqKP-> ;zf ZZϏQ 󙛜 V:ag0J= frS5K˴

=uYB5 [`|Y5 M!" z>=yWJoZLiԒL)vp@,- 8BMaGA4@S&M-q, spR@LU^v8Ye'y50l$`|!nJ`Ăᇶ9 n04,~]k3ho=,/;s~ &:I;:}f?{?4蟴{f~kvKwvL!O]stt` ;o~I۷>\0`tv:6 ;w(LI'^99/gdҹnFּlzXHOיDIm!|̬9Bv~.ƭ7Z@<6c@  p6@FX!=a",g2]V90Tꀉ8H`ϼk`` a޼ eP؀AP,($L(@H2r"ւR*W`8h@);˞YT8g59= "̮l?6?6&yt?6R7mw24PܫwY&X|=C`E[/-:jѩUJrGRfμP\ᶀ?tyDcMgj TK832 `-3* yV%Tm5" nb eg, =pYl!!9e+b4Nd0X{A!p6KdOܩ˚HdHYhbŰs NyX͓j$V$jİު[ ]@8ERfuNB0{y]\yI)q6V`!6HA/ s$  6HpyHp7łLh3Ī&13BZ}-NAS"W7s=et’Dn[mwMniܭ6-V;aLܬBKːK+b,1f{$D;;Fi91[~I"jKW7w]ySk(>4U!([d Ywq9CaX*uaxyRL9{4Ǒmst6gjkH wSw90} !ۃnG_E/N"%32p+E&iJe'&^ 2fWWEVoo9Ҁ9߹Pg߿1͟Z(Ktew%5s+|?z#U9Q=S-nLV[\ttttͧc;*k@mp},6.Z oufSX jſ脧c 2^ѩ_#7E6.E{aӠ C#&m4*w*CP,9!)\TVG %OI;Jm@A#ȋ",dOJhjv1( @+mUWacKxKr.MZ r=`Zqn>7v9x癶'(H^p鍏JBK>k5:G\U7.F@z͂†3k ܯ|]MwcgˉRv;] lTVpZoղ[Qw@ҵ]~NW+2)dyǵ+_r^O\űfZݡe?1-TCu"h> UVTsO'n9k;_}MG4nz'19u|CWw_=:H>~7U<Ϸ~}.ޝ ǃ?41^iȽ~?S^|U͌y#,Y8n(n7l5lOFMl2jr?\>|woyo-n9)Z.W-7v0=V?{XϷRԎ7CN5@AS3h1Ut Ņ%m=^~; I'0' rB!xͭ UP$Mݘ*n/w4m{LkE r6sX]<4v}2H=Єi`jGRA uG e:]qWQ*/ FIg QC Ɂ4Yѷes,F@tj/Hw4m{Tan܆ sns[pGF;D>#|;z~Vۉ2'vbGe^m@=׾WW6cEĂHbQ(ʺu7>{'XI}:R%#b8@1~ntmFeM[m ?[KIقs拳gNy` pq}2Hw,E*  p*uE]z'* on>^ڳf1YEs+d)v1I d"j#:RD\wSnc]4?7m@ńrv>s k1W epX\97aw+{Ds(@]:ܩ}Zc)u.RP.(‹L6A*KT4FD+dvഺE[S\^E[E[&m/Pr/k 'dF0g *˻d9?2Pk`=ySV !!$nŒO% mv 1QB"+r%I̋AkTi HޔR1Pc@6@ S!7ZERNLbXR0Jy~d2p,(;rk4 6o=ˆgDj*{Rm\)i+皴[+ U@Ni%c֛Jxn RG~-L+ZQ=p{ E+ZE+ZWR x֖,!SrRA8G8Z wAݕ,zoɢEeTG '20Y@4P;nVΠTפ-Ii\eWN*,ZcH$o*Qq7TiP۠6v6mP۠Av9AY @ +Z-N ttp)3p<,uuW1EnRNg= ,eld3 r働W͈> ;*FǦdi 8 5j hs ˅p)( ?/0,YC ?:;R 9"t4sMpʪ H)46IEJ9DnDeTN4Mh5:&:Mb#F@?_79-&)kr% &&+8&Jnģq [97Zsoplɛwy&pBYRp*sAj$&JD ӁXb(%jUdQ 1*|n'LX6mp۶nkJL[ yQMJaѱ?ZDbmQk{=]^$'Aֻ!W/T0CQ\Ex"q%*\9|޵qcٿ"cR!H<`udv>,EȊ8Öl˭j*ՓɌaːj^{.s]sG1Ѝn&E f^o Z=L >M޵)ڨP,s,?;yg_>ÝV1<>_~ks9n2ȟyxmzc]/'—%^œ Zgyi ٳGӏyt|t)`m>䋇7Zw?K'Ыxac"L6ݿN٪d{y[k/=[<o%Udy}6mzMO^׷ *bWJ6SMVs) _=W$mR%m*fnҒx/fh*Y]_zm?j`HzpvË#RtTxF[GWOꭇnYYو'٭.=x8?x-9y-Vf@9IZ4UKTFU%uE̟Kc%e%n5*XM5V ! _E _ wYL\;oGy'&f2%|JŻSN '囷v4_UեTekhwYU1QbA@_E@_-t=nNh~,Ehhg/?|3zX&/J?ur|]ß?~kC T}5T\@ĖLbWh폎?W(KUTYUH[С6􉖹*uk@*r{Y/ bkЏGcȼ"8>>>zxkFzA fH1D[ *4L@La`oЊ SA]8#zt>2T )Kl}i%*ҹ,BL?޴kۋxjBwOnr&~#@!5"9 Zjc 4x}*YlRIqV[(LL:{ƺERPffg Y:i~Bo,---pۢnT~~fF(7esz"K* UU;@!OasЈB'm2ݲ-U ԗF\ͩU\QhmMP,[Xaz\Όz9r<<{}!-\pmxsJ!BN7b T |tn:X Hn Xr=ɣD&Ifn(0Y^P*Rqܶ@#.ڊ X9#): 45k1]"A`cʹߒ䊴dr&..8K>fS "^4sR[ijg3_ o-j`j@L3ԍj34^I8th;V1@X!"KT9FTqp͂:p[A2=L*ig򽸞<=?^] q3\m~Qr3%wM7? 
,}ӛGy¦V)e^luP?xlM,rGGvvz_^Kp,'`Ң<W:8{dwyJz[q8aԚH>>zv_pqzK Nyro3_MxG B a*yY'}STk)rb65ANb BP<=bB_fHz#|-Clɛ~> p$z i\{A*,]{C@U-6Bc rtpMd 2bLnT5 ,5L֐gL& w '3bNPb= 6>\ fIBzj-pR@ A J4V@+\o9*c{=Tb%[f)z㍄ĭJ1FYs̈:*yiƇ>7>^Y814&kD %?}XS +j{E5a@`6\*8kt͇p2̫EFކ f7VѢ[Ӝ1=ev֮'g{~fϛe:ے|ٵ᪽3wb\q[^|7Tbpk3"A6`8irW"ȧ<mPMJZJi9 jAVYwd e C_2e C_2^p"PUPhY%Qڤ,_pk*HSu%p GxҔEN`{fg("3.5w|ltJ'fܮ)M) ~">lz+}G48 i]z, !hu2:Ep"!HrݩN HZkR]*/@E0Ȝ77Kwd,ai!Fs{T,[S J3 A#9ɨXg!K@iنrD8' Pؕ!O""|L ?j$-At|Yh)4g > Mut$=OΚA3AJ[( ƈ<8t Q=*FPs%OA%:0%JNo;Zii>,ra`9wY8]+uFuniO$ɱiOIӞF>)D)7HFa 2OEQ,TdR_{ 3o-4nܛޅ^3$CҲ@F#~=1u`43s\K;K3o;V^5iZ[w\Ι:5d^;SqqqL;L 7w)PBcpG(|1y2߁V-ie|tvwlW{ԃۿ3 {~goDp=c:] G EG[} Y˒iKSb Xvq#dR$SGTMrx?27b-KE|(8ssǰݬ؅ى^7\gdrF[dds=.;0S`wa`n7F&zM46.4y/S6@KICBa=4K]%4v*,ݰRK䐲cdW6ck%K`ޅݫrB܃QSN놶4zu_ ]$ĞHRht$L)Ȅ Q.̛ Y62iOdI8U]{ ٕe/Z,U 4xeD4ږ eHfpSU5Vw\J*Bi[*EnA2PYW'uw{ה^_1'س+6}˪l6 EcTLر$©^{8DC0 ay3} Ɇ0>WT7a—D( .L$.c/6(ݞD ٧zyKp: p|}t)f&b6p"cK{1Eپ!WYف<9Қ΍M0DDa+~R|0UgD{vX##,mҾb'_ޥ\*3Bg0a&;|69;}3;/w)4XdA܆垷8pnMPGGӕGuxAL S!bJTj"vehDn]kfX%Q έ}$mu.=e{=SI Lw1RV'fH)c[,Ldkx<Z]7Zᢜx] Xb># ʩY^~ΥPTw'!TѕS8Np:'g6yץ((S(I4UҪ ) `]5 R"PϯRb-zC1&#nL>zo CU^uC-!_w ^- :[40kLtƹf /Vʩ\6#$K`۠=A)H*|WzqUԈ4{9#bT%02H6i>:i9UQ.W3@ԴY($Ui}"O\F#}G=);牍%Cyx_-^w Uv|PSן0bc[-w?H_1eP"}vfnp Iq?v$H-?DY oXE[@K\԰9CgB9!;[yߐ+rUU苏fBA`WW 7c׋3}ݦpW$~> ۾ѻkIA𒫖vpňM{uKk[`º/{ֺjW0jƬ0ZvbQƖքYamVoiBSmq%*L&]4u4ZZZ*+SxH|.tskc*, y[걑ڮUǴjm?YeYwRmmF] 8A/./K-mO޲i~k1o]O/q[7s>y1gg=ƥU;Iz8o=DX|(2XXj>LBPdX?_ ˧"i΀K{5+WdEΟ Ҷ$} ݝ#$GNV y_(:H2&)B}h)%J$x8Hy%c8R{vtqV=4e01`XF@j_ىpMNKBT!f/Wb/Fۃ>I|.j?'-ce1%MTB6tE9Py(ϋN=HI]do<PB/rMvǣYwvNu1}r",W #,J3`'ۍ'C'! !ls$WdЩ/A?7nWnTun\:_ci;tb5qW5~]VZOZzx$=\6g@^Ap7.}5T]TT YР̢/c+eCQB\9۝'Cփڻ2eMfΰ[)-g b(4NI)HbЂ2ݞ .8[^qf:V?K~My!dWɍ)iTTx=8{<&{Jn;N[iI:_'viAR.-6w2]я_{ē#Hf?W|a=xsW|LYRMXr"W| 䮟H>,I^e2ݷcZ2ڃe3wrưsw}ݕ7?{aoK ]oRӚ0&i;<}V>Q~lK>)&lk> I*%;L2i9 Sa7ΊI>i >:ܛ+JGWjmoݥae~NTO4&b\RyœZ$ Qz^rEKǑocL`2j|׾ _;kst/ٙ'On Z3;߾TK bJĹ{hQt3{vgJCByk KeҐ\i鿓eg$2h̰6 KuԞ^2pd"\y }F2:qvฮo5PL"=͘9oJՅh(9\F1G%$ ĚY *zo'2aTK%y{٤S('}-㎌뼡M??>LV'탟~4$OK[<}rJL)S2-bg KА- mʙ՚S][?|b5d]ݛ<%oRwl pvi#kr5dykKLB*B+}r PAI5YmYyry{j'> =M3p2 Sk$H4090ߒp.q{f~O d 0 sXFdj3~>J+9sn_.8ynZ &E< e&3a|U )Trd/+;,!S~ImDEVdq2aT^q '#A&wN5,=TKBq\,˄[AosTH' a} ׉i$<"|E#&أ|陡Ik [!STA0M$49')OqHrCzW6Tt)IӬWЀ VQ(9 f`|פxKK@v%>2+AJJy; 㲄ߺL|d@_#""kNЗJ'P H^z-R0ItqDOp zXz%.Sg@I$+盙۫ZE+],7c+f~MO;T@tӫ#yvgzLB7g\׭SWdQ6զbo R`1`Β @Xmyzc &}.vv6f A|JCw}b?mxoMMב{ Mj}ieL(BTiGW}bhڥ~\yO " $JX c|*Zi >mۃvy=gut1[4^=^Ch-e)nAk[>) hBt-Z6)n|GRA";?ZiXBh&/e.o~_~x?}YTl\qrMVKm"ѻIhB!߻f(H,nA.B *ɝ.2 (YGchiВȻ1zyc$˅@4ˇTw;E[n;))R uf3CDˬLUgZ :/Pڦ+v=Ed?XKLZ6Z|N[)D]!|jŸ,#Xn.kk m7:R¨5 `Ao3[Ofbu /~Q:Ո %: S< ܼ?mB+ ܉ ?D@z$@OQ; W8Dqѱ'ΰx'r_ #~0^}\5KJOeErOIT^I 9 #sQ:B3]p˧ZtFi7WYk,(@vp9mgrt&ǽT\4g5Sݥn@!0&/M6Κ5qgu$"&# yP<'EJ9^} WM^ʫ`"fa[I,t`Crk)$HcK@.6[>g@Ńab5,% -/5|; v6k8Ɗ碸a9%j]ץɛ7xPc;%>gb!G_(9m<:^Ǹ ~2zY'.T+TFtPq&q+y .j0?iTr \wjGcacåh78ZG1fK-ID?yQ$STxi8q"BN&vs$$<s$Hy^hyS )KR+KM)("?^s8j~ D|+M9/OK[Ͱ<x9S8pN᜖gASΘd3YĤmTJ 7B2 x!(`G݉ja>N?TUWlc)3tVbkjtdk[Lq?]ɚ@ZFnn7d}xa![b) 15WZ!g &J8EnLfs&\5E2x)2@)`m{M>4I1,ёF4hϐa*"fBNdzL Bz|w$yu2IAqhɻ*?8+I6F?-2Jjx u#{P"AYg$3>HC9w._=HʤOȨ1b|˷R+QŠJez?/u1ېPMPH':F 5J.&˳D`fi-C;F=Ԋ3 ވjHv y'Skx8Q6hOw_@ہa?Fu5q**6? 
raI6\˚ ֺvr-*ˬ$ 80L 8Q3KZ(,0^>4R Gk ,oi3s-A sz1?~r4ԣ&łz*2“bqm-BTnW+*"A1B[ R4ZX%{U%ME۶.]2 śR*'Io8ó=ALEA$a!׈ܠr"j`92=;=3ldu5X(L85>=@JxNu)C|uFє(/VUgd^*2"Zۄxpa9`i&66a(jNE0;~{v CKm<)"|qҒ!MJ̒XTV)x,7P%[VpԫPe{,G )/ѱO) 7)uTR@B(`.ۂ?3kHTS f2TR:O7t-V^Rqsi}66?˦a}w(ڃ)/ӭk'e:>UJ"+wpW |U|!ϿV':%uk6'ś*3>vK`TĶo: `ui .ޘňSFjD 6tgJ_h-Zu&4"d]kY8u&qBvmɕGas\/i)KWLC0hAKdqHozf'q@Cv)ʉ LB#pAED0lb&$W +`y{B6wh,r գUF8[JȓNwKn\7+`u% R(?~+vUƒ:F!jWl͝ջ5#b;Uoې.X[xCD+(+Z ~Be%e?p> s43!7."*Y>vZەqP2I4w42ˢnt!}Mg{tbj_1$6;w|Cs.Q-{0?^pUJJw\&ꂉNaM7|wnȦaĞ@yEq/|4k`.=,lXi/7Cvb<2ӁzZL^|WAd _&Iܛha66e-'%eqvGC(?>xer|!?#G yimѠ5{GJ܃k͟eFz: 3:aL~V'b/Aqnv|p @YAvj\\SO À{!U$&z|䱰M6&ƚ9؆jt!*B2jȗe"ڢ؍ĕZ[i4t%i҂uJ8wa}LvRѳe諈K|]63гr0`,G|w=|ŌK=dy(b`dJyw\K.S~nW:F!{J.z' ?[=P/cot8ꪸ5Ns˳*!8n!Ѿ"_M=(/3*)cӈ3B.55jW|0w-uad'6 ތX|[JnknL;y a/0;y㊀fP6%g`I;2ׇ_wCz .w*_Q0 x?twSVMy'+̊}d$PJ^qQ~s YEq8R*l2lY< ޲bV4a`*?uzmr}sHfKO[%9f8<>"f^G}mZǭgA/wfD{n((t[_q =o7vաLM 3#P+D=X,Kzn=H%eB0$4+g\7n?s( s4E-΃k\^~@Nɣ]&W>\?3fy`~prM8: fYnz^/oN~K>X F4WdĖNt:׿ޛthΡIgWo4^{ > n sĕJWfJamxeMGD5/`:ON\= AhMΓS s9u矧o~w9k/ަw+MVۨ4pB?8G;;\dO2KƅyMzP{t3S (FNAooYY&xip_ %:JWvR}ѩ>7q0KmBOp)ds_`s[]2njWr`AueJ=J3_s/2fj=ǧǟg?]&*h%gIЁ#ʸ!XxhAu(A1cyɤvZ k :rA+@t 9E^he@ ۭ, hL h𮬯28.6@c ߦ̺*ux%R<А| Jߐ;I(;-jn鼥9o>SW!zX/PsK X08ï7LY03}!ZG֊D+DQ'#< c^}1C9MNy~E(<1(X7<|隻0v~j(`hIg#%s;4Y"Yg%-a 9}߰osC)wuՠD:y!QsOjR6$y2! nɴ`1 0SKY"Kz>ڣ!@hY!2M?9f^S2]Eʊ)Ja$3RN0܇WX/?:xSi5pDG9pypER 4DLƑ4 LqF`VH$H0ts @FI̶E-2T b+/#E>#:)E#E8cF2PBB00F"Bm2cEVTڋy8!['~׏z /ӧOlq[͇[I4QŦ2f[\|ј}vs]}*9pC wk?/Wuy1S w<+t7VPyJZTZd P:+lU%u29X"3~4y[o4M4TcL11nכxRa;:M{%/znJuj$$u+i-}߽u.6NP٨FS%cMVV 5G"(B1 qRiח>[ՠ1GM}O*YO'qc@Crm [l^8rJrVì NdA+cP8B("/d q 0{~,& hFBh@M[ hǂQo>Ř^VxfZN냋2(#hȺ,|_'TsRv*;\qYXhi -ek֡9g 8^!j΅!:X %`n)8{ %\JkGo x qzPeWOe~K4>7z:G\|z@Ϭ1@ >&$b3)R·3[L!\ vCQBmsaoebޡ>CUk*\3(C1F>F"Bk5l.'(?Teq*Fbb0$ C Q!3{(̞qfO&]>=g3;g,'̞IR]貼۳7^~ٝL.چ`d~?z-2`uP{udfpHI8?_ܝ'R!)K{U<܅CYqko~bX3OZgl-BS)4uBS)4u7C*HP-B'o6E0 r L,ml~4fk糅1 ɴz]nG7wi{?8w + kNtTsy:oə|znLqJNJ!hɢ"k1@RB\p(U jQy8: ^h%N$2BA`M4G9+~帡!rҰjP+Ik`.\Yp-.PT Bl24h#iJ.e ]])RԋXo'ϔ\7OZ9:M$2-鎟w,/׿KָJh)J-` )pG\"BqWp!2TpOQ\AP+DZ#F)iT DrI!ې~E6=KY$ֈMpdfZD5(1 FGG5j9gR1 .Jz 3tr?u90q_Χw<ӏӦv9zfNˊW\|LǁӁ"}`/;VQPP$R& BGӠ f652f:g1i=>;EC*P}2\_48ElVS0hXS'9?L&8Vx3 P}J F ك.aJr'D~ٙJ \'JI%MzfvކM΋RK$F &a(9euUGiIb֡PiCXꨜB@. ڋ.K gB2^>phnt"/9#lr s0J -u rB-Ԕ,Kj2(u@%ń&.O72E`M\ [xO}2 :' .HKi#:[D%AG(zCʵB"j1lA(.]Uv#l|Lj |?/n|8#m PBlh2Cm[suAǐ|\H f*ziG77nS%zAhG9q"rm<3~7Hcr:&6JE7 ޺A)e~OjGLi8&`HYQLj D)T%Z -54S¨f nP-h  8c 44]6Q:th N%qR:F{U(fjc&AN&)JAIJIUc[[7] CSAat>Q'r⌍mp@UDs4hv:n95 5ͨPRBq3"CRS+FH%PޚAy& k@"SD*C@w8A8G]RLYL Hɺ1@/%.Bf)ARD@RTш9ĄJz &RߝtU6'& $Տu%417t)@+ѯΏ/TςU*N'HT@3u&TR0B)Ԉr(Q,jF]+9C@Nƚ4KPdi"gʦۑgPF&r&]`_# Lb6PBh.go\ `_kJ!Hgtr Rwr12X"FRmJ}6N@4%BfR?&^=?5zl:3/.XyrVDZo-ZEY(y>LN 'a717mәPJvq]M,h-/%)5[ evANR7o0왛ϣCjSi+S@lv0J>9}5˨Zby%TJ>:9S{geO]pgnnpt oCGO!3ɡm$=q#` bO,7#g5 ]1U5 ]Sbx{ZRX\]}k/k ݈" ݉ ~ʴ:jT"ooSu"pfC`(uyR bl/g_wbMLqVz/^t^B]`Or"e6_? 
,mx3JRm}kC/F Zs'ޜ=~/Rr-2U^[ťw@"vDAh {(-uGTU8xf\j!IZw,: de^'׍3ns-6cTrXR.yri!M,W=z}}2JԶ&d8owoGv; q0 w7p܌) fQSsLxdO/zt9}a';^=mBy6i40IQ'io(pWj0`ܗҕ %1\C83)0܇YmP+u}Ջ{>_^G?>Et뤏){Lp-Mahow7?=^OK#*Tt6[> ҽCo9 );7verL{sUz6bPgk9ŻOPAb T'M/oyXO $lOȿ"Oqe=&cy.]@)G uJ~BH~l|^o^5 vӧq<1Y,gt'lW6O􈵷$:Glov+f58}Ÿz傢7T ރ}'׀֒K_B٫^_ P"z.8h3Z+ݓ5ts2?z52qO-Tx; ӠE(0&nOh"̶ 4\{52ٜ-:\ʵYdiLImpVze֡ Y(=JAKM E'Q y&(\6N+hk0~x<}Υ&5a~#!k%/52gRQ[DRP2,)A`?P\Г6b^ZTY-q0g Ku¡vd)I4 e^dZc wJ&4=h`l2-L6(\MrQ{_Rɤ.];vtsw{Qv#yWщYf[tcg{wzuY_}*JtV[N>/5猰aM10ƗUTٻ涑$W~ݘ]G313>f^CQ昢$m7 (E(Ҥ% ($ʳr È_GQ h2}Xʵ-F2"@)TD8c(Q_ &79px: ͝I$?+k]n -SzW{3ѯb6DXҀ7t~`NL(4HISrzzm^CV6֞:gT;B j"D( &zGg弛 L-]n}Ղlk bä>'מR+nuq$:h"r8]nED;:Fe9ed*ȭ6h~Toa|j#Dz8=k#-OKCɏp|w#*ib?zu[^<^fHoPNa޿~4^[?yv:/x4y-!=c;e)sO ^3#ܾ3Z%LIx1bû.a)L&9rAihF2g&45\KM>x܊Fn8)W[N0(qtpQÜ! AU*; @"VsaC")J1:c=('?K%|VQ 6f` Sw#Sbjqh3ΞQZ *Ś'O>#f-3B!"1TZ<\ -h#!+Vk*qBx)ZYtn\N&h1F -9ě`P"HEӆ,>B-vΣ9 [UoI=ɳ63A;+p*ET2_!%Xmu>`(N1dqlaД'lA]g"㪍HoUW"R:cQ(ŭ Z . ̫6OoأL!S8@C2 `zl4MSr4haα*H*rtqErCm2LYmW\sÚ9#j/Q 9X 4z۽OsEO%Kdq 5Fot&hv󥆻ŵ8vm%;(Xbs]-:F{9<ܶ)'{? /+?Az3SO HhO9-% -feJ䄪+T_௖&t \AOz__4}.{_N握)'I/M{)/r+Xe|w~}Ic`9I &wEĨɥ5F25t"Tpoӓ{Z=\^ibx|jt _xbv^$8VtF+ Z$Ey(6v4Oȧ`;k)@[@ @!x@r;ۋAZOՋn+|pݦP=^_ձ #|5(0O-$8zw}WÅV{\WTJkmuRٶU7.yVIJ]ڴGjcנnb\}QW 8e5 ?rֻ*Odz%Mσ5L?;ˤza"*>t+W/5zK3hm2٤cOGk*Py_h'G%Rș܎ą/IM  ͭ6T5_]qg|Oֲ[&&y-psnMI&ͷ?;6LǥR\Іty\kE^DhGZ&I$'\Ei| 5,U13$3*AzEqUp¯{'|/uDH3(uy8YLe87f}uX߷~f1 Ǚ0Iˤ%>,8qBnI"LHvӋ =,nӋmv5< $ ߟBwd>-.ɦ寏ΥsIJ*NIᥝڸLxG(aW]v'ꔒZ\'`㪥b}qm 0T=7ƗN?jsqsX_Wv0_ȹyL^P9L+楔+l4A5O$)w9F%K`V0 Y:$`HfG,V(xnխ҈Rv}\ & PDv}Ьk௱pK:?8N4WIo J3v kheE-i -SbWs!R^[e!wxx0w&HS c[P. T `6VQp06Ya -&RNJGsRI= 1&NЖF!92`S_yzq{V^dUL(սKHދ[_B^_{*+fE>4WAiP Rެ"/X1(e @[L%Oʩ-o7֜=JYXԓ.; ̅g tޥoΥXv3?_FbSk ~zj9߾Y.>hG۫4 #TgOze7>mS]_0r~'9[l{Ux1{NZR%ErS͜Rz̩n[@3Jԙ:mD=az4EJ.fȅOSTg4!\'+-‘ y͚`o[+5\'sANqG_H'Jy𗕔ޥ`F\~KRDp6 c䋈 ~7WIqWIqr\a9n+w#B;EѸH ,d)8x`FkԊ|_ jꒄzwz5X*F(|7$@¦ߘPsÐ \k(@MX?_^#g0t%.0[.T.Py$LW'kAkT"_˜&λYzN:XAuTAAeZ9lD%ԉxw.Di\4ELI>ܢA39olf숀uéİk89"Q+VRhg (iR9C&BhurfԍqFaCb%lR{ AdGT*øfk^ˡ9̘P?<)07 8Эw'ToSQsa)unwHnN4Bާnl&t*yΕN$zTΎ]}ݢi}o/r ԣ-ZM侮4e X`(mfZl zb@L6AYA/3X+R& TCf\9u-[ $sBU.X5F)[1MukDO-}&W1f珇.Tv*ݎ.p%?GEWG|P8 (\<sU] JL8V/@Rr=\@73iB )eg6,Sَgl1h.qsX hs^ 0NO6HjU>f\rszUlpIqOF}rg;mիkϧT%:ݏ a2/?{zQ*Jzw~0ݚ; l5׌sPy&נ3_>wvSTgT^b|qo:񌯙 ^n$=z3$?=l9d=v_:hI\v)U[=8RUE"R\_`HJCr(bfj8hh.(mh"JGvoj(<١STʡX 5Ӡ 6ycN5%2HriZzs $rIX"ǠY~#lDǣtj`x<t/s;JGhWg#YNĻཉ};gn8-]i;Kn~2 GzqM>Z)eՆG,zzR'ĺW  ĩ=y3jfLU$q8  9!w4 8bɼj˼; sD$ y=g!$:=әXFy8#G((w9(W:~U̓X*<55ʍ;9 `$"ib` ,"6lAed'n$Ȼ))Nnܙ;PVa!O;oͪ) 'vLmV |΢.%!R"Yu@^M9F@T/۲2R^Gs X&e:'iE>L#LoVH-Kxز H} ܪK=\>>C3@9Rc NQ~Hk%%'(0MQgWa|م90R7EfY~1ZO~F]L`LTJ߸N !1nd% c>_.9$/(9g@8G*D|CW1D|CWqY,A(,:.=ҜEӘi SL*Ȋ {7t?w,?w8Vߕ)5$: #^9WqUtʎCYUPc,pg^KF Q`0Ƕh"Sq l0YsʒhN(;wma0r6UdBNTi'nSnKXrAMs4,aIsS.lӌm$I 2 SV ( B|݇ 4w:\Ý\?-Np1{60[ BaXD0Ham8)@ wJP@a)xwκLL.se0&'r“׾U `]m1RYr<$2XszR¼5 pdsm  RAZs]3kCIq*|3Sڷz֪JcY +1GF!oz#H2$PܟsmtAȪKIY wPcR.V0 K,"M%1dͬ:lPeLzAМ a 5\/rjf:Y nÝ3˳@iWGC)w9a<$x0\uxGgusbaHke3YO8Oa1zjh,nQJjh2.ë\xoȸj9|)G %%HZR$MNmRt*#* z&r^`-G5=>$%h sAhcpV-> IG6&uvQ:JPARd vA O57"‚^u&A]PSa' {'+e+ɕ Z!W0K뱊~e#>@HSTE'-1<͝cRڪj٭6 7U`t8N8~KyT f5{?<^L 4(޺'!ZOT)5*b:Yjp=U4Qz|"k@bhtCGP\8,2*a+{z+{8y3B9`V2c5j9 )ƌTBO9X#S⹱3dÆQ}1WpsCHK Ut W߬3˶tIlV^xyҝ_w] $KkQ9{8 @Y XtfMs-pwL"=鲝p91 w O{<rbh}bzD;eՉP4nQA>CLƱ '=b " 1ꎅ.Ydaf!6 .9.YpYq`gkg;gf5M(5NJ2GOwIVq "*Yw=4Oht{f5GDMAjwG%6֦Ƿ|1IP5_o[9Mc凥䇥=ȑ@} 5OkT^yu?B>BNl'iݲG%"i<ӃYy K3CcV!ayyUp8|4$~~X[~X3^*pÚ53M,Ldaýf!8>Lrkt뱚^qqw=<Ηg5G`vs'@5nvl #Z64*8Q^ E򔚎!.^cA%>^MR90xN0:LH5BIאc^,afl² 2)h9L>װy$^6ewkŜ#ԡ~MO{+o=TnAF%+l^=)jzr&' ZiX}-mBomQRϝ](_=[MmOG$VyJ=LSU'. pCmHt&`\bLhW쬤V`/-ls$.tFm! 
X̍j]'1B1CÇЕZ=-=Rɡ!qDzT*IV 3*OO kVԆ='e vB<l39 zƤN.~9VOJ3>0N\sm qY7Ncy2X6‚ JHKPs瀋=gĮ^z pD`͗!VyhNRge 1IYݞr ̿ rUGNSZFJLApTSQ*+v3.¤8~(*tr;XDLnǃW;LcH5`jߓ Ӻ o&7vሎ§OO|xY7xFN^|;Ɠ 2Nܬ1{ϊ< E|M' x~]0r}N u/jҳv+o\DsdJjniE[.);FvD%uݲ'ݚo\D) rGRL$EkL1]Gm18| F+v4){QKiogZ-%y&.x;3n{ǗUz.cIw^ ]:=|$#Y<93hI:G5_S}W3V  Y]5Hd>Glu SCxTnDPrX9 Ɩb $(".dLNuH}jgȢԿq*wssMv ŋ5}q3/C^Uaċ ڦ?77q%F[50ДV3\z9VOQHխ-D ~ a6]c6ѧ"V*-{f:-_ eERq.VK&Ձl9T*XԬBKfv̫Ҕbjf/ؕxwǮRuu U 7ʮZݱ `Wim]iM:دx)0Fi*,h 7ľjB+ә~vӑOCK[Є%;Ԟ|{8:> lެ?) 5.Uo_5UWJcJy* {/66ʏE,#J=9 L8QƢ;l =|fSزcb@Qɘ5A8k59j J_ gS Zg蜳u<#/WJ`k:y`'5 y݂Y $e0Uʳ`C6Rλi"#D4dzI!#vPK'Al6QIFYaa#8 Upa'9uqB+0 !ٺאZ, PCId#tw9G^3ȧD6?Wg),̠YHL\-JCbBِ0ˇ <>W-PA=-zf닧ϊ/it¸w4l&z}1ldE- xYW΍0|$ Ŷܾ D\RJ bLҋjn>K~ ŪwJ$rSيD@۲u g<}ք JhXtȉV^ mu{W;ٻ&m,WTMUGzڤrvIuDn8)(JjI=yHKH c6f>C :['-`|q,ɲ%*lMEȻWӅ.%?A)`'YLtpQNj&JJM\u Ob3taKN薦+&xq}*4H]|U6YJn~b~VWBc? ;l8J0xU& SN?=tܿ|`1n8jU fnFK!PV -=7\*=ጬ8E}}d ܘ@3flOwm)P 3T Δh@ܔI[F9Y\q oGxܐIؖK{gnKPKWfb۬IT2&ovMGNA`}^7;Glv J덻NhHCE;B@ el;|Tdet{9dz-J 3`Q3KN0 H:/=XS$N"w).|2kᕙK5Uw4uMOo=fSL"s+ZЛe`E$c򢞐@,/G9BK'Yڹ6H 2 s2 ̈́量YR+I/+֝b2BR3j8(BLf1g]8=g)p$„* F ^WxB;VLFӰ~<ƻ ' }_p"S:LU6Qyas˴]PgDY+ B0dJR %"~uS32)&Fk$rkM.tAs[М+R:g1b$aBF3#t!kToSϠ*IP ']?eEc%4hP'1T렵qI!q?.)I6TᥩBJozb;FL;n"1m7R[ݥ56$F4hisblT) @jlP ,:$0R-zS0lpOp[S ?K , f]x?>ܖS_8# s'x9G_`ԑ띏 'l]%_9| X̧0x.إRj6yp4c!Fˉՙ%:Cf[OqOli4}X,wwlЊ;oK4;M_{=,-EwV_Р)@!E4*darפ|TWm,h'S6TYj7T-m[)KQyqXcc 99 eAW^nRE0NCRXJ4F"-֡?-T5ߤ2p5YlOXC-O&X =QV'Z[a;^۠$Q B1L$3縦`EF^8`*8nqnnb5Z:rf;+%A=բ9nQWQZ"3)۟C\(!+hGX$ݎsdͅey! iqЄ:<'ž UC #|;()rj=0L+/m’"҉gR֦هWǫ:kBYI~KNDR]yl70Oʔ hFw~bsƃs;'Wg_5{</݆S+j1LEviS+` a,d"|o?_WrZq}ꃮPme>|4XLo}=lGlVEAA7$HNB!O=)sȅes-8r,QAnB9n5#Zb޶צaZN%^qVp k B,;͸ eF@B?jr1Ò/QhPAȽWbs SւX3^lsɠ,upeϴ;}a&?iQf7kĽgE4L9m0ĊA"P r_.zDHB()#(\Y%5Jo(NԘ`]_Ujfuǭ3z,ϽY(0NPQᡭJ㻘T? E׍( KT , c|w ݗmS^$CNl.XEv"CS&5 BD nfdh-g`H IC f q*ceKԘkYAp?'LAOwfN-\PҏP㩯VBī6xk<j!Є%Cw\{pj%t1Oi\1'=Jp &|S`3-OESӨKGIyҡ^YUNzq3(S=n774TS* LS)T4ꐃsHք EG\Dfl9:tG#80šuP<4Tޞ9mk9#vQsƈdm{=Ѳrh9/\ |ro# k19h; ָQjE`6)AQ߈PZW?~rb߄a~M7a^MQaFcR`&sXSk &D?8Lԇl 37&Ii'"+3&app[\5>^Gїf.ϾH<c_u>-t^Ip?3/J3Ǥl jdDJٚ(4j/!9)t/4J+`#6w\RxωWK١qDzn q|Upd]ld'xL^<$Okܧ^a8"˩KGʉO4Zi~̆P'%)8rA#+9AB8C1(\~h}4+4,F:61+9ԊCs2BNݜdjrrc qA( wO/.dL>Bl&z`/7g9?ž!.߃3NZDcR?DϩF:By̽'|(@2!zj=7+hVOW8Y'չd_4>?3ǙN_Hy1,y!Եp#7́[Ź̢3p'zoo4rIǐm:@x>"s}Ff]+܄,0:|P :]1B1+Zsa\N̡2;^dq,R=hRA-J('#GLΩS|؍nwuQ$tJҢ%obٳKS m6N9nC2e eId)޼9BɕcX-}yoEץQlR"6 -MB&ǭm];fh#FWvA,;'&I~ z]P\7Y0>qaޟK] ) 3!b҅%lUbU%n\G$uU^c>&[E !3 AE븦oJ3|v%ZGE':֨i\! i̟xŮ__wN/~柯8~:%_ɴ|]GxO}F/*Ǘ/돷ߟ߾}}j^&C|~K6 M6e/_mo?}PNkgC{盛~aκXp}91:qrN"q}FkLS/fC|:uk `mc8xz?AJdv:w!CF>N&~b M᧵6O^F]{_;__n&w!jSrtr~\]W~m&!&)|_P-"_B\}Eݟoђz4 w, d#GmJB9mP(y7M-d9\cTPrwm߿3;%C5mw|;hkM%(Sk_քbF_\DV'g2v0u0M#tƪ[/i*qAv#&ߌ7MoAt}yPyMGI+Hwd`=NbO}ᔲNv0^ȑ(&CъZly~ #aMM^t0{ht`Cvo}POmXoE/SY-6*?L_~1>'gf g/$Kx@/Fܖi7^PoI&>@/={Q isnhGv֬Hl;{y# `f%yF)^\r\BqK]^rȠp"P䋥.翅Ɂ/zErU| Q Ma.HIAo/O4C|BԺt}axSA-E ?o?@"/ }h=xKI dm?C^خSʡs`k?T_񃷸A̻H ؐg8<(^^JWC%;!&): PZ:R.!Q29itKNm\m\a R 0Jm0N#V֝bRVGI>1J;ty 0~4 4 1<@CBײj˖n.*" ֜wX\hamYTD6E쬋%mR.ۤuYaD&^K[²ؓL=1!g͉{Ǣ*/:{"TՔ:x(tu(4T7^(? 
w ?1ho~RZLUq$琏[^jXu*6A-dˤR_~Bl7^׽x /٢k2[ߚhchOY1@jhn[CUl?x+[j枆3qy@YzG@)QbvxiSX\lfNZnx}spF?,zxǟdHɉ3* & %72.i)v>l!GqP={֦*8+k%f\ŹH!X7opS_l-0+ʠ~[0x`m9ۂ^֒l2:q0`㶗R>v}`ze5E&ZǻˡG`8L,c1cS>6d~u QKҕOTKm_z/~웤Aw+)ݍ,n +g 4hWgw>̥0[E9(Ԯ2~-}QDZ0obdfTL0ޥs"j%PĘggh9;FR qZa͛8H&;qfcH7T(nGP|2@V{uYus}%~gZ95 s,C#\ƳBKc^N)bbCCU 615*Y[TFrظPlw)yVL0"As<WpS~Vg)'dzʞ%o!r nf޶}f\_5]p_Rqgˑ63M>5e&M[s||aa;gO.z6.eCjt)xx8J/ _=G/b "1>N063 %߰QШ >A͎ol}Jٙ/_ڋpPvN@ FpbP}]ԾEPBueTM5- {d.o$#e$9@T3b.ԓ`f8Xbjץ$ΝUV(৮p m:M k΅NiH >?RfKR0I4y '0;䈉HnnGܢNu2qŧ[k9<+_iKJO>,j>A"Hv:ɼETůVJ#P9P\tjSFO~DH&nfS'DC bK`Q)+t.x.dD uOY\{_¢dwP]A-tR*;''F0'us>]KLO)L>YC4X7Wˆӹ[&- 3bl>[EIHp35kc 1鴯yi_78%ݯ2_&+W6D/~-jERMt"Xʣ>72Ǐ+i}߻CzuT1 zC:`Nkme]'ph_MĄG0tl,wτWh3GOѬ;\>b6B-cӑը\'lr\/VhVDَEx’i>[؄R&hNBhBN"vsƃs9SE3ATFPPEKar}1`(?bɘj˥i>WHRNu:gx8 64qyk;sHJ9]@e,UwyZ}loX(%f;05))CB͎o<|pK &GG9"#M(n)gn% s%Snuy>?D(?y;̑G1g?~q6hIAtX6JOqY 29bEe5l&c5c-)QXh%nō7nY#/vx-0d &;O7=k;Iv8mGݖܔ(^qfKWVUOQ)Cfqˤ[e Tu.*DDeJ2 JOeD>j >|ū+Yz(:,"upę,8 ,>nw1J8{ڷOTg<ɦTw_o%=" ;`+oZ\ՙ=^Ael O$do:ژzl{/f6"/`!8!R/T`fxڼHPt>B.۴c{z>#Hg}oɚ7^I.>>s6Vk,MRkӳ,6=6{ *2*+*mF"c% 2)7Z^J%" Id|Xcߛ77(*li?)}ћt$k_P~I\ ]<^my{~ /;RQmhv+*{ Y&k)ji^ieP F@zoxFIfzg^Y*vV\WoEDT9sP%D"1K7 7:h%%EE=yZRwZNڇ=f#Lʈ`OpJ)b O$CR< Ajg ;:B%jij~:QZN\N, *-QK%+"%e*_i2qT by6m~We "`̐"dW[LjFTj+0\/YkMdT% m~x9QHY .5P,T\ C\VT +n<Vv_ȕ"2)jiIj,TĪH^@%AnJvFfIk#nCӖ!JBQ( u]HB@CE`8" (s̨2(SRJ I)gN}>Kg76ԂKU~rxMݍ> _6Gl Z/?-KF>޿Umj3-/ѕ80XON~ۏҾY/h}.f;vuruOw ǧ 6-&A>|'|J2F}~= ehj)L#Ui64Em<ḓ#9U1686<ՍB(lApNQ<N@$S68rgNz/V9I!y4jf4#ԺU= x>c(UD)Dc&L4(PA q3֨ʩ<}92N0W9t,ݡ8Kw(w(Ȏ35JqO`>hϣ(=ZU0FERU}\sҗf`C@irj&sc4 |u@L_Wd* a33Rig)-,EzZĖ@/9 }sU ;(!ELEQTP[DDʔD{s0CviqP0Ol~p}՝yft(+}+8^tSoI𣈷m_X<2!-J&+5QAF*9gYش bQis~ߖbJG0)9ke@'!jP)"x[7@E#A]&4"q"UF 0$Zs5ςDfoƿYcd!9;K(C5hf3fF+VxVy˽*ւd {[꫁dT1ƐgI.MыtR&գHCPQh>E 4Rm LoصߴhJ 4ڀfI:REN~N9KXfqG .|$6fh # oGw?6cMť1*bvf{ D+)"=7yB8=iEinfL@tD86<4yJA.R ٰ٠mY7\W'o_C|uL}?QZ]\(>e/#*OAضK/gug 7vb74q[ӼAIBr2Qwf{%9k ٓShby* \I{;Y.>yFR-Z[Pa --M $c[:SEfm?4izJ"X%L['M$a3Psr^+ǒ7Q3n@@Vx" N-3䒳Zb-`RbSf l'tFAvɉ Ȋn+aE'4Gh JY5ǘQ8t}{IJWCuTZP0W;qoЊr9ߌ\1U{NtQ:ntu%aT& t *cН8W3⌁}Ρl7.i3⁋>f|~85p6Wl9IO|tR.棸ʏd9Cxq^}kYeh0㩌vrYld2^nyǜ!)N׫1z5gLb3Z6PN\eYu d qpRU!L5"KVd IsFGR65~\1ތkg^ǵsƵ7Ә> HPQ7%t=zyST1N\Oꆌ8\P>;1 0)׈z+&v?R 53mɉui}jʓՁ%lu CјjONOO'̀=~h8@iQ*\1rh70=M:R[̹S;t^U^MNE;D9Ϩʼn?u99B=/e@+§z93lK訡" iIˣz|S-/Ԇ1߾\g58 ζ/Ә޾d"zI4cX*̉w'bNNÆ9LMB-"~<P87{㕌!*c@(Lo鏉dg}Mj89* @q% 9##2ZENESXqn)2 Օ,NJXae΄QPZ) +Ad(B|VBȍ#ާ[lVJ&(eEkU5-n"^68]N !KM]#_VUr{YKCjJ TRYN+-*C#!+"*HV3g"M]BƇY#@M;L+V+K=MZ/߇&Ϝ&?%In44xrk0.s|;Gj;rviMXnpk/h"ߤlu|*,-\9wī5 ^2^nR'V~ǟQVˏWq=$nMA"! It4H+E}&/e^*2̻}8y[zgUj~T9U;>{rK'~L N%EZ2jRjLN H^ z^ۏ2ZJje6 Q^R$l>RpnSK@DZ/~2$k5W5?M2:ܪfȚ/_ՁUY?33ZִyfU lVlЇ?<t{Tr ٘^xH7.+?ncT;1_Y֛ ^0Q[][ӧecY:iދE,<{ВFNVc9ȉwap9n+|nV 2d]HMϡ"S;PfBk},춡8NR0)W+FRşɧ 'b$HÓnKiHñ{+[Iwѕ"8%aXxuJcPeWhC\{[?)AyEa|q]w?t˾/IߝFmhФ'5Tңjb }gX 2"+O7뿧 < ɠ_닥{}Iܾ{!spzלN*FV80S_A.S`MFc676 \;k9kuJ$SNr/u>B%k%TKRYXHa) #cas^gyW}LW~ NI v"!06}rj/K#W3V=`5:cՎԄUv-c4ׅ4DI:\HmH+@io3YYptAXLR>aFw JE;5d3j2oxD9Z2TH:յ `EeGh%eEFs^:ox!Ђx|;{>ogYC>bדIy{JDLuR v4eRObnh[̬3vC)0a::0-$}{|)ܢ Tx 9N(|eog_ d5Eʢ4$N,",KLH9~1y1GF|*ᄤTS- :}Y7 36􀽻g%VԲLliȄ"ML>W`Z'DRY@Zd,u=4QFO ʥFzX4dҩ f2Q+qQK^ abF:atmU ].yYҦ~=bL$||{cE/k0Jၣ{*+B8ڼ'wOw`6_ colc̶} AAjVn3Ov!q Q_@&kx2ZGzW< ݷxa~s^ ޲E5+[H974 V(XFovYLN#l[AGbu~,>M#%kWxw[#t:n;LnД#{Ƥ f+OCIqBY 7\ =ܴAo:ϏJͧ?jpYb&zO@KkܕgTG˿oJљ8*`R(!| %jFp,Y!W˃-(鮲=l*BP7˻jwl8ݍi= ʥ;PGZV!~MW`pHӮSv`}:lk2 XݴZm| FUvWٝ3穝T˂:<`) i82/dpV#`M#s:}-1$5VZ1p8)3u74 ;`M6Z&(0&d:% RR\r\}{W /Fan1ƞ~{0E(<мAۤLPS%LRS a8%2.0j*BDk,Ic(:pa |VoWdA䈰$[vU'RX00-RceHI8I*RXObI2,1RBΖ4uffUJWꆨ@&PPiw(&L̴QcQ'qJ~XqY! 
c*ad5ղ__p#_Qd̿Ky@xHXW>Q<WM0 6?` mkحjkW5@{Ϸ;:'Q:Zh >/)H|8ȿ;29) ƃE-FBehvQK T:yWhq*azV6.^]JU]nc5u*aWA^K^07|O ~vv 7Xp[ބQo2MG.v{InZܵ^Ü7xiKZtM5o5bG3y3DJ¨~=Jw )Ȇ]F@%nJ+efYhKvuTJm >÷{kЈqXbjhBם_'XO&x4Z/0?NB0B2V5; y0z6OX%s=I@*2>b8"2+cچf-KQ9nI$( ixdf#:M nAFR? qyF)ok/ 7|xT]"Ts/ yߦʽHyFgO %A)6Ktå}U/]G.mqcǦtVQM'~̉CGyF1,cT߻DYҌuJCJ{fITmqtbĬBXxwdYU/rJagƗG'<%6Wm+c٢k,$S!K*Wna40%KօH1;=)!cU*~ :VC9G đ>pc#D H8WXa]U(]p+ARu ) k@(س|*]O.u kZ}D v&i~ߧCz^ Noßi^[,?xK.TENP* "kzӦn-ẘ t| y?'H*?rX'J<VŽǒí.9PC95f9l>r3gPR^ſ o-Tzs됬Hr Z/!5v5PILz';r' /C# C43t.8,]I(RM]RKXȐ"Y,ARZIb(>΀ggI5BK>G&~>FR^Q Gpհ1N_? m1_pO ,ˉf9M,Bup$qZ/'^?{BRXLtJ1<ޤG f:)L*)I(Uq&lՒaZ(HTE% Lo]9q3:AYZvX'7;tCܽWy+!a\"*XkZ߳P3\Mc m&vd)[Q/=@%E"UWk'8TzI\flRfJatw-9L$q4DP:ZXF$ 2PM+08(kP#Nk̑ Hpm "IZTу2ǂzݑIj 2UYs-EKHdee•HmxcO1T<KO*g*{HZu:ΝNۼn6 6ۣZUxpK)o;P'$!8l;1Ldai&RAE\+<\\sDMƅ&!p.LrRBk0cեnGۋDBIyF֍ ,p&wޠԟ?f~ģO_xw}fh7"-1vVB_/~{[j̮3#b4R3~SǍDS}V z586su$xCRuH?!jj|%ݢ8qgI3 W'5#i()c!#0DvrxoفخKNv;|FХիw^}GLf#zO-dJsu |iQ\Jr96( S| T+0f %DԔ5XR.l4oVg7d _c8յ3JG'=%VTKZL12>t$M۷F{%GTzO$*Ia\ =vRywq k b5ZӸfxvMd0G4pQJ;~9F_E[+|R,Y.h[+jv'qS jMu_dx{7+2Oq*eIEjvnW"/žky=^*o?c< 1ۻd6}Z{/%ْRxqJ& yRY, 6Ha80G?M.+<]%uttsZJkwAkX w;.O_>y O^YLB>n  E;I*(c)gvR9)\/'] ,'xM<3x-gɷ: {ˉֈaBo(=az bm`?[h7YhMPM?2,Lӹc >"J/@iåAuh9A;#; -w3h^Ǧ rٚ77+9eւ$`OL'#s NPmL>(thTQfMmm%RВELD%QlR&SZR]R}\7x=xN [DsX"~Le]R3I698QPl6Q* Y'"8婍c${DĻ pYJkF;02ʉ" EB4Ǖ<1t_g<Ż.(WM`1z~P|ġ#Lj:7N5';/\Aetd] '$bIEJ!HJ)Oa7.8y߀G|W4g~2Eq1nH"O2LRƠ\Uʥ@ٜUҮ::NJ97띛͘'yFwԤZ-Z$e\2ؐ>Lyh_8z[g¡fJ| Ǽht;º@(֠b^.(/C~ik8@<ǠOV+<=0!@>wuyMtkfF*˶9)z-yje11R #l~BB8WHl='wk.:SgydpV 2K"&CJ}{uvo7$]ѶQNQ]3k!?Za u@muuT-qLU$"\mMC,PPtݸiI]\%e={F\\6?ijg)¬h򂒛t)ZB^ßr",f!th lٛr(> l _K/^FfoʍwemI i:'IpLXaQ0ꔰ"Ai߾Ydݸ 8E4ʣ* a8?`RMGXk{RaM6d +@.mjE kp(=b q֕p,X:UYbmnǢE;`x@%eekBT?VlcRX15HɷWrk mA[|;rR#|{i##əq4lom/{b8~Z/n{zN2ƔrjP)XZ}<}I7-C[n5@$hw%3pu$`[ό[ϭ>$zu1q mӹꁯ3i\+N!h%mXO.M19.n3ڥן.\s4 h5޶ZA&/hWTCur2 2\ɿrZEweʗeU==.Cy;oFT2_kdݕjЀf7W?HK@Ūn{o8i !(m+<q@u)e%8_EO 2x3"MOUӪZ9jݗiO:N۲!P k~>Alk"44`'&<_$9FV8`ZTYVCfmW;kŖvVdZx2d)- o̒g;ĉ =,eUz'RH3Rn,V]_{$/`H3]!RFJw[;}mGsEA)}HnpI;&I-2vmvlp2j +Z9{??j}q_be6hnڥ.I7VBYkqdeM)D9So^um"NlK}?t_uC)Fh*`+ǁUJ t/.NJ]MI ~UnQ({GP٢Ld1ѽb)pQѓFs^F䫟*u5 UY%c$谶+n=x?%Xj?[XAaD%j㳉C1t3z,%>Uy>UfR ݐ4z~J7owL׏0s&ywquIE5@ F !ɿL ,戕 y [agypb(Fr|"Ccz":amTs*Y>6lZFMl Hʚ|,MwL2$J3@V2:d P|:u.&kRl9c$+-T lly ɞ0ύ1D9Z /Wbw -wV@'AOx3@"gCpG),F+:BwR FF9$,"^oǿx >1x߽9!=ޠ|gK^^L6['R?HY m=K$X@KHAFKb:cM @&)|ӛ 6;9껹t*f*"Vp QG4*Zkh+j :K "P};E2'e30e ;haަ-њ൮6$ +MBdu#KKInȒsw艗_κ U1^=(tO?1 .}ʫџcF&sÿ9usu7>]_\DHWJd\=~syW.#%Zxz40ߴD,ȹ(x_,YF!)X(4 4BCZù]`I6+K nJngFK S\Zۘ4$h>L TS Fhc8t$4xsE`ƇE5^gr S_>;טc໳Z넦'2׹Ys˿4[ ^o: ;s `Md@^򌝆i ı{t]}={]Yy [A5A'y603ɏ]8)m :PIa̡Z' c]I~edt]|YJoݾG`ӡ^v52 iH =BJR@ɳ>yNYzUF͑\Ts[z,[e0 sX_%yAE+XbIu$-̙ U4'0#N̂C%myVN%nwR pillՠ3  lCMT}.ڄjw/^@h\,[Doig~G({\xn\&fD1EK]7nxvͽ-]mp:iP|][ky9Ju48:` =BhցhݶC7:*WV(2Y!`s<:&_1X'P'M% Or}Q9&L!@f],,qȗ{!2&iW:Zn0`|!2d9-d`)!Fɹ;fƗ/5Kwo*=wF{VkޒYD7~`-Υu沍/ڙoa+4';ßO!*2+$d0̑|;aQ%B YS ',XS&3=wvF}\ ӧ*EvRF }ouܜOg"!-? 
߼nvl'@"/8›:)~z7>N9(:DX#7+X1-srbI Oj2gI M̒7ff 7R飴E_GoGoG 1lC7r{LigAv{h~.qwa'HfuL9-DV]dRF%&R,2!2Sy猭=ͻi5E %~hɑ+Z|e J6 V *c,84b)N&Ks%в9 0ӓSb9-&`t B(M69@ IV6{I?k92<>Í{] S^7MTޖ>6DM  )11r87"{mdzDbÍ #7niCo3tGT7jXf &\ИiXԟ\t?̠TFVJRJܽqMLRq"@e[ۺ,d,#-ø&TG14ʰ2JI_YPdj\S3MH<Hfʚ#KdeY;cLWtWeKOcVB À8FrNI$@ak}Su8/le'څ7v?De!hq8{ n;*^ ;{]막P;8QZ1%oP8;ZË)Ԃ?rBBx"Jz }&9_N6 ɞQ"=̇R=7 Bܔ8j[~~Wܻi7cmh@q #N>ՎRlQrDaƊtF,bund:d{2M@vC-F,M&(7n`x1Lخŋ7[h Jly*~obnWP?žwxmOb ֲd'^#&u:I;Hzͪz@/[щ3 Teho;/EC#Dr{ 0YCxt-hMtI0* ,CIqϹlYDUXfD-V’Ϯ9OKf5ﲙ*"ռDi8s\Ƭ;MM#1_.RCr.6Q(inn,oiΕR3 x$^\*S-ZM"Ԫc"`?jm .J?S&, LЭAX!N[Ki}r_9,s1 =&ܝ\( УX$'S*ۢ֎ orY ]@LaJ,.cRt J,mm]2eok47Ͳ ,EbːV;3ԊeU}l%eYUߵ8$3R8Q$ UeSNYM kƳ!IRڣ;r,FauY GRǟ %Ql; ӌjDy:klk\89)ld#I/(JgH^гvý=ς9BrJ|x% *iiyUFLη0Bcou-OPeJoiHZeiCѩȱK)0)e` 1ܔBV(!,2dBMKɯt3TUXJom5̝S@(g֫y0ʖ̓k+qbd T!Ċ5‘cf@SG5GSdU 73+o*'4o n^"ǔ(XXqm^Hs<klCoQqM^Y04LD?*pfԝo2dB #Mz;.e~xyp'Qe/?/wޭ^Lܿ{[X3ܯ[2; o7W[ u{ruvGoNy u[zX}S>4Zގ U".P-sیbyOGw&3 z[lϵwwpcjü?µ_&Ni4A0 ?z90jhxEzq1D=~l>}hN 9Sq>Yަ!^m Ain8Tã?-z O|ڙ'~s}hw~Z`7{~_O _^K|(g3ӻC|gm_C>W+5vaGȦ4ZQi<ѳ/ Rd1xs\b9K-F9Ǖ_C.U0PaflE^riV1:qC[D^Cafh V`< &JT0:$XCPSNkQҏ5'(ӱJXe(bb`pN A3)wxH~2%q]3x͐ G s <0KNSXSўh?#g[ s=39caYzDek3jE?0#fAO/Iyl)&Y_+O2Y`3'Lg+v2V V#MHsQbrSyRm" O妕Rj<Oi:3社8wd2HYW/qєp0h*cŜtx[fdd/&.aFnRh6!`d=Zr|hEzPkZhnN"(_o)tN3m5̎MH? Dx_ւSA(֗ƶŃs -E!•י $yrIsThNE !9*7^s2K-[~˖ۋ(U::CyBK@`$қ%I(s(?({/.'zz z . ÑCѺE}p%7b[O y@Dj8 DƃCzxcpȈ mn7cIB3傉 Fhb<27BSL` d1~L6T]W/f_Iӷaª۳~/ޝߦty8o.On?џǷ'ή8s_㇫?nߝj>ZD,g*INdHG y =JY.ߘz[*l_8~c!HF1| ]ƭ`1z껶m#l4 #8r)p^~7)2[`{Tz-m*FG[@|@'x9c` /_KߗA%,C9FB #hyX;#˗/:__Od3 e%fdogŠ aȀ%clyrp"z.yp&rk " ~w3pAh!箌}7Fވ{#~ڈ ΄ %J3MJXR5ؖR[GrlO0.e1'؈O2&NR2N'{ܽs|a90 ߓ5'Xh 1`-yL\- J{p[ň斍s#)X.;\M2l|t* %W"N%ՔR(z͔QӚ4=x?5` zvtu,IƿwTDʞs!Jp昑bZ>tQuW 3LTOrQEDLI94axxsQ)͋GK RI/$IQA-FV$I)2^ڷ_J ˔$)BtmY$ˉ$)S.J$IE1IRrxYqj}K¶L|@-\ f'k9j:D} ]_"NJSK'f%ԭD(?& $R$T-ovfML^[˔ڄ3"׃"P<iVYlDgߪkAc{W r86، ]gDhT;Z#%d"VJV^$Hx2h*|x@c~{‡7a %vL[=8JZ<6qQkBAش#rOTܺxuz9kYa6 ]['\['q*5ۿMFF;VgNN1qPZ:e?㓖'n?khSD?::!(7ehaHdv6~؉:F` )3"9 ;Qql~@#kl:LNϵۃھ5Bֹt&OH\zREe\qϱ1?i?ݫקg\~a|΂7}'PE3vIlӴEt2wl[_O/|`mϩvCy^_O"gOdgFeMrZѯ_ѦK'=yzÜr}~`SR0gD2Jgt7wT!ᅦbHfntzNJpg{b`#Ww=e3e[`` F gz}3]̵ϝa{ lÖ,c9&BjȽ"+50+3>uMOuizA`WCW%'RXy|D0C_x3d#&jͰ'8X%'"PͰJ/0"H@LܡչG1}wJxDvnk-hB8 S|[vnbʖx%QFTҟGY:{}32 G'0p؅FrpgU *IJ6PI靎$ه4J<;V/;a; ^ *TJA2fLduyWۓr#:r#r)8*` k3>>'(Sʨ"qMaA@"(܈ŠCͳOЦ$PGIޑJޒG\u@\Yosd}ͭGެ8jvz2 P98leO{ S@\`(ft#t)lU$M*Z-Ia@vӳ-*8} "j5jKo'IѮxNb6-'1Y빑2ym'v1:R# ^gz6N3Hr` KSRbՇ_ppluyVppmZXqHsɫ g3RyMiu\^yes79&1W0Y>8ErG us PNaxG1.LQI:[;X0"aB^@!# 9,-~!h@ % J*DŽ7UĤ`~qƒ@jt_:33:=e PTvøqk^7l߆3NX[4/޿13w "Yph4ӷ$ "Ի{2rܾ&g2Jb?# Iuu,DrӒCdb1 @(xK +@@L!d99R9 Zi<`=eah0r(R(5 S IID0G</Jq.(OtXHR_M-.-a&g-9[yA@%6@NaNmIX9P).j.wu+^^nr»qj!fCpԂ=}2BƐe)Nx/yNCNɎN?6!v|M'LLƆO_qݒwèv_Ǒib:;^wOߺ>:o{azj,qqYA88jNq:ɠ"j00}48-KGF+ )Rsch[c5@Qu#nԜBf7ɠPk"2w˷tֆ5Ƕ} ҥ-(8} dX(`rys;6b;(J7b1*QVQ.^o)Ӄ;) hSn$)ҍsX:c"G Coc5D!Lt4&{k47&Z x{D _tP\>Fu hxM֪Sм(ꉭ*Lkq e%dH(RzQ "b@8;[BDH9գXF"#Ih'"c`{E!הb"h[),VV{uɭYEZaк~K&2zyye%&z`*g/+xLÑ(%|ڷ >vMQS2<6}5S:wn2}ʑ2x37}܀U%o<;S7n0s">>G |DGmen(ܔ,a:7mZLt OVH6Ku6ƲSؕk.Y+f"]nɅ)S,( ƀ9c7dq*H X1zrp.]l6^򎝆݋@w{U."-1ߝMENm*AX D {n6?,h8+JWURim:lEIRZuMLB*74{6ln+fZ`3ʬt\.JFdFgXrJݪdf)s yn+D`n?c^T=j|#/71:q+_FVXYkҧnK읟%/ԑͮL|Ƴ+u/6sOz>f}|nY{H$}݇:uh $:^\6wp_=`xpxksOުv "EE$dR( 0|6_.eiy<t'rsƢn[ We-py’*/0nWU TOP gZ;r;:|!gҴ CI}F\HF(Tk@1i`Lb5|ueQAB"Li#3qԑ`ߐJO! 
P D̦peFu[rU>Mh/Rqb2#U?4F2t6֬BH qBDjeY k?z(j ulMaۏ8C 2kЩUFK؞E$MYx8JNְnV\B( PBNy푄.3c2 r)V|>].`) }DEb:; 94*6P`]|ʆ0|ɱ{u'(!*}ܞsй7##GS'!p۵ ͌?WFLgcLU` 3W SIhPOyvux:J7w.mΏ[MC:{6$ެ -sMѤA6VIL]fұ*3<+f^ʔ;7$&ܦ*j5+ȒM$CzR3bb 'ci"bZ2E=VF>5r\年Z\Ϭ#1 ,jM'Bu?./o*oA8(#'N9+>9V&0 ĸ1V 'j_nȌ- *d]ЅqQq'JJ~r:\E[wϴR"/@*O@2-y8Pj8 ryU Ռ3DA Q ( &'Bw%|4֜ 9^9vCN=c'SCs*>1ͣXS7I&n"ha\a7?wGu7 ){I% -JPK$6%?|?/fy^&(`^JAq..fϓ3{A=tjʆwac4pܿ+TRFϿCh1ˬ\6/ׁheE:5fSe4UJڢC}+mU+ SJLҤMgVXØ6D>\y8'9pI4,u9S_mET%]t|~ZN"sX_%z ȕ*VRhb pY@˜RGpNL@42ٲs"h1:6E%[XXXXucjV;[k"2 UvauU7&# r74%1́$أgАH{+K/^ Xx,\BN7DN(%A@5Ũ(|=/sJ6ن^-6tw#y y})DtWi]?]QCTW/@W_Q7OLx;VݣV MIKާᠰOlH)uOp'r̠k'BiHC`iZ',QP*).fłA} gCq>`)xO^f".6fam|9Ƀd$ZlFKM! ϑcg# >Y " /QPDŽ"x.S-O\ZICTܔcj)x =ِ+M"+-)H$i,Citb& RQH,$9rq o϶lm]N6jB+';@+AH&@ow:yR߁>2rȝ &mحe>l[IBX]<&|}˫c/yma^^Hҷ/'Ofq_.Uq՘X&:*w܈)EJ kWM킦r3 rrcqf*[io'78.*8X$fwɵ];:i%̕=HZ6ݞ\Yي]|<U7^ahI%{K8Qbv=bkVI.|Y+8dPjEjzxyb>AN3C >m"7c)6M=6M"%CӚ;rJ:Mto!Ymn)oLn%uZG;/_{. b`yXl9q5OM+XXؓsf╢MƲ[iA,D4J!| rVXcHk PrW,| ,g54T1OX⭖ 6ۆUDi5Ui>y>y=PH&"crd1s"0kF֜\S+ȣ,iD9мJcX3JӶ9\G[ Id3VPt(hMAbd[KV&=Gq_%7y,>+$kkTԄ G7J`@cx&3ѠˎKrv)Ls;E'\ oS#[ka@spKamPڻD?R:HI .}hu $A%pp< 32ƍ\ggr6xu 9I +ABzl! #Ni,,3ZIFo,;qŘeFPpp@Yt=V LٚZ+{#97h_jiSlkrŠ40z1h(?ąRklX#ѐ^XSm &6*{\m@)V+oCTYXVH9."{l[Cm\Vfo%ԶӐ 8Ted#5e"7hȹǛy[zhh4iW -G}<7lM45i=xk-V{didW0w$P\-ӹ@/miC/:@liJ?=\>f+ER&k5ݤzG7{]O_P妭ȔܲB4D gC\CiڻvX}%Z#F20S)Bme ILnjASYܢ^|<5 DK`Cڔ @L,C(ΆYfEyK\5lЀ5K[g̶NesOh];ΕԵ6+OqiWӦ3| 77!oY7ZaJqIg|+8JkNkξ_k3Vu凍j!}|!P6UfhiO?O4?58fȑMG =@>ח_FS Wf{wŒ?i1dzayf:->v~͇hњW<7a_wqCW`OY]gb-{n?8nf/g .EڇZ˲Wxw엣VXmiܡL_sp~g8.n-0m' f!Hk`-)I[A`$ý^M ۴΢["c'X͜hh͙J0N̂myVN%nwB c+V_-fy,*}Br2cxo8*7y|3߮iV-1uc=dZrDhiīaHod@YWk3n~HvI"U) j_I_0KK:U;6}'=/?lu5=͟Mjq4gE~]&I MEZ\$be'՘pR'/ܖM$ӣ.ezdD29t!ǟq*69WLbEc%leolQˮ kqĐKp xabv2؋W NFG&[hi40JT!$Gg`4A\i糚_ fYILo0gMd yF C~' *Q:r $ o[{;gp΋Db xr}Y#Y4j O!\gflou/:}WwR:`+X$ʄyh"eSb T|0?MqNsmoszqX3Lyq"̉bA0 Sĺ@+?:%93`uyS<ɸOQ(n׉A0L Eo}vfah|m~š1DG9JN$QtDލMi $J*`XD+wVV(Œ J[1n3v2vuF-4)S ,hҞjYB& sm^7 b-|Ot ‚Z!꼠!N`T!CȥXB{g)JP`e:?l:ku}9[޲$wqԐsvDugQb+[= ʁfQ5c8/l;=jl[\In)H”.+qbU|6diΉEǤƻ=;oc5nQt d4G_rԇcY+qLE~LZӆv^kwpppi&QqnXepQqqӇ]+eK>>+vA]3]~p βy7sTvZDipLqTr\ `hJ* sU@XV1jHݷXj.Cmr%<>v'brͬsÆXf"L,f ,aAJè4S6PK`+Ƨȣ>B`aˠx6ENWKӵڣ18k+0G'GZG Pxz,ɓ9̋Ar@]R(ZP1qo~ZkQBDk 1 HbɎKdZ|I=ʫQ+4nڝѪx+.\XU.f"Z *ѻ TS,Zs{)j4mX6P^jJ#\ o?0Bhe*$5bC;q+ac DRuir_ S"LulKF /9|I ٝg1MK"/8I|؈ '\Eۧ"%??CIxP:x'hڃn4Pql1Ubqә?xF ԔOCUGXq/cqZzyY+L5 ٕs!},Ik3p>hq41+K&Nw a 놂d=炋;InpUP ..ޕvBХS)_0];#(d3(όu,f;ؚ%ْ I$&On{ o2S8!%c+HK;;u;w; HXޜ};+.COegkwRRA&؞FN$A Ȼ);%@B`"R&0C( .7JJ$2""V{$ {3ZnTNc-Z(R ]ы谁u, 'IjBY)s1DRv*9 aa#;jmtua@粭 ɧ\*UTҦys0dGMAZWzoqVvk\hHˆX Ow)gRFeTHM5Lh̡`b\vzSn vZr[I#-lNʛ$Vf LrBoJX^N$Oyc'[4L؛DnQY0 a. : GHV;']׬D1(V=ɸ?dPjWawG;Dn8bxJ6BiM [@w8l7GF͖/`Ƣ7ShoFPiXcb0DYy\`!Ӝ,3A!` KeP[>ZBJO;p M૎yr3ZfRA!$kIuPNPLN\wlxFEKFAi!k3|ϴFyf#bڵR\n24")dƍBZlNb< >Ay0{19@ DfbΌVzO5' *ֆoa4"K]*e [1g\*翮`)kZ Q&nP!+:;/$ևn|e: _U %3Lqp1Fqu &_/,>݋Gb2 mïs4.F&FI.E͒#+d\_AtқZE&^|7y7'_^/^18LƓ4sIJ ~Og>ӲOAMo6vazտ} нWÇ0֦?w?En`Uϋncgcؠe)"n_M(^9|_^EG&kkLò{tx `17-s3ޡ )[R$ ࣙDME*6C$AWYDW}K>YM#XZ!L{a8W4??ÇQ* 7ş i;#ŵ?+,^|W[ar][s7+,=쮷4Ue>8r:N0Ff"yHJ? 
E)r.Z>th4`:;П1:8Y8Kw/=z\þ/{n?`H%_s侸N ybGs_fW\)౫dB,| 6}z`Ϲٵ& coNgylNһѠ=|Q_~8gigXs5Vq^̸͇s.gyM~yn8ҚL^z;dzlS_FQr/^2O,`0TAj֫z>N:{hF_yI ;8v%9g$㋙s۝s?|y?%θ s{wnYZzo4)Jдm 97)}?<${abѕ3|Z*dJc܂!(}1 d$Z^ι}i3vGJsKHh |,/p};IaY$qǣt(vįRxHVQ¥Gǽfĭ+S;ryYJGܭSeT8DvW-B0NEhEI6]kR.-ޥ4{t֬@2ң] *I?©6Q~3APjhBh479`I<#bPU3s@`!ƍ"FH)1k13'NQDa Ek)I.ќk1@?hE=g3?q=]n\-!h:GȠxbH#a?AZ@kCz}& ?wAmmnR[ 76ZB ^p I7ˬ@{pCW/Vg\mV9aa?9(np}.ـ^ojp,(riݲԈB27=8 JY״m5XجiyS&`MNɹ>gko؅UR3X8][~.5tsfY?ԡff/ bz`؛?Ҿ4KcSbϳ/Nӯ)Z49x{D45;r`X}o6# ;[-mTѳYnE#BbuU V \L.105rNcA b%})A`߷F/o /!q6CPG}Vbɋ芇m6CKυSq}UWJg =>]- /ÂbpEeŁo މ-JqZBO4ֲ`pئN{/͑WR_Ԟ :*8"{q.7QU&.BrJ9=x{vܴ1 nܞ!7ICYqNlR`U~k֝'ױ;"4ǔ!Z:ޘlmyDtu 0Z!QS:% m;*hG5 4Pg!fz1-@-K&9޹B;!dm9bhW "dz|C .׊+$%ƒsz;ӌK^JP~_Nӓ#σ ӣi;lK:!NfGl#+[ԩ>.>-aӂsFokL,ʘewT&zT C*WI憓QS=gR & .b'|zQ\ #i#(UD1&QAB|fI@m'"UD} y ~KZ)npMxh/l?7+ ;nYuz-jP xoҧ\kv 8JPCtQ!]Cu]sЩ:ԴN6Uʡ[-P-_]ylM6Ə~ڮǻVѯ TQV}s(\ՠ +%̄-D[ mᮁ@UEB@ I *3p%fpb$<ϤL DVuũr/?/]N=Md |8ntL DJ8 $Lcsm6joM4Xv(1Rq80! '1Di̒DQ^zArL!QkfrXp b !OH,:1S)Yu8K$(ihŒILxth$ H_=`̟oLKDx lk+{ov|8"1 ]!!ՉxĐ վÚjV)U.x{h2G.L<Sd(Eqqp㌋ 5FMNtNCZ8//Npk}T l rë2UBf{,CE+6ޟ B"7HV~A0nFcdmGrMQ&ֆ[Aozl!]ߊ&qvW}rkľݶS+sW<`|ey"wHvNZꌒ370ApOzחaӦ>:e-=V09CuV)$oӍY\R-6}1Eq c'@ D- R1vQk@ jJ8ٻ6dW}Yba+ͮ8~ ~ZHxzHQ#>΃CA䰧TwuUwiq0vk4!8RBޔ}?Mԑj~D )v?a^v#x<%?qއ >8tF]!lxsf { 3Q7wֆ,"3t<#HJ8U3ɣ;IƄ+ .qb 5 ދF|@`f~ju]_d7s~Gy0'#?0\) צ0xfY^8|4 i*Þa c JfnMws5$XhN-DCphRcTS$xD08 b -`!wIO“:Dr,9,L`!HN`1a^ED T1TP!u AN%=XVbkvIʦ9O**ҝ#j6ɛv&?zյ><6qAs,$?+<0}67}6ko.""C8+^ ? |tFsv3gsco>zߌ`}?IM&W"U'zɐtZ95-@Kɮ|ZV$_ߧ? ̕}qovNBQ(s>zsT9i!_5 ݍUG֭.eNwTnt}$rҝu~XҺ51B!ktUܷj`'rB)pl!ޥ[vi8ۇ! 1p!zB2-l+D/yvXP+i.{#)1p33!HpFz7x2k%4{O+9hs&`83EPKAמ 2a0{"a |?!9B^v,7=\"6n=>}_aQw1WzΣ8BVy\5GqCSg{D< 5C e_m(@kՐ渻rϊ'>C.i;b5; [8\tŹKŹKhηF*^ٞ}DRAB-ajvam=p%kVæ< D$(ҖljY`FL [7*<?Mv+'{9\s;" !jI4&k]"RdVeF$X~"4R%j#i R$(B#E 5#&)!?ʔ(ծ)Qԁo! vK/ܳeu7UЖL 1bVt/ʣ@Dnv+ݭ f'aP%!/ׄd_`-4>M O]=QF yqVAj!S>ZO6ηH}Sm:~݂a?NPFEΩygw-RO i6ި#Z(pA@uŗiǓ/EmAO;ܗkVJ\"A V nS~KOs7m/mڭɞ={sRcԋʯ郫#Z/nmFԾ!*"Jb|YOa2_R:O̺ۚn|w;AULu=uKh;1kGw8>*8XC.5t6mVC뭿 Qt סEΕ&;d~8)yǮ1FEv5u]Pi騬X8[@C-)(lEjkQ8TWGuRG/XI*Bj[d̈0nsi}؊6G20'w),(A6S\U )հNie5`4SI~Ѱ N _CJNh'uBK\@]f.5Vlw̓t%"7e''@AkTw7;rRz; yb[kBV[+Zzx[Y_ YP8ltc&ûQ).-ڠpݸGlP#) HwFѩji4olX+Ycb^8+*V?7RSZAC4NwG\rgt ,FS;5Him0[UIJ7CTSP"g4omKpVvcV{/0es9*"3@cNKNziI*\?[4;&~> lB(gۿ[WiK*mi_-4LIl 'Bkc=Z*FȬkZ\Zx!C6j)y9B*u %-RN> `Hbt4"3d @QfŒu eZߘݨE >TU‘X{RRXJΌYǠ P|b洕RŶPCՒ Jԅjʼn}NQ^f(AuGl.'wgtp}M^Җ?92[1Wmג @& Wߟf1+ %Nvc mohC*;8okn$m9q]ür1Mo_)dj=Xkq #..#@2 Ion'fyb\kk^pfL39-yƬFG^E+߾y#|0Bxi:NSv 2Geq!c]@1F{# LMt s7!/$[oTjC|#$4EzLEL9)s<^ViFU!P_}A7w_ aVRLvƨ;-Soa򐆱eĭ{2&,X2&(t,6jf% ;6*#jDg ѣW!0d,,!wyU~ɲգWFBe6l Ma67==;U>)-8nEa >jU,͸K?XH['j]trHgP'OV8Ÿ3ezq!U+'_h# 3?5!M DYB(_ Q*q)α ARRG94 Ő`c:K8ZxQKMiW r (N-`g2`92kp4^#A&~K}8{7Ձ^N'zN[V]=A. 
gU~NRƷu`yjjXݥO䃷ĿfxJwzOnonM*:߳=1*Nk0c`ƶ 73i[/N!P{E9G5g#ЎVCx G~(jg3!z PyLdD:n,G&X?ByyDf_JZ?Q)r?t״oWSknGbۂRVI πw GwhμΚE:|dJ[p5v2rN?Dh)j83BW<N)AOdo_RB3rqKs"$ݕvpz\<>$EHZqkYNF78'0D xWm[s+yp1z!A[ 6ns?{=[HC|onU3kz0D?Sw#y!\ UX6)Ũùn$ UwDB8K[3Uh0*ˤ`He RLY"SĂ2  +P5.ƾPo3_ԼcRFc'e|0hjrt}YK. {CvgQ5X{o'YT}%9)&h[,޿[Owy]j?Ayw׵ZUľc[N|xG~^vs n)$ (qv/rDAľvϋ(yi@BBp)QfL͉!CƹP(<\b2Lf9fNdxn>k#̟꫻e5,WwΔ28yX<~^^JNʻf_nݬ_}S]ݬ߽C<:5⏯99mAVDkA,N Ԭ^* dU{x|ZkkH$[KtL+CL>唳K늶,)$x1s 9~E zt29u_ X-&ivҷtXt'vK,nXh?G1ѷts}{?#nN_3;/~f eotvp6{f˟is?q'~-"݋`7 ooڍ}vf~h;5HcTZJ>O`Z:V_"8'N a(9Lէ_~U=86롱{eQd5[ּ/ ׺=NjFl(XMX/ň,&l% \:$CZ8Kd&0DEnCdYO({)PjղncYV/ă/sj?؆"#uy è 1Gݚ#"TfA彏?.ZzPv|5 ';&26z$N106Y-/z}b1Y f5nL@nZ Jz̖+68?Iò$=^`fTi <܉PW! 4*ϱirԆm!Ĭ! r7~ r# ,IYS618DZ%cڠuXPԜeb]yiu"yL\(6%o'{3FbzS={5vV'.PMWɿ~Onҷw7L?\^>V#e{GWR@:JTʶ+TiJ2eqOO֢[54\|qFn A޹K[rw" J0ҁ>y0ę{cU#6FIKiN/].wOQkvdHq#M#qci$bfsuJl?Pn")86Sh0.g 5gTN&`tT}@o6+kQRO(ܻɨV:uUm0py`O{/22Jͧ7} |DCa 6^{ooax0F{/Dދ5Vg/{DKqWԒtf?נߢXѵ}Ũ9.\w(R=h=j^B*:t&%qL[㭶^X959!x5`9u\ra5j ة>NF #rTfcN2bOhzJkͻjj&>N7br/SB=[vK?ݦ@ȉC4 SW?Bn7_0 :߈nzژVүf)r`*t8Q&d`S!;Q$ly.HFq`㹻 @ް|訯@mm| 7Ձlԝ`lGR<ҢYJ9XnjRsAZ ~VQz( > G*PƉ9QJ6 d\{LD)0vVX(=F2!Ιbtϭn9JU9oi (ojdžR0a((cWR0a(:Q6 x?.e Cig5ZQz(Rݒ8*P1 j߷gOǍR̥RqK;Ay%Q*W*h>ENXﶮeXy~`C[ zfMBD{8ԲxH E &l ՄdpctPz; eFpO-YWoFhFdt0S#o %y]%K큩L~)җF^?wr\(/XOx96Po8CPv|uª5* xŽA酝j>r&us򊋌\?FDAz&O^yޠU5!""eW8!߳A~0w[4`܏F^1EZ\;M>x,q| 6@k7.kJ/`U^RTaeY:gTm9/ M^\V\)%U~5ѭ1\߮.wY S?tup{Y+=ny~y_ofTGǟ#R /ׄ3 aHm|H+8gQyh%@ll^QyQrFkavF9^^"Kvo .d1pC;)5qlxșBZ/䕳%vT0BEU`F,mE6U+=}1b3(2.4 cZŠ|bIK*s0(*ȁAVX5*wP7u̦cۯe 0][őX/e8+e-A9 /2 em[P d0@[-sXղ=޽_MZfaWX dI|$븏dATک< us( ̜ h1q6N3`f>JՇ!4Wg 0bS1jѼ|`sI>D6jTx8I=j׿fPBYR\N18fP'P3h 8D0eauzݴHY0 :߈nˍ|L5l-n`v!'$L\y3Y 5T@'1mIVbnW9qƔ=:{ yVGԝUS;?y0Rg$a(F33q!qw (^J9Rg5}V,9f+s%k, E(=n C)vO$JQjg]F)0R7{b\J: ͞Xr(Ձ_sRבKu`ܩF)KJY: D)KJ[0gcF)(vϵ(midG& {.#Oq4"@E)H`c!&])"y 9Pi [ǒgGc0 2R&Xz~+{롥 c&a"H@mpMˋd:Q|Hz13d|6C Vm AWN]\a1 pPބ70V8-P U\rUsKE]! PBУfS*]jΞ3jP^)9vaw7JK6N؟G-&sjY QZN޹O:׃haL)p͞TFՒcYi<𵄠i`+@Me!gqi,Jq34R.Yhld[*A{j$ٕ8!7e8 ɫFP\5RyP*Mwi@?|~Am1$9C=rxժnS+m @31c$~&@4v7TT+ SVMUZզmSʔTƚg#6Ї63ZCg攛)7S@Jf~h)͹1ϯFKcU:,Xjy^2U[NG3+mxu0-Ƀݮ{:}ٱ6P~ {T)ƺҠ%HJ[.>3v"AG])7;8u~F-/5JHf8EP;Arz~U8Q gohi1\U3 |R J`XAL\`|ͭ~Fo ):g>|g{r !X @gUR;UZWb1C`ԋ{aZN Ȩc_˙v"ʁ RLSeD3r&¶EwQ\Zl5O,| JoRBBYX~м zxh9Sl|f V(z(n7c %W|q,ur˴ZFЉ eC(ρ X"M :~:QHlY#aR)V;+eG4cRϫ$X-pecEmTiJcE.M>2:jV3wU}!oŁ zzя헛Ɨ>ov}^o֝[a]绞.8f%$=~B)":(=҈-az5aƪQKe&SHN]NoQt=Ũd|o,G#q'&K3K]G-s-d/w|q}{ !y"AeUoowdAڹ-r!rjlه=갈Emiq^c-ty4ŜL1؇} ,sD\%/i/Xr5@RTL'[ƌeK]err> c}2+ 7VR֠"ԤۈyS;KAGiə .n0e-;$gك|3\ecRв{O R.m%Rt,/mJmves|Hz~< ٛC@ 7*)jPt 4(]Yu]eM 7@7m{hޘݽ1ֽ89lmNy%)vyi F11U@ Ժdfhe`=FBh t!PnIPBsO k7BB:+$Vg}9R4 ybx/]`Y[_"BѹwKQ~dl@W*hR>'H6m81;߿l[-8#>@ v( aۡՙx{a@0Bvrm}!mrFE5_.yrVnR}rE=3%i;[ 5F AIԘxdlt?ldJ(U'O{Li=Mmuǽkj/EV8)L{G5#ȓ-MuU.SwW%ƭT 'mh-T稾lʛc5StEjɡ[NYT` %ΊG)md?Bd6h.*(D Oa9r*ϝMU$} 0%^@ȥXACȝ"4աf-˜MZj Ya{%Ptjy}IGWĖ%yVDi{lKyMjA,7"!Q+/#78Z_>ބ?¦o^~sXonaZ}"em^]o8AA=*Äŋ;;/?[7^Tو${Wg:PkkU{-f64A KgΡBϚI]LY:BW$LQwRu{4n}nhޭ |M8^K>wKA tRĻ^zlݺ@WLW1yXܫb< <D׺߇4w><켴|Vw~\d*G>e^fdc㋒_o!Q|\vylsz{a~ PN_8_Um6-hsc[n3߹q~W|yd%4pw9Ӭ)sc3831&LSOv^@ߤcn zMly[E$G<_5>Jܹ*guy@{@"YRrv=l-Hp:g<6p4IIA .w9hZȦ[NNCEU]M4ckKHF -$CFT'9E 9E U@ 46rJ4y($%h/@O RT#8Z!Ա<'湒@rʵuh-L\έ2ܥ)UٲL{SF4TKIҡ G6Wp22 f6}=lVM;`4xzbv\_@icZ9~Ǩ9c3l@)tDHef?$v,svDM xcNg.rd^ϦN?vʷEĄߤp*`G?}Y!T˭߲*R{qt^]=PJHNn3GYwƸ_p,)#r?i60[DZϏw\/1i@2vŘb \ KJ`+^Y+Nvs oI'0/q! 
V̷ulLCu3[5cvgkp1~C1G2;=Zvf&fӕyQ:6ằ #7 6 ]#oc_.A`B%F$hUw!ńIlFc0YTB3uH7YL`_N ËBXN1(Ӓl3W7-"6נ }) ønx^HCr1\{3t:\J!1S㺒{CvS]ƉXb w;;٬|'zi,Jr[Q5?#n@X@Pk2!@ /W'PkLm$=;MUy9k3eE*MkmQݏGM{3nvxM"M男~Yǡb9E٢)O3hLjmCE܉@@}=Z$[.wWלW%O[cY؄T~r4SsEOqAtkN2]|'>k c#6DU-RF5T2 +5Jg,WãtexźhꞭ ~xJL 'P1* MIP@xxRI ~xT@'Mv_ OS|#[!Z踯;'ލa-U1IGvSDXOҋ"[!)IPt͊9V̘+dbɊ)j+Cwtw#% Ra i:]Y#*$7cfh{TFK㝫BqV7ۧgʻZjV4u, ey$rmxnm )s w~+qs]ct}W4ć7Slѳy{%vf&c#WoM[PD76~jo}rv4t{[߭P|۪Hm,2΅}d f+; 䀈g X3 k ݴbld0hN)F9JOj2+( ,Hp:g<6p4IIA HyBwkӔa t-/a3R&'VksN>'Ͷ @ (`H0.؛(>/Ͷ( (hƕqFz社ٖgJ/(Pr"(ER,W|_z(*T_ (}Nm9Pz(e:L{{s>'U h@TE2eP7RҸ\U9Q7R3B7QRTq(r_Z<'JQšRq(e((eRj*&Q׍!JcŏkΰZ ^0JuI!B#OOK%aſlC)w:D3 Ci)5(:/lRC)2I/PJ!ŀF)q(e0@)q(-FR$ׄDWfRd!Ň ? ͖H\q:4galuX|jtm˾h?*fb?dl>}c LSxjV?IЄzjb&i?&=I} RK2+YKnn6~u w6 lcZ^|~G"I "IkNwt}:14ߧ]2e,6HҾu}V(쐢IM⋻{>R#Q%4=J.WY=w ToFoω@@F~cw; w5k-Z {ON%c5 hrϕ4g!#ӣ6z+t :]8 ƕWh1b1ԉ?\},aZBEOo|vhkc~ΆڕM ʹ6ͱz뵙Kk3yr=A pA N  eѫe[ Mk^gP> ϶D@'J槊#`1_;Gh~2N B2Lo]}lOEfeVWm Sk\C<6ڮ7,~r|X/?Ӌ{d7@i\ӣk]C}DWj;'67nyr]": \\^tۆޕnMCj'N>ā]At28$V7ݘ;FM$5(*ڗ} ~M3>9 _tIw;~ ƨn}hdܳkx}56n] CUyN ofN$׏},Z/Vz}Q6ۨQݧ<UU<6~.j\gz#7|ۈ1zǻُa-Ӊm#ĻM8:' l=[r&cSwbc:mx gV Z![-Oև6l =mjuQQ;|;IkQۋ2usn\T@`!uzɛ_ӛ7ٛMokͦF) |żˢU$?$KZ( *.J%yBRcDtE-f6`(=Rv]R1Ǔ}P(9 D/!!c]W gy&LW@+$jYJMX,PZ٣O;OҤ `^V]Hf܋^*j)L}j900z)goYsVف;j}'O_Q&X7͇C L xK +$50Kkl qA8E'4FWC^)ymc̿=:[SlsrSX*)z^[~]f7V5EL3͏4?Msjɩ\E(3êӬJ]!Q@N˴PjeHUfR)t囶O6tn̞3>ͼyx:Г|=0c/;|N%6JR2Uy9keZH4^r`HS @o{a X4P8\X܁>D b Lf)=s>Ԁ8ywgL;6= aNzY'{.X "w8h0 !Pu91}_{ 1=~;1Kq%@f8'.9B뮛( 8.9Ha'zx$.0mOv1z Ө2V:6a ҟaZ oUgN(8ԌtHӠAA6sd% Or L_wKTABDE;j5߅{POUI@ UGrh#.5鸢s`H]]j.Soȋ,Wny~5PCyBzX |b'ΫZr>Rȓ 9r)fwtbnξֵ[|)wBDclh3FX;<%-p$^mOJS$8/-pr&ڰ)ϲp@w8[23!j97RˉS|:K;a:\ǯ{wMyң4d2""rY|,WhjqZiC}aɇ--]rZpså'#ɧ7o7Ngyz˯O>\'ùo_ZfWoo#@X#-VijYU Q,5ORhA,KՒ7̉!TTͶ.StL >d(_ ~0O$\f0yG˛/t&aYϚrUR=5GBJ LôJ1MNjgVKfRR)=t݉޸Cv]Bv%N:w| |V>_C\Z09~$ skserY@H=m<ԻCV̓=ۖpG3 R޳ZAYzUk[m3 =!b+Q 8^ģ1Gb}eiK=* :B4b5ʠ,%dj9<@\q0nEPJ JR(e DUD2E B7XtEH8Dy./4*T eAM)\fY2T,*T 򬐕 EV F_d7r=o!)5k~ +Q`}Hr?p0F*rYٙZ+GXR )`s&D=7 *"DvTG$ 6 I?zu@zrab4; A+M[Z1yx/>4i0SP@f<~zt.F/G_A/gPsFx: tQA AG #'◝POO y~s J<(ŋt(a:D;K-wj-qM­W!EZDOlFw5/#9kJmfEt* ^ɂA_^,M4ʦH>Nx7,8mc116Bۄs/HFngޭ 9r)j3],8mc1ԋl#qU$1ݗ 9rlSČޭ`d}u>VGIH(7JI2絽W~IBuu)WPDPZZWGtmG]ﯡZ'8w@Qff) U1-g+e `dFX9tYܔERɒrY5ݒZ'vΩ%>5FBW aD%T.!(9%SK]0mSP9+hԋt)IByY53D@gEUYE+S8LEM3Zf)j"HxdҌϩ!r zyP,?6\uqMe㙩Sɘtw/p̟&ǭX} .B g4=PvfM\ KGSLAkmAzB S-C9j~qYL" t!$4 8jHa.zx$0micYWOv1zaҨL Ѹ"<aӟD cCrCEx<((Rsx 'jTF+ů;!0Na^BnLy IAgU%ݲ #utJUGRKz;8@^F0$k`!fgYl!|Ň>?BDlJG]wtbnξNͻWzz>,M4ʦ~׻#b116Bۄ4 n=<[r&٦8q3!>e7)kO?W>`>@8)˧OS҅ >d@O·jh8ĘXgۺjm/W7720=skŭu8kIsa}r>ۇv|>ϗ-Io?9?iU媳2?B~fv4vCvq@4?|8֞cO%G!!kB!PBDd]"BR7lGja?k͙rv#fa>kJƳjJe aRD?+]IZW+}V* cYX!ϚRxglٻHnW%eY|YAnv~wk%]iA{#ӚٯѶװWNwX$.Ain!@a/vq[V(=k*T$P & ҰlRI4,.H͵Qz(fo+˫/>'MH+ ]_Ku:3K؛ެ/_+˺e륛 g%a4lAں{2zڊ` XuGkNn5ZB$Fi߅&^eSEMHrzH㍑aJ$udwLI||O}cK⧒ezXV ~ٽC^MKeP_޽7mY +>.V.s٢ OU]?=Fs쯞8L(q2w&,PaՒ͗"OISy=xwPSkvV;pv^1-P9e1 ֍x7/ _ ^n9 \orܭ^Xg?٣6;žlDU6~ۯbḷ$y 1{Jﶙ"Ȝv VL^obw1IݚЉS!)Q=3Pm)3B@!AI'2!vTn֭2"QiZp22AшJ:q=v;A+ [t`PKR܊%( 4Y0M0EvQNA>@͍%OԗM]\=sc < 3X(I@)0bBg5JyD©D_Y(=$g7J)0G*6L$PJ2+Qz~(T\h-ZH6O@bY(nU X"Y$Vu/(f[-&@ڔ|)/_I=ylRIIdJ+ռ;orR^)L$PYJk}yT`JV;S<"O VRa(%-wj}JCi%B9L3J1Yծ?SX=aXZjM#&fńJ9qHY]%Mm)y9}XfÒ^6%5.,EM8s J~ٳY^=åDG}XU~2;4ge -eg ֍ǯv~g@kH%C,uMgZS.#Ղs,pTդ ɴ`h\<D"42Xd` HX"K 7qIoNX%`'nm4..ŵ)HH( AɄ"/2P‰LTL)dQ@H qD@2Q[WUvF빉U3-~Žu)5yMӒSUrP@泆(& ʭ.)7VY2i&A0Ld2)I$lN Wց:ԫ2rGv;]D/qw IuE]`8AsZje>A>d .6 6%ͤB4J f c"TȲr2UoPH@$u f9)C{RQF{'^p%'_}jzU|jG!|<-[R4ʽG2MLVvdN穕f:zw=Ch Kq/n_u-UĒ܍gûk("'\?}:9[Aҧ^?o榍Pχ[DBj/W?k7 ܷ//|Uf.1zYCk~l偶*7 (D0T8Yj-C`1Q(~(b""_qdl*o߳ORq14<7D`S\ARpy%B]SP˻}.ǐs cRȱȒ-"ED ] Jn3ieQX)^K@5cn 
z|C.-LV"1EÌV)K0`n<ʹ&s%`)2)7*,YM@|Q%^p.E291IIt(drRg0K0(.Xò݊N5;w|hUFc0@)dѨ\3km4ws^ eL&tr&2!l M(c3+r,FEjW+q'GC9=HN`)2@WiU9a4#yz%ӤtqZ}ӍWYL)i J;NvnQ18V B5"Upd` \jCsxޙnz9B Vn */;fhP-h<@ &G=pɡUn;Ќ1ڰI9Ou%݃ǿFh)o`[6l5~9M)U+4L}A\^x>f#5YKqgspN;n=w _CE Ncukϊ5OvG'cnG+H+C*D2ꝯaԐ;R9:P9z(HS+u_w;WSDxJ}DpGdq=iKMxӡp-yضV#.RBxi7J|%/*)84â?/=U\wY2hqӇݮk\U2,\HUyw< /E>Tr!z23SiSA[ϒ5/~f ]ܰ~/ii*Cnf 5!DɤѮ=Sևh=!;h'LI>ẅ)x:NwxE3ΦM ݺgz>CPL>|8)x:NwxS"49w~K!л L@L%Sa[χ)C.xPhnUZˀ*>ZަYےdqCx [-n']3rsb/L37Oy͍^]V ~_g̯:[lx}˕-mC%JO 8d3ko瓏i20SZVҊFX%+oKu|߀灶8>QJv<G&\?[#BO:tI*w{dbp=pNR_?E}fl{wԝ?bڬȮ?uu!Cլ;2}xw8:cd-ñHU;(RȘ~LҤj~$ˑs_ɋB$\hMeZ熄e@VHȜ^2J5retz1ر1vcbl77Lg>R(<}ڢr>rr3d@/gxIh`(]B&=>]XA;E C(mqm En璀g\0j) Ȼfq{,ZB]~7x5ZfEj6[ԝak0alJV}sXq(,l[jWӣԗM,RPlQ&CR_nKƖ$6QEJyUL\gQz(& ”s4f(=$eSj^qR =IQQ6Z28o*XV!O}xx[iFL &JO?^\ǫ[)i59F&|wBNn6_h};5 Z?}{rkU<*N&lQxFd2$Y)"j8,5=3y$ eepy$F-O&y}Ƈji}U>qdݶ^DL~tyDaEhEei0ȭK$h#MKYH8á2 Z uB`$iںQ0Rc! ¬DvZ$@GJ3FsX{PkDߠ#x! EK~]{F`'nx(gH2,L?.Xi*0LY5y =5j/E.!2 .aӃG}6kly4a c IPBr\\O`V5[=1@ljCS8(_XlЫ]cArJ ]Fb; 6d]tُK.`ȱ\K\uA`kڠn((9x^ i0xD ƆI "ƭ%cҔV8VR@1yHKuJ W&suw۵{JhK]vŵ#y.-v?O$wo~ޜՑ$>ϳnT,YH6p*6ߢ)\yw-Ǥ|]{J[Rgs?%_+]J/zcLA?OF ўLy|7ݔ7ˬҭY+(GV"\׻W -qZA罹+/^8Q Eog#Gq5tπdzs^L"޹s" ?6q ~2y 5kzZͩ†wmfS-ܖ$v z3bKɂX{? OK] jlԘj <[Z'[(D|n$,Do(>hrЩfGS\iPHٹskk ]6+#\/ټ2VB^)cGRy7`܎A}G6\uToޭ}]VwB^)o8׽t ޭ BL;x!w ͻ n]X+76e(t3 0?ft{+n!+R;wY}kX ՚)ۓ7wo./7W_Axo^eiϑbN]r%"1Wu!\Y(tIZO*)=Wx{:"Wxw[\W_TEEXI5F7{ &E ̶5fN3ij>4" PKoAfXcJo [ZJJJY)XҷԊJwQ}J5JVzV})RkT&w7gjY~IoϯKM$=*2KyvJfB,% G-$egfWrh&>~'lv8` *TV*tBQ%YؠT% MRIumYؠWìmPJ&ՂG. .f4K|73++|7Lh4S߫(7hI1#Zu%s57CUj\ߘεfz[AVU9>sA-qcOE\Ws^]wᘓ=:p!2Or)y͝ߔYVzQ聽+d5'%~ ϻRΖw48$HE[SS9B3a٨|?:ZLI+GTOXܒ4#N}ݜzJ!RוKn~:;q=h 会 562#tQVp+3<~lYgwQgFM .S= ~uߑRlE{<y-jw7S4ODXVHd(DL6Kb&xWcOY%xn>?eNw\ Edr"ț_`9*BÅkƦLv"_5*҇ y`ڴ1"H .:U xS- J'ضe=wpr_>bQN se <&jt (%O~=}?6R=4[>7,(D/e ?)I><0pGS ^\O`V5[+u%1R*<֕GWLWִP_0_Rm$A{ (= J3QER+?MPU`5=yXDEhYM[= HUZOie,U`k/D=/H[@ns'Q@U1p6q2cnR?hヰn5J1X#9J Cs 8a4hpO8ܠbC {f2:QcfsBs^k12yZ,7f 0TVM$"M69tv\WB{0^zRmcs5X=FA,MmiwGaͩV 9[s#Ы Ba}G^- S^^]X+7V6lkwS[[Nwn]^ڗz.,䕛h5\nUnmeb:mQǻ X*/߷wk_ݺWnmg:nvּE[6hØA1b\{A5s2yQ84 3j{lrVVV(tgTsmm҆YmqHfմ=VS'A<ǚP`m0(q9wLF'55j98P٪QoVTgJJjyI9tAބcF)^;\/XAgX crQTet4[9~j]T02=I4&rMk@j/2mh&C&0 0)L-P2Ye\!J1jc-M!wQHi.9Jj!$j3fscҁ7Lf9ZnXQD@S!X;Dkl B8&2"JWVTkMTSy|x>5D G^f?-ʮGn|ł+/1UM广-~}d=i=,lX{le]H iP0^֚t<>/eSXUL͘d@\d;{u\Tt&=\uC^2+?@A9`um1,?'ϭqEXZ&klFrpF%)g#豫йӘ x BRΘ-Z4*?.pjGaNnLwJ.?zkms(Z鄌J"9yoOt(V2aoWWPfϫ...=_?ktz6^ZͽNˌ\$< *3 sye\Axd9i+ >?0ɊM*n_~Z/XY[*B3^l"M>y&c61P_.F?Ow.F~rˏ]݅ fLců [gvz 3/%bHc~> Tnx*xx^AB0Zo䌚%biq1~bKb&O"o;tN5W+>}yӻ?IF ltO;ꄢ3J9rC[9)glt;G뇛YٺFUxPR:ִ<aLwEԊf]$. 
.`{\ea^lnygYů,ۍ28_ͪaFX@վo;;{>_tl"'b}^yvqW7#y1._eWX]a勇-ԩmkmx1^O-%΅PQn| dBl3(qټ:2_cDT/)MKq9ӑQ_WUT F?B%1Wwþjl_U|˨+c<3V]HJm[Si:Id"$!Q}xsKryjn M\m԰M>kqhQ c{3vDvŦ#cMߒh'UmAп àj^(; XJWsS%V9vJá+Zb#Ul_װec tĿwIwݐVR.3d`-uƴ=wӁvip/ Gy9@w b ;EʭH/1 n_TmƊpu1>vuS7o{,$ɀ~>zdT~O}.͋p_C٠e_ WE O&a0}*E._Jb >k;YS0rDB޸)?Sz7[k G6mCy͉m y&˦|D, sޭ/)mViQfn&z!,䍛hædMɱmsmSMlPig *m5*+[,T)Jֺ\?j#,M{k+Ck)oUm|?M,鑦|9|8}z(4Y(P􂝼PB.2/R8ʈc 5_&) 'u”(|j׻|}̳uDo83[=/·q,Uj!0`ӭvsKԪH73m!@4CfK;R4 ɗrPDӯѕekkԅ0&+0E sױE=Uqe&ރP@"+- &CT-,)CE&( ։RK<WW,(몐cحAb،1&bv2'oo@;/;=|E%j(>]CrZ.JxvQDcvQz3(/rY&Y?N~ Y }7`ֹ|ǫ;鬙*n֝ʾ;^ݬIv!z,$r~ -uX=o߮Y__k+5p=cI;RS<(U`K}ٔ|r氓(W׍>'eSj)8n+Ef(&RRiVzJJ1JqC+%; ŇlW?}4]:eDۤCRv~=L;ªJtܾTlG%ݝJaJ[FRju{xO\{vdNCliɯ!n,^5>d!YNTEnΩ Ҝ;s1Lh'&rP5Ky̋dg+ kkC$NRКǭG@.WA`|Uiq`=z*@Eޘߞ7B Om;ʧFvG P qeNjh4ۼr_|B*ӮMAﯯXŏaŠa.G|}u \ݞ 3X u%ɬҔHG),-tf8o* \/\(w4qڷ8J|BT*yVҎy>J@cFt*Ї /y0ol9.y)۹t^شjEtnxa[jci'4 v ^GIuhZ@ |7,7eZB+y JO B!:Bk;WRkjyVRNuZ)m;&b12[b<=2h4YwZ(ѵ$.UE~rEj!_ U3_o:ujgWD!,䍛hQbb:}ƻ1f/KnCX7^6eGލ (bb:}ƻs+n^$z!,䍛hg28K{.Qj!Ql8K/wZ)*ZOJش(aS{*[ )ud=)q2$܎RzW{Egs{.5JfE+%KaI}ٔZ2'+=j+EfXKlV2JgR[t'+=j+%LR׋}iiVZI-MT4+U6}ZiV,N9qEl[뢾y/RKN3q[)4+9pF,z8s}o\3|^˦J t9+uVU`Z;t(h[A vx J 煗tNAs!y\\` ם+Wm½_o3bXk!E9ϤQ|A6<5+w@oЫHux.,ю(hNnF[SCxvvf'J'jrHOLvߋtyt0Lːj$ tLW:CnhKmS[S=:, U*M-CZC`Ԍ|}"R{ODo4(1\iG. eN-C+eQB# ,) WyrXG_rW3W]OxR^e׏_-B=\O?M?2w_?kIbRS='6Z{6:@ -*phK؃Ր)o*:4N?/A#>C1T8l81h10&%Sn7=-}u 8nEQ~Ȼ}[>׵}+jjO䝒I5u+Þqߢ7Ba捶{B @Yi=b' :v, \̗t ^ԦgA:{V:mwB% * /;l8,i 2NPz_2dUDP\inRr=Z$^ }4\qi4jN]NΓ$ ]iJʐyUm btK \8 [*^CjG'bJ+sc e'}F*VZp Aޭ xȏt <Pf2e{7X_de\V@|]&2J!4$|eV*냡x1’N^O,Ց L% =Z /RR *4:F,\yR櫤Xe凌qZ F,<%b q$M bᅅ^WָI@+a:^9 y^L,Xe] zY8y 'c2N+NbN~c 8Uqo;ha+ٻ0 wwIf'o_pcWݳ VH޷x̧եѰ׎EjJ%}b1@kE{ BO%%<%U‚$O ,(8^t҉@}04Mn 1XgV-b|͍."'ŋ~?5g9&t];K!~NׄJz?{W#ǍJA/vJ$gz0b3camxȃ)X/UʮL2jK}2_Ae$M0-$kw?U)dv7ܻ=ڡpާs EBaQy]̘MT8pRjbNa3~l&o3Q+kj'CTmSmq$|'ay0)[<=VXg0!=U!D:̡-$?Ϳwx' êz~9{fS{/}m_}ӌ3{#[)^;wqO$(kl{y;"J䱬Uekc-wWCl~si-.q=2-8B59:u^/K`^'l {a Fkݻs29?O<(3T"qzN:KUq6bћ#r'|\HK|c*Ց'htjKve2`4q0h;sR 1T+D!Va2^CtTc &P^ڬkL nJΏCmV\6.'?߷9n ͝REnu.O }qpYS|.fND䤥}xy~5O_J"xU9? 
Vw6fԎ5wO<|+kn" E4F3=v#> D#H fwb$nѵ|[ E4J|ݨbcnN;Bې0{3xjhkVP{C1>OcOMR]5\;S5OO WڎN?ؐRz&ljM-R;YBv8)=LYj TNRzRJkrRJTSpvIJOQJqVCT5R'TX~#4)OJk GlRD^v==qlpRzm t_zRbbT2 )I5,R!6ma/.t8(/DsAURтZ`U9f!fvf g;n@[o߿*M:.t鄶&ۡ=kClqp~]g֥6(FOk ( {:D$~^}~}ԫYvNh /Y")Oɒ$LXTTkǒ%NPDf*QLIdB0JIk%1[jyLb !Jvg$a>4:EI0}q3SI`n6, c*ujE{jC>~xm`Q'ؖZTGEP껋= \|ZBR2se97YQEE^W>o_,R̊DYwkx1j;^ \Q- yhGѾSf ?,w寝Uz'#~L*,72+O(,Vu 9v}]m>{iع?IÇio  1DЇtR0ٌ #V G#A"+ZXie+?)45`Ni"g&@J|ij_mP:qeٶ c#C>% _DBg1R:e4uPq\0V@P@) +CFu0)K't!@+ &Y)@JA2MJd"bWPK!9t^J##/`0”Z+ ZԒP,ױKE) iTJcyIA)QA*|2!#mش:̉rn=p<+lFʇXޑCQ<VHbu_?;]꧋+iyu A㟐orwpChg.* H ?=z' 5{3Wo4Ձ!= fvv?P1$d%ImOlU|w>(0ʙQB=C{=\$9HNԸ, !FD7 /-6Ikz%5;nAQ0 _3B쨅ELh-w6 B]c[r_y-9&iqW~W710K{8 !Wb[U Ra۩6M/[R(c˿Ļzt\MFAUac[ b' '(y^wnzl?w _[; [_P:o)ܹ۷ p v6&_,{'C'0?CֲZ4SM9-v붧rMc.5'B[`jO5aѯ&՜QVA;Bs U O爷6exJz=RHW.Q2uKF(zb#:nC"̋RHW.eJc6w|vVK~8mXsͨ^i[vSqPig=jvjo_.~6Pp9- } }_fEju|@^Ju("cD;]=[_7w1 wt#mF|.YB-/8 $2b = O"On?ۭ6'-ryi(6=(W]kDKP“F.11Eُ;(+h\SkE ůH/+#3PRs"Ƭϸ#J(ƅVPjE TQ8P#t!hGǥq Bp;q L?t&EVٿDQ b_9 Κ3 ѐ ʝJ_SO&Ob[ [%[19' )`Yr{+85{dI)HԎ.?llWd"&d&sзY;쀛Sw6nnlu?H^mVk.o޾y1V)O_~h.fl3jKm ÈPҕTǀiL¬ HKY~[2ZE1J,Ua KlxS9Y>oY}t`F >3I= 88׃[?69GvCӖd*1{j0)ńl-IcXH4`bO*pp~Ɣ7 qiCJ.9%bw'p $7,sO!Ֆܿ?1DZ`\{(ӷ?{17> Źs+V(]<I" p)pnUTXie:'BJKlv6?2Krvf#8-g h#J7g݌e(_/nJxW\sd dࣳI42bu r$wT2u-FmZYT^VzO?TӲ BB9{ jtw0)yOARA-\avqm?})zvQRT<- uρ 6= psl!͑ąH c͍T.,yu>r5‡ DwLs_-cՍ[/ׁC{zfDح0 >m210}sԄ>6B}s.&R%vlwg PDyb1*^_%:4-zvQ|lws&8e 8YSt /-ijb@O]ew;]~ζ~4Wo׿m,z֤QESּ<13c(]ѰsnQnJz-SlT^ rTW~?jX b6y异gw<^zCѹyz x9.Lp˼cӋ 1cS{Ĭq{LDisz}0HaICrAbUɤ<sB:4RG84!@N}6'`4MP幕s+\MPB"B*T  n m+9aR3Ѕz7A}QTؗf$7eOq|%U|;90>o>mp[fW4Z&Og$VwaU,>ENjCJnܟ1ZJO&-ʻݖYo3Vk:=j^l}|b忭|BHNgg ,ÝB[!$ F rvV$%B97 `4")d^HFP MiD|wH@tF("LRL'W1SN"a !\*MnEYSLHSJ|4:-jIJZn%s]sE^>=*-ʿT姿KJsTGO~W?Wj徻+5' $;}wK{0qw|^k|/g{\rJrhdR_ȊoZ C*%;3m ,q y)afgXB[_jqn-M Ymڶt}$-3_ixoY?d7d_Ιg>/2 !p!qJN6Oh;߷ދ\~~ Y俄s qe 9Ί A<9牼ooW@ l]p^<c-X]768ՉksJtY*\I&_8(&iSS~%oxD-c5KXS:*xPKC6`'5%Y,8< ;k~r߇`*xUp!BP̈́xmajyC8]7?~ӈ_"ZZ[&W*՝AG`bӽCy4:E|nz%`fsW+{L!◝@WpAHCzv?B7 UX+bjK26M˾RM[ BNeyhu dj?F^PIdtӟ*:˲04Yn?*=wwgw٢b+5]NfNJ&/3{v019_Xv aT?3ͅy~SW?l~>MiyN.*.}VKWѯ:Wg_y͔!)y7buo4n=t[y”O7Qbuo4n.zئ؟wtݺ@L JŮ}紇w"y~7 NIâdݗ:};6ړUZPgֻG{_jd[+<]W3[84mUgO)nn!+lkIh@@E#UGϡ\Y=[MvRUNs:)l-قUʑ6Sy@˴ȥgw9Cv7׳ G̟ yoCw2ޑxXBjQg67ww௄M_KܳsXo&^֓O(pVMaq`]r{M_(K`9$qX~Iہ? > 5{&|c kr1 gx*.he:-.ha Mtpb4jjGVZA$0A#]+12ZRPJBa8,&2f*ɖRG+;\u- {rGI JI}.>PDsS!7jUj[ZJ P £Tq }R_J(%:-ю PڎZZJhPR~(P& ,H'5JPJ4 h&=|}'ܮ jCXGeVo!k^e~θ";>uH,!1hCbT)7KLRc`t1eѽ#:#x~ JP`l5y$atYZq.Lc^s)ɠm1;U2 ym-#ok^qv2ru>w8Av|5 RAm+x$߮ c{2=u;\[m)Sc;(v 0N~M{n<"ݢy6K1MBPH* dTB(,o 4%Ԓ$ؙX=2mǔicMiMd/P*UPfmS4Κ(DSuN)*H9*mz!1!i6Z0:8'ժԖjN-G~R'Pvvt1Sg8< kU)3`L4xM4P9fD|wh_AFx! sSXJfJu5~:M]TLy3mK1"R(5%mo(v[15EgvDӘ'@C[ĦƖE"n2jj4!YtPJP3S2.”dCv Fo`ƞR)x[}^ Z:>Z-{ :XtT`0-bxt vYh.K?%K}.ZJx!d'^[l0$zkwhP8ɲ&P^3e-^oi+wFtο/^#kET[Em@Zŝ6Z5?R15k{>FբRvv|*7jxajه*5G+_O xw+np,j9jd/'\Vڳh+q)Z_6Sx3ba($Cf<ǣ%O?oR(ZߤP(Q{]~(LX)IK|P:D|cMmMߵF=QAF3a/| >:⸃ >JNp 7 w/`UsL2Ɖœq0T~bژ415ƪ?@UnG\G%7/ SK{j^ZEY, OnbtҼEGxCp7M+U\f))loFox7'k-V!ЉF6`6:wZݺ@h @3MFR[B&mUӢ-tw!okRRS+n+Dv+|׭ [N(_h];Vw=NeXHRzjȐ(% C)U=5 GR}!sޢyK(/ՆԖN%ǍRa(E@(|~֥qPz( K")Whq('պHO(=j C)Z'1JM ίC^_U}^*ңF0ŞT@0VRK:`P ډ0 C)/ Fa(UvAcFҰ)8- Rd[zNc2e[- (uE!s)aeF6˴QH%BVZ=7im!}~&ן煟xiO?Ps򩸿u/bkVvq`)yd&Y2<М[1g?q>xN7w :E_'<>qH=k(~m7%6< 0l} @4ͪ-T-C'A)yO2ȥ5ZTɜ?ԆmdF80KUg+{nEflrw}ot ]9ٞlwSl}s].4``n[#[)Bɥhm $k VJ^oAS\.jh^(6u_. 
K(r LjPgEfRt+3.TRis[ :OJ*]EYB@V V.[mg+_jUV#gdx/kxo^\p@䟬HV7ߔeĥZ~<>s㬨}/ w?ϟ*i6 mgR(m,jxFƌգQ]_do֭X(ՠ3:nC`{ hT[}{ V"%Bw73?2.W-qu嗿~zzߪ(Z_O??Z*,cxg?򛯢vҌܟϿ~ٷhm|G!Uv*Lykcd Ih.ܬڽ̫ ~[I S[VcfRo,DGurox_l'Jg.Adr@{n1}+C~Yiw,-B d8 ޡ㐁*sb % YF50{0W$ruADo6 Gv}XGTSPO9 j0+.p&氉j0Ḻr9oo|\ో3Nj@:3W0]y]QGZ W8 H6< vupͶsr!}RΩM{ahJΦH:B D| v ]WDBؖ8مKM֍JOrK7z-qZ7>l\Zj59w%'s*!i߈/:j%Ky7)>eڲ8OoE7i̩y'~#q~Tln8\.8߁qM)>s;SnN7bۘ7[օqmS"1'sخ>N7dwz +vj9dtjWCgx=Z׉sOq Sux|*K!O pri8<Tlٟ&ZGL 7|c-c_X` ANGIǞr6X ~|,WGm_XD%kIm,^j0-+bK)`aF R\r{)b.88yyi`J47pY|Ͼz~a/6UK‰N.z'd*0Ii,g56c%EmPmwelb5 7J*h#f9th@1SfawG*`h:J PBB\aI`JeUZ!pgq$\-M f'lJ*5*a efK%$P0I9{DV&a,T(Ԓo jqH_AP܍K"8ΕgЯc4 XhaV\QeNwЕ 8\>e JCѡv s4OLP:šN$juNn ؠA䜀wR0:Z#" ͜yK-eiJRJD녤A /.f#%"\yRDe>/֥֌ʦl⁋_R60gvLx^1p(X% nS֞f2m~l:9ן.'ܥ&а샶'b40D Ӄe.]9V9f& +RK8eǝ H6Z_jii3C3pMBr&J+YNQۃްshINfm`i#Nr #w Sa3Պ%b0cJ@_q0~3L-ܕw TEQ'b/ Sq3֪06 /D9@u05h8i`Sug)}{Jr$ _7t]A!o"ىK-- dK-{?B^ /ey9^\ .daRK6g(baO-F*H |7b ' s@.<䍻hO. 9SGE7I3薊A~#& h-O$Y49aF.<䍻hO)1s;iEh*!6*t~u.0u!oE7|*N4; zG34S= P9w9icvuYVP0X ΐr\lj AwڻUY6R{7ĶYvqa,km9Fi1eIx> E4#ĥSG$Z1 k1DDyg+ei;io e_󏋪n Ŵ9wѲ^o9SWYߒ{./ ߲0 n[N0b4:4,ituf&@=锅td#du8420QtٍyNC۠W4te.H 7| &~"j3~O8;B)&&Jq!D7mCbfɫҠ+nd!V*q*![3oUTS'mACInQ,Rvؒ7WL=x(?S5`zJ[oHL sКbBXZkQΥg~ E`6h^2p`i5.0Y5]OY63hs`{7lʐd8pi8,;A xߗ2N7PBYU^}aQSvkS:l!lui,F]Q?10rS qQ/ :&^n߬wե-k>a@QXhrJxqm+ ,8hLeZ277IP}A&ڑY`-8)QHse?{}L.Fiqe) F mŐmԶn|5yOe)P/m#M$Ȩ7CNs:A )( 57NKAA:QV=*0Ypf&ڷ+[D+K2A8SB|{:p5: 9'Cg ژp6VUQd,/re,09R:Ei-L`eʄyl%( Cq2mW$1$Q46TUfleY #הK: 1t{Rvb2Inĕ8J F#`EBt,:UyI0ӍҊ\VMj(3h߳Ou5s 89=x3 ]cJ(yDVYqY-"9j"u+)TBJWW>f[#=C-:`" Ϯ#IkλH;"/*( ĸ|sܤឝg F$+ޜDA kH_:w##'Qh47XerOԑQKZLu(D)@,3<FřYgT[Ly<4/BwVO-Z_?=Ie,.⇧B %>ŏ~o~ࣛLaV߮?g蒕˻a~zr Ǯi6̍^g>|$I ]e?֞.Y|_t?E??^_yvЙ\಻E&\=_? oվ``A'5rmw)9h HǾu GWd~TH*Undy_ϱ*TeuTQrthɤM;3jBhBulߴW0 ҃4,޼S-ӻrlwELJLJO~fToGT>|\Z@Ss?q_ mz!eRmZ361nIstW %!ت\),B.DQ 6)];Ve9g}t8 ,3.㵗߲͟rյg37Mֻ]n޿{Tp;.}AFrbS&+gmmrc'V-Ünb8/^Y#E֚,Ǘ8@ˍo`R)B.xte0b< !tn,cHh?`O>#"2WV`\ZvЉp71nM0 .QеkםMq| 5~]Eb(+ R˃dӷZ0^ŖCJF\^^j\؛JFPO;K^j;ҍQv(I|,ؒZ:h`IʼnYuwl#|Vtv8J-_n|RguもwqxU.y.k.Jog'N卽_'?Lf~65|I,aIcštg<:L8p3" Hρ6؇)yw"MBtK uR˾ZԾ[-օqMgzF_Cm Ļ88;`20xi:YSM6EQRS͛3qD_UWW7*5Bn6>wp#i?I-bӻa!߸mS.o~"Ώ>w{ r2+cm'EUj* rO:{r6WH+SySW]:oEW]&\p-̐QxدV p9xǺУPU(]Wx6_q(7m AQM/ <@*$Yj+$yvWH rWDy~f? Aq5emͯZk!=hZ fa}3GR 8*ejuqO5UʹL;mʀսk^V*iT20TY$18-;AkGԎ4nÕG vfJUry=s]o:#C O]Ʀj`mTN YlGKAk.$%=^jF%6PB).h_X826R+S&ܻؒQd b&9uSw"<̩kBq )GFW &Pz^-X7nA6%#mJzP |L'MۀYE{n=[hŦMzhy_=j!pPsʱG_-тؓΈ+RgtUk$d'mLY)+:P3R&JOJYbTtVOꋪ4T׊s6LJK}QJOJRRQX)~VZHMVӶR~VʊqR~VgSR4'|VʹRQ꟬Th?+X%*5zZOJ5X~/RFd'm~}c/: +3!&+=E+eJ)!FaJMNݰOJNuGQXߩ^0OJҹ3lF޿SdD٧G Q?nf|Uh9,}x'2nfWxSV>Sq^WzU\6{᧏v9n"큆PTQUE._*ÃTIJz¶Jxk(9K8!T,*Ws\ΒR!Ў땈(H) 麎DZxfJKj)w`3E^Ղ/~/k ^WQܥJz}BC0hx%Fᢖgw ~o>bk\0wMy\K R6]XjEMS"n3iF(ub$ 73 n^ O/ˇlmS#=]]bmSk7ǹ\x{N8p眸MpFSؾ$lQ/_0fld5(O#UN7i[꥚8M57;u#R~ȆquOS~JTGOm1OJH*aڌӎv`iΠ3k^JPR._%z >HJ[ybeTc-t;]qe9i_;?D H$|N p:G_*"r-KGJ4H_2qsx~Zo ?Gy\ E'-Ͼ%qisr'lu.]:,[>Äk|cPXYNlJ$1~%J%$["y $I3V~qX~9!f'\x)֜Oy?h'mΣ:7~>>u7> <( :Ղ2a'$΅IsŲ*œԈ2PYg-}"gjKwkjis98* R\,3iT"c 2⍩6ojЧ\o AN*Q)O+V駇˛xi/] \NIn^_=˞VX,okɼ|~A/?tcqw8YG?^ߺ D:߽A/x\ ~Jgp^s?.5N?<_.ׯCqSȜs#DhӀ `?xb3D'Ds̏6]g#0\빗^Aw>꣺q~oNt6qDlӱ{)2|f z-8i`(gxab4%&1LqjT=?;kX_8ˮMӳЀ߾9/nܨfe⺹Ve-A4NyRs'5sN5!j~"5ʳ,Ɯ8 EET&6;Ig;D[QmUPFUh2Xve@z,a69LtRrF`Ym~As# tWA_"}e>ߜl?,U|ʃ7(MUF7Pt 2\p,!<']it}#U/˾*k8U# (i.+"O(GfҦ"Q3N+ܼJӽrf@{?zЎgfdGܪӄi ŇTd$2*s AcGҕ\&RJRhSCs+&Q\3@LRD0j2aE 4*yj5>NJ-@*@ǯS@xR8~;NXb BB΀(МyGTnֳ)((0?MŔ$w[-ݶ㜴+jJ4).+o74 FroPΙ"uO^p EpcAoyȴn@ j&w5Hee/P^oQf*not#Aq GE-?`bMnx5{}FEQFC8p! ~D[>dJa<6J wJL>w>^T.8wuB6M z8YamLšӭhAǍ|B1/jL||||YVrKyFh,5ץ9874n}US\R 3MjYz7}PPЇmڠ˃)oy/@7E9:t́+y*3M Xܰ \f$SI$q9Er#-PΕ.D1k*gD 'h%1$PR)fD qsB%({*7`8(@cwV$ӄInhp:R68Qz17Ǥ19dQw]?EsM׿ս_5Å! 
8ݺ\&QPu%Z}7O4W'Q]ͭ> SAYsg PM!?m)z539|co r1dԅ FCnmGV΍! pz&Eq1*ũQFk%Z)/rq(4!|;+7 #C-GIRCޯ?[Q94x&A u^z}RT0 CV"?$3N0׈Dbv<4m8i6<~lzׄX >lf׏1MMfUh*Ԝp W~D|S3DR&ƀ]B NF,O 7?i.2Ec[@H a8C(8s "u\q-M _Q[&V%f[ZDP#:Wl$*~fO;)z㓅kô|9i@K.nڻ\$!И)1MCr4&V=pڙEO!UHй'~)?[]Gz{=@6 gӑo#@ޘy^KD=9]7|dо\w.m^r 5jp<Ե\^`Tn׵ dղܪӄzYwCDns4!GB3 ?/Gr4@v,൛d{}_rRe ݧZh j,Vi$gQOZZ>'ޯ[ރ-єq|-r;P6n SV9)ٲ#U+IsR䇟|V?s*OQVD?<ŏuiɷh?N[O?nՏ[>Vʋ#?~_&؉GW} 8PjzKs1 wE2Z imYAA5^K{9E'g%m! Q -¼*_6[}b6#%(R>#A E(Uʓޟo,z/Z?SN@\>\-/](L)aRѱݛYFZZ>Wf&IjlkE7"K+Oe~EzҐfQ&!enQj Qѵ;jⱔ^MBTuٶ2>k/4$ا"B%^E*Vٽ:,GqXt]rW"I/X@^l"@ e,`@#CFa3c#ዬ!jbʔ/ |0CYٴݭN찰S ckFFvhX m6%2&wou;f5VgMæw.}8>ip.bY͡,'WEUFGaf#H^ }XJYf_{h0XY'HDū@Rعi铗밹@l ./bdٱw:_2 5Wv,2GH&ZsR!؋,RIG1dxW6bZ< ~]EY7Cnk9@L2B<ۍG(Hr (9[8P 6x,0mp"`O؝}ak+!TmdBN-:ivZ2ϦCoڳ\ZX}pɚUtڋ> l܃k.:쎚x,eJyv^7L1ۍ2 +x.}^yAV[l}.vgM`?vjUnVbekhO"G: xiᘙ҆Q%7Kq\=,w}="z)rVتbJ̵`=;"_-Hw dBdr!H'3ĆacizuxhUCrW3y'.Kϋ@֑[UM94L *^|'{B4X SG#&Z۲(ˠD]`V*4"xJU֚&KM>,Mo?Z0lE-O s#E3\of+%־JiEl/mq}2(5RBpٕʫ *In <RHq}DHd,1*VD>|jNFX{ulPTKƢ:Ea ZBDc04F͘ZNv|e[iP@"bu@@+un Nֱ8CPDX UAEX][oG+6mau69Y *QHS5"%Z9LD[ _U׭ | +6[LLӊ<5̲&x>؝ 0Aÿy2e<Ր])!-*%-W?D-slm[[1bILQ&U!HdR \4e¢(#(GoAa/KoCo-іHcgI:C8/-xo(8A -'y_f Z]dǗɏT@^U˄JNI7)OCig)_ҏQ`8Y*$FIr1rkJ(ERhyOau-ĭ])Ndb+$ҢQ%D Yb ѩVpLU-mU {.T*@,-^@VL gW-LѰA/ &%7D(:J XgKM1[ -;LvRzpV*@ R50YH"RE ^[,xGC%I/tD`LG\&}4?ˆeca#OuR9eb.)Yl4 =XȠ`.zzCܝK?} ,[w7$7כ+X&sRD[-^۪,3(U9X|@ouGJ4Hz ƕm[P Ź!d=I!Q%R1׷+gޛřo5or~2'Ђo@W!Vc9KmX |o߃~`JpVo2Ky,:~K*)RBЕjFJle@XvdCVLb@Czle` ŅA}|\O[t{AgZ.2k `2Ѻ^2/g \2`dehX',}ǨcpҜU{ƊƂh=qR`z`T:z`9|ugMe/`ó*B7 jD)^_ƴؽ޲lZeqimTX[e;Zt4Ƞ!1ACOq+]JYn/;%=<خҢ;aB#`!rI FbFck-G{|՝Qag|(W^4De%,^U+w pWrś ֓+,Ff=s>kZ2;g߻xmqӣ;mlz9=tfq8!_0I.iN]|$X(}8مvy5>o"@~}dXS']J+7xMh.A:-{ɟe'ݍ#/'k%gk>rx\i۹Z& 0j-wK\LոrikKAA }6jV ̕:JzTގ6Эœ0r[P0Ʉ\)9waw 2KzlWȱT%w Aٳ5ɃOK 4 %z{*_"42-Bи WKhR*Bg0-gPD5zKN)/§2SPݹlGTqjDb:JvDŅ~%R)FQ9k~P (R'ՆI(b1JE/"1`m.()' .GP}`mIiqJyx'ZIKHE9PL&"ѓ a3㾲)dJVdй(.ud/ZO6D)َ=T;r;0rllf^p.aZԺ8 XWLX4ݜv|g;apD8v-]o_ۛȲj8_գ QurRFÛjyf4|q+Oë[٬a`~o8LA~]ݼ{ڟno?1 ;M{Qԟ&Y<6R1DdnEyڸիk|fqvѓز^jl4.@̜FJXd6΢ډ`cN|6|Qoa~iN3v1)Zfv1]m^xOěӔv]/ko^##Z`F(ǝDN/|hr-m/:]_lN<ƬT~FAnӤ/OE*yEW.x!ջR|MOk|kuߏSztZт3# (%RD6 xi*[=-9:laIw;yџ|Bw"Y:VD 7Bz3?7?(W+} 35o:AMuGƬmAR "KN 0 C։f_ }.32چd @EuIS0>2j ,K9^^gDљaEճ*KE}2I Z0f23H Z˨ac=ɹp_R\Sh:X0jh`:<1U4̕,XV;QQ JU;q5<*l5qYQB uL#;i~~)a[5]j qf]{߼^}kZhw/O!ISn.#맙Utd:[2|0gwZݗ;'gR$wOg,PexT>㍖4u>Xlx LJ pJF4FEug i}4=?:n~tV&PZVL2.L~)s{(U ? æOxlC_ilvI[UP素&&2d,)XW3(#/hᕴ.rR\~<@-D$qG$+zuPӚ'R @'j)IkEurR4/@BHI|!-\1pZ5ќLGB(K$n{rJfr !x8nR_ v"'!>fbQ:*ZlD!XL6!63ǶHGsBôN6V ! s)4/AiN\ cPX)_9mVKs7#LSEVP%)˸zxig;NvhyN{Th偯'5@VZ};6;tyÆnE% ^EMr-q !܉>^+7mi5(fJ ;@RqmVhv2zU{Zpn}X59OlbVʡmXwzZﷂVgI6+,}IH}V[8SS]q GCVLCR:S5@R h%:#8~2Äb[|4s leYXjÕAӱ֒k>n~dޞ,2HClΌ82MՓ*;:^Q9[&szHZвN!ˠDO&^~[ݶ%%?ʹx{ydق ~{nգmOOus4y+űxjCio\՝sm]Q] ͨ[ y~]\\I乮ڶYX{\ٖwipLx酱>@ }$ 'NnN ANǝ#qk!VM:UmwJICgiy0jvRAu[6\3ž(˵T%acWm4?ʂ(͡8M_u nlMnU[e53LmXlwZ,ߓI ~2|ҽvBsw[N;3?h6}:mތPԉw;G{j6$䕋hL96q>ni}>[( |D'u>%Vn><[EL)̘,b;}9a|b"+=gup`ώjFɋū$FŻאYekqvXaLSv찚W'+ntϠ@Л6ZKQhÏI}h>5I*YZr g XDbM9,#4R3lCW.p?{jja'%=XĞXmXd]lM w-Q, _?/e^P 7~5Sdֳ@9,{d\XI|}b|;XQ}~ -xيgՂl%b /?>ك5^{c.i+5W%\7e#pWp:ހۥr/J>wVnpB(dz[HЄf1G`3k%ifu!\ i4yHK3- Y D" `b7V؂SDQaf=I3#pR qjx(uTRzxR*NW@{REt;'P+RzRbf- nz\*Qg߄ )xj[Et*IJ.Ӡ0֋-<!wǐ_(_Ŝ 8lFq|RJFAÛ׵Li.k9WT;Ѯp*T7ty C ;Օ-S(58 &S2!7$Eҙ8ٗ@[z" GDaEhK!. j7V6[N L^',K80LN"EIO 0"T(v;D1E8:MHsȀ6ʓjD"μߐD/j̈́ȘH$\T,(Mb))Pjf@ H k)1. 8@1*!2@Tkueɦhcm|^Pnq/ P qMV&&䫙m5Ӧ~$lz\x{ I)C#*;pZDpS+,MK!ZdYFL`MDbtb*Hbc+$!px後Z0Z=G>jZ!OWU(^sk,)3)O1V JX^[&"IDIKF됨8e!4T\]a"ۥ=|秝EAPD DsHkIr1NŰSt7g'Px膊!39lLJڌX wHGId0f\կ <R-| 0Z-')q|[ W?8i,%h&Ƅ5!עi=*"39ٳ7[ 3b֙?al-}:xtE/'! 
var/home/core/zuul-output/logs/kubelet.log0000644000000000000000006322501415137212562017704 0ustar rootroot
Jan 30 16:54:21 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 30 16:54:22 crc restorecon[4592]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 30 16:54:22 crc restorecon[4592]:
/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 30 
16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: 
/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:54:22 crc restorecon[4592]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 16:54:22 crc restorecon[4592]: 
/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:54:22 crc restorecon[4592]: 
/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 16:54:22 crc 
restorecon[4592]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 16:54:22 crc restorecon[4592]: 
/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 16:54:22 crc 
restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset
as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:54:22 crc restorecon[4592]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:54:22 crc restorecon[4592]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 30 16:54:22 crc restorecon[4592]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:54:22 crc restorecon[4592]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 16:54:22 crc restorecon[4592]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
[... further restorecon[4592] entries of the same form, one per remaining file under /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ — OpenSSL hash links (*.0) and root-CA bundle PEMs (e.g. AAA_Certificate_Services.pem, Amazon_Root_CA_1.pem, COMODO_RSA_Certification_Authority.pem, DigiCert_Global_Root_G2.pem, Entrust_Root_Certification_Authority_-_G2.pem, GlobalSign_Root_CA.pem, GTS_Root_R1.pem, ISRG_Root_X1.pem, Security_Communication_RootCA2.pem) — each reporting "not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16" ...]
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 30 16:54:22 crc restorecon[4592]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 
30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 16:54:22 crc restorecon[4592]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 30 16:54:22 crc restorecon[4592]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Jan 30 16:54:23 crc kubenswrapper[4712]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 16:54:23 crc kubenswrapper[4712]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Jan 30 16:54:23 crc kubenswrapper[4712]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 16:54:23 crc kubenswrapper[4712]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 16:54:23 crc kubenswrapper[4712]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 30 16:54:23 crc kubenswrapper[4712]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.568442 4712 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573260 4712 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573285 4712 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573293 4712 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573299 4712 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573304 4712 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573312 4712 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573318 4712 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573324 4712 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573329 4712 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573334 4712 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573339 4712 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573356 4712 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573362 4712 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573367 4712 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573372 4712 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573377 4712 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573383 4712 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573389 4712 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573394 4712 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573399 4712 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573406 4712 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573411 4712 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573416 4712 feature_gate.go:330] unrecognized feature gate: Example
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573421 4712 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573428 4712 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573434 4712 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573440 4712 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573446 4712 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573451 4712 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573456 4712 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573461 4712 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573467 4712 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573472 4712 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573477 4712 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573483 4712 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573488 4712 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573496 4712 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573503 4712 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573510 4712 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573516 4712 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573522 4712 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573529 4712 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573534 4712 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573540 4712 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573546 4712 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573553 4712 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573561 4712 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573568 4712 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573575 4712 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573582 4712 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573588 4712 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573595 4712 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573601 4712 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573607 4712 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573615 4712 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573622 4712 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573628 4712 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573634 4712 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573640 4712 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573650 4712 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573659 4712 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573667 4712 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573675 4712 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573681 4712 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573688 4712 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573695 4712 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573701 4712 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573710 4712 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573717 4712 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573724 4712 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.573730 4712 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575425 4712 flags.go:64] FLAG: --address="0.0.0.0"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575450 4712 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575462 4712 flags.go:64] FLAG: --anonymous-auth="true"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575471 4712 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575480 4712 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575486 4712 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575495 4712 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575502 4712 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575509 4712 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575515 4712 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575522 4712 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575528 4712 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575534 4712 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575540 4712 flags.go:64] FLAG: --cgroup-root=""
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575547 4712 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575553 4712 flags.go:64] FLAG: --client-ca-file=""
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575559 4712 flags.go:64] FLAG: --cloud-config=""
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575566 4712 flags.go:64] FLAG: --cloud-provider=""
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575572 4712 flags.go:64] FLAG: --cluster-dns="[]"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575579 4712 flags.go:64] FLAG: --cluster-domain=""
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575585 4712 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575591 4712 flags.go:64] FLAG: --config-dir=""
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575597 4712 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575604 4712 flags.go:64] FLAG: --container-log-max-files="5"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575612 4712 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575618 4712 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575624 4712 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575631 4712 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575637 4712 flags.go:64] FLAG: --contention-profiling="false"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575643 4712 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575650 4712 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575656 4712 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575663 4712 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575670 4712 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575677 4712 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575683 4712 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575689 4712 flags.go:64] FLAG: --enable-load-reader="false"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575695 4712 flags.go:64] FLAG: --enable-server="true"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575701 4712 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575710 4712 flags.go:64] FLAG: --event-burst="100"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575716 4712 flags.go:64] FLAG: --event-qps="50"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575722 4712 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575728 4712 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575734 4712 flags.go:64] FLAG: --eviction-hard=""
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575742 4712 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575750 4712 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575756 4712 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575762 4712 flags.go:64] FLAG: --eviction-soft=""
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575768 4712 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575774 4712 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575780 4712 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575786 4712 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575810 4712 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575844 4712 flags.go:64] FLAG: --fail-swap-on="true"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575850 4712 flags.go:64] FLAG: --feature-gates=""
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575857 4712 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575864 4712 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575870 4712 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575876 4712 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575882 4712 flags.go:64] FLAG: --healthz-port="10248"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575889 4712 flags.go:64] FLAG: --help="false"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575895 4712 flags.go:64] FLAG: --hostname-override=""
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575901 4712 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575907 4712 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575913 4712 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575919 4712 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575925 4712 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575931 4712 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575939 4712 flags.go:64] FLAG: --image-service-endpoint=""
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575945 4712 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575951 4712 flags.go:64] FLAG: --kube-api-burst="100"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575957 4712 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575964 4712 flags.go:64] FLAG: --kube-api-qps="50"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575970 4712 flags.go:64] FLAG: --kube-reserved=""
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575976 4712 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575981 4712 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575988 4712 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.575994 4712 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576001 4712 flags.go:64] FLAG: --lock-file=""
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576006 4712 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576012 4712 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576018 4712 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576028 4712 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576034 4712 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576040 4712 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576046 4712 flags.go:64] FLAG: --logging-format="text"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576052 4712 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576059 4712 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576065 4712 flags.go:64] FLAG: --manifest-url=""
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576071 4712 flags.go:64] FLAG: --manifest-url-header=""
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576079 4712 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576085 4712 flags.go:64] FLAG: --max-open-files="1000000"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576093 4712 flags.go:64] FLAG: --max-pods="110"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576099 4712 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576105 4712 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576112 4712 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576117 4712 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576157 4712 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576163 4712 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576169 4712 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576184 4712 flags.go:64] FLAG: --node-status-max-images="50"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576190 4712 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576196 4712 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576203 4712 flags.go:64] FLAG: --pod-cidr=""
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576209 4712 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576219 4712 flags.go:64] FLAG: --pod-manifest-path=""
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576225 4712 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576231 4712 flags.go:64] FLAG: --pods-per-core="0"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576237 4712 flags.go:64] FLAG: --port="10250"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576245 4712 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576252 4712 flags.go:64] FLAG: --provider-id=""
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576259 4712 flags.go:64] FLAG: --qos-reserved=""
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576266 4712 flags.go:64] FLAG: --read-only-port="10255"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576272 4712 flags.go:64] FLAG: --register-node="true"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576278 4712 flags.go:64] FLAG: --register-schedulable="true"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576284 4712 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576294 4712 flags.go:64] FLAG: --registry-burst="10"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576301 4712 flags.go:64] FLAG: --registry-qps="5"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576307 4712 flags.go:64] FLAG: --reserved-cpus=""
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576312 4712 flags.go:64] FLAG: --reserved-memory=""
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576320 4712 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576326 4712 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576332 4712 flags.go:64] FLAG: --rotate-certificates="false"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576338 4712 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576344 4712 flags.go:64] FLAG: --runonce="false"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576350 4712 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576356 4712 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576362 4712 flags.go:64] FLAG: --seccomp-default="false"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576368 4712 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576374 4712 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576381 4712 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576387 4712 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576393 4712 flags.go:64] FLAG: --storage-driver-password="root"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576399 4712 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576405 4712 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576411 4712 flags.go:64] FLAG: --storage-driver-user="root"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576417 4712 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576423 4712 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576430 4712 flags.go:64] FLAG: --system-cgroups=""
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576436 4712 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576446 4712 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576452 4712 flags.go:64] FLAG: --tls-cert-file=""
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576459 4712 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576466 4712 flags.go:64] FLAG: --tls-min-version=""
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576472 4712 flags.go:64] FLAG: --tls-private-key-file=""
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576478 4712 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576484 4712 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576490 4712 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576496 4712 flags.go:64] FLAG: --v="2"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576504 4712 flags.go:64] FLAG: --version="false"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576512 4712 flags.go:64] FLAG: --vmodule=""
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576519 4712 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.576526 4712 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576666 4712 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576673 4712 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576679 4712 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576685 4712 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576690 4712 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576696 4712 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576701 4712 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576707 4712 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576712 4712 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576717 4712 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576722 4712 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576727 4712 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576732 4712 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576738 4712 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576743 4712 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576748 4712 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576755 4712 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576763 4712 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576770 4712 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576777 4712 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576783 4712 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576790 4712 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576815 4712 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576822 4712 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576829 4712 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576835 4712 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576841 4712 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576847 4712 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576853 4712 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576858 4712 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576863 4712 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576868 4712 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576874 4712 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576879 4712 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576885 4712 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576890 4712 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576895 4712 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576900 4712 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576906 4712 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576911 4712 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576916 4712 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576921 4712 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576926 4712 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576932 4712 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576937 4712 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576942 4712 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576947 4712 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576952 4712 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576957 4712 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576963 4712 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576968 4712 feature_gate.go:330] unrecognized feature gate: 
OVNObservability Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576973 4712 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576979 4712 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576993 4712 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.576999 4712 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.577004 4712 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.577009 4712 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.577014 4712 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.577020 4712 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.577026 4712 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.577031 4712 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.577036 4712 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.577042 4712 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.577047 4712 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.577052 4712 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.577057 4712 feature_gate.go:330] unrecognized feature gate: Example Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.577063 4712 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.577068 4712 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.577073 4712 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.577079 4712 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.577085 4712 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.577899 4712 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.587923 4712 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.587970 4712 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" 
GOTRACEBACK="" Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588055 4712 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588064 4712 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588069 4712 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588074 4712 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588080 4712 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588085 4712 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588091 4712 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588101 4712 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588107 4712 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588112 4712 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588117 4712 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588122 4712 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588126 4712 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588175 4712 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588181 4712 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588186 4712 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588190 4712 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588194 4712 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588199 4712 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588204 4712 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588209 4712 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588213 4712 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588218 4712 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588223 4712 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588227 4712 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 16:54:23 crc 
kubenswrapper[4712]: W0130 16:54:23.588232 4712 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588237 4712 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588242 4712 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588247 4712 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588251 4712 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588256 4712 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588261 4712 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588268 4712 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588274 4712 feature_gate.go:330] unrecognized feature gate: Example Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588279 4712 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588285 4712 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588290 4712 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588294 4712 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588299 4712 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588304 4712 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588311 4712 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588317 4712 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588322 4712 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588327 4712 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588332 4712 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588336 4712 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588341 4712 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588345 4712 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588350 4712 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588354 4712 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588360 4712 feature_gate.go:330] 
unrecognized feature gate: CSIDriverSharedResource Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588365 4712 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588369 4712 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588374 4712 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588378 4712 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588383 4712 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588387 4712 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588392 4712 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588396 4712 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588400 4712 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588405 4712 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588409 4712 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588416 4712 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588421 4712 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588425 4712 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588430 4712 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588434 4712 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588439 4712 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588444 4712 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588448 4712 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588454 4712 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.588464 4712 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588640 4712 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588652 4712 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588657 4712 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588662 4712 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588667 4712 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588672 4712 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588677 4712 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588682 4712 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588686 4712 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588691 4712 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588696 4712 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588702 4712 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588707 4712 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588712 4712 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588717 4712 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588722 4712 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588726 4712 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588731 4712 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588736 4712 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
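[editor's note] Alongside the unrecognized-gate warnings, the log warns when a gate that has graduated (GA) or been deprecated is still set explicitly, since that toggle will be removed. A minimal Go sketch of such a maturity check; the stage table below is an assumption for illustration, not the kubelet's actual registry:

package main

import "fmt"

type maturity int

const (
	alpha maturity = iota
	beta
	ga
	deprecated
)

func main() {
	stage := map[string]maturity{
		"ValidatingAdmissionPolicy":              ga,
		"DisableKubeletCloudCredentialProviders": ga,
		"KMSv1":                                  deprecated,
		"NodeSwap":                               beta,
	}
	requested := map[string]bool{
		"ValidatingAdmissionPolicy": true,
		"KMSv1":                     true,
		"NodeSwap":                  false,
	}
	for name, val := range requested {
		switch stage[name] {
		case ga:
			fmt.Printf("Setting GA feature gate %s=%v. It will be removed in a future release.\n", name, val)
		case deprecated:
			fmt.Printf("Setting deprecated feature gate %s=%v. It will be removed in a future release.\n", name, val)
		}
	}
}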
Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588742 4712 feature_gate.go:330] unrecognized feature gate: Example Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588748 4712 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588754 4712 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588760 4712 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588764 4712 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588769 4712 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588773 4712 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588778 4712 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588782 4712 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588786 4712 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588807 4712 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588812 4712 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588816 4712 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588820 4712 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588825 4712 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588830 4712 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588834 4712 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588860 4712 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588865 4712 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588869 4712 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588873 4712 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588878 4712 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588882 4712 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588886 4712 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588890 4712 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588895 4712 feature_gate.go:330] unrecognized feature gate: 
VSphereControlPlaneMachineSet Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588899 4712 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588904 4712 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588908 4712 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588912 4712 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588916 4712 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588923 4712 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588927 4712 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588931 4712 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588936 4712 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588940 4712 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588946 4712 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588951 4712 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588956 4712 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588960 4712 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588965 4712 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588969 4712 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588973 4712 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588978 4712 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588982 4712 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588986 4712 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588990 4712 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588994 4712 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.588999 4712 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.589004 4712 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.589008 4712 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 30 16:54:23 crc 
kubenswrapper[4712]: W0130 16:54:23.589012 4712 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.589019 4712 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.591673 4712 server.go:940] "Client rotation is on, will bootstrap in background" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.595936 4712 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.596064 4712 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.598854 4712 server.go:997] "Starting client certificate rotation" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.598879 4712 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.599055 4712 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-23 15:02:59.9127767 +0000 UTC Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.599117 4712 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.626636 4712 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.628371 4712 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 30 16:54:23 crc kubenswrapper[4712]: E0130 16:54:23.630480 4712 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.649764 4712 log.go:25] "Validated CRI v1 runtime API" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.682051 4712 log.go:25] "Validated CRI v1 image API" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.684621 4712 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.689178 4712 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-30-16-49-21-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.689216 4712 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 
fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.701233 4712 manager.go:217] Machine: {Timestamp:2026-01-30 16:54:23.698303332 +0000 UTC m=+0.605312821 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2799998 MemoryCapacity:25199480832 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:096c9b47-6024-413f-8880-1431e038a7d7 BootID:186bde97-c593-497a-8d99-0cd60600c22e Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:2519945216 Type:vfs Inodes:615221 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:3076108 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12599738368 Type:vfs Inodes:3076108 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039898624 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:12599742464 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:a2:09:78 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:a2:09:78 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:8e:9e:a0 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:f0:2e:0c Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:fa:11:83 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:85:e6:1a Speed:-1 Mtu:1496} {Name:eth10 MacAddress:de:b4:91:f1:2c:eb Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:fa:31:91:86:d2:2b Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:25199480832 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} 
{Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.701432 4712 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.701653 4712 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.702924 4712 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.703140 4712 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.703179 4712 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 16:54:23 crc 
kubenswrapper[4712]: I0130 16:54:23.703404 4712 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.703416 4712 container_manager_linux.go:303] "Creating device plugin manager" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.704021 4712 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.704052 4712 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.704284 4712 state_mem.go:36] "Initialized new in-memory state store" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.704410 4712 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.708519 4712 kubelet.go:418] "Attempting to sync node with API server" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.708545 4712 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.708569 4712 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.708583 4712 kubelet.go:324] "Adding apiserver pod source" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.708593 4712 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.711989 4712 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused Jan 30 16:54:23 crc kubenswrapper[4712]: E0130 16:54:23.712072 4712 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.712413 4712 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.712844 4712 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused Jan 30 16:54:23 crc kubenswrapper[4712]: E0130 16:54:23.712994 4712 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.713590 4712 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
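[editor's note] The repeated reflector errors above ("dial tcp 38.102.83.246:6443: connect: connection refused") are the kubelet's informers retrying while the API server behind api-int.crc.testing:6443 is still coming up during node bootstrap. A minimal Go sketch of that kind of dial-and-retry loop; the backoff values and attempt cap are assumptions for the example, not the client-go retry policy:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "api-int.crc.testing:6443" // endpoint taken from the log
	backoff := 200 * time.Millisecond
	for attempt := 1; attempt <= 5; attempt++ {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("API server reachable")
			return
		}
		fmt.Printf("attempt %d: %v; retrying in %v\n", attempt, err, backoff)
		time.Sleep(backoff)
		backoff *= 2 // exponential backoff; the real informers retry indefinitely
	}
	fmt.Println("giving up in this sketch; the kubelet's reflectors keep retrying")
}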
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.715842 4712 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.718207 4712 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.718242 4712 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.718254 4712 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.718263 4712 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.718281 4712 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.718293 4712 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.718303 4712 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.718326 4712 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.718360 4712 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.718371 4712 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.718387 4712 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.718397 4712 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.719328 4712 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.720779 4712 server.go:1280] "Started kubelet" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.721122 4712 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.721976 4712 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.722130 4712 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 16:54:23 crc systemd[1]: Started Kubernetes Kubelet. 
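[editor's note] The certificate_manager lines above log a certificate expiration together with an earlier "rotation deadline": the kubelet rotates well before expiry, at a jittered point late in the certificate's lifetime so a fleet of nodes does not rotate simultaneously. A minimal Go sketch of that idea; the 70-90% window used here is an assumption for illustration, not a verified kubelet constant, and the dates are taken from the log:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point in an assumed [70%, 90%] window
// of the certificate's validity period.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	frac := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(float64(lifetime) * frac))
}

func main() {
	notBefore := time.Date(2025, 2, 24, 5, 52, 8, 0, time.UTC)  // assumed issue time
	notAfter := time.Date(2026, 2, 24, 5, 52, 8, 0, time.UTC)   // expiry from the log
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
}

This is why the logged rotation deadline (2025-12-23) falls a couple of months before the logged expiration (2026-02-24).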
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.723408 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.723448 4712 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.722478 4712 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.723620 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 13:37:15.327320363 +0000 UTC Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.724108 4712 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.724138 4712 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 30 16:54:23 crc kubenswrapper[4712]: E0130 16:54:23.724324 4712 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.724370 4712 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.725023 4712 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused Jan 30 16:54:23 crc kubenswrapper[4712]: E0130 16:54:23.725110 4712 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.725361 4712 factory.go:55] Registering systemd factory Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.725381 4712 factory.go:221] Registration of the systemd container factory successfully Jan 30 16:54:23 crc kubenswrapper[4712]: E0130 16:54:23.725669 4712 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="200ms" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.725972 4712 factory.go:153] Registering CRI-O factory Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.725994 4712 factory.go:221] Registration of the crio container factory successfully Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.726057 4712 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.726308 4712 factory.go:103] Registering Raw factory Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.726326 4712 manager.go:1196] Started watching for new ooms in manager Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.728088 4712 server.go:460] "Adding debug handlers to kubelet server" Jan 30 16:54:23 crc 
kubenswrapper[4712]: I0130 16:54:23.728246 4712 manager.go:319] Starting recovery of all containers Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.735095 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.735140 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.735151 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.737336 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.737348 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.737361 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.737408 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.737424 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.737439 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.737451 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.737462 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.737722 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.737741 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: E0130 16:54:23.737210 4712 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.246:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f9081545b4911 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 16:54:23.720343825 +0000 UTC m=+0.627353304,LastTimestamp:2026-01-30 16:54:23.720343825 +0000 UTC m=+0.627353304,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.741218 4712 manager.go:324] Recovery completed Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.743024 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.744204 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.744328 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.744363 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.744411 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.744551 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.744574 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.744604 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.744626 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.744663 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.744877 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.744911 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.744949 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.745174 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.745212 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.745239 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.745264 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.745283 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.745435 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.745461 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.745503 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.745523 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.745543 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.745668 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.745700 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.745726 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.745746 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.745765 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.745980 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.746012 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.746049 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.746068 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.746101 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.746223 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.746245 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.746270 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.746289 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.746310 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.746383 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.746520 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.746573 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.746609 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.746666 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.746706 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.746740 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.751256 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.751293 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.751330 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.751351 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.751365 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.751395 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.751412 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.751426 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.751443 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.751454 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.751485 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.751501 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.751514 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.751528 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.751559 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.751572 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" 
volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.751590 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.751602 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.751632 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.751649 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.751661 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.751678 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.751690 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.751722 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.751737 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.751748 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.751764 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" 
volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.751811 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.751832 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.751851 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.751888 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.751909 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.751923 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.751934 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.751965 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.751976 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.751989 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.751999 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.752010 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.752045 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.752062 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.752120 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.752203 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.752238 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.752255 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.752291 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.754158 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.754188 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.756694 4712 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.756736 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.756759 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.756775 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.756857 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.756878 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.756893 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.756907 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.756923 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.756936 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.756963 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.756976 4712 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757000 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757014 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757028 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757040 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757052 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757065 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757101 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757116 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757129 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757142 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757154 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757166 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757179 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757191 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757203 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757216 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757229 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757242 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757254 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757281 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757292 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757306 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757330 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757356 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757369 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757383 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757395 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757408 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757419 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757432 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757444 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757454 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757466 4712 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757478 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757491 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757502 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757513 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757524 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757535 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757548 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757561 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757584 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757598 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757611 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757622 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757632 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757645 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757667 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757682 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757707 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757719 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757732 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757744 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757755 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757766 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757777 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757809 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757824 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757839 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757865 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757877 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757888 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757901 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757913 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757929 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757943 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757957 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757975 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.757988 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.758004 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.758017 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.758033 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.758050 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.758064 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.758078 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.758092 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.758107 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.758120 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.758133 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.758147 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.758160 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.758173 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.758187 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.758200 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.758217 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.758231 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.758246 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.758262 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.758276 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.758291 4712 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.758303 4712 reconstruct.go:97] "Volume reconstruction finished" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.758312 4712 reconciler.go:26] "Reconciler: start to sync state" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.759348 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.761558 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.761612 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.761622 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.762675 4712 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.762696 4712 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.762715 4712 state_mem.go:36] "Initialized new in-memory state store" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.796502 4712 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.796878 4712 policy_none.go:49] "None policy: Start" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.798096 4712 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.798137 4712 state_mem.go:35] "Initializing new in-memory state store" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.798270 4712 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.798344 4712 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.798378 4712 kubelet.go:2335] "Starting kubelet main sync loop" Jan 30 16:54:23 crc kubenswrapper[4712]: E0130 16:54:23.798468 4712 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 16:54:23 crc kubenswrapper[4712]: W0130 16:54:23.800279 4712 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused Jan 30 16:54:23 crc kubenswrapper[4712]: E0130 16:54:23.800347 4712 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError" Jan 30 16:54:23 crc kubenswrapper[4712]: E0130 16:54:23.825221 4712 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.850870 4712 manager.go:334] "Starting Device Plugin manager" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.851019 4712 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.851043 4712 server.go:79] "Starting device plugin registration server" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.851532 4712 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.851546 4712 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.851771 4712 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.851860 4712 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.851869 4712 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 16:54:23 crc kubenswrapper[4712]: E0130 16:54:23.858622 4712 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.899453 4712 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.899520 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.901076 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.901106 4712 
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.901106 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.901134 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.901264 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.901457 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.901545 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.902091 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.902109 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.902116 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.902211 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.902290 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.902317 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.902893 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.902941 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.902951 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.903122 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.903178 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.903195 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.903439 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.903522 4712 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.903549 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.904616 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.904635 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.904644 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.904721 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.904743 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.904771 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.904817 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.905054 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.905071 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.905359 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.905391 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.905405 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.906159 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.906229 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.906268 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.906239 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.906310 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.906282 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.906620 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.906655 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.907583 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.907622 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.907638 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:23 crc kubenswrapper[4712]: E0130 16:54:23.927041 4712 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="400ms" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.952446 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.953871 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.953921 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.953934 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.953961 4712 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 16:54:23 crc kubenswrapper[4712]: E0130 16:54:23.954387 4712 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.246:6443: connect: connection refused" node="crc" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.960620 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.960665 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.960691 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.960712 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.960732 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.960756 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.960817 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.960846 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.960879 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.960902 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.960917 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.960942 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.960958 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: 
\"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.960973 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 16:54:23 crc kubenswrapper[4712]: I0130 16:54:23.960994 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.061714 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.061776 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.061823 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.061843 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.061865 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.061892 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.061913 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.061934 4712 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.061956 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.061977 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.061997 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.062017 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.062038 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.062058 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.062079 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.062454 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.062502 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.062464 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.062508 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.062537 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.062462 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.062583 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.062611 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.062610 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.062621 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.062629 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.062646 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.062633 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.062654 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.062678 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.155280 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.159665 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.159701 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.159709 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.159737 4712 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 16:54:24 crc kubenswrapper[4712]: E0130 16:54:24.160125 4712 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.246:6443: connect: connection refused" node="crc" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.241522 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.247698 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.265677 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.281245 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.287653 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 16:54:24 crc kubenswrapper[4712]: W0130 16:54:24.294413 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-f5b9e40c487613bfddd0c3e4499bcbd97c334ed260594812353a2e647ed38932 WatchSource:0}: Error finding container f5b9e40c487613bfddd0c3e4499bcbd97c334ed260594812353a2e647ed38932: Status 404 returned error can't find the container with id f5b9e40c487613bfddd0c3e4499bcbd97c334ed260594812353a2e647ed38932 Jan 30 16:54:24 crc kubenswrapper[4712]: W0130 16:54:24.295080 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-87da9e4667258bc8b090a8558c0073c624830a9a13ffc2bbfb2a5584661605af WatchSource:0}: Error finding container 87da9e4667258bc8b090a8558c0073c624830a9a13ffc2bbfb2a5584661605af: Status 404 returned error can't find the container with id 87da9e4667258bc8b090a8558c0073c624830a9a13ffc2bbfb2a5584661605af Jan 30 16:54:24 crc kubenswrapper[4712]: W0130 16:54:24.302932 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-714e46ee76a1401c219cd2fa288ddf590c50d4dd41c51e4040521c4d95e8ca4a WatchSource:0}: Error finding container 714e46ee76a1401c219cd2fa288ddf590c50d4dd41c51e4040521c4d95e8ca4a: Status 404 returned error can't find the container with id 714e46ee76a1401c219cd2fa288ddf590c50d4dd41c51e4040521c4d95e8ca4a Jan 30 16:54:24 crc kubenswrapper[4712]: W0130 16:54:24.306163 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-36e511a90b95de127c02cb273445536635f72eb7a74c805da95a66365a9329b5 WatchSource:0}: Error finding container 36e511a90b95de127c02cb273445536635f72eb7a74c805da95a66365a9329b5: Status 404 returned error can't find the container with id 36e511a90b95de127c02cb273445536635f72eb7a74c805da95a66365a9329b5 Jan 30 16:54:24 crc kubenswrapper[4712]: E0130 16:54:24.327872 4712 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="800ms" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.561689 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.565348 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.565390 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.565399 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.565422 4712 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 16:54:24 crc kubenswrapper[4712]: E0130 16:54:24.566083 4712 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.246:6443: connect: connection refused" node="crc" Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.721896 4712 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.723969 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 07:02:23.644925548 +0000 UTC Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.802763 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"36e511a90b95de127c02cb273445536635f72eb7a74c805da95a66365a9329b5"} Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.803829 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"60c9703ba0c422ecadf9379293d1f832810040fb4b8508e20a1cd97f60122f02"} Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.805605 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"714e46ee76a1401c219cd2fa288ddf590c50d4dd41c51e4040521c4d95e8ca4a"} Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.806976 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f5b9e40c487613bfddd0c3e4499bcbd97c334ed260594812353a2e647ed38932"} Jan 30 16:54:24 crc kubenswrapper[4712]: I0130 16:54:24.807983 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"87da9e4667258bc8b090a8558c0073c624830a9a13ffc2bbfb2a5584661605af"} Jan 30 16:54:24 crc kubenswrapper[4712]: W0130 16:54:24.865462 4712 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused Jan 30 16:54:24 crc kubenswrapper[4712]: E0130 16:54:24.865561 4712 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError" Jan 30 16:54:24 crc kubenswrapper[4712]: W0130 16:54:24.958161 4712 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused Jan 30 16:54:24 crc kubenswrapper[4712]: E0130 16:54:24.958227 4712 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: 
Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError" Jan 30 16:54:25 crc kubenswrapper[4712]: W0130 16:54:25.014275 4712 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused Jan 30 16:54:25 crc kubenswrapper[4712]: E0130 16:54:25.014339 4712 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError" Jan 30 16:54:25 crc kubenswrapper[4712]: E0130 16:54:25.129154 4712 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="1.6s" Jan 30 16:54:25 crc kubenswrapper[4712]: W0130 16:54:25.170613 4712 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused Jan 30 16:54:25 crc kubenswrapper[4712]: E0130 16:54:25.170725 4712 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError" Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.366449 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.367333 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.367365 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.367373 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.367391 4712 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 16:54:25 crc kubenswrapper[4712]: E0130 16:54:25.367831 4712 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.246:6443: connect: connection refused" node="crc" Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.722885 4712 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.724104 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation 
Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.724104 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 23:21:26.514505675 +0000 UTC
Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.770630 4712 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 30 16:54:25 crc kubenswrapper[4712]: E0130 16:54:25.771895 4712 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError"
Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.812196 4712 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="369458cf36c7825613a5613214a88605b5a6247cbd2465f7bb924facf4d573a8" exitCode=0
Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.812288 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"369458cf36c7825613a5613214a88605b5a6247cbd2465f7bb924facf4d573a8"}
Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.815771 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.818299 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.818343 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.818353 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.820177 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.820191 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"36b9e52241ae256ff758f4cf4bdd6a588a6d37cdc5185c372a9942d9d2c9e8db"}
Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.820289 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0930b443376f024df5c05a35328264b347e4b58fef7c64e9bb50b084c461e19f"}
Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.820308 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"14c2d61cc827e2a3a1ff4d597f89532719a52fe5fd353eb2ec9905533cfb8897"}
Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.820323 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771"}
Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.821364 4712 kubelet_node_status.go:724] "Recording event message for
node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.821408 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.821426 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.823445 4712 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77" exitCode=0 Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.823533 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77"} Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.823708 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.825662 4712 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991" exitCode=0 Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.825720 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.825737 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991"} Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.825764 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.825784 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.825963 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.827702 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.827786 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.827870 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.827744 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.827923 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.827831 4712 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="c9cf1694ebc230620e715e416388ffe9e9224ba48349257de31e4f68c535b99b" exitCode=0 Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.827863 4712 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"c9cf1694ebc230620e715e416388ffe9e9224ba48349257de31e4f68c535b99b"} Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.829024 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.829059 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.829075 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.830042 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.830079 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:25 crc kubenswrapper[4712]: I0130 16:54:25.830090 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:25 crc kubenswrapper[4712]: E0130 16:54:25.979396 4712 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.246:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188f9081545b4911 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 16:54:23.720343825 +0000 UTC m=+0.627353304,LastTimestamp:2026-01-30 16:54:23.720343825 +0000 UTC m=+0.627353304,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 16:54:26 crc kubenswrapper[4712]: W0130 16:54:26.502575 4712 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused Jan 30 16:54:26 crc kubenswrapper[4712]: E0130 16:54:26.502666 4712 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError" Jan 30 16:54:26 crc kubenswrapper[4712]: I0130 16:54:26.722214 4712 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused Jan 30 16:54:26 crc kubenswrapper[4712]: I0130 16:54:26.724259 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 00:54:49.513342566 +0000 UTC Jan 30 16:54:26 crc kubenswrapper[4712]: E0130 16:54:26.729880 4712 controller.go:145] "Failed to ensure lease exists, will retry" 
err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="3.2s" Jan 30 16:54:26 crc kubenswrapper[4712]: I0130 16:54:26.833185 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54"} Jan 30 16:54:26 crc kubenswrapper[4712]: I0130 16:54:26.833253 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5"} Jan 30 16:54:26 crc kubenswrapper[4712]: I0130 16:54:26.833267 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291"} Jan 30 16:54:26 crc kubenswrapper[4712]: I0130 16:54:26.833276 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958"} Jan 30 16:54:26 crc kubenswrapper[4712]: I0130 16:54:26.835711 4712 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd" exitCode=0 Jan 30 16:54:26 crc kubenswrapper[4712]: I0130 16:54:26.835768 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd"} Jan 30 16:54:26 crc kubenswrapper[4712]: I0130 16:54:26.835789 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:54:26 crc kubenswrapper[4712]: I0130 16:54:26.836784 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:26 crc kubenswrapper[4712]: I0130 16:54:26.836831 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:26 crc kubenswrapper[4712]: I0130 16:54:26.836840 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:26 crc kubenswrapper[4712]: I0130 16:54:26.837995 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"7a77d9ecb01962b110c243f6cbe7afa7e35ff46587ae5f521e5c0b7d833fe84e"} Jan 30 16:54:26 crc kubenswrapper[4712]: I0130 16:54:26.838078 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:54:26 crc kubenswrapper[4712]: I0130 16:54:26.838671 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:26 crc kubenswrapper[4712]: I0130 16:54:26.838716 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:26 crc 
Jan 30 16:54:26 crc kubenswrapper[4712]: I0130 16:54:26.838078 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:54:26 crc kubenswrapper[4712]: I0130 16:54:26.838671 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:54:26 crc kubenswrapper[4712]: I0130 16:54:26.838716 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:54:26 crc kubenswrapper[4712]: I0130 16:54:26.838725 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:54:26 crc kubenswrapper[4712]: I0130 16:54:26.843125 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:54:26 crc kubenswrapper[4712]: I0130 16:54:26.843822 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:54:26 crc kubenswrapper[4712]: I0130 16:54:26.844187 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"40670c5fb8ecc02e067cbb1ad22ade50ba2c40d03ff8b3b3eac1c0b7f3e1f599"}
Jan 30 16:54:26 crc kubenswrapper[4712]: I0130 16:54:26.844241 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"cd316abcb06f9cb980b110261410e1646a36fe9c70e3384aa128b178272fb6d2"}
Jan 30 16:54:26 crc kubenswrapper[4712]: I0130 16:54:26.844254 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c4750ebab3eaeb8b0c465d2257c417e68692c999f382e05630a3f317f3f9ea65"}
Jan 30 16:54:26 crc kubenswrapper[4712]: I0130 16:54:26.844651 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:54:26 crc kubenswrapper[4712]: I0130 16:54:26.844681 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:54:26 crc kubenswrapper[4712]: I0130 16:54:26.844691 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:54:26 crc kubenswrapper[4712]: I0130 16:54:26.845378 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:54:26 crc kubenswrapper[4712]: I0130 16:54:26.845392 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:54:26 crc kubenswrapper[4712]: I0130 16:54:26.845400 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:54:26 crc kubenswrapper[4712]: I0130 16:54:26.968057 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 30 16:54:26 crc kubenswrapper[4712]: I0130 16:54:26.969102 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:54:26 crc kubenswrapper[4712]: I0130 16:54:26.969137 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:54:26 crc kubenswrapper[4712]: I0130 16:54:26.969149 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:54:26 crc kubenswrapper[4712]: I0130 16:54:26.969176 4712 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 30 16:54:26 crc kubenswrapper[4712]: E0130 16:54:26.969621 4712 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.246:6443: connect: connection
refused" node="crc" Jan 30 16:54:27 crc kubenswrapper[4712]: W0130 16:54:27.138265 4712 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.246:6443: connect: connection refused Jan 30 16:54:27 crc kubenswrapper[4712]: E0130 16:54:27.138359 4712 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.246:6443: connect: connection refused" logger="UnhandledError" Jan 30 16:54:27 crc kubenswrapper[4712]: I0130 16:54:27.724645 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 03:09:08.956198923 +0000 UTC Jan 30 16:54:27 crc kubenswrapper[4712]: I0130 16:54:27.847759 4712 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16" exitCode=0 Jan 30 16:54:27 crc kubenswrapper[4712]: I0130 16:54:27.847875 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:54:27 crc kubenswrapper[4712]: I0130 16:54:27.847832 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16"} Jan 30 16:54:27 crc kubenswrapper[4712]: I0130 16:54:27.849015 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:27 crc kubenswrapper[4712]: I0130 16:54:27.849047 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:27 crc kubenswrapper[4712]: I0130 16:54:27.849059 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:27 crc kubenswrapper[4712]: I0130 16:54:27.850702 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:54:27 crc kubenswrapper[4712]: I0130 16:54:27.851162 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:54:27 crc kubenswrapper[4712]: I0130 16:54:27.851340 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b94a68e91d2a8a55d6cb57a915466f47075d4b4fdfccea522d07b9c3dd2f5882"} Jan 30 16:54:27 crc kubenswrapper[4712]: I0130 16:54:27.851369 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 16:54:27 crc kubenswrapper[4712]: I0130 16:54:27.851477 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:54:27 crc kubenswrapper[4712]: I0130 16:54:27.851589 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:27 crc kubenswrapper[4712]: I0130 16:54:27.851620 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:54:27 crc kubenswrapper[4712]: I0130 16:54:27.851631 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:27 crc kubenswrapper[4712]: I0130 16:54:27.851731 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:27 crc kubenswrapper[4712]: I0130 16:54:27.851754 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:27 crc kubenswrapper[4712]: I0130 16:54:27.851765 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:27 crc kubenswrapper[4712]: I0130 16:54:27.852421 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:27 crc kubenswrapper[4712]: I0130 16:54:27.852438 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:27 crc kubenswrapper[4712]: I0130 16:54:27.852449 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:28 crc kubenswrapper[4712]: I0130 16:54:28.132821 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:54:28 crc kubenswrapper[4712]: I0130 16:54:28.220556 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:54:28 crc kubenswrapper[4712]: I0130 16:54:28.221032 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:54:28 crc kubenswrapper[4712]: I0130 16:54:28.224691 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:28 crc kubenswrapper[4712]: I0130 16:54:28.224736 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:28 crc kubenswrapper[4712]: I0130 16:54:28.224754 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:28 crc kubenswrapper[4712]: I0130 16:54:28.226602 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:54:28 crc kubenswrapper[4712]: I0130 16:54:28.725574 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 02:41:34.454222257 +0000 UTC Jan 30 16:54:28 crc kubenswrapper[4712]: I0130 16:54:28.860775 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7424ed8c0152d8e7a206225cef7124928cbd700835cfa9f9c4e523bfe900f3c3"} Jan 30 16:54:28 crc kubenswrapper[4712]: I0130 16:54:28.860850 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"0a5b6fd14c3399f1335ab849702db5ae6ab412a259215225a3d1b45afdf29c21"} Jan 30 16:54:28 crc kubenswrapper[4712]: I0130 16:54:28.860869 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d5aceb9806198f5a718418c0834056dadafaec6a0d93a3d4253f3d18d3e7f6b7"} Jan 30 16:54:28 crc kubenswrapper[4712]: I0130 16:54:28.860883 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b1b98b043cb83155f1304d4e3e7afaa43921f5f9bd288e2d50222df3c549a97c"} Jan 30 16:54:28 crc kubenswrapper[4712]: I0130 16:54:28.860899 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e49662995d818377576fc281d55d5d944d785d5a0a11d45e86e9f08ec7fa2bba"} Jan 30 16:54:28 crc kubenswrapper[4712]: I0130 16:54:28.861026 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:54:28 crc kubenswrapper[4712]: I0130 16:54:28.861049 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:54:28 crc kubenswrapper[4712]: I0130 16:54:28.861066 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:54:28 crc kubenswrapper[4712]: I0130 16:54:28.861034 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:54:28 crc kubenswrapper[4712]: I0130 16:54:28.861026 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:54:28 crc kubenswrapper[4712]: I0130 16:54:28.862430 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:28 crc kubenswrapper[4712]: I0130 16:54:28.862450 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:28 crc kubenswrapper[4712]: I0130 16:54:28.862457 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:28 crc kubenswrapper[4712]: I0130 16:54:28.862487 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:28 crc kubenswrapper[4712]: I0130 16:54:28.862500 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:28 crc kubenswrapper[4712]: I0130 16:54:28.862471 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:28 crc kubenswrapper[4712]: I0130 16:54:28.862571 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:28 crc kubenswrapper[4712]: I0130 16:54:28.862459 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:28 crc kubenswrapper[4712]: I0130 16:54:28.862611 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:28 crc kubenswrapper[4712]: I0130 16:54:28.862666 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:28 crc kubenswrapper[4712]: I0130 16:54:28.862724 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:28 crc kubenswrapper[4712]: I0130 16:54:28.863022 4712 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:29 crc kubenswrapper[4712]: I0130 16:54:29.726148 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 10:00:10.773167302 +0000 UTC Jan 30 16:54:29 crc kubenswrapper[4712]: I0130 16:54:29.863132 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:54:29 crc kubenswrapper[4712]: I0130 16:54:29.863137 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:54:29 crc kubenswrapper[4712]: I0130 16:54:29.864457 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:29 crc kubenswrapper[4712]: I0130 16:54:29.864488 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:29 crc kubenswrapper[4712]: I0130 16:54:29.864520 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:29 crc kubenswrapper[4712]: I0130 16:54:29.864534 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:29 crc kubenswrapper[4712]: I0130 16:54:29.864496 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:29 crc kubenswrapper[4712]: I0130 16:54:29.864621 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:29 crc kubenswrapper[4712]: I0130 16:54:29.943204 4712 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 30 16:54:30 crc kubenswrapper[4712]: I0130 16:54:30.169785 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:54:30 crc kubenswrapper[4712]: I0130 16:54:30.171213 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:30 crc kubenswrapper[4712]: I0130 16:54:30.171274 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:30 crc kubenswrapper[4712]: I0130 16:54:30.171286 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:30 crc kubenswrapper[4712]: I0130 16:54:30.171312 4712 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 16:54:30 crc kubenswrapper[4712]: I0130 16:54:30.457876 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:54:30 crc kubenswrapper[4712]: I0130 16:54:30.726937 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 19:18:14.656025181 +0000 UTC Jan 30 16:54:30 crc kubenswrapper[4712]: I0130 16:54:30.864618 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:54:30 crc kubenswrapper[4712]: I0130 16:54:30.865552 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:30 crc kubenswrapper[4712]: I0130 16:54:30.865594 4712 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:30 crc kubenswrapper[4712]: I0130 16:54:30.865603 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:31 crc kubenswrapper[4712]: I0130 16:54:31.585429 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:54:31 crc kubenswrapper[4712]: I0130 16:54:31.585577 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:54:31 crc kubenswrapper[4712]: I0130 16:54:31.586662 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:31 crc kubenswrapper[4712]: I0130 16:54:31.586712 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:31 crc kubenswrapper[4712]: I0130 16:54:31.586723 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:31 crc kubenswrapper[4712]: I0130 16:54:31.728047 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 01:33:55.070928052 +0000 UTC Jan 30 16:54:32 crc kubenswrapper[4712]: I0130 16:54:32.434193 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 30 16:54:32 crc kubenswrapper[4712]: I0130 16:54:32.434395 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:54:32 crc kubenswrapper[4712]: I0130 16:54:32.435743 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:32 crc kubenswrapper[4712]: I0130 16:54:32.435781 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:32 crc kubenswrapper[4712]: I0130 16:54:32.435814 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:32 crc kubenswrapper[4712]: I0130 16:54:32.728955 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 22:32:42.092974021 +0000 UTC Jan 30 16:54:33 crc kubenswrapper[4712]: I0130 16:54:33.729210 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 00:24:53.900837505 +0000 UTC Jan 30 16:54:33 crc kubenswrapper[4712]: E0130 16:54:33.858870 4712 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 30 16:54:34 crc kubenswrapper[4712]: I0130 16:54:34.455348 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:54:34 crc kubenswrapper[4712]: I0130 16:54:34.455530 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:54:34 crc kubenswrapper[4712]: I0130 16:54:34.456845 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:34 crc kubenswrapper[4712]: I0130 16:54:34.456903 4712 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:34 crc kubenswrapper[4712]: I0130 16:54:34.456919 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:34 crc kubenswrapper[4712]: I0130 16:54:34.460466 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:54:34 crc kubenswrapper[4712]: I0130 16:54:34.730101 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 23:35:08.684642222 +0000 UTC Jan 30 16:54:34 crc kubenswrapper[4712]: I0130 16:54:34.887401 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:54:34 crc kubenswrapper[4712]: I0130 16:54:34.888726 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:34 crc kubenswrapper[4712]: I0130 16:54:34.888839 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:34 crc kubenswrapper[4712]: I0130 16:54:34.888870 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:34 crc kubenswrapper[4712]: I0130 16:54:34.940392 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:54:35 crc kubenswrapper[4712]: I0130 16:54:35.561354 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 30 16:54:35 crc kubenswrapper[4712]: I0130 16:54:35.561595 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:54:35 crc kubenswrapper[4712]: I0130 16:54:35.562909 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:35 crc kubenswrapper[4712]: I0130 16:54:35.562943 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:35 crc kubenswrapper[4712]: I0130 16:54:35.562955 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:35 crc kubenswrapper[4712]: I0130 16:54:35.730921 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 11:42:35.592253642 +0000 UTC Jan 30 16:54:35 crc kubenswrapper[4712]: I0130 16:54:35.889861 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:54:35 crc kubenswrapper[4712]: I0130 16:54:35.890943 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:35 crc kubenswrapper[4712]: I0130 16:54:35.891056 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:35 crc kubenswrapper[4712]: I0130 16:54:35.891156 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:36 crc kubenswrapper[4712]: I0130 16:54:36.731667 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 
05:53:03 +0000 UTC, rotation deadline is 2025-12-03 21:09:30.708418011 +0000 UTC Jan 30 16:54:37 crc kubenswrapper[4712]: I0130 16:54:37.722865 4712 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 30 16:54:37 crc kubenswrapper[4712]: I0130 16:54:37.732191 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 10:33:45.988627188 +0000 UTC Jan 30 16:54:37 crc kubenswrapper[4712]: W0130 16:54:37.751516 4712 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 30 16:54:37 crc kubenswrapper[4712]: I0130 16:54:37.751596 4712 trace.go:236] Trace[650137214]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 16:54:27.749) (total time: 10001ms): Jan 30 16:54:37 crc kubenswrapper[4712]: Trace[650137214]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (16:54:37.751) Jan 30 16:54:37 crc kubenswrapper[4712]: Trace[650137214]: [10.00169796s] [10.00169796s] END Jan 30 16:54:37 crc kubenswrapper[4712]: E0130 16:54:37.751620 4712 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 30 16:54:37 crc kubenswrapper[4712]: I0130 16:54:37.894898 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 16:54:37 crc kubenswrapper[4712]: I0130 16:54:37.896334 4712 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b94a68e91d2a8a55d6cb57a915466f47075d4b4fdfccea522d07b9c3dd2f5882" exitCode=255 Jan 30 16:54:37 crc kubenswrapper[4712]: I0130 16:54:37.896364 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b94a68e91d2a8a55d6cb57a915466f47075d4b4fdfccea522d07b9c3dd2f5882"} Jan 30 16:54:37 crc kubenswrapper[4712]: I0130 16:54:37.896480 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:54:37 crc kubenswrapper[4712]: I0130 16:54:37.897221 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:37 crc kubenswrapper[4712]: I0130 16:54:37.897242 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:37 crc kubenswrapper[4712]: I0130 16:54:37.897250 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:37 crc kubenswrapper[4712]: I0130 16:54:37.897707 4712 scope.go:117] "RemoveContainer" containerID="b94a68e91d2a8a55d6cb57a915466f47075d4b4fdfccea522d07b9c3dd2f5882" Jan 30 16:54:37 crc 
kubenswrapper[4712]: I0130 16:54:37.941129 4712 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 16:54:37 crc kubenswrapper[4712]: I0130 16:54:37.941212 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 16:54:38 crc kubenswrapper[4712]: W0130 16:54:38.236695 4712 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 30 16:54:38 crc kubenswrapper[4712]: I0130 16:54:38.236819 4712 trace.go:236] Trace[1387899642]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 16:54:28.235) (total time: 10001ms): Jan 30 16:54:38 crc kubenswrapper[4712]: Trace[1387899642]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (16:54:38.236) Jan 30 16:54:38 crc kubenswrapper[4712]: Trace[1387899642]: [10.001428063s] [10.001428063s] END Jan 30 16:54:38 crc kubenswrapper[4712]: E0130 16:54:38.236844 4712 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 30 16:54:38 crc kubenswrapper[4712]: I0130 16:54:38.334239 4712 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 30 16:54:38 crc kubenswrapper[4712]: I0130 16:54:38.334315 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 30 16:54:38 crc kubenswrapper[4712]: I0130 16:54:38.342363 4712 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 30 16:54:38 crc kubenswrapper[4712]: I0130 16:54:38.342415 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 30 16:54:38 crc kubenswrapper[4712]: I0130 16:54:38.732728 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 19:39:35.368888422 +0000 UTC Jan 30 16:54:38 crc kubenswrapper[4712]: I0130 16:54:38.902482 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 16:54:38 crc kubenswrapper[4712]: I0130 16:54:38.905163 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e"} Jan 30 16:54:38 crc kubenswrapper[4712]: I0130 16:54:38.905655 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:54:38 crc kubenswrapper[4712]: I0130 16:54:38.907587 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:38 crc kubenswrapper[4712]: I0130 16:54:38.907623 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:38 crc kubenswrapper[4712]: I0130 16:54:38.907636 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:39 crc kubenswrapper[4712]: I0130 16:54:39.733772 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 07:30:04.809681792 +0000 UTC Jan 30 16:54:40 crc kubenswrapper[4712]: I0130 16:54:40.465472 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:54:40 crc kubenswrapper[4712]: I0130 16:54:40.465586 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:54:40 crc kubenswrapper[4712]: I0130 16:54:40.465711 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:54:40 crc kubenswrapper[4712]: I0130 16:54:40.466572 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:40 crc kubenswrapper[4712]: I0130 16:54:40.466672 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:40 crc kubenswrapper[4712]: I0130 16:54:40.466702 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:40 crc kubenswrapper[4712]: I0130 16:54:40.470271 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:54:40 crc kubenswrapper[4712]: I0130 16:54:40.734911 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 10:13:36.220306267 +0000 UTC Jan 30 16:54:40 crc kubenswrapper[4712]: I0130 16:54:40.909698 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:54:40 crc kubenswrapper[4712]: I0130 16:54:40.910995 4712 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:40 crc kubenswrapper[4712]: I0130 16:54:40.911081 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:40 crc kubenswrapper[4712]: I0130 16:54:40.911095 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:41 crc kubenswrapper[4712]: I0130 16:54:41.691790 4712 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 30 16:54:41 crc kubenswrapper[4712]: I0130 16:54:41.736070 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 11:11:17.137827762 +0000 UTC Jan 30 16:54:41 crc kubenswrapper[4712]: I0130 16:54:41.913301 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:54:41 crc kubenswrapper[4712]: I0130 16:54:41.915099 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:41 crc kubenswrapper[4712]: I0130 16:54:41.915206 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:41 crc kubenswrapper[4712]: I0130 16:54:41.915230 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:42 crc kubenswrapper[4712]: I0130 16:54:42.581562 4712 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 30 16:54:42 crc kubenswrapper[4712]: I0130 16:54:42.720218 4712 apiserver.go:52] "Watching apiserver" Jan 30 16:54:42 crc kubenswrapper[4712]: I0130 16:54:42.732251 4712 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 30 16:54:42 crc kubenswrapper[4712]: I0130 16:54:42.732782 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Jan 30 16:54:42 crc kubenswrapper[4712]: I0130 16:54:42.733887 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 16:54:42 crc kubenswrapper[4712]: I0130 16:54:42.733891 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 16:54:42 crc kubenswrapper[4712]: I0130 16:54:42.734069 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:54:42 crc kubenswrapper[4712]: I0130 16:54:42.734732 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 16:54:42 crc kubenswrapper[4712]: E0130 16:54:42.734750 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:54:42 crc kubenswrapper[4712]: I0130 16:54:42.734899 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:54:42 crc kubenswrapper[4712]: I0130 16:54:42.734984 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:54:42 crc kubenswrapper[4712]: E0130 16:54:42.735034 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:54:42 crc kubenswrapper[4712]: E0130 16:54:42.735237 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:54:42 crc kubenswrapper[4712]: I0130 16:54:42.736890 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 16:18:37.086307252 +0000 UTC Jan 30 16:54:42 crc kubenswrapper[4712]: I0130 16:54:42.738225 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 30 16:54:42 crc kubenswrapper[4712]: I0130 16:54:42.738348 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 30 16:54:42 crc kubenswrapper[4712]: I0130 16:54:42.738528 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 30 16:54:42 crc kubenswrapper[4712]: I0130 16:54:42.738562 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 30 16:54:42 crc kubenswrapper[4712]: I0130 16:54:42.738747 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 30 16:54:42 crc kubenswrapper[4712]: I0130 16:54:42.738997 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 30 16:54:42 crc kubenswrapper[4712]: I0130 16:54:42.739124 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 30 16:54:42 crc kubenswrapper[4712]: I0130 16:54:42.738248 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 30 16:54:42 crc kubenswrapper[4712]: I0130 16:54:42.741939 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 30 16:54:42 crc kubenswrapper[4712]: I0130 16:54:42.770224 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:42 crc kubenswrapper[4712]: I0130 16:54:42.781276 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:42 crc kubenswrapper[4712]: I0130 16:54:42.794889 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:42 crc kubenswrapper[4712]: I0130 16:54:42.803090 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:42 crc kubenswrapper[4712]: I0130 16:54:42.812724 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 16:54:42 crc kubenswrapper[4712]: I0130 16:54:42.820260 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 16:54:42 crc kubenswrapper[4712]: I0130 16:54:42.825459 4712 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Jan 30 16:54:42 crc kubenswrapper[4712]: I0130 16:54:42.829300 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 16:54:42 crc kubenswrapper[4712]: I0130 16:54:42.837374 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 16:54:42 crc kubenswrapper[4712]: I0130 16:54:42.844024 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 16:54:42 crc kubenswrapper[4712]: I0130 16:54:42.850578 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 16:54:42 crc kubenswrapper[4712]: I0130 16:54:42.857688 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 16:54:43 crc kubenswrapper[4712]: E0130 16:54:43.336921 4712 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s"
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.341727 4712 trace.go:236] Trace[1844064340]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 16:54:32.919) (total time: 10422ms):
Jan 30 16:54:43 crc kubenswrapper[4712]: Trace[1844064340]: ---"Objects listed" error: 10422ms (16:54:43.341)
Jan 30 16:54:43 crc kubenswrapper[4712]: Trace[1844064340]: [10.422086355s] [10.422086355s] END
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.341753 4712 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.344010 4712 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Jan 30 16:54:43 crc kubenswrapper[4712]: E0130 16:54:43.344128 4712 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc"
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.346494 4712 trace.go:236] Trace[1075064605]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (30-Jan-2026 16:54:30.945) (total time: 12400ms):
Jan 30 16:54:43 crc kubenswrapper[4712]: Trace[1075064605]: ---"Objects listed" error: 12400ms (16:54:43.346)
Jan 30 16:54:43 crc kubenswrapper[4712]: Trace[1075064605]: [12.400818437s] [12.400818437s] END
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.346527 4712 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.349904 4712 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.444554 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.444611 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.444641 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.444664 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.444687 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.444708 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.444730 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.444750 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.444769 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.444788 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.444873 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.444898 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.444917 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.444938 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.444960 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.444982 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.444993 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445007 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445072 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445094 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445115 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445133 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445151 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445167 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445187 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445203 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445221 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445322 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445339 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445359 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445376 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445393 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445408 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445423 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445439 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445454 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445472 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445490 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445508 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445524 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445539 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445554 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445570 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445585 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445600 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445616 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445634 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445650 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445664 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445683 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445699 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445716 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445730 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445745 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445762 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445780 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445808 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445824 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445839 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445857 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445873 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445143 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445206 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445989 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445215 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445258 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445271 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445294 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445447 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445463 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445475 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445544 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445661 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445924 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445930 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445992 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: E0130 16:54:43.446028 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:54:43.946013772 +0000 UTC m=+20.853023231 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.446136 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.446206 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.446217 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.446309 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.446349 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.446379 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.446399 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.446665 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.445936 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.446770 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.446881 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.446911 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.446937 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.446965 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.446991 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.447061 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.447085 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.447088 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.447135 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.447156 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.447174 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.447194 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.447210 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.447255 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.447273 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.447297 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.447307 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.447476 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.447606 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.447620 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.447660 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.447783 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.448519 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.448569 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.448678 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.448813 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.448924 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.449160 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.449187 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.449219 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.449500 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.449510 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.449530 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.449568 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.449860 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.449899 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.450003 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.450176 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.450182 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.450286 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.450337 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.451838 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.450564 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.450780 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.450809 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.450831 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.451039 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.451063 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.451125 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.451130 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.451145 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.451321 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.451398 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.447315 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.452092 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.452099 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.452156 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.452146 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.452238 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.452244 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.452305 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.452332 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.452380 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.452405 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.452431 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.452456 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.452471 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.452478 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.452598 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.452618 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.452637 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.452655 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.452691 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.452708 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.452711 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.452724 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.452738 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.452756 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.452779 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.452818 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.452835 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.452852 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.452882 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.452893 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.452979 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.452987 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453009 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453037 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453063 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453087 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453099 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453124 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453146 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453169 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453190 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453190 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453214 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453232 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453242 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453267 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453288 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453308 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453331 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453353 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453375 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453398 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453420 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453442 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453466 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453493 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453515 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453538 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453560 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453583 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453608 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453632 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453654 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453677 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453701 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453723 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453744 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453767 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453814 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453838 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453860 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453883 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453904 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453938 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: 
\"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.455162 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.455203 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.455230 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.455256 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.455280 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.455314 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.455340 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.455584 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.455612 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.455638 4712 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.455775 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.455817 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.455843 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.455869 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.455893 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.455916 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.455939 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.455962 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.455985 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.456008 4712 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.456032 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.456083 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.456143 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.456170 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.456194 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.456217 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.456242 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.456266 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.456295 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.456644 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.456670 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.456695 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.456719 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.456745 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.456776 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.456822 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.456847 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.456870 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.456893 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.456918 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.456941 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.456965 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.456993 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.457018 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.457042 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.457066 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.457089 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453374 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453377 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453411 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453465 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453566 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453626 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453759 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453828 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453842 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). 
InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.453978 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.454196 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.454195 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.454320 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.454334 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.454514 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.454664 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.455845 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.456267 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.457261 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.456282 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.456482 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.457049 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.457869 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458097 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458132 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458159 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458183 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458206 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458230 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458253 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458305 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458333 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458360 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: 
\"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458385 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458410 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458438 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458468 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458494 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458517 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458546 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458569 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " 
pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458591 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458615 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458640 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458724 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458740 4712 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458755 4712 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458768 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458783 4712 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458817 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458830 4712 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458844 4712 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") 
on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458858 4712 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458870 4712 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458882 4712 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458896 4712 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458909 4712 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458922 4712 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458935 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458948 4712 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458961 4712 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458973 4712 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458986 4712 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459000 4712 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459109 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459123 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459136 4712 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459149 4712 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459162 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459175 4712 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459188 4712 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459200 4712 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459216 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459230 4712 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459272 4712 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459285 4712 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459297 4712 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459310 4712 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459323 4712 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459335 4712 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459348 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459359 4712 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459372 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459386 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459402 4712 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459414 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459426 4712 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459439 4712 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459451 4712 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459462 4712 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459474 4712 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 30 
16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459486 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459498 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459510 4712 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459522 4712 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459534 4712 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459546 4712 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459558 4712 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459571 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459584 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459596 4712 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459608 4712 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459621 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459633 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: 
I0130 16:54:43.459645 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459662 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459675 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459686 4712 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459699 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459712 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459724 4712 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459736 4712 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459747 4712 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459760 4712 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459775 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459788 4712 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459819 4712 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459832 4712 
reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459858 4712 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459873 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459888 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459900 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459912 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459925 4712 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459938 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459950 4712 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459962 4712 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459974 4712 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.459987 4712 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.460000 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 
16:54:43.460012 4712 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.460024 4712 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.460036 4712 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.460049 4712 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.460063 4712 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.460076 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.460088 4712 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.460103 4712 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.460138 4712 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.460151 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.463688 4712 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458129 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.464351 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.464641 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.464904 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.465146 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.465173 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458343 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458395 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458563 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). 
InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458698 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458730 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458841 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.460255 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.460814 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.465289 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.461101 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.461186 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.461204 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.461422 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.461753 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.461814 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.461831 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.461876 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.462117 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.462128 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.462244 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.462442 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.462990 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.463064 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.463245 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.463444 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.463576 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). 
InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: E0130 16:54:43.463616 4712 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.463652 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.465322 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.465556 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.465583 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.458138 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.465987 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.466807 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.466823 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.466951 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.466964 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.467039 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: E0130 16:54:43.467307 4712 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.467308 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: E0130 16:54:43.468080 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:54:43.96746824 +0000 UTC m=+20.874477709 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.468114 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.468618 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.468691 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.468790 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.468888 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.485170 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.485456 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.485126 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.485360 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.485502 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: E0130 16:54:43.485608 4712 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:54:43 crc kubenswrapper[4712]: E0130 16:54:43.485630 4712 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:54:43 crc kubenswrapper[4712]: E0130 16:54:43.485642 4712 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:54:43 crc kubenswrapper[4712]: E0130 16:54:43.485692 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 16:54:43.98567621 +0000 UTC m=+20.892685679 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.485720 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: E0130 16:54:43.485863 4712 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:54:43 crc kubenswrapper[4712]: E0130 16:54:43.485880 4712 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:54:43 crc kubenswrapper[4712]: E0130 16:54:43.485890 4712 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:54:43 crc kubenswrapper[4712]: E0130 16:54:43.485914 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 16:54:43.985907376 +0000 UTC m=+20.892916835 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.485988 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: E0130 16:54:43.486105 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:54:43.986064709 +0000 UTC m=+20.893074178 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.486184 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.486345 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.486360 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.486541 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.486999 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.487054 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.487353 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.487519 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.488081 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.488171 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.488208 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.488266 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.488340 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.488697 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.488842 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.488886 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.490184 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.490340 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.490712 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.490722 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.491054 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.491329 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.491454 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.491661 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.491966 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.492773 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.492953 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.493292 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.493319 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.493475 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.493680 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.493782 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.494090 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.494591 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.494966 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.495179 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.495369 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.495995 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.496088 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.497406 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.497560 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.497662 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.497914 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.500071 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.500173 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.503527 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.523449 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.532313 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.535613 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.538861 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.560857 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.560896 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.560951 4712 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.560966 4712 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.560975 4712 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.560984 4712 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.560992 4712 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561000 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561008 4712 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561016 4712 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561024 4712 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561032 4712 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561040 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561047 4712 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561055 4712 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561063 4712 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561070 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561080 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561088 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561096 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561104 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561114 4712 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561122 4712 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561130 4712 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561138 4712 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561147 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561154 4712 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561164 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561173 4712 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561181 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561190 4712 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561198 4712 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561209 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561216 4712 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561224 4712 reconciler_common.go:293] "Volume detached for volume 
\"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561232 4712 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561240 4712 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561248 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561256 4712 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561264 4712 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561272 4712 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561281 4712 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561289 4712 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561298 4712 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561305 4712 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561313 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561321 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561329 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: 
\"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561337 4712 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561344 4712 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561353 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561361 4712 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561371 4712 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561379 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561387 4712 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561394 4712 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561402 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561410 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561419 4712 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561427 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561435 4712 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" 
(UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561442 4712 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561450 4712 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561457 4712 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561466 4712 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561474 4712 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561481 4712 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561489 4712 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561497 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561505 4712 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561513 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561521 4712 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561529 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561537 4712 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561545 4712 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561552 4712 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561559 4712 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561567 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561575 4712 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561582 4712 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561589 4712 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561597 4712 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561605 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561613 4712 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561621 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561629 4712 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561637 4712 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561646 4712 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561653 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561661 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561669 4712 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561676 4712 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561684 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561692 4712 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561700 4712 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561708 4712 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561715 4712 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561723 4712 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561731 4712 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561738 4712 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561842 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.561844 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.655656 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 30 16:54:43 crc kubenswrapper[4712]: W0130 16:54:43.668632 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-ca6b5bd6b163048ee430576709f2622c50fa699ba886a1f1ba74927b0cb213af WatchSource:0}: Error finding container ca6b5bd6b163048ee430576709f2622c50fa699ba886a1f1ba74927b0cb213af: Status 404 returned error can't find the container with id ca6b5bd6b163048ee430576709f2622c50fa699ba886a1f1ba74927b0cb213af Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.669288 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.675821 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.741880 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 10:48:48.132108559 +0000 UTC Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.805477 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.806159 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.807658 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.809548 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.812892 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.813483 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.814153 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.818286 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.819036 4712 csr.go:261] certificate signing request csr-lfrlx is approved, waiting to be issued Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.819088 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.819280 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.820350 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.820913 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.822028 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 
16:54:43.822598 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.823922 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.826318 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.827091 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.828325 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.828924 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.835401 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.836258 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.836759 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.838191 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.838582 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.842045 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.842432 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.843552 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.844301 4712 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.845299 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.845893 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.846306 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.847226 4712 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.847323 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.850531 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.850860 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.851579 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.851975 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.854031 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.854291 4712 csr.go:257] certificate signing request csr-lfrlx is issued Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.856947 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.857460 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.858501 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.859492 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.864437 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.865301 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.865767 4712 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.866548 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.868381 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.869091 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.869868 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.873068 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" 
path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.874725 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.876546 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.877877 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.878032 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.879258 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.885244 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.886300 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.887696 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.909655 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.921534 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"1b2b19686bda20ec5431d55748fcf77bf46000f90e5737f9ee5bf4c1075f8b90"} Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.922547 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ca6b5bd6b163048ee430576709f2622c50fa699ba886a1f1ba74927b0cb213af"} Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.924497 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"20df0d2325eb95bfebcb9cd926a71006d16e079d9c40565a7a3c4761b0c30774"} Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.936312 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:43 crc kubenswrapper[4712]: I0130 16:54:43.963721 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:54:43 crc kubenswrapper[4712]: E0130 16:54:43.963884 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:54:44.963868735 +0000 UTC m=+21.870878204 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.064545 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.064597 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.064616 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.064634 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:54:44 crc kubenswrapper[4712]: E0130 16:54:44.064739 4712 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:54:44 crc kubenswrapper[4712]: E0130 16:54:44.064772 4712 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:54:44 crc kubenswrapper[4712]: E0130 16:54:44.064816 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:54:45.064784615 +0000 UTC m=+21.971794084 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:54:44 crc kubenswrapper[4712]: E0130 16:54:44.064820 4712 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:54:44 crc kubenswrapper[4712]: E0130 16:54:44.064841 4712 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:54:44 crc kubenswrapper[4712]: E0130 16:54:44.064837 4712 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:54:44 crc kubenswrapper[4712]: E0130 16:54:44.064876 4712 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:54:44 crc kubenswrapper[4712]: E0130 16:54:44.064886 4712 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:54:44 crc kubenswrapper[4712]: E0130 16:54:44.064908 4712 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:54:44 crc kubenswrapper[4712]: E0130 16:54:44.064864 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 16:54:45.064858007 +0000 UTC m=+21.971867476 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:54:44 crc kubenswrapper[4712]: E0130 16:54:44.064954 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 16:54:45.064937549 +0000 UTC m=+21.971947008 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:54:44 crc kubenswrapper[4712]: E0130 16:54:44.064967 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:54:45.064960849 +0000 UTC m=+21.971970308 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.339940 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-2mlzr"] Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.340178 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-9vnxv"] Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.340305 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-dwnd7"] Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.340550 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.340878 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-2mlzr" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.341178 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.342439 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.342930 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.342935 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.343742 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.343757 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.344030 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.344084 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.344131 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.344283 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.344285 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.344402 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.345519 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.345628 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.353290 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.363694 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.368325 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/dcd71c7c-942c-4c29-969e-45d946f356c8-os-release\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.368386 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/75ff6334-72a0-4748-bba6-0efb493c8033-proxy-tls\") pod \"machine-config-daemon-dwnd7\" (UID: \"75ff6334-72a0-4748-bba6-0efb493c8033\") " pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.368425 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/dcd71c7c-942c-4c29-969e-45d946f356c8-cni-binary-copy\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.368449 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/dcd71c7c-942c-4c29-969e-45d946f356c8-multus-socket-dir-parent\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.368470 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/dcd71c7c-942c-4c29-969e-45d946f356c8-multus-daemon-config\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.368519 4712 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sfdn\" (UniqueName: \"kubernetes.io/projected/dcd71c7c-942c-4c29-969e-45d946f356c8-kube-api-access-8sfdn\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.368538 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dcd71c7c-942c-4c29-969e-45d946f356c8-host-var-lib-cni-bin\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.368557 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/dcd71c7c-942c-4c29-969e-45d946f356c8-host-var-lib-cni-multus\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.368595 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/dcd71c7c-942c-4c29-969e-45d946f356c8-multus-conf-dir\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.368615 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1bac1dc0-d552-4864-b805-fc92981ae4c0-hosts-file\") pod \"node-resolver-2mlzr\" (UID: \"1bac1dc0-d552-4864-b805-fc92981ae4c0\") " pod="openshift-dns/node-resolver-2mlzr" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.368670 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dcd71c7c-942c-4c29-969e-45d946f356c8-multus-cni-dir\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.368689 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/75ff6334-72a0-4748-bba6-0efb493c8033-rootfs\") pod \"machine-config-daemon-dwnd7\" (UID: \"75ff6334-72a0-4748-bba6-0efb493c8033\") " pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.368707 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dcd71c7c-942c-4c29-969e-45d946f356c8-host-run-netns\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.368742 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/75ff6334-72a0-4748-bba6-0efb493c8033-mcd-auth-proxy-config\") pod \"machine-config-daemon-dwnd7\" (UID: \"75ff6334-72a0-4748-bba6-0efb493c8033\") " pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" Jan 30 16:54:44 crc 
kubenswrapper[4712]: I0130 16:54:44.368763 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dcd71c7c-942c-4c29-969e-45d946f356c8-system-cni-dir\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.368783 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/dcd71c7c-942c-4c29-969e-45d946f356c8-cnibin\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.368851 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/dcd71c7c-942c-4c29-969e-45d946f356c8-host-run-k8s-cni-cncf-io\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.368902 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/dcd71c7c-942c-4c29-969e-45d946f356c8-host-var-lib-kubelet\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.368922 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dcd71c7c-942c-4c29-969e-45d946f356c8-etc-kubernetes\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.368942 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/dcd71c7c-942c-4c29-969e-45d946f356c8-hostroot\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.368979 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5zwb\" (UniqueName: \"kubernetes.io/projected/75ff6334-72a0-4748-bba6-0efb493c8033-kube-api-access-c5zwb\") pod \"machine-config-daemon-dwnd7\" (UID: \"75ff6334-72a0-4748-bba6-0efb493c8033\") " pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.369034 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtwpm\" (UniqueName: \"kubernetes.io/projected/1bac1dc0-d552-4864-b805-fc92981ae4c0-kube-api-access-gtwpm\") pod \"node-resolver-2mlzr\" (UID: \"1bac1dc0-d552-4864-b805-fc92981ae4c0\") " pod="openshift-dns/node-resolver-2mlzr" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.369070 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/dcd71c7c-942c-4c29-969e-45d946f356c8-host-run-multus-certs\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc 
kubenswrapper[4712]: I0130 16:54:44.376955 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.393020 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.414225 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.440927 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.456228 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.468490 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:44 crc 
kubenswrapper[4712]: I0130 16:54:44.469737 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/75ff6334-72a0-4748-bba6-0efb493c8033-mcd-auth-proxy-config\") pod \"machine-config-daemon-dwnd7\" (UID: \"75ff6334-72a0-4748-bba6-0efb493c8033\") " pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.469776 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dcd71c7c-942c-4c29-969e-45d946f356c8-system-cni-dir\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.469811 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/dcd71c7c-942c-4c29-969e-45d946f356c8-cnibin\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.469830 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/dcd71c7c-942c-4c29-969e-45d946f356c8-host-run-k8s-cni-cncf-io\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.469855 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/dcd71c7c-942c-4c29-969e-45d946f356c8-host-var-lib-kubelet\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.469869 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dcd71c7c-942c-4c29-969e-45d946f356c8-etc-kubernetes\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.469887 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/dcd71c7c-942c-4c29-969e-45d946f356c8-hostroot\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.469905 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5zwb\" (UniqueName: \"kubernetes.io/projected/75ff6334-72a0-4748-bba6-0efb493c8033-kube-api-access-c5zwb\") pod \"machine-config-daemon-dwnd7\" (UID: \"75ff6334-72a0-4748-bba6-0efb493c8033\") " pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.469927 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gtwpm\" (UniqueName: \"kubernetes.io/projected/1bac1dc0-d552-4864-b805-fc92981ae4c0-kube-api-access-gtwpm\") pod \"node-resolver-2mlzr\" (UID: \"1bac1dc0-d552-4864-b805-fc92981ae4c0\") " pod="openshift-dns/node-resolver-2mlzr" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.469944 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/dcd71c7c-942c-4c29-969e-45d946f356c8-host-run-multus-certs\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.469946 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dcd71c7c-942c-4c29-969e-45d946f356c8-system-cni-dir\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.469969 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/dcd71c7c-942c-4c29-969e-45d946f356c8-os-release\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.470004 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/75ff6334-72a0-4748-bba6-0efb493c8033-proxy-tls\") pod \"machine-config-daemon-dwnd7\" (UID: \"75ff6334-72a0-4748-bba6-0efb493c8033\") " pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.470252 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/dcd71c7c-942c-4c29-969e-45d946f356c8-os-release\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.470282 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/dcd71c7c-942c-4c29-969e-45d946f356c8-host-var-lib-kubelet\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.470302 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/dcd71c7c-942c-4c29-969e-45d946f356c8-etc-kubernetes\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.470325 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/dcd71c7c-942c-4c29-969e-45d946f356c8-cnibin\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.470339 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/dcd71c7c-942c-4c29-969e-45d946f356c8-host-run-k8s-cni-cncf-io\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.470596 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/dcd71c7c-942c-4c29-969e-45d946f356c8-cni-binary-copy\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 
16:54:44.470605 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/dcd71c7c-942c-4c29-969e-45d946f356c8-hostroot\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.470619 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/dcd71c7c-942c-4c29-969e-45d946f356c8-multus-socket-dir-parent\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.470663 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/dcd71c7c-942c-4c29-969e-45d946f356c8-multus-daemon-config\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.470759 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/dcd71c7c-942c-4c29-969e-45d946f356c8-multus-socket-dir-parent\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.470789 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/75ff6334-72a0-4748-bba6-0efb493c8033-mcd-auth-proxy-config\") pod \"machine-config-daemon-dwnd7\" (UID: \"75ff6334-72a0-4748-bba6-0efb493c8033\") " pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.470829 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sfdn\" (UniqueName: \"kubernetes.io/projected/dcd71c7c-942c-4c29-969e-45d946f356c8-kube-api-access-8sfdn\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.470915 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dcd71c7c-942c-4c29-969e-45d946f356c8-host-var-lib-cni-bin\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.470935 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/dcd71c7c-942c-4c29-969e-45d946f356c8-host-var-lib-cni-multus\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.470953 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/dcd71c7c-942c-4c29-969e-45d946f356c8-multus-conf-dir\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.470988 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/dcd71c7c-942c-4c29-969e-45d946f356c8-host-run-multus-certs\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.470991 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1bac1dc0-d552-4864-b805-fc92981ae4c0-hosts-file\") pod \"node-resolver-2mlzr\" (UID: \"1bac1dc0-d552-4864-b805-fc92981ae4c0\") " pod="openshift-dns/node-resolver-2mlzr" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.471018 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dcd71c7c-942c-4c29-969e-45d946f356c8-multus-cni-dir\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.471036 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/75ff6334-72a0-4748-bba6-0efb493c8033-rootfs\") pod \"machine-config-daemon-dwnd7\" (UID: \"75ff6334-72a0-4748-bba6-0efb493c8033\") " pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.471033 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1bac1dc0-d552-4864-b805-fc92981ae4c0-hosts-file\") pod \"node-resolver-2mlzr\" (UID: \"1bac1dc0-d552-4864-b805-fc92981ae4c0\") " pod="openshift-dns/node-resolver-2mlzr" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.471054 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dcd71c7c-942c-4c29-969e-45d946f356c8-host-run-netns\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.471078 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/dcd71c7c-942c-4c29-969e-45d946f356c8-host-var-lib-cni-multus\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.471106 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/dcd71c7c-942c-4c29-969e-45d946f356c8-multus-conf-dir\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.470970 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/dcd71c7c-942c-4c29-969e-45d946f356c8-host-var-lib-cni-bin\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.471158 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/75ff6334-72a0-4748-bba6-0efb493c8033-rootfs\") pod \"machine-config-daemon-dwnd7\" (UID: \"75ff6334-72a0-4748-bba6-0efb493c8033\") " pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" Jan 30 16:54:44 crc 
kubenswrapper[4712]: I0130 16:54:44.471180 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/dcd71c7c-942c-4c29-969e-45d946f356c8-host-run-netns\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.471190 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/dcd71c7c-942c-4c29-969e-45d946f356c8-multus-cni-dir\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.471265 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/dcd71c7c-942c-4c29-969e-45d946f356c8-cni-binary-copy\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.471421 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/dcd71c7c-942c-4c29-969e-45d946f356c8-multus-daemon-config\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.488480 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/75ff6334-72a0-4748-bba6-0efb493c8033-proxy-tls\") pod \"machine-config-daemon-dwnd7\" (UID: \"75ff6334-72a0-4748-bba6-0efb493c8033\") " pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.505124 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.531776 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gtwpm\" (UniqueName: \"kubernetes.io/projected/1bac1dc0-d552-4864-b805-fc92981ae4c0-kube-api-access-gtwpm\") pod \"node-resolver-2mlzr\" (UID: \"1bac1dc0-d552-4864-b805-fc92981ae4c0\") " pod="openshift-dns/node-resolver-2mlzr" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.532119 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5zwb\" (UniqueName: \"kubernetes.io/projected/75ff6334-72a0-4748-bba6-0efb493c8033-kube-api-access-c5zwb\") pod \"machine-config-daemon-dwnd7\" (UID: \"75ff6334-72a0-4748-bba6-0efb493c8033\") " pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.536983 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.537948 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sfdn\" (UniqueName: \"kubernetes.io/projected/dcd71c7c-942c-4c29-969e-45d946f356c8-kube-api-access-8sfdn\") pod \"multus-9vnxv\" (UID: \"dcd71c7c-942c-4c29-969e-45d946f356c8\") " pod="openshift-multus/multus-9vnxv" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.550764 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.562843 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers 
with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.578102 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.588182 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.598449 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.613341 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.624777 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.652156 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.673502 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-2mlzr"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.679633 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-9vnxv"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.696057 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-69v8h"]
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.702372 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-69v8h"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.704362 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-228xs"]
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.705385 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-228xs"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.706866 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.708955 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.710011 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.710026 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.710821 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.711167 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.711256 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.711440 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.711905 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.732661 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.742160 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 06:28:46.494091301 +0000 UTC
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.742253 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.752551 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 16:54:44 crc kubenswrapper[4712]: W0130 16:54:44.762234 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1bac1dc0_d552_4864_b805_fc92981ae4c0.slice/crio-3315b4e0258d9e12e1ceae806c0209e401460f81c7ab011003c9543a0cd1f681 WatchSource:0}: Error finding container 3315b4e0258d9e12e1ceae806c0209e401460f81c7ab011003c9543a0cd1f681: Status 404 returned error can't find the container with id 3315b4e0258d9e12e1ceae806c0209e401460f81c7ab011003c9543a0cd1f681
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.773243 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/93651476-fd00-4a9e-934a-73537f1d103e-ovnkube-script-lib\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.773283 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4777970a-81c4-4412-a06b-641a8343a749-tuning-conf-dir\") pod \"multus-additional-cni-plugins-69v8h\" (UID: \"4777970a-81c4-4412-a06b-641a8343a749\") " pod="openshift-multus/multus-additional-cni-plugins-69v8h"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.773312 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.773334 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-node-log\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.773366 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-host-run-ovn-kubernetes\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.773387 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsrqg\" (UniqueName: \"kubernetes.io/projected/4777970a-81c4-4412-a06b-641a8343a749-kube-api-access-wsrqg\") pod \"multus-additional-cni-plugins-69v8h\" (UID: \"4777970a-81c4-4412-a06b-641a8343a749\") " pod="openshift-multus/multus-additional-cni-plugins-69v8h"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.773407 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-host-kubelet\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.773446 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-run-openvswitch\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.773527 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-run-systemd\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.773563 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-var-lib-openvswitch\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.773579 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-log-socket\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.773593 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxzgm\" (UniqueName: \"kubernetes.io/projected/93651476-fd00-4a9e-934a-73537f1d103e-kube-api-access-rxzgm\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.773620 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4777970a-81c4-4412-a06b-641a8343a749-cnibin\") pod \"multus-additional-cni-plugins-69v8h\" (UID: \"4777970a-81c4-4412-a06b-641a8343a749\") " pod="openshift-multus/multus-additional-cni-plugins-69v8h"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.773634 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4777970a-81c4-4412-a06b-641a8343a749-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-69v8h\" (UID: \"4777970a-81c4-4412-a06b-641a8343a749\") " pod="openshift-multus/multus-additional-cni-plugins-69v8h"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.773648 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-host-cni-bin\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.773671 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/93651476-fd00-4a9e-934a-73537f1d103e-env-overrides\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.773694 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-etc-openvswitch\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.773709 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-run-ovn\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.773742 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4777970a-81c4-4412-a06b-641a8343a749-cni-binary-copy\") pod \"multus-additional-cni-plugins-69v8h\" (UID: \"4777970a-81c4-4412-a06b-641a8343a749\") " pod="openshift-multus/multus-additional-cni-plugins-69v8h"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.773758 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/93651476-fd00-4a9e-934a-73537f1d103e-ovn-node-metrics-cert\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.773775 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-host-cni-netd\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.773815 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-host-slash\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.773834 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-systemd-units\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.773848 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4777970a-81c4-4412-a06b-641a8343a749-os-release\") pod \"multus-additional-cni-plugins-69v8h\" (UID: \"4777970a-81c4-4412-a06b-641a8343a749\") " pod="openshift-multus/multus-additional-cni-plugins-69v8h"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.773868 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-host-run-netns\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.773882 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/93651476-fd00-4a9e-934a-73537f1d103e-ovnkube-config\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.773905 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4777970a-81c4-4412-a06b-641a8343a749-system-cni-dir\") pod \"multus-additional-cni-plugins-69v8h\" (UID: \"4777970a-81c4-4412-a06b-641a8343a749\") " pod="openshift-multus/multus-additional-cni-plugins-69v8h"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.776308 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.790433 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.800262 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:54:44 crc kubenswrapper[4712]: E0130 16:54:44.800350 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.800565 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:54:44 crc kubenswrapper[4712]: E0130 16:54:44.800618 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.800657 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:54:44 crc kubenswrapper[4712]: E0130 16:54:44.800696 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.802693 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.816321 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4777970a-81c4-4412-a06b-641a8343a749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-69v8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.831159 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.843427 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.855451 4712 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-30 16:49:43 +0000 UTC, rotation deadline is 2026-12-13 08:07:59.967373677 +0000 UTC
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.855504 4712 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7599h13m15.111871951s for next certificate rotation
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.857149 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.869895 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.875261 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/93651476-fd00-4a9e-934a-73537f1d103e-ovnkube-script-lib\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.875297 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4777970a-81c4-4412-a06b-641a8343a749-tuning-conf-dir\") pod \"multus-additional-cni-plugins-69v8h\" (UID: \"4777970a-81c4-4412-a06b-641a8343a749\") " pod="openshift-multus/multus-additional-cni-plugins-69v8h"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.875317 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.875337 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-node-log\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.875355 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-host-run-ovn-kubernetes\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.875373 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsrqg\" (UniqueName: \"kubernetes.io/projected/4777970a-81c4-4412-a06b-641a8343a749-kube-api-access-wsrqg\") pod \"multus-additional-cni-plugins-69v8h\" (UID: \"4777970a-81c4-4412-a06b-641a8343a749\") " pod="openshift-multus/multus-additional-cni-plugins-69v8h"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.875395 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-host-kubelet\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.875428 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4777970a-81c4-4412-a06b-641a8343a749-tuning-conf-dir\") pod \"multus-additional-cni-plugins-69v8h\" (UID: \"4777970a-81c4-4412-a06b-641a8343a749\") " pod="openshift-multus/multus-additional-cni-plugins-69v8h"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.875735 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-host-run-ovn-kubernetes\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.875751 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-node-log\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.875787 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.875936 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-host-kubelet\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.876073 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-run-openvswitch\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.876105 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-run-systemd\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.876132 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-var-lib-openvswitch\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.876138 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/93651476-fd00-4a9e-934a-73537f1d103e-ovnkube-script-lib\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.876153 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-log-socket\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.876178 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-run-openvswitch\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.876178 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxzgm\" (UniqueName: \"kubernetes.io/projected/93651476-fd00-4a9e-934a-73537f1d103e-kube-api-access-rxzgm\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.876202 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-run-systemd\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.876222 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4777970a-81c4-4412-a06b-641a8343a749-cnibin\") pod \"multus-additional-cni-plugins-69v8h\" (UID: \"4777970a-81c4-4412-a06b-641a8343a749\") " pod="openshift-multus/multus-additional-cni-plugins-69v8h"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.876248 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4777970a-81c4-4412-a06b-641a8343a749-cnibin\") pod \"multus-additional-cni-plugins-69v8h\" (UID: \"4777970a-81c4-4412-a06b-641a8343a749\") " pod="openshift-multus/multus-additional-cni-plugins-69v8h"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.876259 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4777970a-81c4-4412-a06b-641a8343a749-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-69v8h\" (UID: \"4777970a-81c4-4412-a06b-641a8343a749\") " pod="openshift-multus/multus-additional-cni-plugins-69v8h"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.876281 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-var-lib-openvswitch\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs"
Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.876283 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-host-cni-bin\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.876300 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-host-cni-bin\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.876327 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/93651476-fd00-4a9e-934a-73537f1d103e-env-overrides\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.876378 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-etc-openvswitch\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.876399 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-run-ovn\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.876431 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4777970a-81c4-4412-a06b-641a8343a749-cni-binary-copy\") pod \"multus-additional-cni-plugins-69v8h\" (UID: \"4777970a-81c4-4412-a06b-641a8343a749\") " pod="openshift-multus/multus-additional-cni-plugins-69v8h" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.876435 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-log-socket\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.876454 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/93651476-fd00-4a9e-934a-73537f1d103e-ovn-node-metrics-cert\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.876473 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-host-cni-netd\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.876496 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-host-slash\") pod \"ovnkube-node-228xs\" (UID: 
\"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.876518 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-systemd-units\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.876537 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4777970a-81c4-4412-a06b-641a8343a749-os-release\") pod \"multus-additional-cni-plugins-69v8h\" (UID: \"4777970a-81c4-4412-a06b-641a8343a749\") " pod="openshift-multus/multus-additional-cni-plugins-69v8h" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.876558 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-host-run-netns\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.876577 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/93651476-fd00-4a9e-934a-73537f1d103e-ovnkube-config\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.876596 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4777970a-81c4-4412-a06b-641a8343a749-system-cni-dir\") pod \"multus-additional-cni-plugins-69v8h\" (UID: \"4777970a-81c4-4412-a06b-641a8343a749\") " pod="openshift-multus/multus-additional-cni-plugins-69v8h" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.876640 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4777970a-81c4-4412-a06b-641a8343a749-system-cni-dir\") pod \"multus-additional-cni-plugins-69v8h\" (UID: \"4777970a-81c4-4412-a06b-641a8343a749\") " pod="openshift-multus/multus-additional-cni-plugins-69v8h" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.876670 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-run-ovn\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.876936 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4777970a-81c4-4412-a06b-641a8343a749-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-69v8h\" (UID: \"4777970a-81c4-4412-a06b-641a8343a749\") " pod="openshift-multus/multus-additional-cni-plugins-69v8h" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.876950 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/93651476-fd00-4a9e-934a-73537f1d103e-env-overrides\") pod \"ovnkube-node-228xs\" (UID: 
\"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.876991 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-host-slash\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.877045 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4777970a-81c4-4412-a06b-641a8343a749-os-release\") pod \"multus-additional-cni-plugins-69v8h\" (UID: \"4777970a-81c4-4412-a06b-641a8343a749\") " pod="openshift-multus/multus-additional-cni-plugins-69v8h" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.877076 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-systemd-units\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.877141 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-host-run-netns\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.877245 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4777970a-81c4-4412-a06b-641a8343a749-cni-binary-copy\") pod \"multus-additional-cni-plugins-69v8h\" (UID: \"4777970a-81c4-4412-a06b-641a8343a749\") " pod="openshift-multus/multus-additional-cni-plugins-69v8h" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.877298 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-host-cni-netd\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.876475 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-etc-openvswitch\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.877606 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/93651476-fd00-4a9e-934a-73537f1d103e-ovnkube-config\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.880856 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/93651476-fd00-4a9e-934a-73537f1d103e-ovn-node-metrics-cert\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs" Jan 30 16:54:44 crc 
kubenswrapper[4712]: I0130 16:54:44.885085 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.896555 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxzgm\" (UniqueName: \"kubernetes.io/projected/93651476-fd00-4a9e-934a-73537f1d103e-kube-api-access-rxzgm\") pod \"ovnkube-node-228xs\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " pod="openshift-ovn-kubernetes/ovnkube-node-228xs" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.901037 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsrqg\" (UniqueName: \"kubernetes.io/projected/4777970a-81c4-4412-a06b-641a8343a749-kube-api-access-wsrqg\") pod \"multus-additional-cni-plugins-69v8h\" (UID: \"4777970a-81c4-4412-a06b-641a8343a749\") " pod="openshift-multus/multus-additional-cni-plugins-69v8h" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.902104 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.911554 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.927605 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4777970a-81c4-4412-a06b-641a8343a749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-69v8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.933398 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerStarted","Data":"08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5"} Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.933445 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" 
event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerStarted","Data":"157294ead37ee323b0edf012a90f98663809b71f2e35ec832238405604f4a109"} Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.936853 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9vnxv" event={"ID":"dcd71c7c-942c-4c29-969e-45d946f356c8","Type":"ContainerStarted","Data":"93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4"} Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.936898 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9vnxv" event={"ID":"dcd71c7c-942c-4c29-969e-45d946f356c8","Type":"ContainerStarted","Data":"4f0ff1b0d86c227028d0482cda73120e611faf96c2c29e22d9bb3ab7b89988f1"} Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.940575 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.941554 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.943622 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.949221 4712 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e" exitCode=255 Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.949298 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e"} Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.949347 4712 scope.go:117] "RemoveContainer" containerID="b94a68e91d2a8a55d6cb57a915466f47075d4b4fdfccea522d07b9c3dd2f5882" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.954087 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.954483 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8a4e02635c5d1398f5642a4f78ab59f957e82d7b595048b900f7f57998072b34"} Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.954526 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"9ca104420859d47e9447dba6c7629515ac5d94c76d90029f8b02677814c6cf88"} Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.960712 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.969042 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" 
event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"0a59aa188d91c356dc946cef975ab964cb48ed5ad8a7b1b1a97bbdf4f6a463f8"} Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.975584 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-2mlzr" event={"ID":"1bac1dc0-d552-4864-b805-fc92981ae4c0","Type":"ContainerStarted","Data":"3315b4e0258d9e12e1ceae806c0209e401460f81c7ab011003c9543a0cd1f681"} Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.978094 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:54:44 crc kubenswrapper[4712]: E0130 16:54:44.978356 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:54:46.978311186 +0000 UTC m=+23.885320645 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.988094 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 30 16:54:44 crc kubenswrapper[4712]: I0130 16:54:44.999402 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93651476-fd00-4a9e-934a-73537f1d103e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-228xs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.001220 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.002479 4712 scope.go:117] "RemoveContainer" containerID="e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e" Jan 30 16:54:45 crc kubenswrapper[4712]: E0130 16:54:45.002784 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.019694 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-69v8h" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.023266 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.025439 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.055073 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:45 crc kubenswrapper[4712]: W0130 16:54:45.069067 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93651476_fd00_4a9e_934a_73537f1d103e.slice/crio-da7cda9c930e78f721bfcb83b8fcf25c1e8d9e6c5a59141c005af665adcf7f87 WatchSource:0}: Error finding container da7cda9c930e78f721bfcb83b8fcf25c1e8d9e6c5a59141c005af665adcf7f87: Status 404 returned error can't find the container with id da7cda9c930e78f721bfcb83b8fcf25c1e8d9e6c5a59141c005af665adcf7f87 Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.073772 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.080502 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.080568 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.080595 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.080634 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" 
(UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:54:45 crc kubenswrapper[4712]: E0130 16:54:45.081723 4712 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:54:45 crc kubenswrapper[4712]: E0130 16:54:45.081771 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:54:47.081755188 +0000 UTC m=+23.988764657 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:54:45 crc kubenswrapper[4712]: E0130 16:54:45.081898 4712 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:54:45 crc kubenswrapper[4712]: E0130 16:54:45.081941 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:54:47.081929282 +0000 UTC m=+23.988938761 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:54:45 crc kubenswrapper[4712]: E0130 16:54:45.082004 4712 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:54:45 crc kubenswrapper[4712]: E0130 16:54:45.082019 4712 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:54:45 crc kubenswrapper[4712]: E0130 16:54:45.082049 4712 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:54:45 crc kubenswrapper[4712]: E0130 16:54:45.082082 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 16:54:47.082072165 +0000 UTC m=+23.989081634 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:54:45 crc kubenswrapper[4712]: E0130 16:54:45.082004 4712 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:54:45 crc kubenswrapper[4712]: E0130 16:54:45.082103 4712 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:54:45 crc kubenswrapper[4712]: E0130 16:54:45.082112 4712 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:54:45 crc kubenswrapper[4712]: E0130 16:54:45.082156 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 16:54:47.082147367 +0000 UTC m=+23.989156836 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.095117 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.140011 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ab27748-3507-429f-888b-b45b4d17b014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://b94a68e91d2a8a55d6cb57a915466f47075d4b4fdfccea522d07b9c3dd2f5882\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:37Z\\\",\\\"message\\\":\\\"W0130 16:54:26.960178 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:54:26.960483 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769792066 cert, and key in /tmp/serving-cert-2416737794/serving-signer.crt, /tmp/serving-cert-2416737794/serving-signer.key\\\\nI0130 16:54:27.348625 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:54:27.351367 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:54:27.351601 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:54:27.352575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2416737794/tls.crt::/tmp/serving-cert-2416737794/tls.key\\\\\\\"\\\\nF0130 16:54:37.661480 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"\\\\nI0130 16:54:44.564860 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:54:44.564709 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564904 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564720 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:54:44.564916 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:54:44.565055 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:54:44.565065 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:54:44.564730 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2769129766/tls.crt::/tmp/serving-cert-2769129766/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769792078\\\\\\\\\\\\\\\" (2026-01-30 16:54:37 +0000 UTC to 2026-03-01 16:54:38 +0000 UTC (now=2026-01-30 16:54:44.533361706 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565458 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 
certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792084\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792084\\\\\\\\\\\\\\\" (2026-01-30 15:54:43 +0000 UTC to 2027-01-30 15:54:43 +0000 UTC (now=2026-01-30 16:54:44.565435832 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565474 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:54:44.565499 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:54:44.565514 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.161484 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.175476 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a4e02635c5d1398f5642a4f78ab59f957e82d7b595048b900f7f57998072b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca104420859d47e9447dba6c7629515ac5d94c76d90029f8b02677814c6cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.183867 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.207783 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.217862 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.231703 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4777970a-81c4-4412-a06b-641a8343a749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy 
cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"i
mageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-69v8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.242101 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a59aa188d91c356dc946cef975ab964cb48ed5ad8a7b1b1a97bbdf4f6a463f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.251300 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.267636 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.285722 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f1f355d-892f-40a3-8fae-090e9e69b585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c2d61cc827e2a3a1ff4d597f89532719a52fe5fd353eb2ec9905533cfb8897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-po
d-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0930b443376f024df5c05a35328264b347e4b58fef7c64e9bb50b084c461e19f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b9e52241ae256ff758f4cf4bdd6a588a6d37cdc5185c372a9942d9d2c9e8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.297897 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.312648 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93651476-fd00-4a9e-934a-73537f1d103e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-228xs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.595581 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.607616 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.608441 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 30 16:54:45 
crc kubenswrapper[4712]: I0130 16:54:45.610502 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a4e02635c5d1398f5642a4f78ab59f957e82d7b595048b900f7f57998072b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca104420859d47e9447dba6c7629515ac5d94c76d90029f8b02677814c6cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:45Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.622189 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:45Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.633893 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:45Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.653464 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a59aa188d91c356dc946cef975ab964cb48ed5ad8a7b1b1a97bbdf4f6a463f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:45Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.665466 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:45Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.679488 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:45Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.690576 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:45Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.723646 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4777970a-81c4-4412-a06b-641a8343a749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec
8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-69v8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:45Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.742618 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 06:17:47.773159721 +0000 UTC Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.767382 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f1f355d-892f-40a3-8fae-090e9e69b585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c2d61cc827e2a3a1ff4d597f89532719a52fe5fd353eb2ec9905533cfb8897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0930b443376f024df5c05a35328264b347e4b58fef7c64e9bb50b084c461e19f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b9e52241ae256ff758f4cf4bdd6a588a6d37cdc5185c372a9942d9d2c9e8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:45Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.791214 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:45Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.821339 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93651476-fd00-4a9e-934a-73537f1d103e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-228xs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:45Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:45 crc kubenswrapper[4712]: 
I0130 16:54:45.839956 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:45Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.860277 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ab27748-3507-429f-888b-b45b4d17b014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b94a68e91d2a8a55d6cb57a915466f47075d4b4fdfccea522d07b9c3dd2f5882\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:37Z\\\",\\\"message\\\":\\\"W0130 16:54:26.960178 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 
16:54:26.960483 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769792066 cert, and key in /tmp/serving-cert-2416737794/serving-signer.crt, /tmp/serving-cert-2416737794/serving-signer.key\\\\nI0130 16:54:27.348625 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:54:27.351367 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:54:27.351601 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:54:27.352575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2416737794/tls.crt::/tmp/serving-cert-2416737794/tls.key\\\\\\\"\\\\nF0130 16:54:37.661480 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"\\\\nI0130 16:54:44.564860 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:54:44.564709 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564904 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564720 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:54:44.564916 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:54:44.565055 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:54:44.565065 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:54:44.564730 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2769129766/tls.crt::/tmp/serving-cert-2769129766/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769792078\\\\\\\\\\\\\\\" (2026-01-30 16:54:37 +0000 UTC to 2026-03-01 16:54:38 +0000 UTC (now=2026-01-30 16:54:44.533361706 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565458 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792084\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792084\\\\\\\\\\\\\\\" 
(2026-01-30 15:54:43 +0000 UTC to 2027-01-30 15:54:43 +0000 UTC (now=2026-01-30 16:54:44.565435832 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565474 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:54:44.565499 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:54:44.565514 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:45Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.876730 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:45Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.894460 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4777970a-81c4-4412-a06b-641a8343a749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-69v8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T16:54:45Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.906778 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a59aa188d91c356dc946cef975ab964cb48ed5ad8a7b1b1a97bbdf4f6a463f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:45Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.917290 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:45Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.927577 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:45Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.947433 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dd639c-2367-490e-801b-9042a158c962\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b98b043cb83155f1304d4e3e7afaa43921f5f9bd288e2d50222df3c549a97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5aceb9806198f5a718418c0834056dadafaec6a0d93a3d4253f3d18d3e7f6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a5b6fd14c3399f1335ab849702db5ae6ab412a259215225a3d1b45afdf29c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7424ed8c0152d8e7a206225cef7124928cbd700835cfa9f9c4e523bfe900f3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49662995d818377576fc281d55d5d944d785d5a0a11d45e86e9f08ec7fa2bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:45Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.958495 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f1f355d-892f-40a3-8fae-090e9e69b585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c2d61cc827e2a3a1ff4d597f89532719a52fe5fd353eb2ec9905533cfb8897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0930b443376f024df5c05a35328264b347e4b58fef7c64e9bb50b084c461e19f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b9e52241ae256ff758f4cf4bdd6a588a6d37cdc5185c372a9942d9d2c9e8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:45Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.969525 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:45Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.979520 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"e483f7201537775910f8d1a99625c35050e43bebb2947e7c7d076cccd2a3fe1c"} Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.981146 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerStarted","Data":"fe68da7f075332bd889ca6351f6c58e8a4e2cb0a14e68dd9a882f15c348ec5bc"} Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.982587 4712 generic.go:334] "Generic (PLEG): container finished" podID="93651476-fd00-4a9e-934a-73537f1d103e" containerID="8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5" exitCode=0 Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.982649 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" event={"ID":"93651476-fd00-4a9e-934a-73537f1d103e","Type":"ContainerDied","Data":"8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5"} Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.982676 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" event={"ID":"93651476-fd00-4a9e-934a-73537f1d103e","Type":"ContainerStarted","Data":"da7cda9c930e78f721bfcb83b8fcf25c1e8d9e6c5a59141c005af665adcf7f87"} Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.985366 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.993928 4712 scope.go:117] "RemoveContainer" containerID="e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e" Jan 30 16:54:45 crc kubenswrapper[4712]: E0130 16:54:45.994077 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 30 
16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.994568 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-2mlzr" event={"ID":"1bac1dc0-d552-4864-b805-fc92981ae4c0","Type":"ContainerStarted","Data":"7d7682a4940bfaf61053fc403898dd912a57e1152b7f646f65d8931bbe5a5aee"} Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.998898 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93651476-fd00-4a9e-934a-73537f1d103e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-228xs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:45Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:45 crc kubenswrapper[4712]: I0130 16:54:45.999640 4712 generic.go:334] "Generic (PLEG): container finished" podID="4777970a-81c4-4412-a06b-641a8343a749" containerID="359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2" exitCode=0 Jan 30 16:54:46 crc kubenswrapper[4712]: I0130 16:54:46.000110 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-additional-cni-plugins-69v8h" event={"ID":"4777970a-81c4-4412-a06b-641a8343a749","Type":"ContainerDied","Data":"359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2"} Jan 30 16:54:46 crc kubenswrapper[4712]: I0130 16:54:46.000138 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" event={"ID":"4777970a-81c4-4412-a06b-641a8343a749","Type":"ContainerStarted","Data":"4034e132a7801190db459299c0639e8717f007e42de8d8124f45cef77365dd06"} Jan 30 16:54:46 crc kubenswrapper[4712]: I0130 16:54:46.019164 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ab27748-3507-429f-888b-b45b4d17b014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b94a68e91d2a8a55d6cb57a915466f47075d4b4fdfccea522d07b9c3dd2f5882\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:37Z\\\",\\\"message\\\":\\\"W0130 16:54:26.960178 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0130 16:54:26.960483 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769792066 cert, and key in /tmp/serving-cert-2416737794/serving-signer.crt, /tmp/serving-cert-2416737794/serving-signer.key\\\\nI0130 16:54:27.348625 1 observer_polling.go:159] Starting file observer\\\\nW0130 16:54:27.351367 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0130 16:54:27.351601 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0130 16:54:27.352575 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2416737794/tls.crt::/tmp/serving-cert-2416737794/tls.key\\\\\\\"\\\\nF0130 16:54:37.661480 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"\\\\nI0130 16:54:44.564860 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:54:44.564709 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564904 1 shared_informer.go:313] Waiting for caches to sync 
for RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564720 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:54:44.564916 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:54:44.565055 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:54:44.565065 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:54:44.564730 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2769129766/tls.crt::/tmp/serving-cert-2769129766/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769792078\\\\\\\\\\\\\\\" (2026-01-30 16:54:37 +0000 UTC to 2026-03-01 16:54:38 +0000 UTC (now=2026-01-30 16:54:44.533361706 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565458 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792084\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792084\\\\\\\\\\\\\\\" (2026-01-30 15:54:43 +0000 UTC to 2027-01-30 15:54:43 +0000 UTC (now=2026-01-30 16:54:44.565435832 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565474 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:54:44.565499 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:54:44.565514 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:46 crc kubenswrapper[4712]: I0130 16:54:46.033219 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:46 crc kubenswrapper[4712]: I0130 16:54:46.050024 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a4e02635c5d1398f5642a4f78ab59f957e82d7b595048b900f7f57998072b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca104420859d47e9447dba6c7629515ac5d94c76d90029f8b02677814c6cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:46 crc kubenswrapper[4712]: I0130 16:54:46.065365 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:46 crc kubenswrapper[4712]: I0130 16:54:46.082238 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:46 crc kubenswrapper[4712]: I0130 16:54:46.113833 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a59aa188d91c356dc946cef975ab964cb48ed5ad8a7b1b1a97bbdf4f6a463f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:46 crc kubenswrapper[4712]: I0130 16:54:46.154766 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:46 crc kubenswrapper[4712]: I0130 16:54:46.192574 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:46 crc kubenswrapper[4712]: I0130 16:54:46.230881 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7682a4940bfaf61053fc403898dd912a57e1152b7f646f65d8931bbe5a5aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:46 crc kubenswrapper[4712]: I0130 16:54:46.282156 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4777970a-81c4-4412-a06b-641a8343a749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\
\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-69v8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:46 crc kubenswrapper[4712]: I0130 16:54:46.313018 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f1f355d-892f-40a3-8fae-090e9e69b585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c2d61cc827e2a3a1ff4d597f89532719a52fe5fd353eb2ec9905533cfb8897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0930b443376f024df5c05a35328264b347e4b58fef7c64e9bb50b084c461e19f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b9e52241ae256ff758f4cf4bdd6a588a6d37cdc5185c372a9942d9d2c9e8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:46 crc kubenswrapper[4712]: I0130 16:54:46.355988 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e483f7201537775910f8d1a99625c35050e43bebb2947e7c7d076cccd2a3fe1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-30T16:54:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:46 crc kubenswrapper[4712]: I0130 16:54:46.411946 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93651476-fd00-4a9e-934a-73537f1d103e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",
\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\
\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-228xs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:46 crc kubenswrapper[4712]: I0130 16:54:46.448734 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dd639c-2367-490e-801b-9042a158c962\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b98b043cb83155f1304d4e3e7afaa43921f5f9bd288e2d50222df3c549a97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\
\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5aceb9806198f5a718418c0834056dadafaec6a0d93a3d4253f3d18d3e7f6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a5b6fd14c3399f1335ab849702db5ae6ab412a259215225a3d1b45afdf29c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7424ed8c0152d8e7a206225cef7124928cbd700835cfa9f9c4e523bfe900f3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49662995d818377576fc281d55d5d944d785d5a0a11d45e86e9f08ec7fa2bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"c
ri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:46 crc kubenswrapper[4712]: I0130 16:54:46.472775 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:46 crc kubenswrapper[4712]: I0130 16:54:46.518718 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ab27748-3507-429f-888b-b45b4d17b014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"\\\\nI0130 16:54:44.564860 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:54:44.564709 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564904 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564720 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:54:44.564916 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:54:44.565055 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:54:44.565065 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:54:44.564730 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2769129766/tls.crt::/tmp/serving-cert-2769129766/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769792078\\\\\\\\\\\\\\\" (2026-01-30 16:54:37 +0000 UTC to 2026-03-01 16:54:38 +0000 UTC (now=2026-01-30 16:54:44.533361706 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565458 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792084\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792084\\\\\\\\\\\\\\\" (2026-01-30 15:54:43 +0000 UTC to 2027-01-30 15:54:43 +0000 UTC (now=2026-01-30 16:54:44.565435832 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565474 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:54:44.565499 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:54:44.565514 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:46 crc kubenswrapper[4712]: I0130 16:54:46.551387 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a4e02635c5d1398f5642a4f78ab59f957e82d7b595048b900f7f57998072b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca104420859d47e9447dba6c7629515ac5d94c76d90029f8b02677814c6cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:46 crc kubenswrapper[4712]: I0130 16:54:46.592701 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe68da7f075332bd889ca6351f6c58e8a4e2cb0a14e68dd9a882f15c348ec5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:46 crc kubenswrapper[4712]: I0130 16:54:46.630789 4712 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:46Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:46 crc kubenswrapper[4712]: I0130 16:54:46.743101 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 23:37:39.780634528 +0000 UTC Jan 30 16:54:46 crc kubenswrapper[4712]: I0130 16:54:46.798730 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:54:46 crc kubenswrapper[4712]: I0130 16:54:46.798750 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:54:46 crc kubenswrapper[4712]: I0130 16:54:46.798790 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:54:46 crc kubenswrapper[4712]: E0130 16:54:46.798851 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:54:46 crc kubenswrapper[4712]: E0130 16:54:46.798980 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:54:46 crc kubenswrapper[4712]: E0130 16:54:46.799070 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:54:46 crc kubenswrapper[4712]: I0130 16:54:46.998420 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:54:46 crc kubenswrapper[4712]: E0130 16:54:46.998510 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:54:50.998492209 +0000 UTC m=+27.905501678 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.004774 4712 generic.go:334] "Generic (PLEG): container finished" podID="4777970a-81c4-4412-a06b-641a8343a749" containerID="27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50" exitCode=0 Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.004827 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" event={"ID":"4777970a-81c4-4412-a06b-641a8343a749","Type":"ContainerDied","Data":"27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50"} Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.009915 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" event={"ID":"93651476-fd00-4a9e-934a-73537f1d103e","Type":"ContainerStarted","Data":"68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098"} Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.009972 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" event={"ID":"93651476-fd00-4a9e-934a-73537f1d103e","Type":"ContainerStarted","Data":"3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637"} Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.009986 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" event={"ID":"93651476-fd00-4a9e-934a-73537f1d103e","Type":"ContainerStarted","Data":"0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96"} Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.009995 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" event={"ID":"93651476-fd00-4a9e-934a-73537f1d103e","Type":"ContainerStarted","Data":"b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e"} Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.010004 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" event={"ID":"93651476-fd00-4a9e-934a-73537f1d103e","Type":"ContainerStarted","Data":"c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517"} Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.019584 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7682a4940bfaf61053fc403898dd912a57e1152b7f646f65d8931bbe5a5aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.043288 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4777970a-81c4-4412-a06b-641a8343a749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-69v8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.059410 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a59aa188d91c356dc946cef975ab964cb48ed5ad8a7b1b1a97bbdf4f6a463f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.073761 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.089373 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.100754 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.100812 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.100839 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.100946 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:54:47 crc kubenswrapper[4712]: E0130 16:54:47.101638 4712 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not 
registered Jan 30 16:54:47 crc kubenswrapper[4712]: E0130 16:54:47.101673 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:54:51.101661784 +0000 UTC m=+28.008671253 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:54:47 crc kubenswrapper[4712]: E0130 16:54:47.102092 4712 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:54:47 crc kubenswrapper[4712]: E0130 16:54:47.102103 4712 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:54:47 crc kubenswrapper[4712]: E0130 16:54:47.102112 4712 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:54:47 crc kubenswrapper[4712]: E0130 16:54:47.102134 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 16:54:51.102126545 +0000 UTC m=+28.009136014 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:54:47 crc kubenswrapper[4712]: E0130 16:54:47.102170 4712 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:54:47 crc kubenswrapper[4712]: E0130 16:54:47.102192 4712 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:54:47 crc kubenswrapper[4712]: E0130 16:54:47.102198 4712 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:54:47 crc kubenswrapper[4712]: E0130 16:54:47.102217 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-30 16:54:51.102211587 +0000 UTC m=+28.009221056 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:54:47 crc kubenswrapper[4712]: E0130 16:54:47.102243 4712 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:54:47 crc kubenswrapper[4712]: E0130 16:54:47.102294 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:54:51.102256318 +0000 UTC m=+28.009265787 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.111597 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dd639c-2367-490e-801b-9042a158c962\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b98b043cb83155f1304d4e3e7afaa43921f5f9bd288e2d50222df3c549a97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5aceb9806198f5a718418c0834056dadafaec6a0d9
3a3d4253f3d18d3e7f6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a5b6fd14c3399f1335ab849702db5ae6ab412a259215225a3d1b45afdf29c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7424ed8c0152d8e7a206225cef7124928cbd700835cfa9f9c4e523bfe900f3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49662995d818377576fc281d55d5d944d785d5a0a11d45e86e9f08ec7fa2bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.124852 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f1f355d-892f-40a3-8fae-090e9e69b585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c2d61cc827e2a3a1ff4d597f89532719a52fe5fd353eb2ec9905533cfb8897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0930b443376f024df5c05a35328264b347e4b58fef7c64e9bb50b084c461e19f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b9e52241ae256ff758f4cf4bdd6a588a6d37cdc5185c372a9942d9d2c9e8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.137753 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e483f7201537775910f8d1a99625c35050e43bebb2947e7c7d076cccd2a3fe1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-30T16:54:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.157410 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93651476-fd00-4a9e-934a-73537f1d103e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",
\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\
\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-228xs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.172581 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ab27748-3507-429f-888b-b45b4d17b014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"\\\\nI0130 16:54:44.564860 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:54:44.564709 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564904 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564720 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:54:44.564916 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:54:44.565055 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:54:44.565065 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:54:44.564730 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2769129766/tls.crt::/tmp/serving-cert-2769129766/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769792078\\\\\\\\\\\\\\\" (2026-01-30 16:54:37 +0000 UTC to 2026-03-01 16:54:38 +0000 UTC (now=2026-01-30 16:54:44.533361706 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565458 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792084\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792084\\\\\\\\\\\\\\\" (2026-01-30 15:54:43 +0000 UTC to 2027-01-30 15:54:43 +0000 UTC (now=2026-01-30 16:54:44.565435832 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565474 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:54:44.565499 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:54:44.565514 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.186714 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.210912 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a4e02635c5d1398f5642a4f78ab59f957e82d7b595048b900f7f57998072b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca104420859d47e9447dba6c7629515ac5d94c76d90029f8b02677814c6cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.222918 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe68da7f075332bd889ca6351f6c58e8a4e2cb0a14e68dd9a882f15c348ec5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea17
7225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.240281 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\
"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.354375 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-k255f"] Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.355333 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-k255f" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.357360 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.359157 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.360386 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.362179 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.388167 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dd639c-2367-490e-801b-9042a158c962\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b98b043cb83155f1304d4e3e7afaa43921f5f9bd288e2d50222df3c549a97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5aceb9806198f5a718418c0834056dadafaec6a0d93a3d4253f3d18d3e7f6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\
":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a5b6fd14c3399f1335ab849702db5ae6ab412a259215225a3d1b45afdf29c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7424ed8c0152d8e7a206225cef7124928cbd700835cfa9f9c4e523bfe900f3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49662995d818377576fc281d55d5d944d785d5a0a11d45e86e9f08ec7fa2bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.403461 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f1f355d-892f-40a3-8fae-090e9e69b585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c2d61cc827e2a3a1ff4d597f89532719a52fe5fd353eb2ec9905533cfb8897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0930b443376f024df5c05a35328264b347e4b58fef7c64e9bb50b084c461e19f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b9e52241ae256ff758f4cf4bdd6a588a6d37cdc5185c372a9942d9d2c9e8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.404240 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2482315f-1b5d-4a27-a9d9-97f4780c1869-host\") pod \"node-ca-k255f\" (UID: \"2482315f-1b5d-4a27-a9d9-97f4780c1869\") " pod="openshift-image-registry/node-ca-k255f" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.404447 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh9k5\" (UniqueName: \"kubernetes.io/projected/2482315f-1b5d-4a27-a9d9-97f4780c1869-kube-api-access-rh9k5\") pod \"node-ca-k255f\" (UID: \"2482315f-1b5d-4a27-a9d9-97f4780c1869\") " pod="openshift-image-registry/node-ca-k255f" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.404575 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2482315f-1b5d-4a27-a9d9-97f4780c1869-serviceca\") pod \"node-ca-k255f\" (UID: \"2482315f-1b5d-4a27-a9d9-97f4780c1869\") " pod="openshift-image-registry/node-ca-k255f" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.416179 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e483f7201537775910f8d1a99625c35050e43bebb2947e7c7d076cccd2a3fe1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.438472 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93651476-fd00-4a9e-934a-73537f1d103e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-228xs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:47Z 
is after 2025-08-24T17:21:41Z" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.471641 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ab27748-3507-429f-888b-b45b4d17b014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"\\\\nI0130 16:54:44.564860 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:54:44.564709 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564904 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564720 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:54:44.564916 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:54:44.565055 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:54:44.565065 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:54:44.564730 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2769129766/tls.crt::/tmp/serving-cert-2769129766/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769792078\\\\\\\\\\\\\\\" (2026-01-30 16:54:37 +0000 UTC to 2026-03-01 16:54:38 +0000 UTC (now=2026-01-30 16:54:44.533361706 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565458 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792084\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792084\\\\\\\\\\\\\\\" (2026-01-30 15:54:43 +0000 UTC to 2027-01-30 15:54:43 +0000 UTC (now=2026-01-30 16:54:44.565435832 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565474 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:54:44.565499 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:54:44.565514 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s 
restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.505921 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh9k5\" (UniqueName: \"kubernetes.io/projected/2482315f-1b5d-4a27-a9d9-97f4780c1869-kube-api-access-rh9k5\") pod \"node-ca-k255f\" (UID: \"2482315f-1b5d-4a27-a9d9-97f4780c1869\") " pod="openshift-image-registry/node-ca-k255f" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.505976 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2482315f-1b5d-4a27-a9d9-97f4780c1869-serviceca\") pod \"node-ca-k255f\" (UID: \"2482315f-1b5d-4a27-a9d9-97f4780c1869\") " pod="openshift-image-registry/node-ca-k255f" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.506036 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: 
\"kubernetes.io/host-path/2482315f-1b5d-4a27-a9d9-97f4780c1869-host\") pod \"node-ca-k255f\" (UID: \"2482315f-1b5d-4a27-a9d9-97f4780c1869\") " pod="openshift-image-registry/node-ca-k255f" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.506114 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2482315f-1b5d-4a27-a9d9-97f4780c1869-host\") pod \"node-ca-k255f\" (UID: \"2482315f-1b5d-4a27-a9d9-97f4780c1869\") " pod="openshift-image-registry/node-ca-k255f" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.507509 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2482315f-1b5d-4a27-a9d9-97f4780c1869-serviceca\") pod \"node-ca-k255f\" (UID: \"2482315f-1b5d-4a27-a9d9-97f4780c1869\") " pod="openshift-image-registry/node-ca-k255f" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.510982 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.539978 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh9k5\" (UniqueName: \"kubernetes.io/projected/2482315f-1b5d-4a27-a9d9-97f4780c1869-kube-api-access-rh9k5\") pod \"node-ca-k255f\" (UID: \"2482315f-1b5d-4a27-a9d9-97f4780c1869\") " pod="openshift-image-registry/node-ca-k255f" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.569504 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k255f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2482315f-1b5d-4a27-a9d9-97f4780c1869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh9k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k255f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.610964 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a4e02635c5d1398f5642a4f78ab59f957e82d7b595048b900f7f57998072b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca104420859d47e9447dba6c7629515ac5d94c76d90029f8b02677814c6cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc
32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.648535 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe68da7f075332bd889ca6351f6c58e8a4e2cb0a14e68dd9a882f15c348ec5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e
95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.666010 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-k255f" Jan 30 16:54:47 crc kubenswrapper[4712]: W0130 16:54:47.679317 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2482315f_1b5d_4a27_a9d9_97f4780c1869.slice/crio-98de488b6a033a7e0ce6ce90892eb6271bee6aa98b633f07e9b28d39efb968b9 WatchSource:0}: Error finding container 98de488b6a033a7e0ce6ce90892eb6271bee6aa98b633f07e9b28d39efb968b9: Status 404 returned error can't find the container with id 98de488b6a033a7e0ce6ce90892eb6271bee6aa98b633f07e9b28d39efb968b9 Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.698119 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.739125 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a59aa188d91c356dc946cef975ab964cb48ed5ad8a7b1b1a97bbdf4f6a463f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.743277 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 08:51:22.878368519 +0000 UTC Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.775035 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.813102 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.850027 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7682a4940bfaf61053fc403898dd912a57e1152b7f646f65d8931bbe5a5aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:47 crc kubenswrapper[4712]: I0130 16:54:47.902521 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4777970a-81c4-4412-a06b-641a8343a749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-69v8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:47Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:48 crc kubenswrapper[4712]: I0130 16:54:48.015884 4712 generic.go:334] "Generic (PLEG): container finished" podID="4777970a-81c4-4412-a06b-641a8343a749" containerID="1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72" exitCode=0 Jan 30 16:54:48 crc kubenswrapper[4712]: I0130 16:54:48.015969 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" event={"ID":"4777970a-81c4-4412-a06b-641a8343a749","Type":"ContainerDied","Data":"1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72"} Jan 30 16:54:48 crc kubenswrapper[4712]: I0130 16:54:48.018158 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-k255f" event={"ID":"2482315f-1b5d-4a27-a9d9-97f4780c1869","Type":"ContainerStarted","Data":"6e7482e63cefa1e8dfa0adec1061ace4e1a482a646844ba1d77038309e904cf7"} Jan 30 16:54:48 crc kubenswrapper[4712]: I0130 16:54:48.018386 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-k255f" event={"ID":"2482315f-1b5d-4a27-a9d9-97f4780c1869","Type":"ContainerStarted","Data":"98de488b6a033a7e0ce6ce90892eb6271bee6aa98b633f07e9b28d39efb968b9"} Jan 30 16:54:48 crc kubenswrapper[4712]: I0130 16:54:48.023000 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" event={"ID":"93651476-fd00-4a9e-934a-73537f1d103e","Type":"ContainerStarted","Data":"155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e"} Jan 30 16:54:48 crc kubenswrapper[4712]: I0130 16:54:48.034733 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f1f355d-892f-40a3-8fae-090e9e69b585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c2d61cc827e2a3a1ff4d597f89532719a52fe5fd353eb2ec9905533cfb8897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0930b443376f024df5c05a35328264b347e4b58fef7c64e9bb50b084c461e19f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b9e52241ae256ff758f4cf4bdd6a588a6d37cdc5185c372a9942d9d2c9e8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:48 crc kubenswrapper[4712]: I0130 16:54:48.049257 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e483f7201537775910f8d1a99625c35050e43bebb2947e7c7d076cccd2a3fe1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-30T16:54:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:48 crc kubenswrapper[4712]: I0130 16:54:48.065867 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93651476-fd00-4a9e-934a-73537f1d103e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",
\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\
\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-228xs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:48 crc kubenswrapper[4712]: I0130 16:54:48.083925 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dd639c-2367-490e-801b-9042a158c962\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b98b043cb83155f1304d4e3e7afaa43921f5f9bd288e2d50222df3c549a97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\
\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5aceb9806198f5a718418c0834056dadafaec6a0d93a3d4253f3d18d3e7f6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a5b6fd14c3399f1335ab849702db5ae6ab412a259215225a3d1b45afdf29c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7424ed8c0152d8e7a206225cef7124928cbd700835cfa9f9c4e523bfe900f3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49662995d818377576fc281d55d5d944d785d5a0a11d45e86e9f08ec7fa2bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"c
ri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:48 crc kubenswrapper[4712]: I0130 16:54:48.098465 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:48 crc kubenswrapper[4712]: I0130 16:54:48.131002 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k255f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2482315f-1b5d-4a27-a9d9-97f4780c1869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh9k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k255f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:48 crc kubenswrapper[4712]: I0130 16:54:48.175008 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ab27748-3507-429f-888b-b45b4d17b014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"\\\\nI0130 16:54:44.564860 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:54:44.564709 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564904 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564720 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:54:44.564916 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:54:44.565055 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:54:44.565065 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:54:44.564730 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2769129766/tls.crt::/tmp/serving-cert-2769129766/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769792078\\\\\\\\\\\\\\\" (2026-01-30 16:54:37 +0000 UTC to 2026-03-01 16:54:38 +0000 UTC (now=2026-01-30 16:54:44.533361706 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565458 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792084\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792084\\\\\\\\\\\\\\\" (2026-01-30 15:54:43 +0000 UTC to 2027-01-30 15:54:43 +0000 UTC (now=2026-01-30 16:54:44.565435832 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565474 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:54:44.565499 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:54:44.565514 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:48 crc kubenswrapper[4712]: I0130 16:54:48.211163 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a4e02635c5d1398f5642a4f78ab59f957e82d7b595048b900f7f57998072b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca104420859d47e9447dba6c7629515ac5d94c76d90029f8b02677814c6cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:48 crc kubenswrapper[4712]: I0130 16:54:48.254256 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe68da7f075332bd889ca6351f6c58e8a4e2cb0a14e68dd9a882f15c348ec5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:48 crc kubenswrapper[4712]: I0130 16:54:48.292378 4712 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:48 crc kubenswrapper[4712]: I0130 16:54:48.330991 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a59aa188d91c356dc946cef975ab964cb48ed5ad8a7b1b1a97bbdf4f6a463f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:48 crc kubenswrapper[4712]: I0130 16:54:48.371477 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:48 crc kubenswrapper[4712]: I0130 16:54:48.412911 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:48 crc kubenswrapper[4712]: I0130 16:54:48.449073 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7682a4940bfaf61053fc403898dd912a57e1152b7f646f65d8931bbe5a5aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:48 crc kubenswrapper[4712]: I0130 16:54:48.493475 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4777970a-81c4-4412-a06b-641a8343a749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"nam
e\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"
cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-69v8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:48 crc kubenswrapper[4712]: I0130 16:54:48.530165 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ab27748-3507-429f-888b-b45b4d17b014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"\\\\nI0130 16:54:44.564860 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:54:44.564709 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564904 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564720 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:54:44.564916 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:54:44.565055 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:54:44.565065 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:54:44.564730 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2769129766/tls.crt::/tmp/serving-cert-2769129766/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769792078\\\\\\\\\\\\\\\" (2026-01-30 16:54:37 +0000 UTC to 2026-03-01 16:54:38 +0000 UTC (now=2026-01-30 16:54:44.533361706 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565458 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792084\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792084\\\\\\\\\\\\\\\" (2026-01-30 15:54:43 +0000 UTC to 2027-01-30 15:54:43 +0000 UTC (now=2026-01-30 16:54:44.565435832 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565474 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:54:44.565499 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:54:44.565514 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:48 crc kubenswrapper[4712]: I0130 16:54:48.570967 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:48 crc kubenswrapper[4712]: I0130 16:54:48.609780 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k255f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2482315f-1b5d-4a27-a9d9-97f4780c1869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e7482e63cefa1e8dfa0adec1061ace4e1a482a646844ba1d77038309e904cf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh9k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k255f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:48 crc kubenswrapper[4712]: I0130 16:54:48.653362 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a4e02635c5d1398f5642a4f78ab59f957e82d7b595048b900f7f57998072b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca104420859d47e9447dba6c7629515ac5d94c76d90029f8b02677814c6cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:48 crc kubenswrapper[4712]: I0130 16:54:48.700020 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe68da7f075332bd889ca6351f6c58e8a4e2cb0a14e68dd9a882f15c348ec5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:48 crc kubenswrapper[4712]: I0130 16:54:48.739416 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\
\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:48 crc kubenswrapper[4712]: I0130 16:54:48.743665 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 19:47:15.374809772 +0000 UTC Jan 30 16:54:48 crc kubenswrapper[4712]: I0130 16:54:48.774666 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4777970a-81c4-4412-a06b-641a8343a749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-69v8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:48 crc kubenswrapper[4712]: I0130 16:54:48.799433 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:54:48 crc kubenswrapper[4712]: I0130 16:54:48.799505 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:54:48 crc kubenswrapper[4712]: E0130 16:54:48.799586 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:54:48 crc kubenswrapper[4712]: E0130 16:54:48.799643 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:54:48 crc kubenswrapper[4712]: I0130 16:54:48.799900 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:54:48 crc kubenswrapper[4712]: E0130 16:54:48.800098 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:54:48 crc kubenswrapper[4712]: I0130 16:54:48.811173 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a59aa188d91c356dc946cef975ab964cb48ed5ad8a7b1b1a97bbdf4f6a463f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:48 crc kubenswrapper[4712]: I0130 16:54:48.852949 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:48 crc kubenswrapper[4712]: I0130 16:54:48.890614 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:48 crc kubenswrapper[4712]: I0130 16:54:48.930858 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7682a4940bfaf61053fc403898dd912a57e1152b7f646f65d8931bbe5a5aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:48 crc kubenswrapper[4712]: I0130 16:54:48.983040 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dd639c-2367-490e-801b-9042a158c962\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b98b043cb83155f1304d4e3e7afaa43921f5f9bd288e2d50222df3c549a97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5aceb9806198f5a718418c0834056dadafaec6a0d93a3d4253f3d18d3e7f6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a5b6fd14c3399f1335ab849702db5ae6ab412a259215225a3d1b45afdf29c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7424ed8c0152d8e7a206225cef7124928cbd700835cfa9f9c4e523bfe900f3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49662995d818377576fc281d55d5d944d785d5a0a11d45e86e9f08ec7fa2bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:48Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.011811 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f1f355d-892f-40a3-8fae-090e9e69b585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c2d61cc827e2a3a1ff4d597f89532719a52fe5fd353eb2ec9905533cfb8897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0930b443376f024df5c05a35328264b347e4b58fef7c64e9bb50b084c461e19f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b9e52241ae256ff758f4cf4bdd6a588a6d37cdc5185c372a9942d9d2c9e8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.028837 4712 generic.go:334] "Generic (PLEG): container finished" podID="4777970a-81c4-4412-a06b-641a8343a749" containerID="65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d" exitCode=0 Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.028888 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" event={"ID":"4777970a-81c4-4412-a06b-641a8343a749","Type":"ContainerDied","Data":"65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d"} Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.057615 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e483f7201537775910f8d1a99625c35050e43bebb2947e7c7d076cccd2a3fe1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.096835 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93651476-fd00-4a9e-934a-73537f1d103e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-228xs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:49Z 
is after 2025-08-24T17:21:41Z" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.131899 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a4e02635c5d1398f5642a4f78ab59f957e82d7b595048b900f7f57998072b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca104420859d47e9447dba6c7629515ac5d94c76d90029f8b02677814c6cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.169895 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe68da7f075332bd889ca6351f6c58e8a4e2cb0a14e68dd9a882f15c348ec5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.217850 4712 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.260352 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4777970a-81c4-4412-a06b-641a8343a749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-bina
ry-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-69v8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.299480 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a59aa188d91c356dc946cef975ab964cb48ed5ad8a7b1b1a97bbdf4f6a463f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.332437 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.372743 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.411344 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7682a4940bfaf61053fc403898dd912a57e1152b7f646f65d8931bbe5a5aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.458783 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dd639c-2367-490e-801b-9042a158c962\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b98b043cb83155f1304d4e3e7afaa43921f5f9bd288e2d50222df3c549a97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5aceb9806198f5a718418c0834056dadafaec6a0d93a3d4253f3d18d3e7f6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a5b6fd14c3399f1335ab849702db5ae6ab412a259215225a3d1b45afdf29c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7424ed8c0152d8e7a206225cef7124928cbd700835cfa9f9c4e523bfe900f3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49662995d818377576fc281d55d5d944d785d5a0a11d45e86e9f08ec7fa2bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.492536 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f1f355d-892f-40a3-8fae-090e9e69b585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c2d61cc827e2a3a1ff4d597f89532719a52fe5fd353eb2ec9905533cfb8897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0930b443376f024df5c05a35328264b347e4b58fef7c64e9bb50b084c461e19f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b9e52241ae256ff758f4cf4bdd6a588a6d37cdc5185c372a9942d9d2c9e8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.530743 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e483f7201537775910f8d1a99625c35050e43bebb2947e7c7d076cccd2a3fe1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-30T16:54:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.584946 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93651476-fd00-4a9e-934a-73537f1d103e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",
\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\
\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-228xs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.612486 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ab27748-3507-429f-888b-b45b4d17b014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"\\\\nI0130 16:54:44.564860 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:54:44.564709 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564904 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564720 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:54:44.564916 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:54:44.565055 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:54:44.565065 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:54:44.564730 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2769129766/tls.crt::/tmp/serving-cert-2769129766/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769792078\\\\\\\\\\\\\\\" (2026-01-30 16:54:37 +0000 UTC to 2026-03-01 16:54:38 +0000 UTC (now=2026-01-30 16:54:44.533361706 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565458 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792084\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792084\\\\\\\\\\\\\\\" (2026-01-30 15:54:43 +0000 UTC to 2027-01-30 15:54:43 +0000 UTC (now=2026-01-30 16:54:44.565435832 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565474 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:54:44.565499 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:54:44.565514 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.651228 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.692420 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k255f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2482315f-1b5d-4a27-a9d9-97f4780c1869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e7482e63cefa1e8dfa0adec1061ace4e1a482a646844ba1d77038309e904cf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh9k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k255f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.744222 4712 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.744268 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 13:08:39.506666926 +0000 UTC Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.745666 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.745726 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.745751 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.745947 4712 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.751852 4712 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.752060 4712 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.752879 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.752913 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.752922 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.752935 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.752944 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:49Z","lastTransitionTime":"2026-01-30T16:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:54:49 crc kubenswrapper[4712]: E0130 16:54:49.767134 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.771383 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.771435 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.771451 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.771468 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.771495 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:49Z","lastTransitionTime":"2026-01-30T16:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:49 crc kubenswrapper[4712]: E0130 16:54:49.786898 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.792343 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.792380 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.792390 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.792405 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.792414 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:49Z","lastTransitionTime":"2026-01-30T16:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:49 crc kubenswrapper[4712]: E0130 16:54:49.810430 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.814888 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.814919 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.814927 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.814941 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.814949 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:49Z","lastTransitionTime":"2026-01-30T16:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:49 crc kubenswrapper[4712]: E0130 16:54:49.852236 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.856446 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.856665 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.856757 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.856871 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.856982 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:49Z","lastTransitionTime":"2026-01-30T16:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:49 crc kubenswrapper[4712]: E0130 16:54:49.870851 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:49Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:49 crc kubenswrapper[4712]: E0130 16:54:49.871050 4712 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.873039 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.873087 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.873104 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.873120 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.873133 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:49Z","lastTransitionTime":"2026-01-30T16:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.975960 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.976227 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.976493 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.976751 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:49 crc kubenswrapper[4712]: I0130 16:54:49.977001 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:49Z","lastTransitionTime":"2026-01-30T16:54:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.035208 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" event={"ID":"93651476-fd00-4a9e-934a-73537f1d103e","Type":"ContainerStarted","Data":"f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af"} Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.037879 4712 generic.go:334] "Generic (PLEG): container finished" podID="4777970a-81c4-4412-a06b-641a8343a749" containerID="291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc" exitCode=0 Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.037905 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" event={"ID":"4777970a-81c4-4412-a06b-641a8343a749","Type":"ContainerDied","Data":"291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc"} Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.053430 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ab27748-3507-429f-888b-b45b4d17b014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"\\\\nI0130 16:54:44.564860 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:54:44.564709 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564904 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564720 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:54:44.564916 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:54:44.565055 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:54:44.565065 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:54:44.564730 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2769129766/tls.crt::/tmp/serving-cert-2769129766/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769792078\\\\\\\\\\\\\\\" (2026-01-30 16:54:37 +0000 UTC to 2026-03-01 16:54:38 +0000 UTC (now=2026-01-30 16:54:44.533361706 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565458 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792084\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792084\\\\\\\\\\\\\\\" (2026-01-30 15:54:43 +0000 UTC to 2027-01-30 15:54:43 +0000 UTC (now=2026-01-30 16:54:44.565435832 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565474 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:54:44.565499 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:54:44.565514 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.072864 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.080316 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.080565 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.080584 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.080600 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.080611 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:50Z","lastTransitionTime":"2026-01-30T16:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.085763 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k255f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2482315f-1b5d-4a27-a9d9-97f4780c1869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e7482e63cefa1e8dfa0adec1061ace4e1a482a646844ba1d77038309e904cf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh9k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k255f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.101100 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a4e02635c5d1398f5642a4f78ab59f957e82d7b595048b900f7f57998072b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca104420859d47e9447dba6c7629515ac5d94c76d90029f8b02677814c6cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.113369 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe68da7f075332bd889ca6351f6c58e8a4e2cb0a14e68dd9a882f15c348ec5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.126843 4712 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.140964 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a59aa188d91c356dc946cef975ab964cb48ed5ad8a7b1b1a97bbdf4f6a463f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.160471 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.182972 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.183007 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.183018 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.183032 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.183041 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:50Z","lastTransitionTime":"2026-01-30T16:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.198745 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.210725 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7682a4940bfaf61053fc403898dd912a57e1152b7f646f65d8931bbe5a5aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.229745 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4777970a-81c4-4412-a06b-641a8343a749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\
\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-69v8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.257019 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dd639c-2367-490e-801b-9042a158c962\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b98b043cb83155f1304d4e3e7afaa43921f5f9bd288e2d50222df3c549a97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5aceb9806198f5a718418c0834056dadafaec6a0d93a3d4253f3d18d3e7f6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a5b6fd14c3399f1335ab849702db5ae6ab412a259215225a3d1b45afdf29c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7424ed8c0152d8e7a206225cef7124928cbd700
835cfa9f9c4e523bfe900f3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49662995d818377576fc281d55d5d944d785d5a0a11d45e86e9f08ec7fa2bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.269832 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f1f355d-892f-40a3-8fae-090e9e69b585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c2d61cc827e2a3a1ff4d597f89532719a52fe5fd353eb2ec9905533cfb8897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0930b443376f024df5c05a35328264b347e4b58fef7c64e9bb50b084c461e19f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b9e52241ae256ff758f4cf4bdd6a588a6d37cdc5185c372a9942d9d2c9e8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.284832 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.284870 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.284902 4712 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.284919 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.284929 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:50Z","lastTransitionTime":"2026-01-30T16:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.295983 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e483f7201537775910f8d1a99625c35050e43bebb2947e7c7d076cccd2a3fe1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.336511 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"93651476-fd00-4a9e-934a-73537f1d103e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-228xs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:50Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.387617 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.387656 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.387664 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.387678 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.387688 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:50Z","lastTransitionTime":"2026-01-30T16:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.489679 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.489721 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.489730 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.489745 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.489753 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:50Z","lastTransitionTime":"2026-01-30T16:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.592447 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.592487 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.592497 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.592513 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.592524 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:50Z","lastTransitionTime":"2026-01-30T16:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.694786 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.694863 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.694884 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.694908 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.694925 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:50Z","lastTransitionTime":"2026-01-30T16:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.744943 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 00:59:34.633502091 +0000 UTC Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.796942 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.797003 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.797020 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.797044 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.797065 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:50Z","lastTransitionTime":"2026-01-30T16:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.799330 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.799408 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.799336 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:54:50 crc kubenswrapper[4712]: E0130 16:54:50.799545 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:54:50 crc kubenswrapper[4712]: E0130 16:54:50.799665 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:54:50 crc kubenswrapper[4712]: E0130 16:54:50.799791 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.900047 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.900114 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.900133 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.900154 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:50 crc kubenswrapper[4712]: I0130 16:54:50.900168 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:50Z","lastTransitionTime":"2026-01-30T16:54:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.005735 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.005781 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.006052 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.006094 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.006107 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:51Z","lastTransitionTime":"2026-01-30T16:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.051405 4712 generic.go:334] "Generic (PLEG): container finished" podID="4777970a-81c4-4412-a06b-641a8343a749" containerID="f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726" exitCode=0 Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.051484 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" event={"ID":"4777970a-81c4-4412-a06b-641a8343a749","Type":"ContainerDied","Data":"f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726"} Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.066260 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:54:51 crc kubenswrapper[4712]: E0130 16:54:51.066715 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:54:59.066645168 +0000 UTC m=+35.973654697 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.082087 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4777970a-81c4-4412-a06b-641a8343a749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-69v8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.106634 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a59aa188d91c356dc946cef975ab964cb48ed5ad8a7b1b1a97bbdf4f6a463f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.108129 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.108177 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.108193 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.108214 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.108231 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:51Z","lastTransitionTime":"2026-01-30T16:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.126497 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.140541 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.151399 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7682a4940bfaf61053fc403898dd912a57e1152b7f646f65d8931bbe5a5aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.167713 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.167777 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.167820 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.167870 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:54:51 crc kubenswrapper[4712]: E0130 16:54:51.168465 4712 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:54:51 crc kubenswrapper[4712]: E0130 16:54:51.168526 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:54:59.168509891 +0000 UTC m=+36.075519380 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:54:51 crc kubenswrapper[4712]: E0130 16:54:51.169257 4712 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:54:51 crc kubenswrapper[4712]: E0130 16:54:51.169286 4712 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:54:51 crc kubenswrapper[4712]: E0130 16:54:51.169300 4712 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:54:51 crc kubenswrapper[4712]: E0130 16:54:51.169306 4712 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:54:51 crc kubenswrapper[4712]: E0130 16:54:51.169338 4712 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:54:51 crc kubenswrapper[4712]: E0130 16:54:51.169353 4712 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:54:51 crc kubenswrapper[4712]: E0130 16:54:51.169587 4712 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:54:51 crc kubenswrapper[4712]: E0130 16:54:51.169340 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 16:54:59.169328061 +0000 UTC m=+36.076337550 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:54:51 crc kubenswrapper[4712]: E0130 16:54:51.169682 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 16:54:59.169660939 +0000 UTC m=+36.076670408 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:54:51 crc kubenswrapper[4712]: E0130 16:54:51.169700 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:54:59.16969122 +0000 UTC m=+36.076700769 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.172678 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dd639c-2367-490e-801b-9042a158c962\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b98b043cb83155f1304d4e3e7afaa43921f5f9bd288e2d50222df3c549a97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5aceb9806198f5a718418c0834056dadafaec6a0d93a3d4253f3d18d3e7f6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731473
1ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a5b6fd14c3399f1335ab849702db5ae6ab412a259215225a3d1b45afdf29c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7424ed8c0152d8e7a206225cef7124928cbd700835cfa9f9c4e523bfe900f3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49662995d818377576fc281d55d5d944d785d5a0a11d45e86e9f08ec7fa2bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.185456 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f1f355d-892f-40a3-8fae-090e9e69b585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c2d61cc827e2a3a1ff4d597f89532719a52fe5fd353eb2ec9905533cfb8897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0930b443376f024df5c05a35328264b347e4b58fef7c64e9bb50b084c461e19f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b9e52241ae256ff758f4cf4bdd6a588a6d37cdc5185c372a9942d9d2c9e8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.196383 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e483f7201537775910f8d1a99625c35050e43bebb2947e7c7d076cccd2a3fe1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-30T16:54:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.211059 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.211087 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.211097 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.211111 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.211122 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:51Z","lastTransitionTime":"2026-01-30T16:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.213887 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93651476-fd00-4a9e-934a-73537f1d103e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-228xs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:51Z 
is after 2025-08-24T17:21:41Z" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.228944 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ab27748-3507-429f-888b-b45b4d17b014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"\\\\nI0130 16:54:44.564860 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:54:44.564709 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564904 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564720 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:54:44.564916 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:54:44.565055 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:54:44.565065 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:54:44.564730 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2769129766/tls.crt::/tmp/serving-cert-2769129766/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769792078\\\\\\\\\\\\\\\" (2026-01-30 16:54:37 +0000 UTC to 2026-03-01 16:54:38 +0000 UTC (now=2026-01-30 16:54:44.533361706 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565458 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792084\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792084\\\\\\\\\\\\\\\" (2026-01-30 15:54:43 +0000 UTC to 2027-01-30 15:54:43 +0000 UTC (now=2026-01-30 16:54:44.565435832 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565474 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:54:44.565499 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:54:44.565514 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s 
restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.240244 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready 
status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.250460 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k255f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2482315f-1b5d-4a27-a9d9-97f4780c1869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e7482e63cefa1e8dfa0adec1061ace4e1a482a646844ba1d77038309e904cf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh9k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\
"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k255f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.263175 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a4e02635c5d1398f5642a4f78ab59f957e82d7b595048b900f7f57998072b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca104420859d47e9447dba6c7629515ac5d94c76d90029f8b02677814c6cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.276057 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe68da7f075332bd889ca6351f6c58e8a4e2cb0a14e68dd9a882f15c348ec5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.286574 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\
\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.314125 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.314154 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.314162 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.314176 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.314184 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:51Z","lastTransitionTime":"2026-01-30T16:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.416298 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.416323 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.416331 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.416343 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.416352 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:51Z","lastTransitionTime":"2026-01-30T16:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.519170 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.519209 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.519219 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.519235 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.519245 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:51Z","lastTransitionTime":"2026-01-30T16:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.621777 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.621828 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.621839 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.621856 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.621868 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:51Z","lastTransitionTime":"2026-01-30T16:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.723947 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.723975 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.723984 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.723997 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.724006 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:51Z","lastTransitionTime":"2026-01-30T16:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.745740 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 18:20:06.127109992 +0000 UTC Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.825668 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.825700 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.825711 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.825726 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.825738 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:51Z","lastTransitionTime":"2026-01-30T16:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.927778 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.927821 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.927831 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.927846 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:51 crc kubenswrapper[4712]: I0130 16:54:51.927855 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:51Z","lastTransitionTime":"2026-01-30T16:54:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.030605 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.030661 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.030679 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.030701 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.030717 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:52Z","lastTransitionTime":"2026-01-30T16:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.049049 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.050143 4712 scope.go:117] "RemoveContainer" containerID="e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e" Jan 30 16:54:52 crc kubenswrapper[4712]: E0130 16:54:52.050528 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.059080 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" event={"ID":"93651476-fd00-4a9e-934a-73537f1d103e","Type":"ContainerStarted","Data":"8d1a5914c6b0281db980a45b47361cfd019f308a1141efb1106a36fb0c1cba11"} Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.059575 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.059629 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.067467 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" event={"ID":"4777970a-81c4-4412-a06b-641a8343a749","Type":"ContainerStarted","Data":"7d50d143c711318276147641c1904b0f9c28209e0ead64b6bc2578d0914e821a"} Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.074852 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.096428 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a4e02635c5d1398f5642a4f78ab59f957e82d7b595048b900f7f57998072b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca104420859d47e9447dba6c7629515ac5d94c76d90029f8b02677814c6cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:52Z is after 
2025-08-24T17:21:41Z" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.107879 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe68da7f075332bd889ca6351f6c58e8a4e2cb0a14e68dd9a882f15c348ec5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed 
to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.124770 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.165392 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7682a4940bfaf61053fc403898dd912a57e1152b7f646f65d8931bbe5a5aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.167175 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.167208 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.167216 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.167229 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.167240 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:52Z","lastTransitionTime":"2026-01-30T16:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.170468 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.181124 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4777970a-81c4-4412-a06b-641a8343a749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disab
led\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf
06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-69v8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.194351 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a59aa188d91c356dc946cef975ab964cb48ed5ad8a7b1b1a97bbdf4f6a463f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.207615 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.227222 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93651476-fd00-4a9e-934a-73537f1d103e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1a5914c6b0281db980a45b47361cfd019f308a
1141efb1106a36fb0c1cba11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccou
nt\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-228xs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.247885 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dd639c-2367-490e-801b-9042a158c962\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b98b043cb83155f1304d4e3e7afaa43921f5f9bd288e2d50222df3c549a97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5aceb9806198f5a718418c0834056dadafaec6a0d93a3d4253f3d18d3e7f6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a5b6fd14c3399f1335ab849702db5ae6ab412a259215225a3d1b45afdf29c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7424ed8c0152d8e7a206225cef7124928cbd700
835cfa9f9c4e523bfe900f3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49662995d818377576fc281d55d5d944d785d5a0a11d45e86e9f08ec7fa2bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.262405 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f1f355d-892f-40a3-8fae-090e9e69b585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c2d61cc827e2a3a1ff4d597f89532719a52fe5fd353eb2ec9905533cfb8897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0930b443376f024df5c05a35328264b347e4b58fef7c64e9bb50b084c461e19f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b9e52241ae256ff758f4cf4bdd6a588a6d37cdc5185c372a9942d9d2c9e8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.270127 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.270209 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.270227 4712 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.270253 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.270323 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:52Z","lastTransitionTime":"2026-01-30T16:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.277475 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e483f7201537775910f8d1a99625c35050e43bebb2947e7c7d076cccd2a3fe1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.292410 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ab27748-3507-429f-888b-b45b4d17b014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"\\\\nI0130 16:54:44.564860 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:54:44.564709 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564904 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564720 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:54:44.564916 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:54:44.565055 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:54:44.565065 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:54:44.564730 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2769129766/tls.crt::/tmp/serving-cert-2769129766/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769792078\\\\\\\\\\\\\\\" (2026-01-30 16:54:37 +0000 UTC to 2026-03-01 16:54:38 +0000 UTC (now=2026-01-30 16:54:44.533361706 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565458 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792084\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792084\\\\\\\\\\\\\\\" (2026-01-30 15:54:43 +0000 UTC to 2027-01-30 15:54:43 +0000 UTC (now=2026-01-30 16:54:44.565435832 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565474 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:54:44.565499 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:54:44.565514 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.306430 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.316631 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k255f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2482315f-1b5d-4a27-a9d9-97f4780c1869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e7482e63cefa1e8dfa0adec1061ace4e1a482a646844ba1d77038309e904cf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh9k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k255f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.330597 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a59aa188d91c356dc946cef975ab964cb48ed5ad8a7b1b1a97bbdf4f6a463f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.344846 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.357214 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.367122 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7682a4940bfaf61053fc403898dd912a57e1152b7f646f65d8931bbe5a5aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.372447 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.372603 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.372663 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.372725 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.372779 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:52Z","lastTransitionTime":"2026-01-30T16:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.383172 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4777970a-81c4-4412-a06b-641a8343a749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d50d143c711318276147641c1904b0f9c28209e0ead64b6bc2578d0914e821a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-69v8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.398453 4712 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dd639c-2367-490e-801b-9042a158c962\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b98b043cb83155f1304d4e3e7afaa43921f5f9bd288e2d50222df3c549a97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5aceb9806198f5a718418c0834056dadafaec6a0d93a3d4253f3d18d3e7f6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a5b6fd14c3399f1335ab849702db5ae6ab412a259215225a3d1b45afdf29c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://7424ed8c0152d8e7a206225cef7124928cbd700835cfa9f9c4e523bfe900f3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49662995d818377576fc281d55d5d944d785d5a0a11d45e86e9f08ec7fa2bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d2115368a
a9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.410534 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f1f355d-892f-40a3-8fae-090e9e69b585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c2d61cc827e2a3a1ff4d597f89532719a52fe5fd353eb2ec9905533cfb8897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0930b443376f024df5c05a35328264b347e4b58fef7c64e9bb50b084c461e19f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b9e52241ae256ff758f4cf4bdd6a588a6d37cdc5185c372a9942d9d2c9e8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.421886 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e483f7201537775910f8d1a99625c35050e43bebb2947e7c7d076cccd2a3fe1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-30T16:54:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.445989 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93651476-fd00-4a9e-934a-73537f1d103e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\"
:true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d209948
2919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1a5914c6b0281db980a45b47361cfd019f308a1141efb1106a36fb0c1cba11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-228xs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.459702 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ab27748-3507-429f-888b-b45b4d17b014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"\\\\nI0130 16:54:44.564860 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:54:44.564709 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564904 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564720 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:54:44.564916 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:54:44.565055 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:54:44.565065 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:54:44.564730 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2769129766/tls.crt::/tmp/serving-cert-2769129766/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769792078\\\\\\\\\\\\\\\" (2026-01-30 16:54:37 +0000 UTC to 2026-03-01 16:54:38 +0000 UTC (now=2026-01-30 16:54:44.533361706 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565458 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792084\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792084\\\\\\\\\\\\\\\" (2026-01-30 15:54:43 +0000 UTC to 2027-01-30 15:54:43 +0000 UTC (now=2026-01-30 16:54:44.565435832 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565474 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:54:44.565499 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:54:44.565514 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.471544 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.475452 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.475486 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.475497 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.475514 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.475527 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:52Z","lastTransitionTime":"2026-01-30T16:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.483830 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k255f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2482315f-1b5d-4a27-a9d9-97f4780c1869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e7482e63cefa1e8dfa0adec1061ace4e1a482a646844ba1d77038309e904cf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh9k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k255f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.496021 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a4e02635c5d1398f5642a4f78ab59f957e82d7b595048b900f7f57998072b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca104420859d47e9447dba6c7629515ac5d94c76d90029f8b02677814c6cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.505658 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe68da7f075332bd889ca6351f6c58e8a4e2cb0a14e68dd9a882f15c348ec5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.516878 4712 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:52Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.577944 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.578002 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.578028 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.578073 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.578086 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:52Z","lastTransitionTime":"2026-01-30T16:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.681276 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.681339 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.681352 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.681369 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.681382 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:52Z","lastTransitionTime":"2026-01-30T16:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.746165 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 12:22:44.272106261 +0000 UTC Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.783432 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.783546 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.783574 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.783604 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.783626 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:52Z","lastTransitionTime":"2026-01-30T16:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.798866 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.798992 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.798924 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:54:52 crc kubenswrapper[4712]: E0130 16:54:52.799233 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:54:52 crc kubenswrapper[4712]: E0130 16:54:52.799939 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:54:52 crc kubenswrapper[4712]: E0130 16:54:52.800011 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.886344 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.886375 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.886386 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.886403 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.886438 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:52Z","lastTransitionTime":"2026-01-30T16:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.988451 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.988527 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.988545 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.988573 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:52 crc kubenswrapper[4712]: I0130 16:54:52.988592 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:52Z","lastTransitionTime":"2026-01-30T16:54:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.070987 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.091058 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.091104 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.091119 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.091138 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.091150 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:53Z","lastTransitionTime":"2026-01-30T16:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.092951 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.109230 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7682a4940bfaf61053fc403898dd912a57e1152b7f646f65d8931bbe5a5aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.126417 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4777970a-81c4-4412-a06b-641a8343a749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d50d143c711318276147641c1904b0f9c28209e0ead64b6bc2578d0914e821a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-69v8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.142505 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a59aa188d91c356dc946cef975ab964cb48ed5ad8a7b1b1a97bbdf4f6a463f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.155960 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.170342 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.189762 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dd639c-2367-490e-801b-9042a158c962\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b98b043cb83155f1304d4e3e7afaa43921f5f9bd288e2d50222df3c549a97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5aceb9806198f5a718418c0834056dadafaec6a0d93a3d4253f3d18d3e7f6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a5b6fd14c3399f1335ab849702db5ae6ab412a259215225a3d1b45afdf29c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7424ed8c0152d8e7a206225cef7124928cbd700835cfa9f9c4e523bfe900f3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49662995d818377576fc281d55d5d944d785d5a0a11d45e86e9f08ec7fa2bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\
\\":{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.193292 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.193357 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.193371 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.193387 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.193399 4712 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:53Z","lastTransitionTime":"2026-01-30T16:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.204370 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f1f355d-892f-40a3-8fae-090e9e69b585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c2d61cc827e2a3a1ff4d597f89532719a52fe5fd353eb2ec9905533cfb8897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0930b443376f024df5c05a35328264b347e4b58fef7c64e9bb50b084c461e19f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b9e52241ae256ff758f4cf4bdd6a588a6d37cdc5185c372a9942d9d2c9e8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.216888 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e483f7201537775910f8d1a99625c35050e43bebb2947e7c7d076cccd2a3fe1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.235102 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93651476-fd00-4a9e-934a-73537f1d103e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1a5914c6b0281db980a45b47361cfd019f308a
1141efb1106a36fb0c1cba11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-228xs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.252346 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ab27748-3507-429f-888b-b45b4d17b014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"\\\\nI0130 16:54:44.564860 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:54:44.564709 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564904 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564720 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:54:44.564916 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:54:44.565055 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:54:44.565065 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:54:44.564730 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2769129766/tls.crt::/tmp/serving-cert-2769129766/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769792078\\\\\\\\\\\\\\\" (2026-01-30 16:54:37 +0000 UTC to 2026-03-01 16:54:38 +0000 UTC (now=2026-01-30 16:54:44.533361706 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565458 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792084\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792084\\\\\\\\\\\\\\\" (2026-01-30 15:54:43 +0000 UTC to 2027-01-30 15:54:43 +0000 UTC (now=2026-01-30 16:54:44.565435832 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565474 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:54:44.565499 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:54:44.565514 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.266214 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.277250 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k255f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2482315f-1b5d-4a27-a9d9-97f4780c1869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e7482e63cefa1e8dfa0adec1061ace4e1a482a646844ba1d77038309e904cf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh9k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k255f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.295686 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.295722 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.295730 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.295762 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.295772 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:53Z","lastTransitionTime":"2026-01-30T16:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.300659 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a4e02635c5d1398f5642a4f78ab59f957e82d7b595048b900f7f57998072b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca104420859d47e9447dba6c7629515ac5d94c76d90029f8b02677814c6cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.313846 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe68da7f075332bd889ca6351f6c58e8a4e2cb0a14e68dd9a882f15c348ec5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.326467 4712 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.398959 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.399012 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.399022 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.399037 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.399046 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:53Z","lastTransitionTime":"2026-01-30T16:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.501355 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.501387 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.501396 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.501412 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.501423 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:53Z","lastTransitionTime":"2026-01-30T16:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.598214 4712 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.604864 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.604898 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.604910 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.604925 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.604937 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:53Z","lastTransitionTime":"2026-01-30T16:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.707297 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.707345 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.707357 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.707373 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.707384 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:53Z","lastTransitionTime":"2026-01-30T16:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
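Every failed status patch in this log carries the same root cause: the network-node-identity webhook's serving certificate is past its NotAfter date (2025-08-24T17:21:41Z) while the node clock reads 2026-01-30. The wording in the errors is crypto/x509's standard validity-window failure. A small sketch of that check follows, assuming the certificate has been dumped to a PEM file passed as the first argument; the file handling and program name are illustrative.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // checkValidity reproduces the [NotBefore, NotAfter] window test that
    // makes TLS verification fail in the entries above.
    func checkValidity(certPEM []byte, now time.Time) error {
        block, _ := pem.Decode(certPEM)
        if block == nil {
            return fmt.Errorf("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return err
        }
        if now.Before(cert.NotBefore) {
            return fmt.Errorf("certificate has expired or is not yet valid: current time %s is before %s",
                now.UTC().Format(time.RFC3339), cert.NotBefore.UTC().Format(time.RFC3339))
        }
        if now.After(cert.NotAfter) {
            return fmt.Errorf("certificate has expired or is not yet valid: current time %s is after %s",
                now.UTC().Format(time.RFC3339), cert.NotAfter.UTC().Format(time.RFC3339))
        }
        return nil
    }

    func main() {
        if len(os.Args) < 2 {
            fmt.Fprintln(os.Stderr, "usage: checkcert <cert.pem>")
            os.Exit(2)
        }
        pemBytes, err := os.ReadFile(os.Args[1])
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if err := checkValidity(pemBytes, time.Now()); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println("certificate is within its validity window")
    }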
Has your network provider started?"} Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.747157 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 07:59:21.975000049 +0000 UTC Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.809582 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.809622 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.809633 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.809648 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.809658 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:53Z","lastTransitionTime":"2026-01-30T16:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.812929 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a4e02635c5d1398f5642a4f78ab59f957e82d7b595048b900f7f57998072b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca104420859d47e9447dba6c7629515ac5d94c76d90029f8b02677814c6cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32
fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.823270 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe68da7f075332bd889ca6351f6c58e8a4e2cb0a14e68dd9a882f15c348ec5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95
ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.840184 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-
socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.856627 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4777970a-81c4-4412-a06b-641a8343a749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d50d143c711318276147641c1904b0f9c28209e0ead64b6bc2578d0914e821a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-69v8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.870245 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a59aa188d91c356dc946cef975ab964cb48ed5ad8a7b1b1a97bbdf4f6a463f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.883293 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.898971 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
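The quoted patch bodies, with their $setElementOrder/conditions directives and per-type condition fragments, are two-way strategic merge patches: status.conditions is a merge-type list keyed on "type", so only the changed fields of each condition plus the element order are sent. The following sketch generates one with apimachinery; it assumes k8s.io/api and k8s.io/apimachinery are on the module path, the pod literals are invented for illustration, and the exact patch shape can vary with the apimachinery version.

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/strategicpatch"
    )

    func main() {
        // Old and new status for a hypothetical pod: Ready flips to True.
        oldPod := corev1.Pod{Status: corev1.PodStatus{
            Conditions: []corev1.PodCondition{
                {Type: corev1.PodReady, Status: corev1.ConditionFalse},
            },
        }}
        newPod := oldPod
        newPod.Status.Conditions = []corev1.PodCondition{
            {Type: corev1.PodReady, Status: corev1.ConditionTrue},
        }

        oldJSON, err := json.Marshal(oldPod)
        if err != nil {
            panic(err)
        }
        newJSON, err := json.Marshal(newPod)
        if err != nil {
            panic(err)
        }

        // corev1.Pod carries the patchMergeKey metadata ("type" for
        // conditions) that yields the $setElementOrder directive seen in
        // the log entries above.
        patch, err := strategicpatch.CreateTwoWayMergePatch(oldJSON, newJSON, corev1.Pod{})
        if err != nil {
            panic(err)
        }
        fmt.Println(string(patch))
    }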
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.908853 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7682a4940bfaf61053fc403898dd912a57e1152b7f646f65d8931bbe5a5aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.913060 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.913102 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.913115 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.913129 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.913139 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:53Z","lastTransitionTime":"2026-01-30T16:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.928585 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dd639c-2367-490e-801b-9042a158c962\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b98b043cb83155f1304d4e3e7afaa43921f5f9bd288e2d50222df3c549a97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5aceb9806198f5a718418c0834056dadafaec6a0d93a3d4253f3d18d3e7f6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a5b6fd14c3399f1335ab849702db5ae6ab412a259215225a3d1b45afdf29c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7424ed8c0152d8e7a206225cef7124928cbd700835cfa9f9c4e523bfe900f3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49662995d818377576fc281d55d5d944d785d5a0a11d45e86e9f08ec7fa2bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272
e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.939597 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f1f355d-892f-40a3-8fae-090e9e69b585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c2d61cc827e2a3a1ff4d597f89532719a52fe5fd353eb2ec9905533cfb8897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0930b443376f024df5c05a35328264b347e4b58fef7c64e9bb50b084c461e19f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b9e52241ae256ff758f4cf4bdd6a588a6d37cdc5185c372a9942d9d2c9e8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.949206 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e483f7201537775910f8d1a99625c35050e43bebb2947e7c7d076cccd2a3fe1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-30T16:54:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.965785 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93651476-fd00-4a9e-934a-73537f1d103e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257
453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1a5914c6b0281db980a45b47361cfd019f308a1141efb1106a36fb0c1cba11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\
\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-228xs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.976672 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ab27748-3507-429f-888b-b45b4d17b014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"\\\\nI0130 16:54:44.564860 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:54:44.564709 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564904 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564720 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:54:44.564916 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:54:44.565055 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:54:44.565065 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:54:44.564730 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2769129766/tls.crt::/tmp/serving-cert-2769129766/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769792078\\\\\\\\\\\\\\\" (2026-01-30 16:54:37 +0000 UTC to 2026-03-01 16:54:38 +0000 UTC (now=2026-01-30 16:54:44.533361706 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565458 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792084\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792084\\\\\\\\\\\\\\\" (2026-01-30 15:54:43 +0000 UTC to 2027-01-30 15:54:43 +0000 UTC (now=2026-01-30 16:54:44.565435832 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565474 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:54:44.565499 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:54:44.565514 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.986534 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:53 crc kubenswrapper[4712]: I0130 16:54:53.994670 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k255f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2482315f-1b5d-4a27-a9d9-97f4780c1869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e7482e63cefa1e8dfa0adec1061ace4e1a482a646844ba1d77038309e904cf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh9k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k255f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.015442 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.015499 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.015510 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.015525 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.015537 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:54Z","lastTransitionTime":"2026-01-30T16:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.117996 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.118037 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.118049 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.118068 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.118077 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:54Z","lastTransitionTime":"2026-01-30T16:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.220215 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.220310 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.220319 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.220331 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.220362 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:54Z","lastTransitionTime":"2026-01-30T16:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.323076 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.323112 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.323120 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.323133 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.323142 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:54Z","lastTransitionTime":"2026-01-30T16:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.425923 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.425962 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.425975 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.425992 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.426003 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:54Z","lastTransitionTime":"2026-01-30T16:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.528067 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.528101 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.528113 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.528127 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.528137 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:54Z","lastTransitionTime":"2026-01-30T16:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.630085 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.630129 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.630137 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.630152 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.630162 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:54Z","lastTransitionTime":"2026-01-30T16:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.733103 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.733135 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.733145 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.733160 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.733171 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:54Z","lastTransitionTime":"2026-01-30T16:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.748244 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 01:04:22.176607609 +0000 UTC Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.798870 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:54:54 crc kubenswrapper[4712]: E0130 16:54:54.798990 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.799298 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:54:54 crc kubenswrapper[4712]: E0130 16:54:54.799354 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.799389 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:54:54 crc kubenswrapper[4712]: E0130 16:54:54.799425 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.835432 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.835473 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.835482 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.835493 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.835502 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:54Z","lastTransitionTime":"2026-01-30T16:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.938626 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.938662 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.938669 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.938682 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:54 crc kubenswrapper[4712]: I0130 16:54:54.938690 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:54Z","lastTransitionTime":"2026-01-30T16:54:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.041720 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.041747 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.041757 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.041769 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.041778 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:55Z","lastTransitionTime":"2026-01-30T16:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.143455 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.143481 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.143490 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.143503 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.143511 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:55Z","lastTransitionTime":"2026-01-30T16:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.246243 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.246278 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.246289 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.246306 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.246317 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:55Z","lastTransitionTime":"2026-01-30T16:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.348535 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.348558 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.348566 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.348578 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.348586 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:55Z","lastTransitionTime":"2026-01-30T16:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.451683 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.451713 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.451721 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.451737 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.451757 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:55Z","lastTransitionTime":"2026-01-30T16:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.554433 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.554483 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.554495 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.554511 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.554522 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:55Z","lastTransitionTime":"2026-01-30T16:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.656726 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.656839 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.656867 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.656897 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.656921 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:55Z","lastTransitionTime":"2026-01-30T16:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.748651 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 14:21:15.687215961 +0000 UTC Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.759282 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.759307 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.759315 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.759327 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.759336 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:55Z","lastTransitionTime":"2026-01-30T16:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.862353 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.862397 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.862416 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.862440 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.862457 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:55Z","lastTransitionTime":"2026-01-30T16:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.969336 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.969389 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.969406 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.969429 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:55 crc kubenswrapper[4712]: I0130 16:54:55.969446 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:55Z","lastTransitionTime":"2026-01-30T16:54:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.071224 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.071260 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.071268 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.071281 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.071295 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:56Z","lastTransitionTime":"2026-01-30T16:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.078581 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-228xs_93651476-fd00-4a9e-934a-73537f1d103e/ovnkube-controller/0.log" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.081170 4712 generic.go:334] "Generic (PLEG): container finished" podID="93651476-fd00-4a9e-934a-73537f1d103e" containerID="8d1a5914c6b0281db980a45b47361cfd019f308a1141efb1106a36fb0c1cba11" exitCode=1 Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.081211 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" event={"ID":"93651476-fd00-4a9e-934a-73537f1d103e","Type":"ContainerDied","Data":"8d1a5914c6b0281db980a45b47361cfd019f308a1141efb1106a36fb0c1cba11"} Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.082752 4712 scope.go:117] "RemoveContainer" containerID="8d1a5914c6b0281db980a45b47361cfd019f308a1141efb1106a36fb0c1cba11" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.095634 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.106251 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7682a4940bfaf61053fc403898dd912a57e1152b7f646f65d8931bbe5a5aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.123547 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4777970a-81c4-4412-a06b-641a8343a749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d50d143c711318276147641c1904b0f9c28209e0ead64b6bc2578d0914e821a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-69v8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:56Z is after 
2025-08-24T17:21:41Z" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.138746 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a59aa188d91c356dc946cef975ab964cb48ed5ad8a7b1b1a97bbdf4f6a463f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.152830 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.171482 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93651476-fd00-4a9e-934a-73537f1d103e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1a5914c6b0281db980a45b47361cfd019f308a
1141efb1106a36fb0c1cba11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d1a5914c6b0281db980a45b47361cfd019f308a1141efb1106a36fb0c1cba11\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:54:55Z\\\",\\\"message\\\":\\\"/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 16:54:55.499037 5946 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0130 16:54:55.499146 5946 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:54:55.499642 5946 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 16:54:55.499670 5946 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 16:54:55.499675 5946 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 16:54:55.499720 5946 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 16:54:55.499728 5946 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 16:54:55.499746 5946 factory.go:656] Stopping watch factory\\\\nI0130 16:54:55.499747 5946 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 16:54:55.499758 5946 ovnkube.go:599] Stopped ovnkube\\\\nI0130 16:54:55.499766 5946 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 16:54:55.499784 5946 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 16:54:55.499815 5946 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 
16\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-228xs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:56Z is after 2025-08-24T17:21:41Z"
Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.173387 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.173434 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.173446 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.173465 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.173478 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:56Z","lastTransitionTime":"2026-01-30T16:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.196058 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dd639c-2367-490e-801b-9042a158c962\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b98b043cb83155f1304d4e3e7afaa43921f5f9bd288e2d50222df3c549a97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5aceb9806198f5a718418c0834056dadafaec6a0d93a3d4253f3d18d3e7f6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a5b6fd14c3399f1335ab849702db5ae6ab412a259215225a3d1b45afdf29c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7424ed8c0152d8e7a206225cef7124928cbd700835cfa9f9c4e523bfe900f3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49662995d818377576fc281d55d5d944d785d5a0a11d45e86e9f08ec7fa2bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T16:54:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.209344 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f1f355d-892f-40a3-8fae-090e9e69b585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c2d61cc827e2a3a1ff4d597f89532719a52fe5fd353eb2ec9905533cfb8897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0930b443376f024df5c05a35328264b347e4b58fef7c64e9bb50b084c461e19f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b9e52241ae256ff758f4cf4bdd6a588a6d37cdc5185c372a9942d9d2c9e8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.224732 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e483f7201537775910f8d1a99625c35050e43bebb2947e7c7d076cccd2a3fe1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-30T16:54:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.247582 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ab27748-3507-429f-888b-b45b4d17b014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"\\\\nI0130 16:54:44.564860 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:54:44.564709 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564904 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564720 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:54:44.564916 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:54:44.565055 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:54:44.565065 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:54:44.564730 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2769129766/tls.crt::/tmp/serving-cert-2769129766/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769792078\\\\\\\\\\\\\\\" (2026-01-30 16:54:37 +0000 UTC to 2026-03-01 16:54:38 +0000 UTC (now=2026-01-30 16:54:44.533361706 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565458 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792084\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792084\\\\\\\\\\\\\\\" (2026-01-30 15:54:43 +0000 UTC to 2027-01-30 15:54:43 +0000 UTC (now=2026-01-30 16:54:44.565435832 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565474 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:54:44.565499 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:54:44.565514 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.262306 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.272536 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k255f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2482315f-1b5d-4a27-a9d9-97f4780c1869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e7482e63cefa1e8dfa0adec1061ace4e1a482a646844ba1d77038309e904cf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh9k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k255f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.278016 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.278067 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.278075 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.278090 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.278099 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:56Z","lastTransitionTime":"2026-01-30T16:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.289110 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"host
IP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.300969 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a4e02635c5d1398f5642a4f78ab59f957e82d7b595048b900f7f57998072b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca104420859d47e9447dba6c7629515ac5d94c76d90029f8b02677814c6cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.311499 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe68da7f075332bd889ca6351f6c58e8a4e2cb0a14e68dd9a882f15c348ec5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\
",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.381329 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.381391 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.381401 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.381413 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.381422 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:56Z","lastTransitionTime":"2026-01-30T16:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.483886 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.483937 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.483949 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.483968 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.483980 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:56Z","lastTransitionTime":"2026-01-30T16:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.585556 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.585756 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.585906 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.586037 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.586131 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:56Z","lastTransitionTime":"2026-01-30T16:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.688510 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.688554 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.688567 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.688581 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.688592 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:56Z","lastTransitionTime":"2026-01-30T16:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.700198 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf"] Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.700868 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.704169 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.705063 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.723946 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93651476-fd00-4a9e-934a-73537f1d103e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts
\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{
\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8d1a5914c6b0281db980a45b47361cfd019f308a1141efb1106a36fb0c1cba11\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d1a5914c6b0281db980a45b47361cfd019f308a1141efb1106a36fb0c1cba11\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:54:55Z\\\",\\\"message\\\":\\\"/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 16:54:55.499037 5946 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0130 16:54:55.499146 5946 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:54:55.499642 5946 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 16:54:55.499670 5946 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 16:54:55.499675 5946 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 16:54:55.499720 5946 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 16:54:55.499728 5946 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 16:54:55.499746 5946 factory.go:656] Stopping watch factory\\\\nI0130 16:54:55.499747 5946 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 16:54:55.499758 5946 ovnkube.go:599] Stopped ovnkube\\\\nI0130 16:54:55.499766 5946 
handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 16:54:55.499784 5946 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 16:54:55.499815 5946 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 16\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.
11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-228xs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.744198 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dd639c-2367-490e-801b-9042a158c962\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b98b043cb83155f1304d4e3e7afaa43921f5f9bd288e2d50222df3c549a97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5aceb9806198f5a718418c0834056dadafaec6a0d93a3d4253f3d18d3e7f6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a5b6fd14c3399f1335ab849702db5ae6ab412a259215225a3d1b45afdf29c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7424ed8c0152d8e7a206225cef7124928cbd700
835cfa9f9c4e523bfe900f3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49662995d818377576fc281d55d5d944d785d5a0a11d45e86e9f08ec7fa2bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.750301 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 16:44:29.314475056 +0000 UTC Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.763259 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f1f355d-892f-40a3-8fae-090e9e69b585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c2d61cc827e2a3a1ff4d597f89532719a52fe5fd353eb2ec9905533cfb8897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0930b443376f024df5c05a35328264b347e4b58fef7c64e9bb50b084c461e19f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b9e52241ae256ff758f4cf4bdd6a588a6d37cdc5185c372a9942d9d2c9e8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.778218 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e483f7201537775910f8d1a99625c35050e43bebb2947e7c7d076cccd2a3fe1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-30T16:54:56Z is after 2025-08-24T17:21:41Z"
Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.790404 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.790450 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.790467 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.790490 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.790509 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:56Z","lastTransitionTime":"2026-01-30T16:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.794978 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea67b02c-fc08-4a69-8c7f-c8da661a12ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4f9lf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:56Z is after 2025-08-24T17:21:41Z"
Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.798843 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.798869 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.798881 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:54:56 crc kubenswrapper[4712]: E0130 16:54:56.798960 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 16:54:56 crc kubenswrapper[4712]: E0130 16:54:56.799045 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 16:54:56 crc kubenswrapper[4712]: E0130 16:54:56.799106 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.811001 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ab27748-3507-429f-888b-b45b4d17b014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"\\\\nI0130 16:54:44.564860 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:54:44.564709 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564904 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564720 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:54:44.564916 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:54:44.565055 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:54:44.565065 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:54:44.564730 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2769129766/tls.crt::/tmp/serving-cert-2769129766/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769792078\\\\\\\\\\\\\\\" (2026-01-30 16:54:37 +0000 UTC to 2026-03-01 16:54:38 +0000 UTC (now=2026-01-30 16:54:44.533361706 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565458 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792084\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792084\\\\\\\\\\\\\\\" (2026-01-30 15:54:43 +0000 UTC to 2027-01-30 15:54:43 +0000 UTC (now=2026-01-30 16:54:44.565435832 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565474 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:54:44.565499 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:54:44.565514 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:56Z is after 2025-08-24T17:21:41Z"
Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.822338 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfg8q\" (UniqueName: \"kubernetes.io/projected/ea67b02c-fc08-4a69-8c7f-c8da661a12ea-kube-api-access-vfg8q\") pod \"ovnkube-control-plane-749d76644c-4f9lf\" (UID: \"ea67b02c-fc08-4a69-8c7f-c8da661a12ea\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf"
Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.822392 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ea67b02c-fc08-4a69-8c7f-c8da661a12ea-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4f9lf\" (UID: \"ea67b02c-fc08-4a69-8c7f-c8da661a12ea\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf"
Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.822413 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ea67b02c-fc08-4a69-8c7f-c8da661a12ea-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4f9lf\" (UID: \"ea67b02c-fc08-4a69-8c7f-c8da661a12ea\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf"
Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.822450 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ea67b02c-fc08-4a69-8c7f-c8da661a12ea-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4f9lf\" (UID: \"ea67b02c-fc08-4a69-8c7f-c8da661a12ea\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf"
Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.828033 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.842120 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k255f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2482315f-1b5d-4a27-a9d9-97f4780c1869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e7482e63cefa1e8dfa0adec1061ace4e1a482a646844ba1d77038309e904cf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh9k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k255f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.859814 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\
"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.872577 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a4e02635c5d1398f5642a4f78ab59f957e82d7b595048b900f7f57998072b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca104420859d47e9447dba6c7629515ac5d94c76d90029f8b02677814c6cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.884029 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe68da7f075332bd889ca6351f6c58e8a4e2cb0a14e68dd9a882f15c348ec5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2
026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:56Z is after 2025-08-24T17:21:41Z"
Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.892463 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.892499 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.892508 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.892522 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.892531 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:56Z","lastTransitionTime":"2026-01-30T16:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.897764 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.907486 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7682a4940bfaf61053fc403898dd912a57e1152b7f646f65d8931bbe5a5aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.921765 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4777970a-81c4-4412-a06b-641a8343a749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d50d143c711318276147641c1904b0f9c28209e0ead64b6bc2578d0914e821a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-69v8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:56Z is after 
2025-08-24T17:21:41Z" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.923027 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfg8q\" (UniqueName: \"kubernetes.io/projected/ea67b02c-fc08-4a69-8c7f-c8da661a12ea-kube-api-access-vfg8q\") pod \"ovnkube-control-plane-749d76644c-4f9lf\" (UID: \"ea67b02c-fc08-4a69-8c7f-c8da661a12ea\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.923080 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ea67b02c-fc08-4a69-8c7f-c8da661a12ea-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4f9lf\" (UID: \"ea67b02c-fc08-4a69-8c7f-c8da661a12ea\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.923108 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ea67b02c-fc08-4a69-8c7f-c8da661a12ea-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4f9lf\" (UID: \"ea67b02c-fc08-4a69-8c7f-c8da661a12ea\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.923126 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ea67b02c-fc08-4a69-8c7f-c8da661a12ea-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4f9lf\" (UID: \"ea67b02c-fc08-4a69-8c7f-c8da661a12ea\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.923723 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ea67b02c-fc08-4a69-8c7f-c8da661a12ea-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4f9lf\" (UID: \"ea67b02c-fc08-4a69-8c7f-c8da661a12ea\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.923842 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ea67b02c-fc08-4a69-8c7f-c8da661a12ea-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4f9lf\" (UID: \"ea67b02c-fc08-4a69-8c7f-c8da661a12ea\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.929259 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ea67b02c-fc08-4a69-8c7f-c8da661a12ea-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4f9lf\" (UID: \"ea67b02c-fc08-4a69-8c7f-c8da661a12ea\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.937675 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vfg8q\" (UniqueName: \"kubernetes.io/projected/ea67b02c-fc08-4a69-8c7f-c8da661a12ea-kube-api-access-vfg8q\") pod \"ovnkube-control-plane-749d76644c-4f9lf\" (UID: \"ea67b02c-fc08-4a69-8c7f-c8da661a12ea\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.938018 4712 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a59aa188d91c356dc946cef975ab964cb48ed5ad8a7b1b1a97bbdf4f6a463f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.950180 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:56Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.995103 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.995284 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.995506 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.995667 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:56 crc kubenswrapper[4712]: I0130 16:54:56.995845 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:56Z","lastTransitionTime":"2026-01-30T16:54:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.021642 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.086293 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-228xs_93651476-fd00-4a9e-934a-73537f1d103e/ovnkube-controller/0.log" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.088676 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" event={"ID":"93651476-fd00-4a9e-934a-73537f1d103e","Type":"ContainerStarted","Data":"7b65e1d5a9afb9c07cc442b53fde79fd228842e1dd4920247f5584005c49fa22"} Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.089101 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.089553 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf" event={"ID":"ea67b02c-fc08-4a69-8c7f-c8da661a12ea","Type":"ContainerStarted","Data":"7b31d965698d0b259232c022bc66051424439ad6f4af3d6350a55e2565fe4ec9"} Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.097632 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.097660 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.097672 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.097685 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.097695 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:57Z","lastTransitionTime":"2026-01-30T16:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.108623 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f1f355d-892f-40a3-8fae-090e9e69b585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c2d61cc827e2a3a1ff4d597f89532719a52fe5fd353eb2ec9905533cfb8897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0930b443376f024df5c05a35328264b347e4b58fef7c64e9bb50b084c461e19f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b9e52241ae256ff758f4cf4bdd6a588a6d37cdc5185c372a9942d9d2c9e8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.121740 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e483f7201537775910f8d1a99625c35050e43bebb2947e7c7d076cccd2a3fe1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.144043 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93651476-fd00-4a9e-934a-73537f1d103e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc
/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b65e1d5a9afb9c07cc442b53fde79fd228842e1dd4920247f5584005c49fa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d1a5914c6b0281db980a45b47361cfd019f308a1141efb1106a36fb0c1cba11\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:54:55Z\\\",\\\"message\\\":\\\"/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 16:54:55.499037 5946 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0130 16:54:55.499146 5946 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:54:55.499642 5946 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 16:54:55.499670 5946 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 16:54:55.499675 5946 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 16:54:55.499720 5946 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 16:54:55.499728 5946 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 16:54:55.499746 5946 factory.go:656] Stopping watch factory\\\\nI0130 16:54:55.499747 5946 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 16:54:55.499758 5946 ovnkube.go:599] Stopped ovnkube\\\\nI0130 16:54:55.499766 5946 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 16:54:55.499784 5946 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 16:54:55.499815 5946 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 
16\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-228xs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.171964 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dd639c-2367-490e-801b-9042a158c962\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b98b043cb83155f1304d4e3e7afaa43921f5f9bd288e2d50222df3c549a97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5aceb9806198f5a718418c0834056dadafaec6a0d93a3d4253f3d18d3e7f6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a5b6fd14c3399f1335ab849702db5ae6ab412a259215225a3d1b45afdf29c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7424ed8c0152d8e7a206225cef7124928cbd700
835cfa9f9c4e523bfe900f3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49662995d818377576fc281d55d5d944d785d5a0a11d45e86e9f08ec7fa2bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.190410 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.199823 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.199856 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.199867 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.199881 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.199891 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:57Z","lastTransitionTime":"2026-01-30T16:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.202432 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k255f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2482315f-1b5d-4a27-a9d9-97f4780c1869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e7482e63cefa1e8dfa0adec1061ace4e1a482a646844ba1d77038309e904cf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh9k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k255f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.212633 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea67b02c-fc08-4a69-8c7f-c8da661a12ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4f9lf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.228019 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ab27748-3507-429f-888b-b45b4d17b014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"\\\\nI0130 16:54:44.564860 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:54:44.564709 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564904 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564720 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:54:44.564916 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:54:44.565055 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:54:44.565065 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:54:44.564730 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2769129766/tls.crt::/tmp/serving-cert-2769129766/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769792078\\\\\\\\\\\\\\\" (2026-01-30 16:54:37 +0000 UTC to 2026-03-01 16:54:38 +0000 UTC (now=2026-01-30 16:54:44.533361706 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565458 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792084\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792084\\\\\\\\\\\\\\\" (2026-01-30 15:54:43 +0000 UTC to 2027-01-30 15:54:43 +0000 UTC (now=2026-01-30 16:54:44.565435832 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565474 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:54:44.565499 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:54:44.565514 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.239034 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a4e02635c5d1398f5642a4f78ab59f957e82d7b595048b900f7f57998072b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca104420859d47e9447dba6c7629515ac5d94c76d90029f8b02677814c6cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.252317 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe68da7f075332bd889ca6351f6c58e8a4e2cb0a14e68dd9a882f15c348ec5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.271614 4712 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.301949 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a59aa188d91c356dc946cef975ab964cb48ed5ad8a7b1b1a97bbdf4f6a463f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.303535 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.303556 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.303563 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.303577 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.303585 4712 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:57Z","lastTransitionTime":"2026-01-30T16:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.316157 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.328812 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.339424 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7682a4940bfaf61053fc403898dd912a57e1152b7f646f65d8931bbe5a5aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:57Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.353689 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4777970a-81c4-4412-a06b-641a8343a749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d50d143c711318276147641c1904b0f9c28209e0ead64b6bc2578d0914e821a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-69v8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:57Z is after 
2025-08-24T17:21:41Z" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.406390 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.406414 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.406422 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.406435 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.406443 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:57Z","lastTransitionTime":"2026-01-30T16:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.508714 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.508755 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.508769 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.508788 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.508835 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:57Z","lastTransitionTime":"2026-01-30T16:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.611019 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.611055 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.611063 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.611076 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.611084 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:57Z","lastTransitionTime":"2026-01-30T16:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.713276 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.713308 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.713350 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.713364 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.713372 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:57Z","lastTransitionTime":"2026-01-30T16:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.751254 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 02:34:12.737004999 +0000 UTC Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.815201 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.815223 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.815230 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.815241 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.815251 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:57Z","lastTransitionTime":"2026-01-30T16:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.917646 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.917709 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.917719 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.917733 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:57 crc kubenswrapper[4712]: I0130 16:54:57.917743 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:57Z","lastTransitionTime":"2026-01-30T16:54:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.020245 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.020555 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.020642 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.020737 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.020941 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:58Z","lastTransitionTime":"2026-01-30T16:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.095612 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-228xs_93651476-fd00-4a9e-934a-73537f1d103e/ovnkube-controller/1.log" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.096406 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-228xs_93651476-fd00-4a9e-934a-73537f1d103e/ovnkube-controller/0.log" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.100963 4712 generic.go:334] "Generic (PLEG): container finished" podID="93651476-fd00-4a9e-934a-73537f1d103e" containerID="7b65e1d5a9afb9c07cc442b53fde79fd228842e1dd4920247f5584005c49fa22" exitCode=1 Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.101101 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" event={"ID":"93651476-fd00-4a9e-934a-73537f1d103e","Type":"ContainerDied","Data":"7b65e1d5a9afb9c07cc442b53fde79fd228842e1dd4920247f5584005c49fa22"} Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.101178 4712 scope.go:117] "RemoveContainer" containerID="8d1a5914c6b0281db980a45b47361cfd019f308a1141efb1106a36fb0c1cba11" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.102118 4712 scope.go:117] "RemoveContainer" containerID="7b65e1d5a9afb9c07cc442b53fde79fd228842e1dd4920247f5584005c49fa22" Jan 30 16:54:58 crc kubenswrapper[4712]: E0130 16:54:58.102403 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-228xs_openshift-ovn-kubernetes(93651476-fd00-4a9e-934a-73537f1d103e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" podUID="93651476-fd00-4a9e-934a-73537f1d103e" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.104127 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf" event={"ID":"ea67b02c-fc08-4a69-8c7f-c8da661a12ea","Type":"ContainerStarted","Data":"529cdf849b64d965fb2b3276a2e033621e7695668b6b05041603ad93659a1c73"} Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.104174 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf" event={"ID":"ea67b02c-fc08-4a69-8c7f-c8da661a12ea","Type":"ContainerStarted","Data":"dea2b37d1f833ec9e1eba6b034b7178206d4c32d383bdaf40b270c3f7219de4a"} Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.123980 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.124064 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.124081 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.124101 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.124118 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:58Z","lastTransitionTime":"2026-01-30T16:54:58Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.126894 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a59aa188d91c356dc946cef975ab964cb48ed5ad8a7b1b1a97bbdf4f6a463f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.143295 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.158491 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.170859 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7682a4940bfaf61053fc403898dd912a57e1152b7f646f65d8931bbe5a5aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.187564 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4777970a-81c4-4412-a06b-641a8343a749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d50d143c711318276147641c1904b0f9c28209e0ead64b6bc2578d0914e821a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-69v8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 
2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.205367 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dd639c-2367-490e-801b-9042a158c962\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b98b043cb83155f1304d4e3e7afaa43921f5f9bd288e2d50222df3c549a97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5aceb9806198f5a718418c0834056dadafaec6a0d93a3d4253f3d18d3e7f6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a5b6fd14c3399f1335ab849702db5ae6ab412a259215225a3d1b45afdf29c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/
etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7424ed8c0152d8e7a206225cef7124928cbd700835cfa9f9c4e523bfe900f3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49662995d818377576fc281d55d5d944d785d5a0a11d45e86e9f08ec7fa2bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026
-01-30T16:54:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.221133 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f1f355d-892f-40a3-8fae-090e9e69b585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c2d61cc827e2a3a1ff4d597f89532719a52fe5fd353eb2ec9905533cfb8897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0930b443376f024df5c05a35328264b347e4b58fef7c64e9bb50b084c461e19f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b9e52241ae256ff758f4cf4bdd6a588a6d37cdc5185c372a9942d9d2c9e8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.226729 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.226930 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.227033 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.227129 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.227244 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:58Z","lastTransitionTime":"2026-01-30T16:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.234333 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e483f7201537775910f8d1a99625c35050e43bebb2947e7c7d076cccd2a3fe1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.259988 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93651476-fd00-4a9e-934a-73537f1d103e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b65e1d5a9afb9c07cc442b53fde79fd228842e1dd4920247f5584005c49fa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d1a5914c6b0281db980a45b47361cfd019f308a1141efb1106a36fb0c1cba11\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:54:55Z\\\",\\\"message\\\":\\\"/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 16:54:55.499037 5946 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0130 16:54:55.499146 5946 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:54:55.499642 5946 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 16:54:55.499670 5946 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 16:54:55.499675 5946 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 16:54:55.499720 5946 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 16:54:55.499728 5946 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 16:54:55.499746 5946 factory.go:656] Stopping watch factory\\\\nI0130 16:54:55.499747 5946 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 16:54:55.499758 5946 ovnkube.go:599] Stopped ovnkube\\\\nI0130 16:54:55.499766 5946 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 16:54:55.499784 5946 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 16:54:55.499815 5946 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 16\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b65e1d5a9afb9c07cc442b53fde79fd228842e1dd4920247f5584005c49fa22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:54:57Z\\\",\\\"message\\\":\\\"rk controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:57Z is after 2025-08-24T17:21:41Z]\\\\nI0130 16:54:57.622996 6091 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port 
openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0130 16:54:57.622937 6091 services_controller.go:452] Built service openshift-machine-api/machine-api-controllers per-node LB for network=default: []services.LB{}\\\\nI0130 16:54:57.623013 6091 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0130 16:54:57.623019 6091 services_controller.go:453] Buil\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-228xs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.273214 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ab27748-3507-429f-888b-b45b4d17b014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"\\\\nI0130 16:54:44.564860 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:54:44.564709 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564904 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564720 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:54:44.564916 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:54:44.565055 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:54:44.565065 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:54:44.564730 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2769129766/tls.crt::/tmp/serving-cert-2769129766/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769792078\\\\\\\\\\\\\\\" (2026-01-30 16:54:37 +0000 UTC to 2026-03-01 16:54:38 +0000 UTC (now=2026-01-30 16:54:44.533361706 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565458 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792084\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792084\\\\\\\\\\\\\\\" (2026-01-30 15:54:43 +0000 UTC to 2027-01-30 15:54:43 +0000 UTC (now=2026-01-30 16:54:44.565435832 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565474 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:54:44.565499 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:54:44.565514 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.283075 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.291538 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k255f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2482315f-1b5d-4a27-a9d9-97f4780c1869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e7482e63cefa1e8dfa0adec1061ace4e1a482a646844ba1d77038309e904cf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh9k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k255f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.301818 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea67b02c-fc08-4a69-8c7f-c8da661a12ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:56Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:56Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4f9lf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.315173 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a4e02635c5d1398f5642a4f78ab59f957e82d7b595048b900f7f57998072b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca104420859d47e9447dba6c7629515ac5d94c76d90029f8b02677814c6cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.325638 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe68da7f075332bd889ca6351f6c58e8a4e2cb0a14e68dd9a882f15c348ec5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.328867 4712 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.328897 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.328909 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.328927 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.328939 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:58Z","lastTransitionTime":"2026-01-30T16:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.337366 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin
\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.349249 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ab27748-3507-429f-888b-b45b4d17b014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"\\\\nI0130 16:54:44.564860 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:54:44.564709 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564904 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564720 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:54:44.564916 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:54:44.565055 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:54:44.565065 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:54:44.564730 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2769129766/tls.crt::/tmp/serving-cert-2769129766/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769792078\\\\\\\\\\\\\\\" (2026-01-30 16:54:37 +0000 UTC to 2026-03-01 16:54:38 +0000 UTC (now=2026-01-30 16:54:44.533361706 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565458 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792084\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792084\\\\\\\\\\\\\\\" (2026-01-30 15:54:43 +0000 UTC to 2027-01-30 15:54:43 +0000 UTC (now=2026-01-30 16:54:44.565435832 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565474 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:54:44.565499 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:54:44.565514 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.362029 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.374213 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k255f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2482315f-1b5d-4a27-a9d9-97f4780c1869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e7482e63cefa1e8dfa0adec1061ace4e1a482a646844ba1d77038309e904cf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh9k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k255f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.389609 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea67b02c-fc08-4a69-8c7f-c8da661a12ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea2b37d1f833ec9e1eba6b034b7178206d4c32d383bdaf40b270c3f7219de4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://529cdf849b64d965fb2b3276a2e033621e7695668b6b05041603ad93659a1c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mo
untPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4f9lf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.407114 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a4e02635c5d1398f5642a4f78ab59f957e82d7b595048b900f7f57998072b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca104420859d47e9447dba6c7629515ac5d94c76d90029f8b02677814c6cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"m
ountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.420884 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe68da7f075332bd889ca6351f6c58e8a4e2cb0a14e68dd9a882f15c348ec5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.431566 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.431741 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.431861 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.431968 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.432078 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:58Z","lastTransitionTime":"2026-01-30T16:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.437882 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.454595 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a59aa188d91c356dc946cef975ab964cb48ed5ad8a7b1b1a97bbdf4f6a463f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.469861 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.486953 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.500313 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7682a4940bfaf61053fc403898dd912a57e1152b7f646f65d8931bbe5a5aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.518101 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4777970a-81c4-4412-a06b-641a8343a749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d50d143c711318276147641c1904b0f9c28209e0ead64b6bc2578d0914e821a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-69v8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 
2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.535564 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.536034 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.536178 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.536347 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.536483 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:58Z","lastTransitionTime":"2026-01-30T16:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.541170 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dd639c-2367-490e-801b-9042a158c962\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b98b043cb83155f1304d4e3e7afaa43921f5f9bd288e2d50222df3c549a97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5aceb9806198f5a718418c0834056dadafaec6a0d93a3d4253f3d18d3e7f6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877
441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a5b6fd14c3399f1335ab849702db5ae6ab412a259215225a3d1b45afdf29c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7424ed8c0152d8e7a206225cef7124928cbd700835cfa9f9c4e523bfe900f3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49662995d818377576fc281d55d5d944d785d5a0a11d45e86e9f08ec7fa2bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.561168 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f1f355d-892f-40a3-8fae-090e9e69b585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c2d61cc827e2a3a1ff4d597f89532719a52fe5fd353eb2ec9905533cfb8897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0930b443376f024df5c05a35328264b347e4b58fef7c64e9bb50b084c461e19f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b9e52241ae256ff758f4cf4bdd6a588a6d37cdc5185c372a9942d9d2c9e8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z"
Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.572531 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-lpb6h"]
Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.573487 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h"
Jan 30 16:54:58 crc kubenswrapper[4712]: E0130 16:54:58.573590 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.584441 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e483f7201537775910f8d1a99625c35050e43bebb2947e7c7d076cccd2a3fe1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.608208 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93651476-fd00-4a9e-934a-73537f1d103e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b65e1d5a9afb9c07cc442b53fde79fd228842e1dd4920247f5584005c49fa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d1a5914c6b0281db980a45b47361cfd019f308a1141efb1106a36fb0c1cba11\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:54:55Z\\\",\\\"message\\\":\\\"/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 16:54:55.499037 5946 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0130 16:54:55.499146 5946 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:54:55.499642 5946 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 16:54:55.499670 5946 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 16:54:55.499675 5946 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 16:54:55.499720 5946 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 16:54:55.499728 5946 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 16:54:55.499746 5946 factory.go:656] Stopping watch factory\\\\nI0130 16:54:55.499747 5946 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 16:54:55.499758 5946 ovnkube.go:599] Stopped ovnkube\\\\nI0130 16:54:55.499766 5946 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 16:54:55.499784 5946 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 16:54:55.499815 5946 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 16\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b65e1d5a9afb9c07cc442b53fde79fd228842e1dd4920247f5584005c49fa22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:54:57Z\\\",\\\"message\\\":\\\"rk controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:57Z is after 2025-08-24T17:21:41Z]\\\\nI0130 16:54:57.622996 6091 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port 
openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0130 16:54:57.622937 6091 services_controller.go:452] Built service openshift-machine-api/machine-api-controllers per-node LB for network=default: []services.LB{}\\\\nI0130 16:54:57.623013 6091 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0130 16:54:57.623019 6091 services_controller.go:453] Buil\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-228xs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.623580 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a59aa188d91c356dc946cef975ab964cb48ed5ad8a7b1b1a97bbdf4f6a463f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.639359 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.639412 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.639424 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.639438 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.639447 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:58Z","lastTransitionTime":"2026-01-30T16:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.641779 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.642127 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnj8x\" (UniqueName: \"kubernetes.io/projected/abacbc6e-6514-4db6-80b5-23570952c86f-kube-api-access-jnj8x\") pod \"network-metrics-daemon-lpb6h\" (UID: \"abacbc6e-6514-4db6-80b5-23570952c86f\") " pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.642202 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/abacbc6e-6514-4db6-80b5-23570952c86f-metrics-certs\") pod \"network-metrics-daemon-lpb6h\" (UID: \"abacbc6e-6514-4db6-80b5-23570952c86f\") " pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.676011 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.689031 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7682a4940bfaf61053fc403898dd912a57e1152b7f646f65d8931bbe5a5aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.707875 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4777970a-81c4-4412-a06b-641a8343a749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d50d143c711318276147641c1904b0f9c28209e0ead64b6bc2578d0914e821a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-69v8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.717204 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lpb6h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abacbc6e-6514-4db6-80b5-23570952c86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnj8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnj8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lpb6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.726732 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f1f355d-892f-40a3-8fae-090e9e69b585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c2d61cc827e2a3a1ff4d597f89532719a52fe5fd353eb2ec9905533cfb8897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0930b443376f024df5c05a35328264b347e4b58fef7c64e9bb50b084c461e19f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b9e52241ae256ff758f4cf4bdd6a588a6d37cdc5185c372a9942d9d2c9e8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.737591 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e483f7201537775910f8d1a99625c35050e43bebb2947e7c7d076cccd2a3fe1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.741169 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.741295 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.741357 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.741530 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.741621 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:58Z","lastTransitionTime":"2026-01-30T16:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.742625 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnj8x\" (UniqueName: \"kubernetes.io/projected/abacbc6e-6514-4db6-80b5-23570952c86f-kube-api-access-jnj8x\") pod \"network-metrics-daemon-lpb6h\" (UID: \"abacbc6e-6514-4db6-80b5-23570952c86f\") " pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.742698 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/abacbc6e-6514-4db6-80b5-23570952c86f-metrics-certs\") pod \"network-metrics-daemon-lpb6h\" (UID: \"abacbc6e-6514-4db6-80b5-23570952c86f\") " pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:54:58 crc kubenswrapper[4712]: E0130 16:54:58.742865 4712 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:54:58 crc kubenswrapper[4712]: E0130 16:54:58.742928 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/abacbc6e-6514-4db6-80b5-23570952c86f-metrics-certs podName:abacbc6e-6514-4db6-80b5-23570952c86f nodeName:}" failed. No retries permitted until 2026-01-30 16:54:59.242907659 +0000 UTC m=+36.149917148 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/abacbc6e-6514-4db6-80b5-23570952c86f-metrics-certs") pod "network-metrics-daemon-lpb6h" (UID: "abacbc6e-6514-4db6-80b5-23570952c86f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.751511 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 18:08:12.174166036 +0000 UTC Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.760204 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93651476-fd00-4a9e-934a-73537f1d103e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b65e1d5a9afb9c07cc442b53fde79fd228842e1
dd4920247f5584005c49fa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8d1a5914c6b0281db980a45b47361cfd019f308a1141efb1106a36fb0c1cba11\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:54:55Z\\\",\\\"message\\\":\\\"/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 16:54:55.499037 5946 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0130 16:54:55.499146 5946 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:54:55.499642 5946 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0130 16:54:55.499670 5946 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0130 16:54:55.499675 5946 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0130 16:54:55.499720 5946 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0130 16:54:55.499728 5946 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0130 16:54:55.499746 5946 factory.go:656] Stopping watch factory\\\\nI0130 16:54:55.499747 5946 handler.go:208] Removed *v1.Node event handler 2\\\\nI0130 16:54:55.499758 5946 ovnkube.go:599] Stopped ovnkube\\\\nI0130 16:54:55.499766 5946 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0130 16:54:55.499784 5946 handler.go:208] Removed *v1.Node event handler 7\\\\nI0130 16:54:55.499815 5946 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0130 16\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:51Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b65e1d5a9afb9c07cc442b53fde79fd228842e1dd4920247f5584005c49fa22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:54:57Z\\\",\\\"message\\\":\\\"rk controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:57Z is after 2025-08-24T17:21:41Z]\\\\nI0130 16:54:57.622996 6091 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0130 16:54:57.622937 6091 services_controller.go:452] Built service openshift-machine-api/machine-api-controllers per-node LB for network=default: []services.LB{}\\\\nI0130 16:54:57.623013 6091 
ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0130 16:54:57.623019 6091 services_controller.go:453] Buil\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\
":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-228xs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.764112 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnj8x\" (UniqueName: \"kubernetes.io/projected/abacbc6e-6514-4db6-80b5-23570952c86f-kube-api-access-jnj8x\") pod \"network-metrics-daemon-lpb6h\" (UID: \"abacbc6e-6514-4db6-80b5-23570952c86f\") " pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.779181 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dd639c-2367-490e-801b-9042a158c962\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b98b043cb83155f1304d4e3e7afaa43921f5f9bd288e2d50222df3c549a97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5aceb9806198f5a718418c0834056dadafaec6a0d93a3d4253f3d18d3e7f6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a5b6fd14c3399f1335ab849702db5ae6ab412a259215225a3d1b45afdf29c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7424ed8c0152d8e7a206225cef7124928cbd700
835cfa9f9c4e523bfe900f3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49662995d818377576fc281d55d5d944d785d5a0a11d45e86e9f08ec7fa2bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.790991 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.799323 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:54:58 crc kubenswrapper[4712]: E0130 16:54:58.799844 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.799604 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.799543 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:54:58 crc kubenswrapper[4712]: E0130 16:54:58.800451 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:54:58 crc kubenswrapper[4712]: E0130 16:54:58.800825 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.801422 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k255f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2482315f-1b5d-4a27-a9d9-97f4780c1869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e7482e63cefa1e8dfa0adec1061ace4e1a482a646844ba1d77038309e904cf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh9k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k255f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.813011 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea67b02c-fc08-4a69-8c7f-c8da661a12ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea2b37d1f833ec9e1eba6b034b7178206d4c32d383bdaf40b270c3f7219de4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://529cdf849b64d965fb2b3276a2e033621e7695668b6b05041603ad93659a1c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4f9lf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 
16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.826253 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ab27748-3507-429f-888b-b45b4d17b014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"\\\\nI0130 16:54:44.564860 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:54:44.564709 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564904 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564720 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:54:44.564916 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:54:44.565055 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:54:44.565065 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:54:44.564730 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2769129766/tls.crt::/tmp/serving-cert-2769129766/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769792078\\\\\\\\\\\\\\\" (2026-01-30 16:54:37 +0000 UTC to 2026-03-01 16:54:38 +0000 UTC (now=2026-01-30 16:54:44.533361706 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565458 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792084\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792084\\\\\\\\\\\\\\\" (2026-01-30 15:54:43 +0000 UTC to 2027-01-30 15:54:43 +0000 UTC (now=2026-01-30 16:54:44.565435832 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565474 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:54:44.565499 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:54:44.565514 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.837655 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a4e02635c5d1398f5642a4f78ab59f957e82d7b595048b900f7f57998072b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca104420859d47e9447dba6c7629515ac5d94c76d90029f8b02677814c6cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.843872 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.844048 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.844161 4712 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.844275 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.844365 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:58Z","lastTransitionTime":"2026-01-30T16:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.852895 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe68da7f075332bd889ca6351f6c58e8a4e2cb0a14e68dd9a882f15c348ec5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\
\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.866268 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",
\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:58Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.946533 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.946576 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.946587 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.946604 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:58 crc kubenswrapper[4712]: I0130 16:54:58.946615 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:58Z","lastTransitionTime":"2026-01-30T16:54:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.049824 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.049873 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.049890 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.049911 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.049927 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:59Z","lastTransitionTime":"2026-01-30T16:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.110822 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-228xs_93651476-fd00-4a9e-934a-73537f1d103e/ovnkube-controller/1.log"
Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.115615 4712 scope.go:117] "RemoveContainer" containerID="7b65e1d5a9afb9c07cc442b53fde79fd228842e1dd4920247f5584005c49fa22"
Jan 30 16:54:59 crc kubenswrapper[4712]: E0130 16:54:59.115769 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-228xs_openshift-ovn-kubernetes(93651476-fd00-4a9e-934a-73537f1d103e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" podUID="93651476-fd00-4a9e-934a-73537f1d103e"
Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.137539 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a4e02635c5d1398f5642a4f78ab59f957e82d7b595048b900f7f57998072b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca104420859d47e9447dba6c7629515ac5d94c76d90029f8b02677814c6cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.146549 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:54:59 crc kubenswrapper[4712]: E0130 16:54:59.146782 4712 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:55:15.146753216 +0000 UTC m=+52.053762705 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.152661 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.152717 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.152731 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.152748 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.152763 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:59Z","lastTransitionTime":"2026-01-30T16:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.152818 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe68da7f075332bd889ca6351f6c58e8a4e2cb0a14e68dd9a882f15c348ec5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.169014 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.182366 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a59aa188d91c356dc946cef975ab964cb48ed5ad8a7b1b1a97bbdf4f6a463f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.195077 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.207416 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.218593 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7682a4940bfaf61053fc403898dd912a57e1152b7f646f65d8931bbe5a5aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.233376 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4777970a-81c4-4412-a06b-641a8343a749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d50d143c711318276147641c1904b0f9c28209e0ead64b6bc2578d0914e821a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":
\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11
\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-69v8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.244099 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lpb6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abacbc6e-6514-4db6-80b5-23570952c86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnj8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnj8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lpb6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.247922 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.247975 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.248001 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.248018 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.248057 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/abacbc6e-6514-4db6-80b5-23570952c86f-metrics-certs\") pod \"network-metrics-daemon-lpb6h\" (UID: \"abacbc6e-6514-4db6-80b5-23570952c86f\") " pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:54:59 crc kubenswrapper[4712]: E0130 16:54:59.248151 4712 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:54:59 crc kubenswrapper[4712]: E0130 16:54:59.248207 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:55:15.248190729 +0000 UTC m=+52.155200198 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:54:59 crc kubenswrapper[4712]: E0130 16:54:59.248403 4712 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:54:59 crc kubenswrapper[4712]: E0130 16:54:59.248430 4712 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:54:59 crc kubenswrapper[4712]: E0130 16:54:59.248443 4712 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:54:59 crc kubenswrapper[4712]: E0130 16:54:59.248478 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 16:55:15.248468385 +0000 UTC m=+52.155477854 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:54:59 crc kubenswrapper[4712]: E0130 16:54:59.248536 4712 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:54:59 crc kubenswrapper[4712]: E0130 16:54:59.248551 4712 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:54:59 crc kubenswrapper[4712]: E0130 16:54:59.248561 4712 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:54:59 crc kubenswrapper[4712]: E0130 16:54:59.248566 4712 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:54:59 crc kubenswrapper[4712]: E0130 16:54:59.248591 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 16:55:15.248580448 +0000 UTC m=+52.155590007 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:54:59 crc kubenswrapper[4712]: E0130 16:54:59.248664 4712 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:54:59 crc kubenswrapper[4712]: E0130 16:54:59.248667 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:55:15.24865516 +0000 UTC m=+52.155664689 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:54:59 crc kubenswrapper[4712]: E0130 16:54:59.248720 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/abacbc6e-6514-4db6-80b5-23570952c86f-metrics-certs podName:abacbc6e-6514-4db6-80b5-23570952c86f nodeName:}" failed. No retries permitted until 2026-01-30 16:55:00.248700041 +0000 UTC m=+37.155709510 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/abacbc6e-6514-4db6-80b5-23570952c86f-metrics-certs") pod "network-metrics-daemon-lpb6h" (UID: "abacbc6e-6514-4db6-80b5-23570952c86f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.256066 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.256420 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.256496 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.256759 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.256847 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:59Z","lastTransitionTime":"2026-01-30T16:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.257849 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f1f355d-892f-40a3-8fae-090e9e69b585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c2d61cc827e2a3a1ff4d597f89532719a52fe5fd353eb2ec9905533cfb8897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0930b443376f024df5c05a35328264b347e4b58fef7c64e9bb50b084c461e19f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b9e52241ae256ff758f4cf4bdd6a588a6d37cdc5185c372a9942d9d2c9e8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.267719 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e483f7201537775910f8d1a99625c35050e43bebb2947e7c7d076cccd2a3fe1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.284683 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93651476-fd00-4a9e-934a-73537f1d103e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc
/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b65e1d5a9afb9c07cc442b53fde79fd228842e1dd4920247f5584005c49fa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b65e1d5a9afb9c07cc442b53fde79fd228842e1dd4920247f5584005c49fa22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:54:57Z\\\",\\\"message\\\":\\\"rk controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:57Z is after 2025-08-24T17:21:41Z]\\\\nI0130 16:54:57.622996 6091 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0130 16:54:57.622937 6091 services_controller.go:452] Built service openshift-machine-api/machine-api-controllers per-node LB for network=default: []services.LB{}\\\\nI0130 16:54:57.623013 6091 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0130 16:54:57.623019 6091 services_controller.go:453] 
Buil\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-228xs_openshift-ovn-kubernetes(93651476-fd00-4a9e-934a-73537f1d103e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiv
eReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-228xs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.304376 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dd639c-2367-490e-801b-9042a158c962\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b98b043cb83155f1304d4e3e7afaa43921f5f9bd288e2d50222df3c549a97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5aceb9806198f5a718418c0834056dadafaec6a0d93a3d4253f3d18d3e7f6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a5b6fd14c3399f1335ab849702db5ae6ab412a259215225a3d1b45afdf29c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7424ed8c0152d8e7a206225cef7124928cbd700
835cfa9f9c4e523bfe900f3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49662995d818377576fc281d55d5d944d785d5a0a11d45e86e9f08ec7fa2bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.315665 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.327877 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k255f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2482315f-1b5d-4a27-a9d9-97f4780c1869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e7482e63cefa1e8dfa0adec1061ace4e1a482a646844ba1d77038309e904cf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh9k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k255f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.340976 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea67b02c-fc08-4a69-8c7f-c8da661a12ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea2b37d1f833ec9e1eba6b034b7178206d4c32d383bdaf40b270c3f7219de4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://529cdf849b64d965fb2b3276a2e033621e7695668b6b05041603ad93659a1c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:56Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4f9lf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.357107 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ab27748-3507-429f-888b-b45b4d17b014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-
apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"\\\\nI0130 16:54:44.564860 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:54:44.564709 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564904 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564720 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:54:44.564916 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:54:44.565055 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:54:44.565065 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:54:44.564730 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2769129766/tls.crt::/tmp/serving-cert-2769129766/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769792078\\\\\\\\\\\\\\\" (2026-01-30 16:54:37 +0000 UTC to 2026-03-01 16:54:38 +0000 UTC (now=2026-01-30 16:54:44.533361706 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565458 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792084\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792084\\\\\\\\\\\\\\\" (2026-01-30 15:54:43 +0000 UTC to 2027-01-30 15:54:43 +0000 UTC (now=2026-01-30 16:54:44.565435832 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565474 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:54:44.565499 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 
16:54:44.565514 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:59Z is after 2025-08-24T17:21:41Z" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.359685 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.359711 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.359719 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.359732 4712 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.359741 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:59Z","lastTransitionTime":"2026-01-30T16:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.462413 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.462446 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.462454 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.462467 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.462475 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:59Z","lastTransitionTime":"2026-01-30T16:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.566028 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.566509 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.566539 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.566719 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.566764 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:59Z","lastTransitionTime":"2026-01-30T16:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.669897 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.669951 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.669969 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.669989 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.670004 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:59Z","lastTransitionTime":"2026-01-30T16:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.752036 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 11:59:17.681931861 +0000 UTC Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.772471 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.772519 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.772530 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.772547 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.772560 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:59Z","lastTransitionTime":"2026-01-30T16:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.875294 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.875355 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.875375 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.875399 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.875416 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:59Z","lastTransitionTime":"2026-01-30T16:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.978174 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.978283 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.978295 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.978311 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:54:59 crc kubenswrapper[4712]: I0130 16:54:59.978324 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:54:59Z","lastTransitionTime":"2026-01-30T16:54:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.087440 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.087501 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.087519 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.087545 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.087563 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:00Z","lastTransitionTime":"2026-01-30T16:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.184192 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.184231 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.184246 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.184264 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.184276 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:00Z","lastTransitionTime":"2026-01-30T16:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:00 crc kubenswrapper[4712]: E0130 16:55:00.196953 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:00Z is after 
2025-08-24T17:21:41Z" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.201046 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.201152 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.201168 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.201186 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.201202 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:00Z","lastTransitionTime":"2026-01-30T16:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:00 crc kubenswrapper[4712]: E0130 16:55:00.215864 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:00Z is after 
2025-08-24T17:21:41Z" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.220650 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.220692 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.220708 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.220730 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.220747 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:00Z","lastTransitionTime":"2026-01-30T16:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:00 crc kubenswrapper[4712]: E0130 16:55:00.234258 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:00Z is after 
2025-08-24T17:21:41Z" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.238467 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.238500 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.238511 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.238526 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.238538 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:00Z","lastTransitionTime":"2026-01-30T16:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:00 crc kubenswrapper[4712]: E0130 16:55:00.251452 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:00Z is after 
2025-08-24T17:21:41Z" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.254393 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.254432 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.254442 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.254455 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.254465 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:00Z","lastTransitionTime":"2026-01-30T16:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.259890 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/abacbc6e-6514-4db6-80b5-23570952c86f-metrics-certs\") pod \"network-metrics-daemon-lpb6h\" (UID: \"abacbc6e-6514-4db6-80b5-23570952c86f\") " pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:55:00 crc kubenswrapper[4712]: E0130 16:55:00.260011 4712 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:55:00 crc kubenswrapper[4712]: E0130 16:55:00.260046 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/abacbc6e-6514-4db6-80b5-23570952c86f-metrics-certs podName:abacbc6e-6514-4db6-80b5-23570952c86f nodeName:}" failed. No retries permitted until 2026-01-30 16:55:02.260033408 +0000 UTC m=+39.167042877 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/abacbc6e-6514-4db6-80b5-23570952c86f-metrics-certs") pod "network-metrics-daemon-lpb6h" (UID: "abacbc6e-6514-4db6-80b5-23570952c86f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:55:00 crc kubenswrapper[4712]: E0130 16:55:00.264755 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:00Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:00Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:00Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:00Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:00Z is after 
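Every patch attempt above dies on the same x509 validity-window check: the webhook's serving certificate expired 2025-08-24T17:21:41Z, while the node clock reads 2026-01-30. A minimal standalone sketch of that comparison in Go (the certificate path taken from argv is illustrative, not from this log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// checkValidity mirrors the test a TLS handshake applies:
// the current time must fall inside [NotBefore, NotAfter].
func checkValidity(cert *x509.Certificate, now time.Time) error {
	if now.Before(cert.NotBefore) {
		return fmt.Errorf("certificate is not yet valid: current time %s is before %s",
			now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
	}
	if now.After(cert.NotAfter) {
		return fmt.Errorf("certificate has expired: current time %s is after %s",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	}
	return nil
}

func main() {
	pemBytes, err := os.ReadFile(os.Args[1]) // path to the webhook serving cert (illustrative)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if err := checkValidity(cert, time.Now()); err != nil {
		fmt.Println(err)
	}
}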
2025-08-24T17:21:41Z" Jan 30 16:55:00 crc kubenswrapper[4712]: E0130 16:55:00.264893 4712 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.266359 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.266396 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.266404 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.266420 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.266429 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:00Z","lastTransitionTime":"2026-01-30T16:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.370144 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.370206 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.370243 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.370262 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.370278 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:00Z","lastTransitionTime":"2026-01-30T16:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.472996 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.473043 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.473055 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.473071 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.473082 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:00Z","lastTransitionTime":"2026-01-30T16:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.575681 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.575727 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.575742 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.575760 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.575773 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:00Z","lastTransitionTime":"2026-01-30T16:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.678688 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.678734 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.678747 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.678765 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.678778 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:00Z","lastTransitionTime":"2026-01-30T16:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
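The NodeNotReady condition above repeats until a network plugin writes a configuration file into /etc/kubernetes/cni/net.d/. A sketch of the observable readiness condition only, not kubelet source (the file patterns checked are an assumption):

package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log message
	var found []string
	// Typical CNI config suffixes; treated here as an assumption.
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, pat))
		if err != nil {
			panic(err) // only possible on a malformed pattern
		}
		found = append(found, matches...)
	}
	if len(found) == 0 {
		fmt.Printf("no CNI configuration file in %s. Has your network provider started?\n", confDir)
		return
	}
	fmt.Println("CNI config present:", found)
}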
Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.752345 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 04:28:42.611881903 +0000 UTC
Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.781881 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.781923 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.781935 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.781952 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.781964 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:00Z","lastTransitionTime":"2026-01-30T16:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.798655 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.798790 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.798695 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h"
Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.798671 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:55:00 crc kubenswrapper[4712]: E0130 16:55:00.798929 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 16:55:00 crc kubenswrapper[4712]: E0130 16:55:00.799018 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 16:55:00 crc kubenswrapper[4712]: E0130 16:55:00.799111 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 16:55:00 crc kubenswrapper[4712]: E0130 16:55:00.799200 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f"
Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.885109 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.885176 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.885197 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.885225 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.885247 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:00Z","lastTransitionTime":"2026-01-30T16:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.986947 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.986996 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.987007 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.987030 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:00 crc kubenswrapper[4712]: I0130 16:55:00.987041 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:00Z","lastTransitionTime":"2026-01-30T16:55:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.089736 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.089846 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.089866 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.089890 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.089907 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:01Z","lastTransitionTime":"2026-01-30T16:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.191899 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.191970 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.191980 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.191995 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.192012 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:01Z","lastTransitionTime":"2026-01-30T16:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.294564 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.294629 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.294652 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.294677 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.294693 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:01Z","lastTransitionTime":"2026-01-30T16:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.397437 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.397489 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.397506 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.397529 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.397547 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:01Z","lastTransitionTime":"2026-01-30T16:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.499831 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.499874 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.499888 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.499908 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.499923 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:01Z","lastTransitionTime":"2026-01-30T16:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.602953 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.602985 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.602997 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.603013 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.603023 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:01Z","lastTransitionTime":"2026-01-30T16:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.705347 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.705378 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.705390 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.705405 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.705418 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:01Z","lastTransitionTime":"2026-01-30T16:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.753155 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 16:59:26.988982485 +0000 UTC
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.807644 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.807681 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.807693 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.807707 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.807718 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:01Z","lastTransitionTime":"2026-01-30T16:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.910323 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.910368 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.910388 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.910406 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:01 crc kubenswrapper[4712]: I0130 16:55:01.910418 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:01Z","lastTransitionTime":"2026-01-30T16:55:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.013326 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.013357 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.013368 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.013382 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.013395 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:02Z","lastTransitionTime":"2026-01-30T16:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.116015 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.116049 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.116058 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.116071 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.116082 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:02Z","lastTransitionTime":"2026-01-30T16:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.218127 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.218204 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.218221 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.218242 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.218258 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:02Z","lastTransitionTime":"2026-01-30T16:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.280482 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/abacbc6e-6514-4db6-80b5-23570952c86f-metrics-certs\") pod \"network-metrics-daemon-lpb6h\" (UID: \"abacbc6e-6514-4db6-80b5-23570952c86f\") " pod="openshift-multus/network-metrics-daemon-lpb6h"
Jan 30 16:55:02 crc kubenswrapper[4712]: E0130 16:55:02.280620 4712 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 30 16:55:02 crc kubenswrapper[4712]: E0130 16:55:02.280679 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/abacbc6e-6514-4db6-80b5-23570952c86f-metrics-certs podName:abacbc6e-6514-4db6-80b5-23570952c86f nodeName:}" failed. No retries permitted until 2026-01-30 16:55:06.280662022 +0000 UTC m=+43.187671501 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/abacbc6e-6514-4db6-80b5-23570952c86f-metrics-certs") pod "network-metrics-daemon-lpb6h" (UID: "abacbc6e-6514-4db6-80b5-23570952c86f") : object "openshift-multus"/"metrics-daemon-secret" not registered
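The metrics-certs mount retries back off exponentially: 2s after the first failure (m=+39), 4s after the second (m=+43). A minimal doubling backoff with a cap, in the same spirit as the retry scheduling the log shows (the cap value is illustrative):

package main

import (
	"fmt"
	"time"
)

// nextBackoff doubles the previous delay up to a fixed cap.
func nextBackoff(d, max time.Duration) time.Duration {
	d *= 2
	if d > max {
		return max
	}
	return d
}

func main() {
	d := time.Second           // seed delay; first computed retry is then 2s
	max := 2 * time.Minute     // cap is an assumption for illustration
	for i := 0; i < 5; i++ {
		d = nextBackoff(d, max)
		fmt.Printf("retry %d scheduled after %v\n", i+1, d)
	}
}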
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.319993 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.320037 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.320049 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.320074 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.320085 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:02Z","lastTransitionTime":"2026-01-30T16:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.422170 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.422210 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.422222 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.422240 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.422252 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:02Z","lastTransitionTime":"2026-01-30T16:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.524399 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.524447 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.524458 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.524474 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.524487 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:02Z","lastTransitionTime":"2026-01-30T16:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.626891 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.626928 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.626939 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.626955 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.626967 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:02Z","lastTransitionTime":"2026-01-30T16:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.729436 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.729493 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.729508 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.729529 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.729545 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:02Z","lastTransitionTime":"2026-01-30T16:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.753877 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 08:26:05.462447207 +0000 UTC
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.799737 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h"
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.799874 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.799759 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.800449 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:55:02 crc kubenswrapper[4712]: E0130 16:55:02.800784 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f"
Jan 30 16:55:02 crc kubenswrapper[4712]: E0130 16:55:02.800918 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 16:55:02 crc kubenswrapper[4712]: E0130 16:55:02.800994 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 16:55:02 crc kubenswrapper[4712]: E0130 16:55:02.801126 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.802936 4712 scope.go:117] "RemoveContainer" containerID="e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e" Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.832562 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.832650 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.832662 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.832738 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.832752 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:02Z","lastTransitionTime":"2026-01-30T16:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.934630 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.934662 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.934670 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.934686 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:02 crc kubenswrapper[4712]: I0130 16:55:02.934695 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:02Z","lastTransitionTime":"2026-01-30T16:55:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.037481 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.037562 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.037575 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.037600 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.037612 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:03Z","lastTransitionTime":"2026-01-30T16:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.127761 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.131096 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"65d979ececdc18afdb1f364feaa3244db42fef8e523e58fc43b94b1775f9530e"} Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.131546 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.139293 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.139318 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.139325 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.139338 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.139347 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:03Z","lastTransitionTime":"2026-01-30T16:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.156449 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dd639c-2367-490e-801b-9042a158c962\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b98b043cb83155f1304d4e3e7afaa43921f5f9bd288e2d50222df3c549a97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5aceb9806198f5a718418c0834056dadafaec6a0d93a3d4253f3d18d3e7f6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a5b6fd14c3399f1335ab849702db5ae6ab412a259215225a3d1b45afdf29c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7424ed8c0152d8e7a206225cef7124928cbd700835cfa9f9c4e523bfe900f3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49662995d818377576fc281d55d5d944d785d5a0a11d45e86e9f08ec7fa2bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T16:54:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.167854 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f1f355d-892f-40a3-8fae-090e9e69b585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c2d61cc827e2a3a1ff4d597f89532719a52fe5fd353eb2ec9905533cfb8897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0930b443376f024df5c05a35328264b347e4b58fef7c64e9bb50b084c461e19f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b9e52241ae256ff758f4cf4bdd6a588a6d37cdc5185c372a9942d9d2c9e8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.178263 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e483f7201537775910f8d1a99625c35050e43bebb2947e7c7d076cccd2a3fe1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-30T16:55:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.194456 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93651476-fd00-4a9e-934a-73537f1d103e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257
453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b65e1d5a9afb9c07cc442b53fde79fd228842e1dd4920247f5584005c49fa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b65e1d5a9afb9c07cc442b53fde79fd228842e1dd4920247f5584005c49fa22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:54:57Z\\\",\\\"message\\\":\\\"rk controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:57Z is after 2025-08-24T17:21:41Z]\\\\nI0130 16:54:57.622996 6091 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0130 16:54:57.622937 6091 services_controller.go:452] Built service openshift-machine-api/machine-api-controllers per-node LB for network=default: []services.LB{}\\\\nI0130 16:54:57.623013 6091 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0130 16:54:57.623019 6091 services_controller.go:453] Buil\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=ovnkube-controller pod=ovnkube-node-228xs_openshift-ovn-kubernetes(93651476-fd00-4a9e-934a-73537f1d103e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-228xs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.206010 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ab27748-3507-429f-888b-b45b4d17b014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d979ececdc18afdb1f364feaa3244db42fef8e523e58fc43b94b1775f9530e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"\\\\nI0130 16:54:44.564860 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:54:44.564709 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564904 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564720 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:54:44.564916 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:54:44.565055 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:54:44.565065 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:54:44.564730 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2769129766/tls.crt::/tmp/serving-cert-2769129766/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769792078\\\\\\\\\\\\\\\" (2026-01-30 16:54:37 +0000 UTC to 2026-03-01 16:54:38 +0000 UTC (now=2026-01-30 16:54:44.533361706 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565458 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792084\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792084\\\\\\\\\\\\\\\" (2026-01-30 15:54:43 +0000 UTC to 2027-01-30 15:54:43 +0000 UTC (now=2026-01-30 16:54:44.565435832 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565474 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:54:44.565499 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:54:44.565514 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.215625 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.224532 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k255f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2482315f-1b5d-4a27-a9d9-97f4780c1869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e7482e63cefa1e8dfa0adec1061ace4e1a482a646844ba1d77038309e904cf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh9k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k255f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.234583 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea67b02c-fc08-4a69-8c7f-c8da661a12ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea2b37d1f833ec9e1eba6b034b7178206d4c32d383bdaf40b270c3f7219de4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://529cdf849b64d965fb2b3276a2e033621e7695668b6b05041603ad93659a1c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4f9lf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:03Z is after 2025-08-24T17:21:41Z" Jan 30 
16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.243337 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.243391 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.243407 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.243425 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.243474 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:03Z","lastTransitionTime":"2026-01-30T16:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.248986 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a4e02635c5d1398f5642a4f78ab59f957e82d7b595048b900f7f57998072b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca104420859d47e9447dba6c7629515ac5d94c76d90029f8b02677814c6cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.260169 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe68da7f075332bd889ca6351f6c58e8a4e2cb0a14e68dd9a882f15c348ec5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,
\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.273570 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"moun
tPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.288040 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a59aa188d91c356dc946cef975ab964cb48ed5ad8a7b1b1a97bbdf4f6a463f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.299243 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.310095 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.324664 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7682a4940bfaf61053fc403898dd912a57e1152b7f646f65d8931bbe5a5aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.341255 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4777970a-81c4-4412-a06b-641a8343a749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d50d143c711318276147641c1904b0f9c28209e0ead64b6bc2578d0914e821a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-69v8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.345032 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.345065 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:03 crc 
kubenswrapper[4712]: I0130 16:55:03.345075 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.345089 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.345100 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:03Z","lastTransitionTime":"2026-01-30T16:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.352929 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lpb6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abacbc6e-6514-4db6-80b5-23570952c86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnj8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnj8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lpb6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.447760 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.447832 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.447843 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.447859 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.447870 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:03Z","lastTransitionTime":"2026-01-30T16:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.550084 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.550109 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.550118 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.550130 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.550138 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:03Z","lastTransitionTime":"2026-01-30T16:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.652287 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.652314 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.652325 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.652338 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.652381 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:03Z","lastTransitionTime":"2026-01-30T16:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.754045 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 10:28:13.427408224 +0000 UTC
Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.755345 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.755374 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.755383 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.755396 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.755406 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:03Z","lastTransitionTime":"2026-01-30T16:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.813526 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e483f7201537775910f8d1a99625c35050e43bebb2947e7c7d076cccd2a3fe1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.836444 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93651476-fd00-4a9e-934a-73537f1d103e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517
\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7b65e1d5a9afb9c07cc442b53fde79fd228842e1dd4920247f5584005c49fa22\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b65e1d5a9afb9c07cc442b53fde79fd228842e1dd4920247f5584005c49fa22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:54:57Z\\\",\\\"message\\\":\\\"rk controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:57Z is after 2025-08-24T17:21:41Z]\\\\nI0130 16:54:57.622996 6091 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0130 16:54:57.622937 6091 services_controller.go:452] Built service openshift-machine-api/machine-api-controllers per-node LB for network=default: []services.LB{}\\\\nI0130 16:54:57.623013 6091 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0130 16:54:57.623019 6091 services_controller.go:453] 
Buil\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-228xs_openshift-ovn-kubernetes(93651476-fd00-4a9e-934a-73537f1d103e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiv
eReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-228xs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.857147 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.857177 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.857188 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.857203 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.857214 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:03Z","lastTransitionTime":"2026-01-30T16:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.859764 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dd639c-2367-490e-801b-9042a158c962\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b98b043cb83155f1304d4e3e7afaa43921f5f9bd288e2d50222df3c549a97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5aceb9806198f5a718418c0834056dadafaec6a0d93a3d4253f3d18d3e7f6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a5b6fd14c3399f1335ab849702db5ae6ab412a259215225a3d1b45afdf29c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7424ed8c0152d8e7a206225cef7124928cbd700835cfa9f9c4e523bfe900f3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49662995d818377576fc281d55d5d944d785d5a0a11d45e86e9f08ec7fa2bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T16:54:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.873080 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f1f355d-892f-40a3-8fae-090e9e69b585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c2d61cc827e2a3a1ff4d597f89532719a52fe5fd353eb2ec9905533cfb8897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0930b443376f024df5c05a35328264b347e4b58fef7c64e9bb50b084c461e19f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b9e52241ae256ff758f4cf4bdd6a588a6d37cdc5185c372a9942d9d2c9e8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.890950 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k255f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2482315f-1b5d-4a27-a9d9-97f4780c1869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e7482e63cefa1e8dfa0adec1061ace4e1a482a646844ba1d77038309e904cf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh9k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase
\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k255f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.902768 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea67b02c-fc08-4a69-8c7f-c8da661a12ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea2b37d1f833ec9e1eba6b034b7178206d4c32d383bdaf40b270c3f7219de4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://529cdf849b64d965fb2b3276a2e033621e7695668b6b05041603ad93659a1c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-ap
i-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4f9lf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.915945 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ab27748-3507-429f-888b-b45b4d17b014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d979ececdc18afdb1f364feaa3244db42fef8e523e58fc43b94b1775f9530e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"\\\\nI0130 16:54:44.564860 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:54:44.564709 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564904 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564720 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:54:44.564916 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:54:44.565055 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:54:44.565065 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:54:44.564730 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2769129766/tls.crt::/tmp/serving-cert-2769129766/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769792078\\\\\\\\\\\\\\\" (2026-01-30 16:54:37 +0000 UTC to 2026-03-01 16:54:38 +0000 UTC (now=2026-01-30 16:54:44.533361706 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565458 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792084\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792084\\\\\\\\\\\\\\\" (2026-01-30 15:54:43 +0000 UTC to 2027-01-30 15:54:43 +0000 UTC (now=2026-01-30 16:54:44.565435832 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565474 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:54:44.565499 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:54:44.565514 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.926966 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.937872 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe68da7f075332bd889ca6351f6c58e8a4e2cb0a14e68dd9a882f15c348ec5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.949251 4712 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.958710 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.958748 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.958756 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.958774 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.958783 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:03Z","lastTransitionTime":"2026-01-30T16:55:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.964541 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a4e02635c5d1398f5642a4f78ab59f957e82d7b595048b900f7f57998072b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca104420859d47e9447dba6c7629515ac5d94c76d90029f8b02677814c6cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0
-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.977000 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.989651 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:03 crc kubenswrapper[4712]: I0130 16:55:03.998367 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7682a4940bfaf61053fc403898dd912a57e1152b7f646f65d8931bbe5a5aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.012578 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4777970a-81c4-4412-a06b-641a8343a749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d50d143c711318276147641c1904b0f9c28209e0ead64b6bc2578d0914e821a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-69v8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:04Z is after 
2025-08-24T17:21:41Z" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.022247 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lpb6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abacbc6e-6514-4db6-80b5-23570952c86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnj8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnj8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lpb6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:04Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.033689 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a59aa188d91c356dc946cef975ab964cb48ed5ad8a7b1b1a97bbdf4f6a463f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:04Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.065366 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.065400 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.065411 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.065425 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.065436 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:04Z","lastTransitionTime":"2026-01-30T16:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.167568 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.167603 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.167617 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.167630 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.167640 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:04Z","lastTransitionTime":"2026-01-30T16:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.270260 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.270307 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.270320 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.270338 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.270351 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:04Z","lastTransitionTime":"2026-01-30T16:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.372495 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.372536 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.372547 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.372561 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.372574 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:04Z","lastTransitionTime":"2026-01-30T16:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.474290 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.474328 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.474340 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.474356 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.474366 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:04Z","lastTransitionTime":"2026-01-30T16:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.576615 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.576646 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.576654 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.576665 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.576674 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:04Z","lastTransitionTime":"2026-01-30T16:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.678601 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.678647 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.678657 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.678671 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.678683 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:04Z","lastTransitionTime":"2026-01-30T16:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.754551 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 10:22:27.736698219 +0000 UTC Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.780604 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.780651 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.780663 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.780684 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.780698 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:04Z","lastTransitionTime":"2026-01-30T16:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.799125 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.799178 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:55:04 crc kubenswrapper[4712]: E0130 16:55:04.799245 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:55:04 crc kubenswrapper[4712]: E0130 16:55:04.799333 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.799435 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.799638 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:55:04 crc kubenswrapper[4712]: E0130 16:55:04.799885 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f" Jan 30 16:55:04 crc kubenswrapper[4712]: E0130 16:55:04.799911 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.910334 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.910431 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.910443 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.910466 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:04 crc kubenswrapper[4712]: I0130 16:55:04.910482 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:04Z","lastTransitionTime":"2026-01-30T16:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.013449 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.013492 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.013502 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.013517 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.013528 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:05Z","lastTransitionTime":"2026-01-30T16:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
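
[Annotation] The NodeNotReady condition and the "Error syncing pod" entries repeated above are all driven by one fact: the CNI configuration directory is empty. A minimal Go sketch of the presence check the message implies, using the directory named in the log (the glob patterns are an assumption about which file extensions count):

package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// Directory taken verbatim from the log message.
	confDir := "/etc/kubernetes/cni/net.d"
	var found []string
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		m, _ := filepath.Glob(filepath.Join(confDir, pat))
		found = append(found, m...)
	}
	if len(found) == 0 {
		// Mirrors the condition message recorded by setters.go above.
		fmt.Printf("no CNI configuration file in %s/. Has your network provider started?\n", confDir)
	}
}
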
Has your network provider started?"} Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.116573 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.116638 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.116661 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.116688 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.116712 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:05Z","lastTransitionTime":"2026-01-30T16:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.219231 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.219267 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.219280 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.219296 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.219306 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:05Z","lastTransitionTime":"2026-01-30T16:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.321275 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.321318 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.321326 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.321339 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.321348 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:05Z","lastTransitionTime":"2026-01-30T16:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.423699 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.423742 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.423750 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.423764 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.423773 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:05Z","lastTransitionTime":"2026-01-30T16:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.526061 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.526116 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.526133 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.526154 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.526172 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:05Z","lastTransitionTime":"2026-01-30T16:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.628726 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.628751 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.628759 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.628771 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.628779 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:05Z","lastTransitionTime":"2026-01-30T16:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.730977 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.731028 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.731047 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.731071 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.731098 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:05Z","lastTransitionTime":"2026-01-30T16:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.754834 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 21:42:43.122144822 +0000 UTC Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.834303 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.834348 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.834360 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.834378 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.834392 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:05Z","lastTransitionTime":"2026-01-30T16:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
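
[Annotation] The certificate_manager lines above report a different rotation deadline on each evaluation (2026-01-07, then 2025-12-28) while the expiry stays fixed at 2026-02-24, which is consistent with a deadline re-drawn at random from late in the certificate's lifetime. A hedged Go sketch of that idea; the issue time and jitter fraction are placeholders, not the kubelet's actual internals:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	// Expiry copied from the log; the issue time is an assumed placeholder.
	notBefore := time.Date(2025, 11, 26, 5, 53, 3, 0, time.UTC)
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	lifetime := notAfter.Sub(notBefore)
	// Re-roll a deadline somewhere in the later part of the lifetime;
	// the 70-90% band here is an assumption for illustration only.
	frac := 0.7 + 0.2*rand.Float64()
	deadline := notBefore.Add(time.Duration(float64(lifetime) * frac))
	fmt.Println("rotation deadline is", deadline.UTC())
}
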
Has your network provider started?"} Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.937602 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.937648 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.937660 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.937675 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:05 crc kubenswrapper[4712]: I0130 16:55:05.937688 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:05Z","lastTransitionTime":"2026-01-30T16:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.040887 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.040935 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.040949 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.040971 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.040986 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:06Z","lastTransitionTime":"2026-01-30T16:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.143457 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.143520 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.143547 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.143578 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.143601 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:06Z","lastTransitionTime":"2026-01-30T16:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.246136 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.246342 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.246374 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.246405 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.246427 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:06Z","lastTransitionTime":"2026-01-30T16:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.326371 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/abacbc6e-6514-4db6-80b5-23570952c86f-metrics-certs\") pod \"network-metrics-daemon-lpb6h\" (UID: \"abacbc6e-6514-4db6-80b5-23570952c86f\") " pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:55:06 crc kubenswrapper[4712]: E0130 16:55:06.326546 4712 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:55:06 crc kubenswrapper[4712]: E0130 16:55:06.326613 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/abacbc6e-6514-4db6-80b5-23570952c86f-metrics-certs podName:abacbc6e-6514-4db6-80b5-23570952c86f nodeName:}" failed. No retries permitted until 2026-01-30 16:55:14.326592552 +0000 UTC m=+51.233602051 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/abacbc6e-6514-4db6-80b5-23570952c86f-metrics-certs") pod "network-metrics-daemon-lpb6h" (UID: "abacbc6e-6514-4db6-80b5-23570952c86f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.348996 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.349038 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.349047 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.349062 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.349071 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:06Z","lastTransitionTime":"2026-01-30T16:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.451891 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.451937 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.451947 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.451967 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.451982 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:06Z","lastTransitionTime":"2026-01-30T16:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
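
[Annotation] The MountVolume failure above schedules its next attempt with "durationBeforeRetry 8s" and a "No retries permitted until" timestamp, the shape of a capped exponential backoff. A minimal Go sketch of such a schedule; the initial delay and cap are assumptions, since the log shows only the 8s step:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed starting delay and cap; doubling reaches the logged 8s step.
	backoff := 500 * time.Millisecond
	const maxBackoff = 2 * time.Minute
	for attempt := 1; attempt <= 10; attempt++ {
		fmt.Printf("attempt %d: no retries permitted for %s\n", attempt, backoff)
		backoff *= 2
		if backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}
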
Has your network provider started?"} Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.554578 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.554627 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.554638 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.554662 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.554675 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:06Z","lastTransitionTime":"2026-01-30T16:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.657450 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.657502 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.657517 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.657537 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.657551 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:06Z","lastTransitionTime":"2026-01-30T16:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.755509 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 07:46:35.782983862 +0000 UTC Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.760743 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.760785 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.760811 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.760826 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.760836 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:06Z","lastTransitionTime":"2026-01-30T16:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.799083 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.799082 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.799132 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.799278 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:55:06 crc kubenswrapper[4712]: E0130 16:55:06.799388 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f" Jan 30 16:55:06 crc kubenswrapper[4712]: E0130 16:55:06.799301 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:55:06 crc kubenswrapper[4712]: E0130 16:55:06.799511 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:55:06 crc kubenswrapper[4712]: E0130 16:55:06.799751 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.863782 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.863878 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.863890 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.863907 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.863918 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:06Z","lastTransitionTime":"2026-01-30T16:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.966378 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.966453 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.966469 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.966489 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:06 crc kubenswrapper[4712]: I0130 16:55:06.966503 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:06Z","lastTransitionTime":"2026-01-30T16:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.069011 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.069064 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.069074 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.069087 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.069095 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:07Z","lastTransitionTime":"2026-01-30T16:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.171277 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.171339 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.171356 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.171380 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.171399 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:07Z","lastTransitionTime":"2026-01-30T16:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.274332 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.274436 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.274457 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.274519 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.274538 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:07Z","lastTransitionTime":"2026-01-30T16:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.378214 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.378290 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.378316 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.378346 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.378370 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:07Z","lastTransitionTime":"2026-01-30T16:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.481735 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.482097 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.482113 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.482130 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.482141 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:07Z","lastTransitionTime":"2026-01-30T16:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.584413 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.584448 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.584457 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.584471 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.584481 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:07Z","lastTransitionTime":"2026-01-30T16:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.686579 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.686637 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.686649 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.686667 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.686680 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:07Z","lastTransitionTime":"2026-01-30T16:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.755597 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 16:41:20.467076094 +0000 UTC Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.790080 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.790131 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.790146 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.790163 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.790173 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:07Z","lastTransitionTime":"2026-01-30T16:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.895594 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.895646 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.895667 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.895689 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.895705 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:07Z","lastTransitionTime":"2026-01-30T16:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.997939 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.997974 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.997983 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.997995 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:07 crc kubenswrapper[4712]: I0130 16:55:07.998004 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:07Z","lastTransitionTime":"2026-01-30T16:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.100754 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.100808 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.100817 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.100831 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.100840 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:08Z","lastTransitionTime":"2026-01-30T16:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.204988 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.205043 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.205060 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.205086 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.205102 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:08Z","lastTransitionTime":"2026-01-30T16:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.307428 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.307470 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.307484 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.307500 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.307511 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:08Z","lastTransitionTime":"2026-01-30T16:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.410671 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.410719 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.410734 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.410757 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.410773 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:08Z","lastTransitionTime":"2026-01-30T16:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.514123 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.514213 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.514247 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.514600 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.514660 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:08Z","lastTransitionTime":"2026-01-30T16:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.617734 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.618064 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.618105 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.618134 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.618154 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:08Z","lastTransitionTime":"2026-01-30T16:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.719851 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.719886 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.719897 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.719913 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.719927 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:08Z","lastTransitionTime":"2026-01-30T16:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.755983 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 06:06:41.457556703 +0000 UTC Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.799464 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.799494 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.799539 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.799550 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:55:08 crc kubenswrapper[4712]: E0130 16:55:08.799622 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f" Jan 30 16:55:08 crc kubenswrapper[4712]: E0130 16:55:08.799740 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:55:08 crc kubenswrapper[4712]: E0130 16:55:08.799879 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:55:08 crc kubenswrapper[4712]: E0130 16:55:08.799963 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.822215 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.822254 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.822268 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.822282 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.822292 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:08Z","lastTransitionTime":"2026-01-30T16:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.925080 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.925136 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.925152 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.925175 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:08 crc kubenswrapper[4712]: I0130 16:55:08.925194 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:08Z","lastTransitionTime":"2026-01-30T16:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.027663 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.027696 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.027706 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.027721 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.027731 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:09Z","lastTransitionTime":"2026-01-30T16:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.130284 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.130341 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.130357 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.130379 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.130395 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:09Z","lastTransitionTime":"2026-01-30T16:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.233551 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.233619 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.233648 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.233678 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.233701 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:09Z","lastTransitionTime":"2026-01-30T16:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.335879 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.335923 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.335932 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.335946 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.335955 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:09Z","lastTransitionTime":"2026-01-30T16:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.438370 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.438577 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.438676 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.438757 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.438942 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:09Z","lastTransitionTime":"2026-01-30T16:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.541154 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.541181 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.541191 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.541203 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.541211 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:09Z","lastTransitionTime":"2026-01-30T16:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.648555 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.648617 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.648638 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.648658 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.648670 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:09Z","lastTransitionTime":"2026-01-30T16:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.752625 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.752683 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.752701 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.752723 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.752742 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:09Z","lastTransitionTime":"2026-01-30T16:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.757096 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 23:59:43.914041007 +0000 UTC Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.855449 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.855511 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.855533 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.855562 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.855584 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:09Z","lastTransitionTime":"2026-01-30T16:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.959061 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.959108 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.959125 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.959147 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:09 crc kubenswrapper[4712]: I0130 16:55:09.959164 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:09Z","lastTransitionTime":"2026-01-30T16:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.061984 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.062049 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.062063 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.062082 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.062093 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:10Z","lastTransitionTime":"2026-01-30T16:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.164320 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.164392 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.164412 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.164435 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.164452 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:10Z","lastTransitionTime":"2026-01-30T16:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.267386 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.267457 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.267481 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.267531 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.267556 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:10Z","lastTransitionTime":"2026-01-30T16:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.370028 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.370076 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.370087 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.370106 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.370118 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:10Z","lastTransitionTime":"2026-01-30T16:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.444272 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.444341 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.444365 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.444395 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.444419 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:10Z","lastTransitionTime":"2026-01-30T16:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:10 crc kubenswrapper[4712]: E0130 16:55:10.463823 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:10Z is after 
2025-08-24T17:21:41Z" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.467212 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.467260 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.467275 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.467296 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.467309 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:10Z","lastTransitionTime":"2026-01-30T16:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:10 crc kubenswrapper[4712]: E0130 16:55:10.478469 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:10Z is after 
2025-08-24T17:21:41Z" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.481789 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.481848 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.481863 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.481880 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.481896 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:10Z","lastTransitionTime":"2026-01-30T16:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:10 crc kubenswrapper[4712]: E0130 16:55:10.493011 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:10Z is after 
2025-08-24T17:21:41Z" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.496266 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.496381 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.496441 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.496522 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.496580 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:10Z","lastTransitionTime":"2026-01-30T16:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:10 crc kubenswrapper[4712]: E0130 16:55:10.508605 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:10Z is after 
2025-08-24T17:21:41Z" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.511568 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.511683 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.511756 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.511895 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.511958 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:10Z","lastTransitionTime":"2026-01-30T16:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:10 crc kubenswrapper[4712]: E0130 16:55:10.522941 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:10Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:10Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:10Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:10Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:10Z is after 
2025-08-24T17:21:41Z" Jan 30 16:55:10 crc kubenswrapper[4712]: E0130 16:55:10.523280 4712 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.524769 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.524832 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.524847 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.524864 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.524875 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:10Z","lastTransitionTime":"2026-01-30T16:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.627353 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.627386 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.627395 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.627408 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.627417 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:10Z","lastTransitionTime":"2026-01-30T16:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.758265 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 07:33:02.41530308 +0000 UTC
Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.798859 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.798881 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h"
Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.798989 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:55:10 crc kubenswrapper[4712]: E0130 16:55:10.799148 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.799210 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:55:10 crc kubenswrapper[4712]: E0130 16:55:10.799308 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 16:55:10 crc kubenswrapper[4712]: E0130 16:55:10.799981 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f"
Jan 30 16:55:10 crc kubenswrapper[4712]: E0130 16:55:10.800069 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.800443 4712 scope.go:117] "RemoveContainer" containerID="7b65e1d5a9afb9c07cc442b53fde79fd228842e1dd4920247f5584005c49fa22"
Has your network provider started?"} Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.935684 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.935717 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.935726 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.935739 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:10 crc kubenswrapper[4712]: I0130 16:55:10.935749 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:10Z","lastTransitionTime":"2026-01-30T16:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.037628 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.037670 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.037681 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.037693 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.037702 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:11Z","lastTransitionTime":"2026-01-30T16:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.154336 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.154386 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.154403 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.154428 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.154446 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:11Z","lastTransitionTime":"2026-01-30T16:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.161279 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-228xs_93651476-fd00-4a9e-934a-73537f1d103e/ovnkube-controller/1.log" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.164731 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" event={"ID":"93651476-fd00-4a9e-934a-73537f1d103e","Type":"ContainerStarted","Data":"875be8ec13c88cdb1b731eb4d86874411fd465a907a96b68697829a8ea15427e"} Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.176929 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.194023 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a4e02635c5d1398f5642a4f78ab59f957e82d7b595048b900f7f57998072b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca104420859d47e9447dba6c7629515ac5d94c76d90029f8b02677814c6cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:11Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.219443 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe68da7f075332bd889ca6351f6c58e8a4e2cb0a14e68dd9a882f15c348ec5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:11Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.236845 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-dae
mon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:11Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.257188 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.257244 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.257261 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.257280 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.257293 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:11Z","lastTransitionTime":"2026-01-30T16:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.257724 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4777970a-81c4-4412-a06b-641a8343a749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d50d143c711318276147641c1904b0f9c28209e0ead64b6bc2578d0914e821a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-69v8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:11Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.268859 4712 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/network-metrics-daemon-lpb6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abacbc6e-6514-4db6-80b5-23570952c86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnj8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnj8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lpb6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:11Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.280766 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a59aa188d91c356dc946cef975ab964cb48ed5ad8a7b1b1a97bbdf4f6a463f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:11Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.292424 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:11Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.305078 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:11Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.316092 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7682a4940bfaf61053fc403898dd912a57e1152b7f646f65d8931bbe5a5aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:11Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.337605 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dd639c-2367-490e-801b-9042a158c962\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b98b043cb83155f1304d4e3e7afaa43921f5f9bd288e2d50222df3c549a97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5aceb9806198f5a718418c0834056dadafaec6a0d93a3d4253f3d18d3e7f6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a5b6fd14c3399f1335ab849702db5ae6ab412a259215225a3d1b45afdf29c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"re
startCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7424ed8c0152d8e7a206225cef7124928cbd700835cfa9f9c4e523bfe900f3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49662995d818377576fc281d55d5d944d785d5a0a11d45e86e9f08ec7fa2bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state
\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:11Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.351055 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f1f355d-892f-40a3-8fae-090e9e69b585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c2d61cc827e2a3a1ff4d597f89532719a52fe5fd353eb2ec9905533cfb8897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0930b443376f024df5c05a35328264b347e4b58fef7c64e9bb50b084c461e19f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b9e52241ae256ff758f4cf4bdd6a588a6d37cdc5185c372a9942d9d2c9e8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:11Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.359210 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.359238 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.359247 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.359268 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.359277 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:11Z","lastTransitionTime":"2026-01-30T16:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.367417 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e483f7201537775910f8d1a99625c35050e43bebb2947e7c7d076cccd2a3fe1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:11Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.386587 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93651476-fd00-4a9e-934a-73537f1d103e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://875be8ec13c88cdb1b731eb4d86874411fd465a907a96b68697829a8ea15427e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b65e1d5a9afb9c07cc442b53fde79fd228842e1dd4920247f5584005c49fa22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:54:57Z\\\",\\\"message\\\":\\\"rk controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:57Z is after 2025-08-24T17:21:41Z]\\\\nI0130 16:54:57.622996 6091 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0130 16:54:57.622937 6091 services_controller.go:452] Built service openshift-machine-api/machine-api-controllers per-node LB for network=default: []services.LB{}\\\\nI0130 16:54:57.623013 6091 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0130 16:54:57.623019 6091 services_controller.go:453] 
Buil\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"c
ontainerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-228xs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:11Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.398595 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ab27748-3507-429f-888b-b45b4d17b014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d979ececdc18afdb1f364feaa3244db42fef8e523e58fc43b94b1775f9530e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"\\\\nI0130 16:54:44.564860 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:54:44.564709 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564904 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564720 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:54:44.564916 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:54:44.565055 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:54:44.565065 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:54:44.564730 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2769129766/tls.crt::/tmp/serving-cert-2769129766/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769792078\\\\\\\\\\\\\\\" (2026-01-30 16:54:37 +0000 UTC to 2026-03-01 16:54:38 +0000 UTC (now=2026-01-30 16:54:44.533361706 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565458 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792084\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792084\\\\\\\\\\\\\\\" (2026-01-30 15:54:43 +0000 UTC to 2027-01-30 15:54:43 +0000 UTC (now=2026-01-30 16:54:44.565435832 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565474 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:54:44.565499 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:54:44.565514 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:11Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.412139 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:11Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.422683 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k255f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2482315f-1b5d-4a27-a9d9-97f4780c1869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e7482e63cefa1e8dfa0adec1061ace4e1a482a646844ba1d77038309e904cf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh9k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k255f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:11Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.432526 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea67b02c-fc08-4a69-8c7f-c8da661a12ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea2b37d1f833ec9e1eba6b034b7178206d4c32d383bdaf40b270c3f7219de4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://529cdf849b64d965fb2b3276a2e033621e7695668b6b05041603ad93659a1c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4f9lf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:11Z is after 2025-08-24T17:21:41Z" Jan 30 
16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.461330 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.461379 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.461391 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.461410 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.461423 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:11Z","lastTransitionTime":"2026-01-30T16:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.563817 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.563855 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.563865 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.563878 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.563887 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:11Z","lastTransitionTime":"2026-01-30T16:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.666141 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.666177 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.666184 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.666199 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.666209 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:11Z","lastTransitionTime":"2026-01-30T16:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.759081 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 22:19:43.972467627 +0000 UTC Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.768505 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.768550 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.768559 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.768572 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.768581 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:11Z","lastTransitionTime":"2026-01-30T16:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.870916 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.870970 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.870989 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.871012 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.871026 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:11Z","lastTransitionTime":"2026-01-30T16:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.973483 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.973525 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.973537 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.973552 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:11 crc kubenswrapper[4712]: I0130 16:55:11.973565 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:11Z","lastTransitionTime":"2026-01-30T16:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.077432 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.077495 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.077520 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.077548 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.077570 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:12Z","lastTransitionTime":"2026-01-30T16:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.170259 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-228xs_93651476-fd00-4a9e-934a-73537f1d103e/ovnkube-controller/2.log" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.170864 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-228xs_93651476-fd00-4a9e-934a-73537f1d103e/ovnkube-controller/1.log" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.174138 4712 generic.go:334] "Generic (PLEG): container finished" podID="93651476-fd00-4a9e-934a-73537f1d103e" containerID="875be8ec13c88cdb1b731eb4d86874411fd465a907a96b68697829a8ea15427e" exitCode=1 Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.174217 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" event={"ID":"93651476-fd00-4a9e-934a-73537f1d103e","Type":"ContainerDied","Data":"875be8ec13c88cdb1b731eb4d86874411fd465a907a96b68697829a8ea15427e"} Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.174285 4712 scope.go:117] "RemoveContainer" containerID="7b65e1d5a9afb9c07cc442b53fde79fd228842e1dd4920247f5584005c49fa22" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.174989 4712 scope.go:117] "RemoveContainer" containerID="875be8ec13c88cdb1b731eb4d86874411fd465a907a96b68697829a8ea15427e" Jan 30 16:55:12 crc kubenswrapper[4712]: E0130 16:55:12.175170 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-228xs_openshift-ovn-kubernetes(93651476-fd00-4a9e-934a-73537f1d103e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" podUID="93651476-fd00-4a9e-934a-73537f1d103e" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.184732 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.184759 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.184767 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.184779 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.184788 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:12Z","lastTransitionTime":"2026-01-30T16:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.192482 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e483f7201537775910f8d1a99625c35050e43bebb2947e7c7d076cccd2a3fe1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.212511 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93651476-fd00-4a9e-934a-73537f1d103e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://875be8ec13c88cdb1b731eb4d86874411fd465a907a96b68697829a8ea15427e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7b65e1d5a9afb9c07cc442b53fde79fd228842e1dd4920247f5584005c49fa22\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:54:57Z\\\",\\\"message\\\":\\\"rk controller: failed to start default network controller: unable to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:54:57Z is after 2025-08-24T17:21:41Z]\\\\nI0130 16:54:57.622996 6091 base_network_controller_pods.go:477] [default/openshift-network-diagnostics/network-check-target-xd92c] creating logical port openshift-network-diagnostics_network-check-target-xd92c for pod on switch crc\\\\nI0130 16:54:57.622937 6091 services_controller.go:452] Built service openshift-machine-api/machine-api-controllers per-node LB for network=default: []services.LB{}\\\\nI0130 16:54:57.623013 6091 ovn.go:134] Ensuring zone local for Pod openshift-network-operator/network-operator-58b4c7f79c-55gtf in node crc\\\\nI0130 16:54:57.623019 6091 services_controller.go:453] Buil\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:56Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875be8ec13c88cdb1b731eb4d86874411fd465a907a96b68697829a8ea15427e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:55:11Z\\\",\\\"message\\\":\\\"enshift-kube-controller-manager/kube-controller-manager-crc openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-kube-apiserver/kube-apiserver-crc]\\\\nI0130 16:55:11.607023 6304 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0130 16:55:11.607046 6304 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 16:55:11.607061 6304 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 16:55:11.606886 6304 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler-operator/metrics]} name:Service_openshift-kube-scheduler-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} 
protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.233:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1dc899db-4498-4b7a-8437-861940b962e7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 16:55:11.607081 6304 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-228xs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.232583 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dd639c-2367-490e-801b-9042a158c962\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b98b043cb83155f1304d4e3e7afaa43921f5f9bd288e2d50222df3c549a97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5aceb9806198f5a718418c0834056dadafaec6a0d93a3d4253f3d18d3e7f6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a5b6fd14c3399f1335ab849702db5ae6ab412a259215225a3d1b45afdf29c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7424ed8c0152d8e7a206225cef7124928cbd700
835cfa9f9c4e523bfe900f3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49662995d818377576fc281d55d5d944d785d5a0a11d45e86e9f08ec7fa2bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.247455 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f1f355d-892f-40a3-8fae-090e9e69b585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c2d61cc827e2a3a1ff4d597f89532719a52fe5fd353eb2ec9905533cfb8897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0930b443376f024df5c05a35328264b347e4b58fef7c64e9bb50b084c461e19f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b9e52241ae256ff758f4cf4bdd6a588a6d37cdc5185c372a9942d9d2c9e8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.259735 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k255f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2482315f-1b5d-4a27-a9d9-97f4780c1869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e7482e63cefa1e8dfa0adec1061ace4e1a482a646844ba1d77038309e904cf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh9k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k255f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.271925 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea67b02c-fc08-4a69-8c7f-c8da661a12ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea2b37d1f833ec9e1eba6b034b7178206d4c32d383bdaf40b270c3f7219de4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://529cdf849b64d965fb2b3276a2e033621e7695668b6b05041603ad93659a1c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4f9lf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:12Z is after 2025-08-24T17:21:41Z" Jan 30 
16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.288292 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.288349 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.288365 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.288386 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.288400 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:12Z","lastTransitionTime":"2026-01-30T16:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.289101 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ab27748-3507-429f-888b-b45b4d17b014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d979ececdc18afdb1f364feaa3244db42fef8e523e58fc43b94b1775f9530e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"\\\\nI0130 16:54:44.564860 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:54:44.564709 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564904 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564720 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:54:44.564916 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:54:44.565055 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:54:44.565065 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:54:44.564730 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2769129766/tls.crt::/tmp/serving-cert-2769129766/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769792078\\\\\\\\\\\\\\\" (2026-01-30 16:54:37 +0000 UTC to 2026-03-01 16:54:38 +0000 UTC (now=2026-01-30 16:54:44.533361706 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565458 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792084\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792084\\\\\\\\\\\\\\\" (2026-01-30 15:54:43 +0000 UTC to 2027-01-30 15:54:43 +0000 UTC (now=2026-01-30 16:54:44.565435832 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565474 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:54:44.565499 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:54:44.565514 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.328509 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.338786 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe68da7f075332bd889ca6351f6c58e8a4e2cb0a14e68dd9a882f15c348ec5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.350301 4712 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.364746 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a4e02635c5d1398f5642a4f78ab59f957e82d7b595048b900f7f57998072b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca104420859d47e9447dba6c7629515ac5d94c76d90029f8b02677814c6cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-30T16:55:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.377454 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.389001 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.390600 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.390657 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.390674 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.390697 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.390714 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:12Z","lastTransitionTime":"2026-01-30T16:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.399530 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7682a4940bfaf61053fc403898dd912a57e1152b7f646f65d8931bbe5a5aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.415263 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4777970a-81c4-4412-a06b-641a8343a749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d50d143c711318276147641c1904b0f9c28209e0ead64b6bc2578d0914e821a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-69v8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.428053 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lpb6h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abacbc6e-6514-4db6-80b5-23570952c86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnj8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnj8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lpb6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.441877 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a59aa188d91c356dc946cef975ab964cb48ed5ad8a7b1b1a97bbdf4f6a463f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.493421 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.493540 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.493561 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.493638 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.493660 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:12Z","lastTransitionTime":"2026-01-30T16:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.596154 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.596198 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.596212 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.596240 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.596256 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:12Z","lastTransitionTime":"2026-01-30T16:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.705533 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.705597 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.705620 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.705667 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.705691 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:12Z","lastTransitionTime":"2026-01-30T16:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.759860 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 17:53:44.423891403 +0000 UTC Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.799172 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.799171 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.799370 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:55:12 crc kubenswrapper[4712]: E0130 16:55:12.799378 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.799411 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:55:12 crc kubenswrapper[4712]: E0130 16:55:12.799603 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:55:12 crc kubenswrapper[4712]: E0130 16:55:12.799692 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:55:12 crc kubenswrapper[4712]: E0130 16:55:12.799948 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.812175 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.812284 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.812305 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.812372 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.812404 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:12Z","lastTransitionTime":"2026-01-30T16:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.914859 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.914898 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.914909 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.914924 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:12 crc kubenswrapper[4712]: I0130 16:55:12.914935 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:12Z","lastTransitionTime":"2026-01-30T16:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.016870 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.016912 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.016921 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.016935 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.016946 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:13Z","lastTransitionTime":"2026-01-30T16:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.119103 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.119137 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.119146 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.119158 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.119166 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:13Z","lastTransitionTime":"2026-01-30T16:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.179476 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-228xs_93651476-fd00-4a9e-934a-73537f1d103e/ovnkube-controller/2.log" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.182931 4712 scope.go:117] "RemoveContainer" containerID="875be8ec13c88cdb1b731eb4d86874411fd465a907a96b68697829a8ea15427e" Jan 30 16:55:13 crc kubenswrapper[4712]: E0130 16:55:13.183102 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-228xs_openshift-ovn-kubernetes(93651476-fd00-4a9e-934a-73537f1d103e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" podUID="93651476-fd00-4a9e-934a-73537f1d103e" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.198104 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ab27748-3507-429f-888b-b45b4d17b014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d979ececdc18afdb1f364feaa3244db42fef8e523e58fc43b94b1775f9530e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"\\\\nI0130 16:54:44.564860 1 shared_informer.go:313] Waiting for caches to sync for 
client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:54:44.564709 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564904 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564720 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:54:44.564916 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:54:44.565055 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:54:44.565065 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:54:44.564730 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2769129766/tls.crt::/tmp/serving-cert-2769129766/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769792078\\\\\\\\\\\\\\\" (2026-01-30 16:54:37 +0000 UTC to 2026-03-01 16:54:38 +0000 UTC (now=2026-01-30 16:54:44.533361706 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565458 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792084\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792084\\\\\\\\\\\\\\\" (2026-01-30 15:54:43 +0000 UTC to 2027-01-30 15:54:43 +0000 UTC (now=2026-01-30 16:54:44.565435832 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565474 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:54:44.565499 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:54:44.565514 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.211496 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.222147 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.222217 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.222232 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.222250 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.222264 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:13Z","lastTransitionTime":"2026-01-30T16:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.223740 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k255f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2482315f-1b5d-4a27-a9d9-97f4780c1869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e7482e63cefa1e8dfa0adec1061ace4e1a482a646844ba1d77038309e904cf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh9k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k255f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.237727 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea67b02c-fc08-4a69-8c7f-c8da661a12ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea2b37d1f833ec9e1eba6b034b7178206d4c32d383bdaf40b270c3f7219de4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://529cdf849b64d965fb2b3276a2e033621e7695668b6b05041603ad93659a1c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4f9lf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:13Z is after 2025-08-24T17:21:41Z" Jan 30 
16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.251916 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a4e02635c5d1398f5642a4f78ab59f957e82d7b595048b900f7f57998072b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca104420859d47e9447dba6c7629515ac5d94c76d90029f8b02677814c6cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.265366 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe68da7f075332bd889ca6351f6c58e8a4e2cb0a14e68dd9a882f15c348ec5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.280250 4712 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.291439 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lpb6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abacbc6e-6514-4db6-80b5-23570952c86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnj8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnj8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lpb6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-30T16:55:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.305815 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a59aa188d91c356dc946cef975ab964cb48ed5ad8a7b1b1a97bbdf4f6a463f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.319510 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.325935 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.325975 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.325985 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.326002 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.326014 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:13Z","lastTransitionTime":"2026-01-30T16:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.332405 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.345088 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7682a4940bfaf61053fc403898dd912a57e1152b7f646f65d8931bbe5a5aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.363034 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4777970a-81c4-4412-a06b-641a8343a749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d50d143c711318276147641c1904b0f9c28209e0ead64b6bc2578d0914e821a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-69v8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.381974 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dd639c-2367-490e-801b-9042a158c962\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b98b043cb83155f1304d4e3e7afaa43921f5f9bd288e2d50222df3c549a97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5aceb9806198f5a718418c0834056dadafaec6a0d93a3d4253f3d18d3e7f6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a5b6fd14c3399f1335ab849702db5ae6ab412a259215225a3d1b45afdf29c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7424ed8c0152d8e7a206225cef7124928cbd700
835cfa9f9c4e523bfe900f3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49662995d818377576fc281d55d5d944d785d5a0a11d45e86e9f08ec7fa2bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.396286 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f1f355d-892f-40a3-8fae-090e9e69b585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c2d61cc827e2a3a1ff4d597f89532719a52fe5fd353eb2ec9905533cfb8897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0930b443376f024df5c05a35328264b347e4b58fef7c64e9bb50b084c461e19f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b9e52241ae256ff758f4cf4bdd6a588a6d37cdc5185c372a9942d9d2c9e8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.408529 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e483f7201537775910f8d1a99625c35050e43bebb2947e7c7d076cccd2a3fe1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.427002 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93651476-fd00-4a9e-934a-73537f1d103e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://875be8ec13c88cdb1b731eb4d86874411fd465a9
07a96b68697829a8ea15427e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875be8ec13c88cdb1b731eb4d86874411fd465a907a96b68697829a8ea15427e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:55:11Z\\\",\\\"message\\\":\\\"enshift-kube-controller-manager/kube-controller-manager-crc openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-kube-apiserver/kube-apiserver-crc]\\\\nI0130 16:55:11.607023 6304 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0130 16:55:11.607046 6304 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 16:55:11.607061 6304 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 16:55:11.606886 6304 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler-operator/metrics]} name:Service_openshift-kube-scheduler-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.233:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1dc899db-4498-4b7a-8437-861940b962e7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 16:55:11.607081 6304 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:55:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-228xs_openshift-ovn-kubernetes(93651476-fd00-4a9e-934a-73537f1d103e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-228xs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.428545 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.428582 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.428591 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.428606 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.428615 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:13Z","lastTransitionTime":"2026-01-30T16:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.531113 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.531185 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.531209 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.531238 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.531258 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:13Z","lastTransitionTime":"2026-01-30T16:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.633058 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.633106 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.633119 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.633137 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.633149 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:13Z","lastTransitionTime":"2026-01-30T16:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.735657 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.735738 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.735773 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.735819 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.735839 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:13Z","lastTransitionTime":"2026-01-30T16:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.760029 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 15:00:25.779731895 +0000 UTC Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.819544 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.833194 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7682a4940bfaf61053fc403898dd912a57e1152b7f646f65d8931bbe5a5aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.840617 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.840650 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.840660 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.840673 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.840684 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:13Z","lastTransitionTime":"2026-01-30T16:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.848715 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4777970a-81c4-4412-a06b-641a8343a749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d50d143c711318276147641c1904b0f9c28209e0ead64b6bc2578d0914e821a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c4349c
10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
ntrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-69v8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.859357 4712 
status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lpb6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abacbc6e-6514-4db6-80b5-23570952c86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnj8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnj8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lpb6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.873102 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a59aa188d91c356dc946cef975ab964cb48ed5ad8a7b1b1a97bbdf4f6a463f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.885571 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.902887 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93651476-fd00-4a9e-934a-73537f1d103e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://875be8ec13c88cdb1b731eb4d86874411fd465a9
07a96b68697829a8ea15427e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875be8ec13c88cdb1b731eb4d86874411fd465a907a96b68697829a8ea15427e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:55:11Z\\\",\\\"message\\\":\\\"enshift-kube-controller-manager/kube-controller-manager-crc openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-kube-apiserver/kube-apiserver-crc]\\\\nI0130 16:55:11.607023 6304 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0130 16:55:11.607046 6304 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 16:55:11.607061 6304 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 16:55:11.606886 6304 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler-operator/metrics]} name:Service_openshift-kube-scheduler-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.233:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1dc899db-4498-4b7a-8437-861940b962e7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 16:55:11.607081 6304 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:55:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-228xs_openshift-ovn-kubernetes(93651476-fd00-4a9e-934a-73537f1d103e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-228xs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.921815 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dd639c-2367-490e-801b-9042a158c962\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b98b043cb83155f1304d4e3e7afaa43921f5f9bd288e2d50222df3c549a97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5aceb9806198f5a718418c0
834056dadafaec6a0d93a3d4253f3d18d3e7f6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a5b6fd14c3399f1335ab849702db5ae6ab412a259215225a3d1b45afdf29c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7424ed8c0152d8e7a206225cef7124928cbd700835cfa9f9c4e523bfe900f3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49662995d818377576fc281d55d5d944d785d5a0a11d45e86e9f08ec7fa2bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.933106 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f1f355d-892f-40a3-8fae-090e9e69b585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c2d61cc827e2a3a1ff4d597f89532719a52fe5fd353eb2ec9905533cfb8897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0930b443376f024df5c05a35328264b347e4b58fef7c64e9bb50b084c461e19f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b9e52241ae256ff758f4cf4bdd6a588a6d37cdc5185c372a9942d9d2c9e8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.942986 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.943061 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.943070 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.943085 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.943095 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:13Z","lastTransitionTime":"2026-01-30T16:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.944489 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e483f7201537775910f8d1a99625c35050e43bebb2947e7c7d076cccd2a3fe1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.954339 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf" err="failed to patch status 
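
Note: every status-patch failure in this stretch of the log fails the same way: the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 presents a certificate that expired 2025-08-24T17:21:41Z, months before the node's current clock of 2026-01-30. A minimal Go sketch (not part of the kubelet; only the endpoint address is taken from the records above, everything else is assumed) that confirms the presented certificate's validity window from the node:

// certcheck.go - dial the webhook endpoint named in the log above and print
// the validity window of the certificate it serves, confirming the
// "certificate has expired" verdict independently of the kubelet.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
)

func main() {
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // we want to inspect the cert even though verification fails
	})
	if err != nil {
		log.Fatalf("dial webhook: %v", err)
	}
	defer conn.Close()
	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject=%s notBefore=%s notAfter=%s\n", cert.Subject, cert.NotBefore, cert.NotAfter)
}

If the printed notAfter matches 2025-08-24T17:21:41Z, the webhook's serving certificate itself is the culprit rather than the kubelet's trust store.
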
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea67b02c-fc08-4a69-8c7f-c8da661a12ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea2b37d1f833ec9e1eba6b034b7178206d4c32d383bdaf40b270c3f7219de4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://529cdf849b64d965fb2b3276a2e033621e7695668b6b05041603ad93659a1c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4f9lf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:13Z is after 2025-08-24T17:21:41Z" Jan 30 
16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.967527 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ab27748-3507-429f-888b-b45b4d17b014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d979ececdc18afdb1f364feaa3244db42fef8e523e58fc43b94b1775f9530e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"\\\\nI0130 16:54:44.564860 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:54:44.564709 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564904 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564720 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:54:44.564916 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:54:44.565055 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:54:44.565065 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:54:44.564730 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2769129766/tls.crt::/tmp/serving-cert-2769129766/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769792078\\\\\\\\\\\\\\\" (2026-01-30 16:54:37 +0000 UTC to 2026-03-01 16:54:38 +0000 UTC (now=2026-01-30 16:54:44.533361706 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565458 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792084\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792084\\\\\\\\\\\\\\\" (2026-01-30 15:54:43 +0000 UTC to 2027-01-30 15:54:43 +0000 UTC (now=2026-01-30 16:54:44.565435832 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565474 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:54:44.565499 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:54:44.565514 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.981860 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
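
Note: the err field in these records embeds the attempted status patch as a Go-quoted JSON string (the \"{\\\"metadata\\\"...}\" blobs), which is why it is so hard to read in place. A small sketch for making one readable, assuming you have copied a blob out of a record; the sample payload here is shortened to the metadata stanza of the kube-apiserver-crc patch above:

// patchdecode.go - unquote and pretty-print an escaped status patch copied
// from a "Failed to update status for pod" record.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"log"
	"strconv"
)

func main() {
	// Shortened escaped patch, exactly as it appears inside the log message.
	escaped := `"{\"metadata\":{\"uid\":\"8ab27748-3507-429f-888b-b45b4d17b014\"}}"`
	unquoted, err := strconv.Unquote(escaped)
	if err != nil {
		log.Fatalf("unquote: %v", err)
	}
	var pretty bytes.Buffer
	if err := json.Indent(&pretty, []byte(unquoted), "", "  "); err != nil {
		log.Fatalf("indent: %v", err)
	}
	fmt.Println(pretty.String())
}

The $setElementOrder/conditions keys visible in several patches are strategic-merge-patch directives, so the decoded output is a patch document, not the pod's full status.
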
Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.981860 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:13Z is after 2025-08-24T17:21:41Z"
Jan 30 16:55:13 crc kubenswrapper[4712]: I0130 16:55:13.995908 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k255f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2482315f-1b5d-4a27-a9d9-97f4780c1869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e7482e63cefa1e8dfa0adec1061ace4e1a482a646844ba1d77038309e904cf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh9k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k255f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:13Z is after 2025-08-24T17:21:41Z"
Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.013219 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:14Z is after 2025-08-24T17:21:41Z"
Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.045666 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.045712 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.045725 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.045744 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.045756 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:14Z","lastTransitionTime":"2026-01-30T16:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.066670 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe68da7f075332bd889ca6351f6c58e8a4e2cb0a14e68dd9a882f15c348ec5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c2
8fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.147790 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.147852 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.147866 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.147882 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.147895 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:14Z","lastTransitionTime":"2026-01-30T16:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.251327 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.251375 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.251386 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.251414 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.251426 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:14Z","lastTransitionTime":"2026-01-30T16:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.354093 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.354506 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.354668 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.354843 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.355008 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:14Z","lastTransitionTime":"2026-01-30T16:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.412100 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/abacbc6e-6514-4db6-80b5-23570952c86f-metrics-certs\") pod \"network-metrics-daemon-lpb6h\" (UID: \"abacbc6e-6514-4db6-80b5-23570952c86f\") " pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:55:14 crc kubenswrapper[4712]: E0130 16:55:14.412295 4712 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:55:14 crc kubenswrapper[4712]: E0130 16:55:14.412407 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/abacbc6e-6514-4db6-80b5-23570952c86f-metrics-certs podName:abacbc6e-6514-4db6-80b5-23570952c86f nodeName:}" failed. No retries permitted until 2026-01-30 16:55:30.412383158 +0000 UTC m=+67.319392657 (durationBeforeRetry 16s). 
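
Note: the nestedpendingoperations record that follows shows the kubelet backing off the failed mount ("No retries permitted until ... durationBeforeRetry 16s", i.e. the next attempt comes roughly 16 seconds after the failure). The sketch below illustrates the generic doubling-with-a-cap backoff pattern this output suggests; the initial delay and cap here are assumptions for illustration, not the kubelet volume manager's actual parameters:

// backoffsketch.go - print a doubling backoff schedule capped at a maximum,
// the general shape of the retry delays seen in the mount-failure record below.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 500 * time.Millisecond // assumed initial delay
	maxDelay := 2 * time.Minute     // assumed cap
	for attempt := 1; attempt <= 10; attempt++ {
		fmt.Printf("attempt %d: wait %v before retrying\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

Under such a schedule a 16s delay corresponds to a handful of prior failures, which matches the repeated "not registered" errors for the same secret volume.
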
Jan 30 16:55:14 crc kubenswrapper[4712]: E0130 16:55:14.412407 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/abacbc6e-6514-4db6-80b5-23570952c86f-metrics-certs podName:abacbc6e-6514-4db6-80b5-23570952c86f nodeName:}" failed. No retries permitted until 2026-01-30 16:55:30.412383158 +0000 UTC m=+67.319392657 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/abacbc6e-6514-4db6-80b5-23570952c86f-metrics-certs") pod "network-metrics-daemon-lpb6h" (UID: "abacbc6e-6514-4db6-80b5-23570952c86f") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.430674 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.444771 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"]
Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.454032 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:14Z is after 2025-08-24T17:21:41Z"
Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.457418 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.457470 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.457489 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.457511 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.457526 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:14Z","lastTransitionTime":"2026-01-30T16:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.473258 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7682a4940bfaf61053fc403898dd912a57e1152b7f646f65d8931bbe5a5aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:14Z is after 2025-08-24T17:21:41Z"
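
Note: by this point the same webhook failure has blocked status updates for pods across openshift-kube-controller-manager, openshift-network-operator, openshift-ovn-kubernetes, openshift-kube-apiserver, openshift-multus, and more. A sketch that tallies these records from a saved copy of this log (the file name is an assumption; point it at your capture), which makes the blast radius of the expired certificate visible at a glance:

// scanstatusfailures.go - count "Failed to update status for pod" records
// per pod in a saved kubelet log.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"regexp"
)

func main() {
	f, err := os.Open("kubelet.log") // assumed path to the saved log
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	podRE := regexp.MustCompile(`Failed to update status for pod" pod="([^"]+)"`)
	counts := map[string]int{}
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // these records can be very long
	for sc.Scan() {
		if m := podRE.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
	for pod, n := range counts {
		fmt.Printf("%6d  %s\n", n, pod)
	}
}
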
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4777970a-81c4-4412-a06b-641a8343a749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d50d143c711318276147641c1904b0f9c28209e0ead64b6bc2578d0914e821a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-69v8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.497624 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lpb6h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abacbc6e-6514-4db6-80b5-23570952c86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnj8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnj8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lpb6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.509822 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a59aa188d91c356dc946cef975ab964cb48ed5ad8a7b1b1a97bbdf4f6a463f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.521048 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.538266 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93651476-fd00-4a9e-934a-73537f1d103e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://875be8ec13c88cdb1b731eb4d86874411fd465a9
07a96b68697829a8ea15427e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875be8ec13c88cdb1b731eb4d86874411fd465a907a96b68697829a8ea15427e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:55:11Z\\\",\\\"message\\\":\\\"enshift-kube-controller-manager/kube-controller-manager-crc openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-kube-apiserver/kube-apiserver-crc]\\\\nI0130 16:55:11.607023 6304 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0130 16:55:11.607046 6304 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 16:55:11.607061 6304 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 16:55:11.606886 6304 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler-operator/metrics]} name:Service_openshift-kube-scheduler-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.233:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1dc899db-4498-4b7a-8437-861940b962e7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 16:55:11.607081 6304 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:55:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-228xs_openshift-ovn-kubernetes(93651476-fd00-4a9e-934a-73537f1d103e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-228xs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.556407 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dd639c-2367-490e-801b-9042a158c962\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b98b043cb83155f1304d4e3e7afaa43921f5f9bd288e2d50222df3c549a97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5aceb9806198f5a718418c0
834056dadafaec6a0d93a3d4253f3d18d3e7f6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a5b6fd14c3399f1335ab849702db5ae6ab412a259215225a3d1b45afdf29c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7424ed8c0152d8e7a206225cef7124928cbd700835cfa9f9c4e523bfe900f3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49662995d818377576fc281d55d5d944d785d5a0a11d45e86e9f08ec7fa2bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.560289 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.560329 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:14 crc 
kubenswrapper[4712]: I0130 16:55:14.560341 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.560358 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.560369 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:14Z","lastTransitionTime":"2026-01-30T16:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.570693 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f1f355d-892f-40a3-8fae-090e9e69b585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c2d61cc827e2a3a1ff4d597f89532719a52fe5fd353eb2ec9905533cfb8897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"con
tainerID\\\":\\\"cri-o://0930b443376f024df5c05a35328264b347e4b58fef7c64e9bb50b084c461e19f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b9e52241ae256ff758f4cf4bdd6a588a6d37cdc5185c372a9942d9d2c9e8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.584315 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
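[Editor's annotation] The NodeNotReady transition recorded just above is the kubelet's container-runtime network gate: with ovnkube-controller crash-looping, OVN-Kubernetes never writes its CNI config, so the node's Ready condition flips to False with reason KubeletNotReady / NetworkPluginNotReady. A rough sketch of that gate (an assumption about the check, not kubelet source), using the directory named in the message:

```go
// Rough sketch: the runtime network is considered ready only once a CNI
// configuration file exists in the conf directory the message names.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// networkReady reports whether any CNI config file is present in confDir.
func networkReady(confDir string) (bool, error) {
	var found []string
	for _, pattern := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, pattern))
		if err != nil {
			return false, err
		}
		found = append(found, matches...)
	}
	return len(found) > 0, nil
}

func main() {
	dir := "/etc/kubernetes/cni/net.d" // directory from the log message
	ready, err := networkReady(dir)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if !ready {
		fmt.Printf("NetworkReady=false: no CNI configuration file in %s\n", dir)
	}
}
```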
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e483f7201537775910f8d1a99625c35050e43bebb2947e7c7d076cccd2a3fe1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.595824 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea67b02c-fc08-4a69-8c7f-c8da661a12ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea2b37d1f833ec9e1eba6b034b7178206d4c32d383bdaf40b270c3f7219de4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://529cdf849b64d965fb2b3276a2e033621e7695668b6b05041603ad93659a1c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4f9lf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:14Z is after 2025-08-24T17:21:41Z" Jan 30 
16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.609096 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ab27748-3507-429f-888b-b45b4d17b014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\
":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d979ececdc18afdb1f364feaa3244db42fef8e523e58fc43b94b1775f9530e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"\\\\nI0130 16:54:44.564860 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:54:44.564709 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564904 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564720 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:54:44.564916 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:54:44.565055 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:54:44.565065 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:54:44.564730 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2769129766/tls.crt::/tmp/serving-cert-2769129766/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769792078\\\\\\\\\\\\\\\" (2026-01-30 16:54:37 +0000 UTC to 2026-03-01 16:54:38 +0000 UTC (now=2026-01-30 16:54:44.533361706 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565458 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792084\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792084\\\\\\\\\\\\\\\" (2026-01-30 15:54:43 +0000 UTC to 2027-01-30 15:54:43 +0000 UTC (now=2026-01-30 16:54:44.565435832 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565474 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:54:44.565499 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:54:44.565514 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.620748 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.629211 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k255f" err="failed to patch status 
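[Editor's annotation] Each rejected payload in these entries is a Kubernetes strategic merge patch: the "$setElementOrder/conditions" directive pins the post-merge ordering of the conditions list (merged by its "type" key), while the plain "conditions" array carries only the entries whose fields changed, which is why most payloads resend far fewer conditions than the pod actually has. A minimal sketch reproducing that shape with illustrative values (the condition set mirrors the surrounding entries):

```go
// Builds and prints a strategic-merge status patch of the same shape as
// the payloads in this log: an ordering directive plus only the changed
// condition entries. Values are illustrative.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	patch := map[string]any{
		"status": map[string]any{
			// Pins the merged list order, keyed by "type".
			"$setElementOrder/conditions": []map[string]string{
				{"type": "PodReadyToStartContainers"},
				{"type": "Initialized"},
				{"type": "Ready"},
				{"type": "ContainersReady"},
				{"type": "PodScheduled"},
			},
			// Only the entries whose fields changed are resent.
			"conditions": []map[string]any{
				{"type": "Ready", "status": "False", "reason": "ContainersNotReady"},
			},
		},
	}
	out, err := json.MarshalIndent(patch, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```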
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2482315f-1b5d-4a27-a9d9-97f4780c1869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e7482e63cefa1e8dfa0adec1061ace4e1a482a646844ba1d77038309e904cf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh9k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k255f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.642238 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.652835 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a4e02635c5d1398f5642a4f78ab59f957e82d7b595048b900f7f57998072b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca104420859d47e9447dba6c7629515ac5d94c76d90029f8b02677814c6cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:14Z is after 
2025-08-24T17:21:41Z" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.662187 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.662224 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.662234 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.662250 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.662260 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:14Z","lastTransitionTime":"2026-01-30T16:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.662639 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe68da7f075332bd889ca6351f6c58e8a4e2cb0a14e68dd9a882f15c348ec5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e
95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.761119 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 10:51:56.525490069 +0000 UTC Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.765031 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.765068 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.765077 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.765091 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.765101 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:14Z","lastTransitionTime":"2026-01-30T16:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.773669 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.788360 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k255f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2482315f-1b5d-4a27-a9d9-97f4780c1869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e7482e63cefa1e8dfa0adec1061ace4e1a482a646844ba1d77038309e904cf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh9k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k255f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.798846 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.798868 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.798926 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.798998 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:55:14 crc kubenswrapper[4712]: E0130 16:55:14.799597 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f" Jan 30 16:55:14 crc kubenswrapper[4712]: E0130 16:55:14.799665 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:55:14 crc kubenswrapper[4712]: E0130 16:55:14.799735 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:55:14 crc kubenswrapper[4712]: E0130 16:55:14.800138 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.802728 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea67b02c-fc08-4a69-8c7f-c8da661a12ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea2b37d1f833ec9e1eba6b034b7178206d4c32d383bdaf40b270c3f7219de4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://529cdf849b64d965fb2b3276a2e033621e7695668b6b05041603ad93659a1c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:56Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4f9lf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.817779 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ab27748-3507-429f-888b-b45b4d17b014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d979ececdc18afdb1f364feaa3244db42fef8e523e58fc43b94b1775f9530e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"\\\\nI0130 16:54:44.564860 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:54:44.564709 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564904 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564720 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:54:44.564916 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:54:44.565055 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:54:44.565065 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:54:44.564730 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2769129766/tls.crt::/tmp/serving-cert-2769129766/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769792078\\\\\\\\\\\\\\\" (2026-01-30 16:54:37 +0000 UTC to 2026-03-01 16:54:38 +0000 UTC (now=2026-01-30 16:54:44.533361706 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565458 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792084\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792084\\\\\\\\\\\\\\\" (2026-01-30 15:54:43 +0000 UTC to 2027-01-30 15:54:43 +0000 UTC (now=2026-01-30 16:54:44.565435832 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565474 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:54:44.565499 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:54:44.565514 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.832954 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.845858 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe68da7f075332bd889ca6351f6c58e8a4e2cb0a14e68dd9a882f15c348ec5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.861329 4712 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.867495 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.867543 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.867557 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.867579 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.867592 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:14Z","lastTransitionTime":"2026-01-30T16:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.878600 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a40d940-4f5a-42b6-80cb-fe98c14066c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4750ebab3eaeb8b0c465d2257c417e68692c999f382e05630a3f317f3f9ea65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd316abcb06f9cb980b110261410e1646a36fe9c70e3384aa128b178272fb6d2\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40670c5fb8ecc02e067cbb1ad22ade50ba2c40d03ff8b3b3eac1c0b7f3e1f599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369458cf36c7825613a5613214a88605b5a6247cbd2465f7bb924facf4d573a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://369458cf36c7825613a5613214a88605b5a6247cbd2465f7bb924facf4d573a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.894883 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a4e02635c5d1398f5642a4f78ab59f957e82d7b595048b900f7f57998072b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca104420859d47e9447dba6c7629515ac5d94c76d90029f8b02677814c6cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.913126 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.926617 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.936350 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7682a4940bfaf61053fc403898dd912a57e1152b7f646f65d8931bbe5a5aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.949397 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4777970a-81c4-4412-a06b-641a8343a749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d50d143c711318276147641c1904b0f9c28209e0ead64b6bc2578d0914e821a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":
\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11
\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-69v8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.960152 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lpb6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abacbc6e-6514-4db6-80b5-23570952c86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnj8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnj8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lpb6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.970180 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.970220 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.970231 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.970245 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.970257 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:14Z","lastTransitionTime":"2026-01-30T16:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.973706 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a59aa188d91c356dc946cef975ab964cb48ed5ad8a7b1b1a97bbdf4f6a463f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:14 crc kubenswrapper[4712]: I0130 16:55:14.984261 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e483f7201537775910f8d1a99625c35050e43bebb2947e7c7d076cccd2a3fe1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.003071 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93651476-fd00-4a9e-934a-73537f1d103e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://875be8ec13c88cdb1b731eb4d86874411fd465a907a96b68697829a8ea15427e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875be8ec13c88cdb1b731eb4d86874411fd465a907a96b68697829a8ea15427e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:55:11Z\\\",\\\"message\\\":\\\"enshift-kube-controller-manager/kube-controller-manager-crc openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-kube-apiserver/kube-apiserver-crc]\\\\nI0130 16:55:11.607023 6304 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0130 16:55:11.607046 6304 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 16:55:11.607061 6304 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 16:55:11.606886 6304 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler-operator/metrics]} name:Service_openshift-kube-scheduler-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.233:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1dc899db-4498-4b7a-8437-861940b962e7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 16:55:11.607081 6304 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:55:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-228xs_openshift-ovn-kubernetes(93651476-fd00-4a9e-934a-73537f1d103e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-228xs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:15Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.024212 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dd639c-2367-490e-801b-9042a158c962\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b98b043cb83155f1304d4e3e7afaa43921f5f9bd288e2d50222df3c549a97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5aceb9806198f5a718418c0
834056dadafaec6a0d93a3d4253f3d18d3e7f6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a5b6fd14c3399f1335ab849702db5ae6ab412a259215225a3d1b45afdf29c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7424ed8c0152d8e7a206225cef7124928cbd700835cfa9f9c4e523bfe900f3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49662995d818377576fc281d55d5d944d785d5a0a11d45e86e9f08ec7fa2bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:15Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.042098 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f1f355d-892f-40a3-8fae-090e9e69b585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c2d61cc827e2a3a1ff4d597f89532719a52fe5fd353eb2ec9905533cfb8897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0930b443376f024df5c05a35328264b347e4b58fef7c64e9bb50b084c461e19f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b9e52241ae256ff758f4cf4bdd6a588a6d37cdc5185c372a9942d9d2c9e8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:15Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.072224 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.072271 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.072280 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.072295 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.072304 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:15Z","lastTransitionTime":"2026-01-30T16:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.174898 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.174962 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.174976 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.174996 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.175009 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:15Z","lastTransitionTime":"2026-01-30T16:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.219758 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:55:15 crc kubenswrapper[4712]: E0130 16:55:15.220090 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:55:47.220053349 +0000 UTC m=+84.127062848 (durationBeforeRetry 32s).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.277281 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.277339 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.277361 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.277386 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.277402 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:15Z","lastTransitionTime":"2026-01-30T16:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.320608 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.320659 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.320687 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.320714 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
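The dominant failure in the records above is the node-identity webhook's expired serving certificate: every status patch dies on "x509: certificate has expired or is not yet valid", with the cert valid only until 2025-08-24T17:21:41Z while the node clock reads 2026-01-30. A minimal Go sketch for confirming that validity window from the same host; the address 127.0.0.1:9743 is taken from the webhook URL in the log, everything else is illustrative and not part of the kubelet:

```go
// certcheck: print the validity window of the TLS certificate served at
// the webhook endpoint seen in the log. InsecureSkipVerify is deliberate:
// verification would fail (that is the bug), but we can still read the cert.
package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"time"
)

func main() {
	addr := "127.0.0.1:9743" // webhook endpoint from the log lines above
	conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
	if err != nil {
		log.Fatalf("dial %s: %v", addr, err)
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		log.Fatal("no peer certificates presented")
	}
	cert := certs[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	if time.Now().After(cert.NotAfter) {
		fmt.Println("certificate is EXPIRED (matches the x509 error in the log)")
	}
}
```

Run on the node, this would report a notAfter of 2025-08-24T17:21:41Z for the failure mode logged here; rotating the webhook's serving certificate (or correcting a skewed clock) is what clears this class of error.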
"openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:55:15 crc kubenswrapper[4712]: E0130 16:55:15.320856 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:55:47.320831147 +0000 UTC m=+84.227840626 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 30 16:55:15 crc kubenswrapper[4712]: E0130 16:55:15.320873 4712 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:55:15 crc kubenswrapper[4712]: E0130 16:55:15.320918 4712 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:55:15 crc kubenswrapper[4712]: E0130 16:55:15.320945 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:55:47.320927009 +0000 UTC m=+84.227936498 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 30 16:55:15 crc kubenswrapper[4712]: E0130 16:55:15.320946 4712 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:55:15 crc kubenswrapper[4712]: E0130 16:55:15.320972 4712 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:55:15 crc kubenswrapper[4712]: E0130 16:55:15.321009 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 16:55:47.320998401 +0000 UTC m=+84.228007960 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:55:15 crc kubenswrapper[4712]: E0130 16:55:15.320918 4712 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 30 16:55:15 crc kubenswrapper[4712]: E0130 16:55:15.321062 4712 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 30 16:55:15 crc kubenswrapper[4712]: E0130 16:55:15.321072 4712 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:55:15 crc kubenswrapper[4712]: E0130 16:55:15.321104 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 16:55:47.321095063 +0000 UTC m=+84.228104632 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.379357 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.379404 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.379419 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.379437 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.379450 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:15Z","lastTransitionTime":"2026-01-30T16:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.482385 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.482448 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.482459 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.482475 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.482485 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:15Z","lastTransitionTime":"2026-01-30T16:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.585096 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.585147 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.585161 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.585221 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.585236 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:15Z","lastTransitionTime":"2026-01-30T16:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.687752 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.687828 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.687851 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.687880 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.687902 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:15Z","lastTransitionTime":"2026-01-30T16:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.762238 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 23:58:38.447964448 +0000 UTC
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.790946 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.790994 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.791003 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.791016 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.791024 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:15Z","lastTransitionTime":"2026-01-30T16:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.893245 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.893281 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.893293 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.893308 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.893318 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:15Z","lastTransitionTime":"2026-01-30T16:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.996846 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.997077 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.997087 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.997103 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:15 crc kubenswrapper[4712]: I0130 16:55:15.997116 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:15Z","lastTransitionTime":"2026-01-30T16:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.099958 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.099989 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.100022 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.100037 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.100048 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:16Z","lastTransitionTime":"2026-01-30T16:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.202369 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.202413 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.202425 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.202442 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.202454 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:16Z","lastTransitionTime":"2026-01-30T16:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.305163 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.305202 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.305213 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.305229 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.305241 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:16Z","lastTransitionTime":"2026-01-30T16:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.407406 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.407470 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.407488 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.407518 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.407537 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:16Z","lastTransitionTime":"2026-01-30T16:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.509773 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.509827 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.509836 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.509849 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.509857 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:16Z","lastTransitionTime":"2026-01-30T16:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.612036 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.612305 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.612403 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.612481 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.612562 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:16Z","lastTransitionTime":"2026-01-30T16:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.714921 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.714986 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.715009 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.715039 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.715061 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:16Z","lastTransitionTime":"2026-01-30T16:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.762650 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 10:37:36.369251121 +0000 UTC
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.799587 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h"
Jan 30 16:55:16 crc kubenswrapper[4712]: E0130 16:55:16.799769 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f"
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.800364 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:55:16 crc kubenswrapper[4712]: E0130 16:55:16.800508 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.800604 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:55:16 crc kubenswrapper[4712]: E0130 16:55:16.800719 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.800875 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:55:16 crc kubenswrapper[4712]: E0130 16:55:16.800997 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.818275 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.818347 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.818373 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.818403 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.818426 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:16Z","lastTransitionTime":"2026-01-30T16:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.921844 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.921957 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.921969 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.922007 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:16 crc kubenswrapper[4712]: I0130 16:55:16.922021 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:16Z","lastTransitionTime":"2026-01-30T16:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.025231 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.025505 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.025604 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.025705 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.025791 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:17Z","lastTransitionTime":"2026-01-30T16:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.129052 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.129117 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.129128 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.129150 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.129166 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:17Z","lastTransitionTime":"2026-01-30T16:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.231663 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.231734 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.231751 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.231774 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.231791 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:17Z","lastTransitionTime":"2026-01-30T16:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.335109 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.335160 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.335171 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.335188 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.335200 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:17Z","lastTransitionTime":"2026-01-30T16:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.439217 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.439261 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.439269 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.439283 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.439293 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:17Z","lastTransitionTime":"2026-01-30T16:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.541578 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.541641 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.541657 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.541678 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.541693 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:17Z","lastTransitionTime":"2026-01-30T16:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.644182 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.644577 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.644725 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.644906 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.645058 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:17Z","lastTransitionTime":"2026-01-30T16:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.747723 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.747770 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.747780 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.747816 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.747832 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:17Z","lastTransitionTime":"2026-01-30T16:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.763527 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 09:48:03.037424755 +0000 UTC
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.850196 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.850539 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.850635 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.850737 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.850858 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:17Z","lastTransitionTime":"2026-01-30T16:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.953715 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.954147 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.954288 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.954475 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:17 crc kubenswrapper[4712]: I0130 16:55:17.954601 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:17Z","lastTransitionTime":"2026-01-30T16:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.056749 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.057100 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.057183 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.057259 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.057327 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:18Z","lastTransitionTime":"2026-01-30T16:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.159723 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.159752 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.159761 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.159773 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.159780 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:18Z","lastTransitionTime":"2026-01-30T16:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.262683 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.262717 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.262727 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.262741 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.262752 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:18Z","lastTransitionTime":"2026-01-30T16:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.365747 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.365881 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.365913 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.365960 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.365984 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:18Z","lastTransitionTime":"2026-01-30T16:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.469663 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.469745 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.469772 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.469838 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.469868 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:18Z","lastTransitionTime":"2026-01-30T16:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.572510 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.572561 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.572574 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.572596 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.572622 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:18Z","lastTransitionTime":"2026-01-30T16:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.674622 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.674684 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.674701 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.674724 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.674740 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:18Z","lastTransitionTime":"2026-01-30T16:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.764093 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 06:51:20.635286235 +0000 UTC
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.777018 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.777065 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.777077 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.777095 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.777110 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:18Z","lastTransitionTime":"2026-01-30T16:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.798759 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.798848 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h"
Jan 30 16:55:18 crc kubenswrapper[4712]: E0130 16:55:18.798947 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.799006 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.798771 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:55:18 crc kubenswrapper[4712]: E0130 16:55:18.799188 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 16:55:18 crc kubenswrapper[4712]: E0130 16:55:18.799259 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 16:55:18 crc kubenswrapper[4712]: E0130 16:55:18.799293 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f"
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.880508 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.880562 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.880575 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.880597 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.880609 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:18Z","lastTransitionTime":"2026-01-30T16:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Has your network provider started?"} Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.983195 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.983261 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.983271 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.983285 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:18 crc kubenswrapper[4712]: I0130 16:55:18.983294 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:18Z","lastTransitionTime":"2026-01-30T16:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:19 crc kubenswrapper[4712]: I0130 16:55:19.086586 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:19 crc kubenswrapper[4712]: I0130 16:55:19.086639 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:19 crc kubenswrapper[4712]: I0130 16:55:19.086655 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:19 crc kubenswrapper[4712]: I0130 16:55:19.086678 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:19 crc kubenswrapper[4712]: I0130 16:55:19.086696 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:19Z","lastTransitionTime":"2026-01-30T16:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:19 crc kubenswrapper[4712]: I0130 16:55:19.194051 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:19 crc kubenswrapper[4712]: I0130 16:55:19.194128 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:19 crc kubenswrapper[4712]: I0130 16:55:19.194154 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:19 crc kubenswrapper[4712]: I0130 16:55:19.194184 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:19 crc kubenswrapper[4712]: I0130 16:55:19.194213 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:19Z","lastTransitionTime":"2026-01-30T16:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:19 crc kubenswrapper[4712]: I0130 16:55:19.296772 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:19 crc kubenswrapper[4712]: I0130 16:55:19.296869 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:19 crc kubenswrapper[4712]: I0130 16:55:19.296881 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:19 crc kubenswrapper[4712]: I0130 16:55:19.296901 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:19 crc kubenswrapper[4712]: I0130 16:55:19.296918 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:19Z","lastTransitionTime":"2026-01-30T16:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:19 crc kubenswrapper[4712]: I0130 16:55:19.399080 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:19 crc kubenswrapper[4712]: I0130 16:55:19.399214 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:19 crc kubenswrapper[4712]: I0130 16:55:19.399246 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:19 crc kubenswrapper[4712]: I0130 16:55:19.399279 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:19 crc kubenswrapper[4712]: I0130 16:55:19.399307 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:19Z","lastTransitionTime":"2026-01-30T16:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:19 crc kubenswrapper[4712]: I0130 16:55:19.502450 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:19 crc kubenswrapper[4712]: I0130 16:55:19.502497 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:19 crc kubenswrapper[4712]: I0130 16:55:19.502516 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:19 crc kubenswrapper[4712]: I0130 16:55:19.502716 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:19 crc kubenswrapper[4712]: I0130 16:55:19.502739 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:19Z","lastTransitionTime":"2026-01-30T16:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 16:55:19 crc kubenswrapper[4712]: I0130 16:55:19.833841 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 11:45:03.570463694 +0000 UTC
Jan 30 16:55:19 crc kubenswrapper[4712]: I0130 16:55:19.835742 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:19 crc kubenswrapper[4712]: I0130 16:55:19.835859 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:19 crc kubenswrapper[4712]: I0130 16:55:19.835888 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:19 crc kubenswrapper[4712]: I0130 16:55:19.835920 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:19 crc kubenswrapper[4712]: I0130 16:55:19.835943 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:19Z","lastTransitionTime":"2026-01-30T16:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
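The certificate_manager.go:356 line reports the kubelet serving certificate expiring 2026-02-24 with a rotation deadline of 2025-11-26, which is already in the past relative to the log's clock (2026-01-30), so rotation is due immediately. By assumption this mirrors client-go's certificate manager, which picks the deadline as a jittered point late in the certificate's validity window; the sketch below only reproduces that arithmetic, and the notBefore value is invented since the log does not show it:

    import random
    from datetime import datetime, timedelta

    # notAfter is taken from the certificate_manager.go:356 line above.
    not_after = datetime(2026, 2, 24, 5, 53, 3)

    # Assumption: a 30-day serving certificate; the log does not show notBefore.
    not_before = not_after - timedelta(days=30)

    def rotation_deadline(nb: datetime, na: datetime) -> datetime:
        """Jittered deadline at 70-90% of the certificate lifetime (assumed to
        approximate client-go's certificate manager behavior)."""
        lifetime = na - nb
        return nb + lifetime * (0.7 + 0.2 * random.random())

    print(rotation_deadline(not_before, not_after))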
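Throughout these cycles the other three conditions are being recorded as healthy (MemoryPressure, DiskPressure and PIDPressure all False); only Ready is False. One way to watch the same conditions from outside the node is the Kubernetes Python client, assuming a working kubeconfig for this crc cluster:

    from kubernetes import client, config  # third-party: pip install kubernetes

    config.load_kube_config()  # assumption: kubeconfig points at the crc cluster

    node = client.CoreV1Api().read_node("crc")
    for cond in node.status.conditions:
        # Expect MemoryPressure/DiskPressure/PIDPressure False, Ready False
        # with reason KubeletNotReady while this log section is being emitted.
        print(f"{cond.type:16} status={cond.status:6} reason={cond.reason}")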
Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.570816 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.570853 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.570863 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.570877 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.570889 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:20Z","lastTransitionTime":"2026-01-30T16:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:20 crc kubenswrapper[4712]: E0130 16:55:20.583909 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:20Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.588278 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.588322 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.588330 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.588344 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.588354 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:20Z","lastTransitionTime":"2026-01-30T16:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:20 crc kubenswrapper[4712]: E0130 16:55:20.600832 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:20Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.605273 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.605333 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
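Every status-patch attempt in this window dies at the same place: the API server must consult the node.network-node-identity.openshift.io validating webhook at https://127.0.0.1:9743, and that webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-30, so the TLS handshake fails before the patch can land. A sketch for pulling the offending certificate's validity window off the endpoint, assuming it is reachable locally on the node; the cryptography package is a third-party dependency:

    import socket
    import ssl

    from cryptography import x509  # third-party: pip install cryptography

    HOST, PORT = "127.0.0.1", 9743  # webhook endpoint named in the error

    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE  # fetch the cert even though it is expired

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            der = tls.getpeercert(binary_form=True)

    cert = x509.load_der_x509_certificate(der)
    print("notBefore:", cert.not_valid_before)
    print("notAfter: ", cert.not_valid_after)  # expect 2025-08-24T17:21:41Z per the log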
event="NodeHasNoDiskPressure" Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.605351 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.605376 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.605393 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:20Z","lastTransitionTime":"2026-01-30T16:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:20 crc kubenswrapper[4712]: E0130 16:55:20.622324 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:20Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.627398 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.627447 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
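The payload being rejected is a strategic merge patch against the Node object: the $setElementOrder/conditions directive pins the ordering of the conditions list, while the entries themselves are merged on their type key. A minimal sketch of the same payload shape, trimmed to the Ready condition for readability (the full patch, with allocatable, capacity and the image list, follows in the next retry below):

    import json

    # Shape of the kubelet's node-status patch as seen in the log.
    # "$setElementOrder/conditions" is the strategic-merge-patch directive
    # that fixes the order of the merged conditions list.
    patch = {
        "status": {
            "$setElementOrder/conditions": [
                {"type": "MemoryPressure"},
                {"type": "DiskPressure"},
                {"type": "PIDPressure"},
                {"type": "Ready"},
            ],
            "conditions": [
                {
                    "type": "Ready",
                    "status": "False",
                    "reason": "KubeletNotReady",
                    "lastHeartbeatTime": "2026-01-30T16:55:20Z",
                    "lastTransitionTime": "2026-01-30T16:55:20Z",
                },
            ],
        },
    }

    print(json.dumps(patch, indent=2))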
event="NodeHasNoDiskPressure" Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.627464 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.627487 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.627504 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:20Z","lastTransitionTime":"2026-01-30T16:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:20 crc kubenswrapper[4712]: E0130 16:55:20.645931 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:20Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.649874 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.649920 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.649935 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.649955 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.649971 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:20Z","lastTransitionTime":"2026-01-30T16:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:20 crc kubenswrapper[4712]: E0130 16:55:20.671480 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:20Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:20Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:20Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:20Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:20Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:20 crc kubenswrapper[4712]: E0130 16:55:20.671599 4712 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.673582 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
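The retry loop above fails for one reason, stated at the tail of each error string: the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 serves a certificate whose NotAfter (2025-08-24T17:21:41Z) is months behind the node's clock (2026-01-30T16:55:20Z), so the kubelet's TLS client rejects the chain before the status patch is ever delivered. A minimal Go sketch of the same verification failure, standard library only; the two timestamps are taken from the log, while the key type and subject name are illustrative assumptions:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"time"
)

func main() {
	// Self-signed certificate expiring at the NotAfter seen in the log.
	// Error handling is elided for brevity.
	notAfter := time.Date(2025, 8, 24, 17, 21, 41, 0, time.UTC)
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "network-node-identity"}, // assumed CN
		NotBefore:             notAfter.AddDate(-1, 0, 0), // assumed one-year lifetime
		NotAfter:              notAfter,
		IsCA:                  true,
		BasicConstraintsValid: true,
	}
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	cert, _ := x509.ParseCertificate(der)

	roots := x509.NewCertPool()
	roots.AddCert(cert)

	// Verify at the wall-clock time recorded in the log entry; the validity
	// window check fails before any chain is built, just as in the webhook call.
	_, err := cert.Verify(x509.VerifyOptions{
		Roots:       roots,
		CurrentTime: time.Date(2026, 1, 30, 16, 55, 20, 0, time.UTC),
	})
	// Prints an error of the same class as the log line:
	// x509: certificate has expired or is not yet valid: ...
	fmt.Println(err)
}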
event="NodeHasSufficientMemory" Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.673621 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.673633 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.673648 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.673658 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:20Z","lastTransitionTime":"2026-01-30T16:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.777269 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.777339 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.777363 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.777392 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.777416 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:20Z","lastTransitionTime":"2026-01-30T16:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.799049 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.799161 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:55:20 crc kubenswrapper[4712]: E0130 16:55:20.799335 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.799628 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.799839 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:55:20 crc kubenswrapper[4712]: E0130 16:55:20.799960 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:55:20 crc kubenswrapper[4712]: E0130 16:55:20.800150 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:55:20 crc kubenswrapper[4712]: E0130 16:55:20.800323 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f" Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.834540 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 11:45:31.123280968 +0000 UTC Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.881092 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.881237 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.881263 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.881296 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.881322 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:20Z","lastTransitionTime":"2026-01-30T16:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.983621 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.983664 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.983676 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.983692 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:20 crc kubenswrapper[4712]: I0130 16:55:20.983703 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:20Z","lastTransitionTime":"2026-01-30T16:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.087609 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.087677 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.087699 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.087738 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.087758 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:21Z","lastTransitionTime":"2026-01-30T16:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.191104 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.191418 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.191660 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.191986 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.192219 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:21Z","lastTransitionTime":"2026-01-30T16:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.295525 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.295562 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.295571 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.295585 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.295594 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:21Z","lastTransitionTime":"2026-01-30T16:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.399893 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.399955 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.399975 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.400001 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.400021 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:21Z","lastTransitionTime":"2026-01-30T16:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.505437 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.505759 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.505946 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.506112 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.506271 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:21Z","lastTransitionTime":"2026-01-30T16:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.609575 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.609876 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.610196 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.610689 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.611010 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:21Z","lastTransitionTime":"2026-01-30T16:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.713743 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.714101 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.714208 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.714321 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.714430 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:21Z","lastTransitionTime":"2026-01-30T16:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.817984 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.818066 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.818086 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.818115 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.818225 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:21Z","lastTransitionTime":"2026-01-30T16:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.834923 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 00:49:52.121430269 +0000 UTC Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.921486 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.921725 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.921927 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.922029 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:21 crc kubenswrapper[4712]: I0130 16:55:21.922123 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:21Z","lastTransitionTime":"2026-01-30T16:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.024407 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.024452 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.024467 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.024487 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.024504 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:22Z","lastTransitionTime":"2026-01-30T16:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.127174 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.127242 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.127256 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.127295 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.127309 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:22Z","lastTransitionTime":"2026-01-30T16:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.230699 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.230756 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.230776 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.230840 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.230859 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:22Z","lastTransitionTime":"2026-01-30T16:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.333193 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.333246 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.333264 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.333288 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.333303 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:22Z","lastTransitionTime":"2026-01-30T16:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.436115 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.436156 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.436187 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.436201 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.436211 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:22Z","lastTransitionTime":"2026-01-30T16:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.539312 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.539379 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.539461 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.539913 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.539981 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:22Z","lastTransitionTime":"2026-01-30T16:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.642826 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.643002 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.643037 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.643065 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.643086 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:22Z","lastTransitionTime":"2026-01-30T16:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.750898 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.750952 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.750969 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.750992 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.751075 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:22Z","lastTransitionTime":"2026-01-30T16:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.799559 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.799616 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:55:22 crc kubenswrapper[4712]: E0130 16:55:22.799734 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.799750 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:55:22 crc kubenswrapper[4712]: E0130 16:55:22.799889 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.799963 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:55:22 crc kubenswrapper[4712]: E0130 16:55:22.799997 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:55:22 crc kubenswrapper[4712]: E0130 16:55:22.800102 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.835278 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 02:57:28.090404066 +0000 UTC Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.853829 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.853873 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.853885 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.853902 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.853915 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:22Z","lastTransitionTime":"2026-01-30T16:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.957497 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.957567 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.957583 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.957599 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:22 crc kubenswrapper[4712]: I0130 16:55:22.957611 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:22Z","lastTransitionTime":"2026-01-30T16:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.060530 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.060611 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.060625 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.060650 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.060663 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:23Z","lastTransitionTime":"2026-01-30T16:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.163138 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.163211 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.163236 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.163269 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.163292 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:23Z","lastTransitionTime":"2026-01-30T16:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.266138 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.266225 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.266248 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.266280 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.266304 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:23Z","lastTransitionTime":"2026-01-30T16:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.368760 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.368830 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.368841 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.368863 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.368872 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:23Z","lastTransitionTime":"2026-01-30T16:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.471340 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.471392 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.471406 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.471422 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.471434 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:23Z","lastTransitionTime":"2026-01-30T16:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.573930 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.573964 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.573973 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.573986 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.573994 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:23Z","lastTransitionTime":"2026-01-30T16:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.676848 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.676887 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.676900 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.676917 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.676929 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:23Z","lastTransitionTime":"2026-01-30T16:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.778861 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.778897 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.778909 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.778925 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.778937 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:23Z","lastTransitionTime":"2026-01-30T16:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.815550 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a40d940-4f5a-42b6-80cb-fe98c14066c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4750ebab3eaeb8b0c465d2257c417e68692c999f382e05630a3f317f3f9ea65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd316abcb06f9cb980b110261410e1646a36fe9c70e3384aa128b178272fb6d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40670c5fb8ecc02e067cbb1ad22ade50ba2c40d03ff8b3b3eac1c0b7f3e1f599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369458cf36c7825613a5613214a88605b5a6247cbd2465f7bb924facf4d573a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://369458cf36c7825613a5613214a88605b5a6247cbd2465f7bb924facf4d573a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:23Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.831388 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a4e02635c5d1398f5642a4f78ab59f957e82d7b595048b900f7f57998072b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca104420859d47e9447dba6c7629515ac5d94c76d90029f8b02677814c6cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919
d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:23Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.837033 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 22:43:58.288148161 +0000 UTC Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.843271 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe68da7f075332bd889ca6351f6c58e8a4e2cb0a14e68dd9a882f15c348ec5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:23Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.859673 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:23Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.875557 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a59aa188d91c356dc946cef975ab964cb48ed5ad8a7b1b1a97bbdf4f6a463f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:23Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.883838 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.883869 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.883879 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.883895 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.883908 4712 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:23Z","lastTransitionTime":"2026-01-30T16:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.891483 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:23Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.908995 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:23Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.923869 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7682a4940bfaf61053fc403898dd912a57e1152b7f646f65d8931bbe5a5aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:23Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.942214 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4777970a-81c4-4412-a06b-641a8343a749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d50d143c711318276147641c1904b0f9c28209e0ead64b6bc2578d0914e821a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-69v8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:23Z is after 
2025-08-24T17:21:41Z" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.956634 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lpb6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abacbc6e-6514-4db6-80b5-23570952c86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnj8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnj8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lpb6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:23Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.979267 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dd639c-2367-490e-801b-9042a158c962\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b98b043cb83155f1304d4e3e7afaa43921f5f9bd288e2d50222df3c549a97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5aceb9806198f5a718418c0834056dadafaec6a0d93a3d4253f3d18d3e7f6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a5b6fd14c3399f1335ab849702db5ae6ab412a259215225a3d1b45afdf29c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7424ed8c0152d8e7a206225cef7124928cbd700
835cfa9f9c4e523bfe900f3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49662995d818377576fc281d55d5d944d785d5a0a11d45e86e9f08ec7fa2bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:23Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.986562 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.986601 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.986613 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.986630 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.986641 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:23Z","lastTransitionTime":"2026-01-30T16:55:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:23 crc kubenswrapper[4712]: I0130 16:55:23.995241 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f1f355d-892f-40a3-8fae-090e9e69b585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c2d61cc827e2a3a1ff4d597f89532719a52fe5fd353eb2ec9905533cfb8897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0930b443376f024df5c05a35328264b347e4b58fef7c64e9bb50b084c461e19f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b9e52241ae256ff758f4cf4bdd6a588a6d37cdc5185c372a9942d9d2c9e8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:23Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.008109 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e483f7201537775910f8d1a99625c35050e43bebb2947e7c7d076cccd2a3fe1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:24Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.028294 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93651476-fd00-4a9e-934a-73537f1d103e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc
/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://875be8ec13c88cdb1b731eb4d86874411fd465a907a96b68697829a8ea15427e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875be8ec13c88cdb1b731eb4d86874411fd465a907a96b68697829a8ea15427e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:55:11Z\\\",\\\"message\\\":\\\"enshift-kube-controller-manager/kube-controller-manager-crc openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-kube-apiserver/kube-apiserver-crc]\\\\nI0130 16:55:11.607023 6304 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0130 16:55:11.607046 6304 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 16:55:11.607061 6304 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 16:55:11.606886 6304 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler-operator/metrics]} name:Service_openshift-kube-scheduler-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.233:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1dc899db-4498-4b7a-8437-861940b962e7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 16:55:11.607081 6304 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start 
default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:55:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-228xs_openshift-ovn-kubernetes(93651476-fd00-4a9e-934a-73537f1d103e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzg
m\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-228xs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:24Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.042476 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ab27748-3507-429f-888b-b45b4d17b014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d979ececdc18afdb1f364feaa3244db42fef8e523e58fc43b94b1775f9530e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"\\\\nI0130 16:54:44.564860 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:54:44.564709 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564904 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564720 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:54:44.564916 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:54:44.565055 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:54:44.565065 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:54:44.564730 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2769129766/tls.crt::/tmp/serving-cert-2769129766/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769792078\\\\\\\\\\\\\\\" (2026-01-30 16:54:37 +0000 UTC to 2026-03-01 16:54:38 +0000 UTC (now=2026-01-30 16:54:44.533361706 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565458 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792084\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792084\\\\\\\\\\\\\\\" (2026-01-30 15:54:43 +0000 UTC to 2027-01-30 15:54:43 +0000 UTC (now=2026-01-30 16:54:44.565435832 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565474 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:54:44.565499 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:54:44.565514 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:24Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.054716 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:24Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.063819 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k255f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2482315f-1b5d-4a27-a9d9-97f4780c1869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e7482e63cefa1e8dfa0adec1061ace4e1a482a646844ba1d77038309e904cf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh9k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k255f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:24Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.073100 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea67b02c-fc08-4a69-8c7f-c8da661a12ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea2b37d1f833ec9e1eba6b034b7178206d4c32d383bdaf40b270c3f7219de4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://529cdf849b64d965fb2b3276a2e033621e7695668b6b05041603ad93659a1c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4f9lf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:24Z is after 2025-08-24T17:21:41Z" Jan 30 
16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.088475 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.088504 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.088514 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.088526 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.088535 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:24Z","lastTransitionTime":"2026-01-30T16:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.191271 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.191303 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.191311 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.191348 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.191359 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:24Z","lastTransitionTime":"2026-01-30T16:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.293851 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.294174 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.294310 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.294447 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.294568 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:24Z","lastTransitionTime":"2026-01-30T16:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.396700 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.397467 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.397603 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.397729 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.397898 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:24Z","lastTransitionTime":"2026-01-30T16:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.500248 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.500274 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.500281 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.500295 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.500303 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:24Z","lastTransitionTime":"2026-01-30T16:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.603980 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.604365 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.604527 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.604690 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.604878 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:24Z","lastTransitionTime":"2026-01-30T16:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.708003 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.708519 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.708597 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.708692 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.708771 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:24Z","lastTransitionTime":"2026-01-30T16:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.798978 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.799009 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.799061 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:55:24 crc kubenswrapper[4712]: E0130 16:55:24.799107 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.799166 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:55:24 crc kubenswrapper[4712]: E0130 16:55:24.799309 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f" Jan 30 16:55:24 crc kubenswrapper[4712]: E0130 16:55:24.799598 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.800507 4712 scope.go:117] "RemoveContainer" containerID="875be8ec13c88cdb1b731eb4d86874411fd465a907a96b68697829a8ea15427e" Jan 30 16:55:24 crc kubenswrapper[4712]: E0130 16:55:24.800696 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-228xs_openshift-ovn-kubernetes(93651476-fd00-4a9e-934a-73537f1d103e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" podUID="93651476-fd00-4a9e-934a-73537f1d103e" Jan 30 16:55:24 crc kubenswrapper[4712]: E0130 16:55:24.800783 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.811883 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.811976 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.811990 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.812018 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.812032 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:24Z","lastTransitionTime":"2026-01-30T16:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.838296 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 00:51:02.274470672 +0000 UTC Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.914778 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.915022 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.915087 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.915147 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:24 crc kubenswrapper[4712]: I0130 16:55:24.915204 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:24Z","lastTransitionTime":"2026-01-30T16:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.017864 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.017928 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.017949 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.017978 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.017995 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:25Z","lastTransitionTime":"2026-01-30T16:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.120749 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.121191 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.121384 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.121589 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.121764 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:25Z","lastTransitionTime":"2026-01-30T16:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.224652 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.224710 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.224721 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.224742 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.224754 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:25Z","lastTransitionTime":"2026-01-30T16:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.327444 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.327507 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.327520 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.327544 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.327565 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:25Z","lastTransitionTime":"2026-01-30T16:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.430271 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.430318 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.430360 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.430628 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.430644 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:25Z","lastTransitionTime":"2026-01-30T16:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.533351 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.533398 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.533411 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.533429 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.533441 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:25Z","lastTransitionTime":"2026-01-30T16:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.636712 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.636778 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.636848 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.636875 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.636893 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:25Z","lastTransitionTime":"2026-01-30T16:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.740053 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.740130 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.740147 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.740171 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.740188 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:25Z","lastTransitionTime":"2026-01-30T16:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.839089 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 17:51:26.071055276 +0000 UTC Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.843553 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.843613 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.843650 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.843680 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.843702 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:25Z","lastTransitionTime":"2026-01-30T16:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.946024 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.946090 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.946120 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.946142 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:25 crc kubenswrapper[4712]: I0130 16:55:25.946160 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:25Z","lastTransitionTime":"2026-01-30T16:55:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.048939 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.048996 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.049007 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.049021 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.049032 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:26Z","lastTransitionTime":"2026-01-30T16:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.151160 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.151488 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.151500 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.151514 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.151524 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:26Z","lastTransitionTime":"2026-01-30T16:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.254378 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.254449 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.254472 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.254500 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.254521 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:26Z","lastTransitionTime":"2026-01-30T16:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.358328 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.358388 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.358406 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.358430 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.358451 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:26Z","lastTransitionTime":"2026-01-30T16:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.460519 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.460567 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.460578 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.460593 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.460603 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:26Z","lastTransitionTime":"2026-01-30T16:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.563305 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.563360 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.563376 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.563399 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.563416 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:26Z","lastTransitionTime":"2026-01-30T16:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.665672 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.665742 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.665765 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.665791 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.665874 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:26Z","lastTransitionTime":"2026-01-30T16:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.768902 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.768949 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.768967 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.768988 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.769005 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:26Z","lastTransitionTime":"2026-01-30T16:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.799018 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:55:26 crc kubenswrapper[4712]: E0130 16:55:26.799241 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.799547 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:55:26 crc kubenswrapper[4712]: E0130 16:55:26.799685 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.800006 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:55:26 crc kubenswrapper[4712]: E0130 16:55:26.800122 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.800790 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:55:26 crc kubenswrapper[4712]: E0130 16:55:26.800952 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.839715 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 23:06:26.470988795 +0000 UTC Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.872327 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.872391 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.872404 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.872422 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.872434 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:26Z","lastTransitionTime":"2026-01-30T16:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.974962 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.974995 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.975007 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.975021 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:26 crc kubenswrapper[4712]: I0130 16:55:26.975029 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:26Z","lastTransitionTime":"2026-01-30T16:55:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.078866 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.078935 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.078945 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.078965 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.078979 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:27Z","lastTransitionTime":"2026-01-30T16:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.182249 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.182307 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.182321 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.182340 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.182355 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:27Z","lastTransitionTime":"2026-01-30T16:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.284929 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.284978 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.284989 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.285008 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.285020 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:27Z","lastTransitionTime":"2026-01-30T16:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.387932 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.388205 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.388275 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.388367 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.388452 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:27Z","lastTransitionTime":"2026-01-30T16:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.491022 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.491749 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.491895 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.491989 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.492067 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:27Z","lastTransitionTime":"2026-01-30T16:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.594370 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.594404 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.594414 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.594429 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.594439 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:27Z","lastTransitionTime":"2026-01-30T16:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.697156 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.697216 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.697229 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.697249 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.697261 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:27Z","lastTransitionTime":"2026-01-30T16:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.799868 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.800118 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.800178 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.800252 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.800312 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:27Z","lastTransitionTime":"2026-01-30T16:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.840323 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 09:30:34.341323182 +0000 UTC Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.902454 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.902503 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.902521 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.902543 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:27 crc kubenswrapper[4712]: I0130 16:55:27.902559 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:27Z","lastTransitionTime":"2026-01-30T16:55:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.004996 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.005028 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.005035 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.005047 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.005055 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:28Z","lastTransitionTime":"2026-01-30T16:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.108101 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.108153 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.108169 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.108195 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.108214 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:28Z","lastTransitionTime":"2026-01-30T16:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.210493 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.210836 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.210939 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.211032 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.211116 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:28Z","lastTransitionTime":"2026-01-30T16:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.313727 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.313772 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.313784 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.313833 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.313851 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:28Z","lastTransitionTime":"2026-01-30T16:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.415762 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.415815 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.415826 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.415842 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.415852 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:28Z","lastTransitionTime":"2026-01-30T16:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.517652 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.517737 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.517752 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.517773 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.517786 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:28Z","lastTransitionTime":"2026-01-30T16:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.620130 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.620168 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.620177 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.620191 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.620202 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:28Z","lastTransitionTime":"2026-01-30T16:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.722189 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.722222 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.722231 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.722245 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.722253 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:28Z","lastTransitionTime":"2026-01-30T16:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.798956 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.799028 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:55:28 crc kubenswrapper[4712]: E0130 16:55:28.799067 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.798963 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.798980 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:55:28 crc kubenswrapper[4712]: E0130 16:55:28.799171 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:55:28 crc kubenswrapper[4712]: E0130 16:55:28.799248 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f" Jan 30 16:55:28 crc kubenswrapper[4712]: E0130 16:55:28.799320 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.825022 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.825065 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.825078 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.825097 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.825109 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:28Z","lastTransitionTime":"2026-01-30T16:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.840656 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 00:23:21.701681818 +0000 UTC Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.927597 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.927630 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.927641 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.927656 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:28 crc kubenswrapper[4712]: I0130 16:55:28.927667 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:28Z","lastTransitionTime":"2026-01-30T16:55:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.030590 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.030904 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.030995 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.031104 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.031191 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:29Z","lastTransitionTime":"2026-01-30T16:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.133396 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.133768 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.133900 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.134006 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.134144 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:29Z","lastTransitionTime":"2026-01-30T16:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.237311 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.237345 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.237354 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.237367 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.237376 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:29Z","lastTransitionTime":"2026-01-30T16:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.339735 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.340020 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.340099 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.340189 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.340259 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:29Z","lastTransitionTime":"2026-01-30T16:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.442143 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.442190 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.442199 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.442212 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.442221 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:29Z","lastTransitionTime":"2026-01-30T16:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.544414 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.544439 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.544448 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.544460 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.544469 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:29Z","lastTransitionTime":"2026-01-30T16:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.647413 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.647462 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.647482 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.647504 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.647520 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:29Z","lastTransitionTime":"2026-01-30T16:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.750451 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.750491 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.750505 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.750521 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.750533 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:29Z","lastTransitionTime":"2026-01-30T16:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.841560 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 18:28:12.672560019 +0000 UTC Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.852252 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.852307 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.852321 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.852341 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.852353 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:29Z","lastTransitionTime":"2026-01-30T16:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.962634 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.962688 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.962697 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.962710 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:29 crc kubenswrapper[4712]: I0130 16:55:29.962718 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:29Z","lastTransitionTime":"2026-01-30T16:55:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.064598 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.064631 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.064641 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.064653 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.064662 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:30Z","lastTransitionTime":"2026-01-30T16:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.167388 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.167424 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.167435 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.167451 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.167463 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:30Z","lastTransitionTime":"2026-01-30T16:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.269259 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.269285 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.269292 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.269304 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.269312 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:30Z","lastTransitionTime":"2026-01-30T16:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.371471 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.371502 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.371512 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.371527 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.371537 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:30Z","lastTransitionTime":"2026-01-30T16:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.466410 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/abacbc6e-6514-4db6-80b5-23570952c86f-metrics-certs\") pod \"network-metrics-daemon-lpb6h\" (UID: \"abacbc6e-6514-4db6-80b5-23570952c86f\") " pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:55:30 crc kubenswrapper[4712]: E0130 16:55:30.466617 4712 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:55:30 crc kubenswrapper[4712]: E0130 16:55:30.466818 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/abacbc6e-6514-4db6-80b5-23570952c86f-metrics-certs podName:abacbc6e-6514-4db6-80b5-23570952c86f nodeName:}" failed. No retries permitted until 2026-01-30 16:56:02.466784261 +0000 UTC m=+99.373793730 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/abacbc6e-6514-4db6-80b5-23570952c86f-metrics-certs") pod "network-metrics-daemon-lpb6h" (UID: "abacbc6e-6514-4db6-80b5-23570952c86f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.473749 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.473774 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.473782 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.473814 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.473823 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:30Z","lastTransitionTime":"2026-01-30T16:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.576091 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.576125 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.576138 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.576156 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.576167 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:30Z","lastTransitionTime":"2026-01-30T16:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.678918 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.678959 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.678971 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.678988 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.679000 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:30Z","lastTransitionTime":"2026-01-30T16:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.780999 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.781032 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.781040 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.781054 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.781065 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:30Z","lastTransitionTime":"2026-01-30T16:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.799285 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:55:30 crc kubenswrapper[4712]: E0130 16:55:30.799396 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.799549 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:55:30 crc kubenswrapper[4712]: E0130 16:55:30.799616 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.799732 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:55:30 crc kubenswrapper[4712]: E0130 16:55:30.799816 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.799937 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:55:30 crc kubenswrapper[4712]: E0130 16:55:30.800068 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.815663 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.815831 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.815950 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.816067 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.816165 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:30Z","lastTransitionTime":"2026-01-30T16:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:30 crc kubenswrapper[4712]: E0130 16:55:30.829556 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:30Z is after 
2025-08-24T17:21:41Z" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.833817 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.833850 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.833861 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.833876 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.833887 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:30Z","lastTransitionTime":"2026-01-30T16:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.841893 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 05:32:12.076647158 +0000 UTC Jan 30 16:55:30 crc kubenswrapper[4712]: E0130 16:55:30.849574 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:30Z is after 
2025-08-24T17:21:41Z" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.856143 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.856174 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.856181 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.856195 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.856205 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:30Z","lastTransitionTime":"2026-01-30T16:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:30 crc kubenswrapper[4712]: E0130 16:55:30.869244 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:30Z is after 
2025-08-24T17:21:41Z" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.872160 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.872196 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.872208 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.872223 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.872232 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:30Z","lastTransitionTime":"2026-01-30T16:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:30 crc kubenswrapper[4712]: E0130 16:55:30.884389 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:30Z is after 
2025-08-24T17:21:41Z" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.887647 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.887702 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.887712 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.887725 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.887734 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:30Z","lastTransitionTime":"2026-01-30T16:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:30 crc kubenswrapper[4712]: E0130 16:55:30.897954 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:30Z is after 
2025-08-24T17:21:41Z" Jan 30 16:55:30 crc kubenswrapper[4712]: E0130 16:55:30.898070 4712 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.899606 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.899638 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.899673 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.899693 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:30 crc kubenswrapper[4712]: I0130 16:55:30.899705 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:30Z","lastTransitionTime":"2026-01-30T16:55:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.002440 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.002483 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.002491 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.002505 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.002513 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:31Z","lastTransitionTime":"2026-01-30T16:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.104447 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.104482 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.104495 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.104511 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.104522 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:31Z","lastTransitionTime":"2026-01-30T16:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.206531 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.206594 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.206611 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.206637 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.206654 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:31Z","lastTransitionTime":"2026-01-30T16:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.308303 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.308331 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.308340 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.308353 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.308362 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:31Z","lastTransitionTime":"2026-01-30T16:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.410970 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.411014 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.411027 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.411043 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.411054 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:31Z","lastTransitionTime":"2026-01-30T16:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.513195 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.513232 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.513243 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.513256 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.513266 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:31Z","lastTransitionTime":"2026-01-30T16:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.615496 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.615540 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.615552 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.615570 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.615582 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:31Z","lastTransitionTime":"2026-01-30T16:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.718360 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.718402 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.718419 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.718439 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.718452 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:31Z","lastTransitionTime":"2026-01-30T16:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.821003 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.821049 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.821059 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.821073 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.821085 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:31Z","lastTransitionTime":"2026-01-30T16:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.842688 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 14:26:07.499496496 +0000 UTC Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.923304 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.923358 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.923379 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.923408 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:31 crc kubenswrapper[4712]: I0130 16:55:31.923451 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:31Z","lastTransitionTime":"2026-01-30T16:55:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.025393 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.025431 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.025442 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.025458 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.025468 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:32Z","lastTransitionTime":"2026-01-30T16:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.127251 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.127285 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.127295 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.127311 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.127322 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:32Z","lastTransitionTime":"2026-01-30T16:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.230175 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.230214 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.230251 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.230269 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.230281 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:32Z","lastTransitionTime":"2026-01-30T16:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.245163 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9vnxv_dcd71c7c-942c-4c29-969e-45d946f356c8/kube-multus/0.log" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.245197 4712 generic.go:334] "Generic (PLEG): container finished" podID="dcd71c7c-942c-4c29-969e-45d946f356c8" containerID="93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4" exitCode=1 Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.245216 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9vnxv" event={"ID":"dcd71c7c-942c-4c29-969e-45d946f356c8","Type":"ContainerDied","Data":"93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4"} Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.245474 4712 scope.go:117] "RemoveContainer" containerID="93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.261103 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a59aa188d91c356dc946cef975ab964cb48ed5ad8a7b1b1a97bbdf4f6a463f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:32Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.273162 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:32Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.286428 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:32Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.297160 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7682a4940bfaf61053fc403898dd912a57e1152b7f646f65d8931bbe5a5aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:32Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.311244 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4777970a-81c4-4412-a06b-641a8343a749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d50d143c711318276147641c1904b0f9c28209e0ead64b6bc2578d0914e821a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":
\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11
\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-69v8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:32Z is after 2025-08-24T17:21:41Z"
Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.320615 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lpb6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abacbc6e-6514-4db6-80b5-23570952c86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnj8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnj8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lpb6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:32Z is after 2025-08-24T17:21:41Z"
Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.342045 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dd639c-2367-490e-801b-9042a158c962\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b98b043cb83155f1304d4e3e7afaa43921f5f9bd288e2d50222df3c549a97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5aceb9806198f5a718418c0834056dadafaec6a0d93a3d4253f3d18d3e7f6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a5b6fd14c3399f1335ab849702db5ae6ab412a259215225a3d1b45afdf29c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7424ed8c0152d8e7a206225cef7124928cbd700835cfa9f9c4e523bfe900f3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49662995d818377576fc281d55d5d944d785d5a0a11d45e86e9f08ec7fa2bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:32Z is after 2025-08-24T17:21:41Z"
Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.354205 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f1f355d-892f-40a3-8fae-090e9e69b585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c2d61cc827e2a3a1ff4d597f89532719a52fe5fd353eb2ec9905533cfb8897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0930b443376f024df5c05a35328264b347e4b58fef7c64e9bb50b084c461e19f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b9e52241ae256ff758f4cf4bdd6a588a6d37cdc5185c372a9942d9d2c9e8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:32Z is after 2025-08-24T17:21:41Z"
Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.363979 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e483f7201537775910f8d1a99625c35050e43bebb2947e7c7d076cccd2a3fe1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:32Z is after 2025-08-24T17:21:41Z"
Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.364436 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.364454 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.364462 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.364476 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.364485 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:32Z","lastTransitionTime":"2026-01-30T16:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.388057 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93651476-fd00-4a9e-934a-73537f1d103e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://875be8ec13c88cdb1b731eb4d86874411fd465a9
07a96b68697829a8ea15427e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875be8ec13c88cdb1b731eb4d86874411fd465a907a96b68697829a8ea15427e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:55:11Z\\\",\\\"message\\\":\\\"enshift-kube-controller-manager/kube-controller-manager-crc openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-kube-apiserver/kube-apiserver-crc]\\\\nI0130 16:55:11.607023 6304 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0130 16:55:11.607046 6304 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 16:55:11.607061 6304 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 16:55:11.606886 6304 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler-operator/metrics]} name:Service_openshift-kube-scheduler-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.233:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1dc899db-4498-4b7a-8437-861940b962e7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 16:55:11.607081 6304 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:55:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-228xs_openshift-ovn-kubernetes(93651476-fd00-4a9e-934a-73537f1d103e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-228xs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:32Z is after 2025-08-24T17:21:41Z"
Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.402189 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ab27748-3507-429f-888b-b45b4d17b014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d979ececdc18afdb1f364feaa3244db42fef8e523e58fc43b94b1775f9530e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"\\\\nI0130 16:54:44.564860 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:54:44.564709 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564904 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564720 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:54:44.564916 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:54:44.565055 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:54:44.565065 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:54:44.564730 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2769129766/tls.crt::/tmp/serving-cert-2769129766/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] 
issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769792078\\\\\\\\\\\\\\\" (2026-01-30 16:54:37 +0000 UTC to 2026-03-01 16:54:38 +0000 UTC (now=2026-01-30 16:54:44.533361706 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565458 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792084\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792084\\\\\\\\\\\\\\\" (2026-01-30 15:54:43 +0000 UTC to 2027-01-30 15:54:43 +0000 UTC (now=2026-01-30 16:54:44.565435832 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565474 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:54:44.565499 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:54:44.565514 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:32Z is after 2025-08-24T17:21:41Z"
Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.414180 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:32Z is after 2025-08-24T17:21:41Z"
Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.423608 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k255f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2482315f-1b5d-4a27-a9d9-97f4780c1869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e7482e63cefa1e8dfa0adec1061ace4e1a482a646844ba1d77038309e904cf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh9k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k255f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:32Z is after 2025-08-24T17:21:41Z"
Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.432683 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea67b02c-fc08-4a69-8c7f-c8da661a12ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea2b37d1f833ec9e1eba6b034b7178206d4c32d383bdaf40b270c3f7219de4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://529cdf849b64d965fb2b3276a2e033621e7695668b6b05041603ad93659a1c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4f9lf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:32Z is after 2025-08-24T17:21:41Z"
Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.449046 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a40d940-4f5a-42b6-80cb-fe98c14066c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4750ebab3eaeb8b0c465d2257c417e68692c999f382e05630a3f317f3f9ea65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd316abcb06f9cb980b110261410e1646a36fe9c70e3384aa128b178272fb6d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40670c5fb8ecc02e067cbb1ad22ade50ba2c40d03ff8b3b3eac1c0b7f3e1f599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369458cf36c7825613a5613214a88605b5a6247cbd2465f7bb924facf4d573a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://369458cf36c7825613a5613214a88605b5a6247cbd2465f7bb924facf4d573a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:32Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.462209 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a4e02635c5d1398f5642a4f78ab59f957e82d7b595048b900f7f57998072b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca104420859d47e9447dba6c7629515ac5d94c76d90029f8b02677814c6cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:32Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.466770 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.466825 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.466835 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.466848 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.466858 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:32Z","lastTransitionTime":"2026-01-30T16:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.473231 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe68da7f075332bd889ca6351f6c58e8a4e2cb0a14e68dd9a882f15c348ec5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:32Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.485932 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:32Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:55:32Z\\\",\\\"message\\\":\\\"2026-01-30T16:54:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ba54d85f-38b5-4a96-b6a2-74f61114ba0c\\\\n2026-01-30T16:54:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ba54d85f-38b5-4a96-b6a2-74f61114ba0c to /host/opt/cni/bin/\\\\n2026-01-30T16:54:47Z [verbose] multus-daemon started\\\\n2026-01-30T16:54:47Z [verbose] Readiness Indicator file check\\\\n2026-01-30T16:55:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:32Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.569247 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.569514 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.569605 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.569720 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.569839 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:32Z","lastTransitionTime":"2026-01-30T16:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.672448 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.672512 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.672526 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.672547 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.672558 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:32Z","lastTransitionTime":"2026-01-30T16:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.775228 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.775275 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.775289 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.775307 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.775318 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:32Z","lastTransitionTime":"2026-01-30T16:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.798609 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.798665 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.798637 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.798723 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:55:32 crc kubenswrapper[4712]: E0130 16:55:32.798769 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f" Jan 30 16:55:32 crc kubenswrapper[4712]: E0130 16:55:32.798891 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:55:32 crc kubenswrapper[4712]: E0130 16:55:32.798994 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:55:32 crc kubenswrapper[4712]: E0130 16:55:32.799072 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.843448 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 20:13:50.463382624 +0000 UTC Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.877239 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.877564 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.877697 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.877869 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.877994 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:32Z","lastTransitionTime":"2026-01-30T16:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.981342 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.981377 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.981386 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.981400 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:32 crc kubenswrapper[4712]: I0130 16:55:32.981409 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:32Z","lastTransitionTime":"2026-01-30T16:55:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.084101 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.084158 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.084167 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.084180 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.084188 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:33Z","lastTransitionTime":"2026-01-30T16:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.187735 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.187781 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.187813 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.187834 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.187845 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:33Z","lastTransitionTime":"2026-01-30T16:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.252103 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9vnxv_dcd71c7c-942c-4c29-969e-45d946f356c8/kube-multus/0.log" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.252169 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9vnxv" event={"ID":"dcd71c7c-942c-4c29-969e-45d946f356c8","Type":"ContainerStarted","Data":"383cb9db140e32a25c872a2355da98c9b6e39191bc10d76b4420e18580464c00"} Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.265575 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a59aa188d91c356dc946cef975ab964cb48ed5ad8a7b1b1a97bbdf4f6a463f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.277860 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.290380 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.290464 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.290481 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.290502 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.290516 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:33Z","lastTransitionTime":"2026-01-30T16:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.291073 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.302337 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7682a4940bfaf61053fc403898dd912a57e1152b7f646f65d8931bbe5a5aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.318025 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4777970a-81c4-4412-a06b-641a8343a749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d50d143c711318276147641c1904b0f9c28209e0ead64b6bc2578d0914e821a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-69v8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.330815 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lpb6h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abacbc6e-6514-4db6-80b5-23570952c86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnj8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnj8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lpb6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.347378 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dd639c-2367-490e-801b-9042a158c962\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b98b043cb83155f1304d4e3e7afaa43921f5f9bd288e2d50222df3c549a97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5aceb9806198f5a718418c0834056dadafaec6a0d93a3d4253f3d18d3e7f6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a5b6fd14c3399f1335ab849702db5ae6ab412a259215225a3d1b45afdf29c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7424ed8c0152d8e7a206225cef7124928cbd700
835cfa9f9c4e523bfe900f3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49662995d818377576fc281d55d5d944d785d5a0a11d45e86e9f08ec7fa2bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.360120 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f1f355d-892f-40a3-8fae-090e9e69b585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c2d61cc827e2a3a1ff4d597f89532719a52fe5fd353eb2ec9905533cfb8897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0930b443376f024df5c05a35328264b347e4b58fef7c64e9bb50b084c461e19f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b9e52241ae256ff758f4cf4bdd6a588a6d37cdc5185c372a9942d9d2c9e8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.369411 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e483f7201537775910f8d1a99625c35050e43bebb2947e7c7d076cccd2a3fe1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.389723 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93651476-fd00-4a9e-934a-73537f1d103e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://875be8ec13c88cdb1b731eb4d86874411fd465a9
07a96b68697829a8ea15427e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875be8ec13c88cdb1b731eb4d86874411fd465a907a96b68697829a8ea15427e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:55:11Z\\\",\\\"message\\\":\\\"enshift-kube-controller-manager/kube-controller-manager-crc openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-kube-apiserver/kube-apiserver-crc]\\\\nI0130 16:55:11.607023 6304 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0130 16:55:11.607046 6304 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 16:55:11.607061 6304 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 16:55:11.606886 6304 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler-operator/metrics]} name:Service_openshift-kube-scheduler-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.233:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1dc899db-4498-4b7a-8437-861940b962e7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 16:55:11.607081 6304 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:55:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-228xs_openshift-ovn-kubernetes(93651476-fd00-4a9e-934a-73537f1d103e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-228xs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.393309 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.393341 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.393350 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.393365 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.393378 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:33Z","lastTransitionTime":"2026-01-30T16:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.403756 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ab27748-3507-429f-888b-b45b4d17b014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d979ececdc18afdb1f364feaa3244db42fef8e523e58fc43b94b1775f9530e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"\\\\nI0130 16:54:44.564860 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:54:44.564709 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564904 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564720 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:54:44.564916 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:54:44.565055 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:54:44.565065 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:54:44.564730 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2769129766/tls.crt::/tmp/serving-cert-2769129766/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769792078\\\\\\\\\\\\\\\" (2026-01-30 16:54:37 +0000 UTC to 2026-03-01 16:54:38 +0000 UTC (now=2026-01-30 16:54:44.533361706 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565458 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792084\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792084\\\\\\\\\\\\\\\" (2026-01-30 15:54:43 +0000 UTC to 2027-01-30 15:54:43 +0000 UTC (now=2026-01-30 16:54:44.565435832 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565474 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:54:44.565499 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:54:44.565514 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.416107 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.425182 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k255f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2482315f-1b5d-4a27-a9d9-97f4780c1869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e7482e63cefa1e8dfa0adec1061ace4e1a482a646844ba1d77038309e904cf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh9k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k255f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.435076 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea67b02c-fc08-4a69-8c7f-c8da661a12ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea2b37d1f833ec9e1eba6b034b7178206d4c32d383bdaf40b270c3f7219de4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://529cdf849b64d965fb2b3276a2e033621e7695668b6b05041603ad93659a1c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4f9lf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:33Z is after 2025-08-24T17:21:41Z" Jan 30 
16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.445191 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a40d940-4f5a-42b6-80cb-fe98c14066c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4750ebab3eaeb8b0c465d2257c417e68692c999f382e05630a3f317f3f9ea65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd316abcb06f9cb980b110261410e1646a36fe9c70e3384aa128b178272fb6d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40670c5fb8ecc02e067cbb1ad22ade50ba2c40d03ff8b3b3eac1c0b7f3e1f599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369458cf36c7825613a5613214a88605b5a6247cbd2465f7bb924facf4d573a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://369458cf36c7825613a5613214a88605b5a6247cbd2465f7bb924facf4d573a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.459736 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a4e02635c5d1398f5642a4f78ab59f957e82d7b595048b900f7f57998072b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca104420859d47e9447dba6c7629515ac5d94c76d90029f8b02677814c6cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.470881 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe68da7f075332bd889ca6351f6c58e8a4e2cb0a14e68dd9a882f15c348ec5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c2
8fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.484248 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://383cb9db140e32a25c872a2355da98c9b6e39191bc10d76b4420e18580464c00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:55:32Z\\\",\\\"message\\\":\\\"2026-01-30T16:54:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ba54d85f-38b5-4a96-b6a2-74f61114ba0c\\\\n2026-01-30T16:54:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ba54d85f-38b5-4a96-b6a2-74f61114ba0c to /host/opt/cni/bin/\\\\n2026-01-30T16:54:47Z [verbose] multus-daemon started\\\\n2026-01-30T16:54:47Z [verbose] Readiness Indicator file check\\\\n2026-01-30T16:55:32Z [error] have 
you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.496185 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.496227 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.496237 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.496254 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.496266 4712 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:33Z","lastTransitionTime":"2026-01-30T16:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.598533 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.598582 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.598595 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.598607 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.598617 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:33Z","lastTransitionTime":"2026-01-30T16:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.701507 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.701555 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.701563 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.701576 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.701584 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:33Z","lastTransitionTime":"2026-01-30T16:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.803392 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.803421 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.803430 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.803445 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.803454 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:33Z","lastTransitionTime":"2026-01-30T16:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.817565 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.829833 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7682a4940bfaf61053fc403898dd912a57e1152b7f646f65d8931bbe5a5aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.843697 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4777970a-81c4-4412-a06b-641a8343a749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d50d143c711318276147641c1904b0f9c28209e0ead64b6bc2578d0914e821a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-69v8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:33Z is after 
2025-08-24T17:21:41Z" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.844064 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 16:37:37.798822698 +0000 UTC Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.853783 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lpb6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abacbc6e-6514-4db6-80b5-23570952c86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnj8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnj8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lpb6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:33 crc 
kubenswrapper[4712]: I0130 16:55:33.866272 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a59aa188d91c356dc946cef975ab964cb48ed5ad8a7b1b1a97bbdf4f6a463f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.882415 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.904758 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93651476-fd00-4a9e-934a-73537f1d103e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://875be8ec13c88cdb1b731eb4d86874411fd465a9
07a96b68697829a8ea15427e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875be8ec13c88cdb1b731eb4d86874411fd465a907a96b68697829a8ea15427e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:55:11Z\\\",\\\"message\\\":\\\"enshift-kube-controller-manager/kube-controller-manager-crc openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-kube-apiserver/kube-apiserver-crc]\\\\nI0130 16:55:11.607023 6304 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0130 16:55:11.607046 6304 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 16:55:11.607061 6304 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 16:55:11.606886 6304 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler-operator/metrics]} name:Service_openshift-kube-scheduler-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.233:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1dc899db-4498-4b7a-8437-861940b962e7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 16:55:11.607081 6304 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:55:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-228xs_openshift-ovn-kubernetes(93651476-fd00-4a9e-934a-73537f1d103e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-228xs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.906203 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.906568 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.906648 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.906757 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.906866 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:33Z","lastTransitionTime":"2026-01-30T16:55:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.923957 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dd639c-2367-490e-801b-9042a158c962\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b98b043cb83155f1304d4e3e7afaa43921f5f9bd288e2d50222df3c549a97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5aceb9806198f5a718418c0834056dadafaec6a0d93a3d4253f3d18d3e7f6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a5b6fd14c3399f1335ab849702db5ae6ab412a259215225a3d1b45afdf29c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7424ed8c0152d8e7a206225cef7124928cbd700835cfa9f9c4e523bfe900f3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49662995d818377576fc281d55d5d944d785d5a0a11d45e86e9f08ec7fa2bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T16:54:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.935852 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f1f355d-892f-40a3-8fae-090e9e69b585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c2d61cc827e2a3a1ff4d597f89532719a52fe5fd353eb2ec9905533cfb8897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0930b443376f024df5c05a35328264b347e4b58fef7c64e9bb50b084c461e19f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b9e52241ae256ff758f4cf4bdd6a588a6d37cdc5185c372a9942d9d2c9e8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.947230 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e483f7201537775910f8d1a99625c35050e43bebb2947e7c7d076cccd2a3fe1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-30T16:55:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.958184 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea67b02c-fc08-4a69-8c7f-c8da661a12ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea2b37d1f833ec9e1eba6b034b7178206d4c32d383bdaf40b270c3f7219de4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://529cdf849b64d965fb2b3276a2e033621e7695668b6b05041603ad93659a1c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4f9lf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.971983 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ab27748-3507-429f-888b-b45b4d17b014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d979ececdc18afdb1f364feaa3244db42fef8e523e58fc43b94b1775f9530e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"\\\\nI0130 16:54:44.564860 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:54:44.564709 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564904 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564720 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:54:44.564916 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:54:44.565055 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:54:44.565065 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:54:44.564730 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2769129766/tls.crt::/tmp/serving-cert-2769129766/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769792078\\\\\\\\\\\\\\\" (2026-01-30 16:54:37 +0000 UTC to 2026-03-01 16:54:38 +0000 UTC (now=2026-01-30 16:54:44.533361706 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565458 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792084\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792084\\\\\\\\\\\\\\\" (2026-01-30 15:54:43 +0000 UTC to 2027-01-30 15:54:43 +0000 UTC (now=2026-01-30 16:54:44.565435832 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565474 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:54:44.565499 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:54:44.565514 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.987139 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:33 crc kubenswrapper[4712]: I0130 16:55:33.997582 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k255f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2482315f-1b5d-4a27-a9d9-97f4780c1869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e7482e63cefa1e8dfa0adec1061ace4e1a482a646844ba1d77038309e904cf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh9k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k255f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:33Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.008761 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.008839 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.008854 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.008873 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.008884 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:34Z","lastTransitionTime":"2026-01-30T16:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.013235 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://383cb9db140e32a25c872a2355da98c9b6e39191bc10d76b4420e18580464c00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:55:32Z\\\",\\\"message\\\":\\\"2026-01-30T16:54:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ba54d85f-38b5-4a96-b6a2-74f61114ba0c\\\\n2026-01-30T16:54:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ba54d85f-38b5-4a96-b6a2-74f61114ba0c to /host/opt/cni/bin/\\\\n2026-01-30T16:54:47Z [verbose] multus-daemon started\\\\n2026-01-30T16:54:47Z [verbose] Readiness Indicator file check\\\\n2026-01-30T16:55:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:34Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.026147 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a40d940-4f5a-42b6-80cb-fe98c14066c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4750ebab3eaeb8b0c465d2257c417e68692c999f382e05630a3f317f3f9ea65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd316abcb06f9cb980b110261410e1646a36fe9c70e3384aa128b178272fb6d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40670c5fb8ecc02e067cbb1ad22ade50ba2c40d03ff8b3b3eac1c0b7f3e1f599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369458cf36c7825613a5613214a88605b5a6247cbd2465f7bb924facf4d573a8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://369458cf36c7825613a5613214a88605b5a6247cbd2465f7bb924facf4d573a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:34Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.041188 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a4e02635c5d1398f5642a4f78ab59f957e82d7b595048b900f7f57998072b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca104420859d47e9447dba6c7629515ac5d94c76d90029f8b02677814c6cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:34Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.054539 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe68da7f075332bd889ca6351f6c58e8a4e2cb0a14e68dd9a882f15c348ec5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:34Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.113663 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.113709 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.113722 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.113739 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.113752 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:34Z","lastTransitionTime":"2026-01-30T16:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.215835 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.215885 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.215895 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.215912 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.215924 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:34Z","lastTransitionTime":"2026-01-30T16:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.318012 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.318096 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.318106 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.318125 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.318138 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:34Z","lastTransitionTime":"2026-01-30T16:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.420703 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.420997 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.421091 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.421201 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.421365 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:34Z","lastTransitionTime":"2026-01-30T16:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.524261 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.524302 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.524315 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.524337 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.524349 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:34Z","lastTransitionTime":"2026-01-30T16:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
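The NodeNotReady heartbeats all carry the same cause: the runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ holds no CNI configuration yet (OVN-Kubernetes has not written its config file). Below is a minimal, hypothetical sketch of that kind of directory check; the accepted extensions (.conf, .conflist, .json) follow the usual CNI loader conventions and are an assumption here, since the real check lives in the container runtime's CNI plugin loader.

// cni_ready_check.go — sketch of a "does any CNI config exist?" probe.
package main

import (
    "fmt"
    "os"
    "path/filepath"
)

// hasCNIConfig reports whether dir contains at least one CNI network
// configuration file, by extension.
func hasCNIConfig(dir string) (bool, error) {
    entries, err := os.ReadDir(dir)
    if err != nil {
        return false, err
    }
    for _, e := range entries {
        if e.IsDir() {
            continue
        }
        switch filepath.Ext(e.Name()) {
        case ".conf", ".conflist", ".json":
            return true, nil
        }
    }
    return false, nil
}

func main() {
    ok, err := hasCNIConfig("/etc/kubernetes/cni/net.d")
    if err != nil || !ok {
        fmt.Println("container runtime network not ready: NetworkReady=false")
        return
    }
    fmt.Println("NetworkReady=true")
}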
Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.627497 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.627548 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.627560 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.627580 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.627592 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:34Z","lastTransitionTime":"2026-01-30T16:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.730721 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.731446 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.731577 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.731659 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.731737 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:34Z","lastTransitionTime":"2026-01-30T16:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.798890 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h"
Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.798890 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.799016 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:55:34 crc kubenswrapper[4712]: E0130 16:55:34.799098 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
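The condition object the kubelet logs via setters.go on each of these heartbeats is plain JSON, so decoding it locally shows exactly what is being reported to the API server. The struct below is a local sketch whose field names are chosen to match the JSON in the log line, not imported from k8s.io/api.

// node_condition.go — decode the Ready condition from the log line above.
package main

import (
    "encoding/json"
    "fmt"
)

type NodeCondition struct {
    Type               string `json:"type"`
    Status             string `json:"status"`
    LastHeartbeatTime  string `json:"lastHeartbeatTime"`
    LastTransitionTime string `json:"lastTransitionTime"`
    Reason             string `json:"reason"`
    Message            string `json:"message"`
}

func main() {
    // Verbatim condition payload from the setters.go entry above.
    raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:34Z","lastTransitionTime":"2026-01-30T16:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`
    var c NodeCondition
    if err := json.Unmarshal([]byte(raw), &c); err != nil {
        panic(err)
    }
    fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
}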
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.799145 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:55:34 crc kubenswrapper[4712]: E0130 16:55:34.799044 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f" Jan 30 16:55:34 crc kubenswrapper[4712]: E0130 16:55:34.799227 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:55:34 crc kubenswrapper[4712]: E0130 16:55:34.799290 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.834116 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.834148 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.834159 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.834172 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.834183 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:34Z","lastTransitionTime":"2026-01-30T16:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.844434 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 20:09:16.026378988 +0000 UTC
Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.937104 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.937156 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.937169 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.937192 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:34 crc kubenswrapper[4712]: I0130 16:55:34.937207 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:34Z","lastTransitionTime":"2026-01-30T16:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.044196 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.044660 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.045025 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.045163 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.045291 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:35Z","lastTransitionTime":"2026-01-30T16:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.148033 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.148457 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.148555 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.148636 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.148705 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:35Z","lastTransitionTime":"2026-01-30T16:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.251395 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.251468 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.251477 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.251492 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.251504 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:35Z","lastTransitionTime":"2026-01-30T16:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.354291 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.354332 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.354341 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.354356 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.354365 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:35Z","lastTransitionTime":"2026-01-30T16:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.457295 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.457338 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.457348 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.457364 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.457374 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:35Z","lastTransitionTime":"2026-01-30T16:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.559921 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.559957 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.559966 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.559981 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.559995 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:35Z","lastTransitionTime":"2026-01-30T16:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.662987 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.663016 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.663024 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.663037 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.663045 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:35Z","lastTransitionTime":"2026-01-30T16:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.765212 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.765267 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.765276 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.765289 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.765298 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:35Z","lastTransitionTime":"2026-01-30T16:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.801426 4712 scope.go:117] "RemoveContainer" containerID="875be8ec13c88cdb1b731eb4d86874411fd465a907a96b68697829a8ea15427e"
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.845462 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 00:56:20.608170063 +0000 UTC
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.868135 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.868174 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.868186 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.868203 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.868215 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:35Z","lastTransitionTime":"2026-01-30T16:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
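The two certificate_manager.go:356 lines in this stretch report the same kubelet-serving certificate expiry (2026-02-24 05:53:03 UTC) but different rotation deadlines: the deadline is re-drawn with jitter inside the certificate's lifetime each time the manager evaluates it, which is why back-to-back log lines disagree. A small sketch of that computation follows; the 70-90% jitter window is an assumption about the client-go certificate manager's behaviour, and notBefore is a placeholder since the issue time does not appear in the log.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	// Expiry taken from the certificate_manager.go:356 lines above.
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	// Placeholder issue time: the log does not show notBefore.
	notBefore := notAfter.Add(-90 * 24 * time.Hour)
	lifetime := notAfter.Sub(notBefore)
	// Each evaluation draws a fresh jittered deadline, so two successive
	// log lines can report different deadlines for the same certificate.
	for i := 0; i < 2; i++ {
		frac := 0.7 + 0.2*rand.Float64() // assumed 70-90% window
		deadline := notBefore.Add(time.Duration(frac * float64(lifetime)))
		fmt.Println("rotation deadline:", deadline.UTC())
	}
}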
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.971073 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.971113 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.971126 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.971145 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:35 crc kubenswrapper[4712]: I0130 16:55:35.971157 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:35Z","lastTransitionTime":"2026-01-30T16:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.074223 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.074297 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.074310 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.074327 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.074337 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:36Z","lastTransitionTime":"2026-01-30T16:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.176848 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.176876 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.176884 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.176897 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.176906 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:36Z","lastTransitionTime":"2026-01-30T16:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.262889 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-228xs_93651476-fd00-4a9e-934a-73537f1d103e/ovnkube-controller/2.log"
Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.265478 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" event={"ID":"93651476-fd00-4a9e-934a-73537f1d103e","Type":"ContainerStarted","Data":"12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a"}
Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.266428 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-228xs"
Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.279111 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.279139 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.279148 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.279160 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.279169 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:36Z","lastTransitionTime":"2026-01-30T16:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.283117 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe68da7f075332bd889ca6351f6c58e8a4e2cb0a14e68dd9a882f15c348ec5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.296838 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://383cb9db140e32a25c872a2355da98c9b6e39191bc10d76b4420e18580464c00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:55:32Z\\\",\\\"message\\\":\\\"2026-01-30T16:54:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ba54d85f-38b5-4a96-b6a2-74f61114ba0c\\\\n2026-01-30T16:54:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ba54d85f-38b5-4a96-b6a2-74f61114ba0c to /host/opt/cni/bin/\\\\n2026-01-30T16:54:47Z [verbose] multus-daemon started\\\\n2026-01-30T16:54:47Z [verbose] Readiness Indicator file check\\\\n2026-01-30T16:55:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.317860 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a40d940-4f5a-42b6-80cb-fe98c14066c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4750ebab3eaeb8b0c465d2257c417e68692c999f382e05630a3f317f3f9ea65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd316abcb06f9cb980b110261410e1646a36fe9c70e3384aa128b178272fb6d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40670c5fb8ecc02e067cbb1ad22ade50ba2c40d03ff8b3b3eac1c0b7f3e1f599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369458cf36c7825613a5613214a88605b5a6247cbd2465f7bb924facf4d573a8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://369458cf36c7825613a5613214a88605b5a6247cbd2465f7bb924facf4d573a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.338999 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a4e02635c5d1398f5642a4f78ab59f957e82d7b595048b900f7f57998072b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca104420859d47e9447dba6c7629515ac5d94c76d90029f8b02677814c6cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.361382 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.376925 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.381536 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.381584 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.381595 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.381613 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.381626 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:36Z","lastTransitionTime":"2026-01-30T16:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.387430 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7682a4940bfaf61053fc403898dd912a57e1152b7f646f65d8931bbe5a5aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.400037 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4777970a-81c4-4412-a06b-641a8343a749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d50d143c711318276147641c1904b0f9c28209e0ead64b6bc2578d0914e821a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-69v8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.410353 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lpb6h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abacbc6e-6514-4db6-80b5-23570952c86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnj8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnj8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lpb6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.422566 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a59aa188d91c356dc946cef975ab964cb48ed5ad8a7b1b1a97bbdf4f6a463f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.434542 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e483f7201537775910f8d1a99625c35050e43bebb2947e7c7d076cccd2a3fe1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.454031 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93651476-fd00-4a9e-934a-73537f1d103e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12fbba37359e054d06f64b1458a264cbe0c88556
7f33a867929e350a0de0e52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875be8ec13c88cdb1b731eb4d86874411fd465a907a96b68697829a8ea15427e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:55:11Z\\\",\\\"message\\\":\\\"enshift-kube-controller-manager/kube-controller-manager-crc openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-kube-apiserver/kube-apiserver-crc]\\\\nI0130 16:55:11.607023 6304 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0130 16:55:11.607046 6304 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 16:55:11.607061 6304 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 16:55:11.606886 6304 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler-operator/metrics]} name:Service_openshift-kube-scheduler-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.233:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1dc899db-4498-4b7a-8437-861940b962e7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 16:55:11.607081 6304 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:55:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-228xs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.472129 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dd639c-2367-490e-801b-9042a158c962\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b98b043cb83155f1304d4e3e7afaa43921f5f9bd288e2d50222df3c549a97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5aceb9806198f5a718418c0834056dadafaec6a0d93a3d4253f3d18d3e7f6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a5b6fd14c3399f1335ab849702db5ae6ab412a259215225a3d1b45afdf29c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7424ed8c0152d8e7a206225cef7124928cbd700
835cfa9f9c4e523bfe900f3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49662995d818377576fc281d55d5d944d785d5a0a11d45e86e9f08ec7fa2bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.483676 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.483726 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.483737 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.483750 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.483760 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:36Z","lastTransitionTime":"2026-01-30T16:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.487923 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f1f355d-892f-40a3-8fae-090e9e69b585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c2d61cc827e2a3a1ff4d597f89532719a52fe5fd353eb2ec9905533cfb8897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0930b443376f024df5c05a35328264b347e4b58fef7c64e9bb50b084c461e19f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b9e52241ae256ff758f4cf4bdd6a588a6d37cdc5185c372a9942d9d2c9e8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.503691 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k255f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2482315f-1b5d-4a27-a9d9-97f4780c1869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e7482e63cefa1e8dfa0adec1061ace4e1a482a646844ba1d77038309e904cf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh9k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k255f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.524574 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea67b02c-fc08-4a69-8c7f-c8da661a12ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea2b37d1f833ec9e1eba6b034b7178206d4c32d383bdaf40b270c3f7219de4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://529cdf849b64d965fb2b3276a2e033621e7695668b6b05041603ad93659a1c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4f9lf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:36Z is after 2025-08-24T17:21:41Z" Jan 30 
16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.547150 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ab27748-3507-429f-888b-b45b4d17b014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d979ececdc18afdb1f364feaa3244db42fef8e523e58fc43b94b1775f9530e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"\\\\nI0130 16:54:44.564860 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:54:44.564709 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564904 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564720 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:54:44.564916 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:54:44.565055 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:54:44.565065 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:54:44.564730 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2769129766/tls.crt::/tmp/serving-cert-2769129766/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769792078\\\\\\\\\\\\\\\" (2026-01-30 16:54:37 +0000 UTC to 2026-03-01 16:54:38 +0000 UTC (now=2026-01-30 16:54:44.533361706 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565458 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792084\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792084\\\\\\\\\\\\\\\" (2026-01-30 15:54:43 +0000 UTC to 2027-01-30 15:54:43 +0000 UTC (now=2026-01-30 16:54:44.565435832 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565474 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:54:44.565499 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:54:44.565514 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:36Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.561284 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:36Z is after 2025-08-24T17:21:41Z"
Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.586155 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.586387 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.586490 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.586596 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.586731 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:36Z","lastTransitionTime":"2026-01-30T16:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.688415 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.688626 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.688710 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.688815 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.688944 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:36Z","lastTransitionTime":"2026-01-30T16:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.791315 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.791349 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.791361 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.791378 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.791388 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:36Z","lastTransitionTime":"2026-01-30T16:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.798891 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.798928 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.798963 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:55:36 crc kubenswrapper[4712]: E0130 16:55:36.799102 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.799235 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h"
Jan 30 16:55:36 crc kubenswrapper[4712]: E0130 16:55:36.799347 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 16:55:36 crc kubenswrapper[4712]: E0130 16:55:36.799504 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f"
Jan 30 16:55:36 crc kubenswrapper[4712]: E0130 16:55:36.799584 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.846313 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 19:47:12.665670509 +0000 UTC
Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.894587 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.894650 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.894674 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.894704 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.894737 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:36Z","lastTransitionTime":"2026-01-30T16:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.996614 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.996662 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.996675 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.996691 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:36 crc kubenswrapper[4712]: I0130 16:55:36.996702 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:36Z","lastTransitionTime":"2026-01-30T16:55:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.099455 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.099508 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.099522 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.099539 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.099550 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:37Z","lastTransitionTime":"2026-01-30T16:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.201882 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.201935 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.201944 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.201956 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.202046 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:37Z","lastTransitionTime":"2026-01-30T16:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.269980 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-228xs_93651476-fd00-4a9e-934a-73537f1d103e/ovnkube-controller/3.log" Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.270889 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-228xs_93651476-fd00-4a9e-934a-73537f1d103e/ovnkube-controller/2.log" Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.273468 4712 generic.go:334] "Generic (PLEG): container finished" podID="93651476-fd00-4a9e-934a-73537f1d103e" containerID="12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a" exitCode=1 Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.273516 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" event={"ID":"93651476-fd00-4a9e-934a-73537f1d103e","Type":"ContainerDied","Data":"12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a"} Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.273561 4712 scope.go:117] "RemoveContainer" containerID="875be8ec13c88cdb1b731eb4d86874411fd465a907a96b68697829a8ea15427e" Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.274842 4712 scope.go:117] "RemoveContainer" containerID="12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a" Jan 30 16:55:37 crc kubenswrapper[4712]: E0130 16:55:37.275118 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-228xs_openshift-ovn-kubernetes(93651476-fd00-4a9e-934a-73537f1d103e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" podUID="93651476-fd00-4a9e-934a-73537f1d103e" Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.291946 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea67b02c-fc08-4a69-8c7f-c8da661a12ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea2b37d1f833ec9e1eba6b034b7178206d4c32d383bdaf40b270c3f7219de4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://529cdf849b64d965fb2b3276a2e033621e7695668b6b05041603ad93659a1c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4f9lf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:37Z is after 2025-08-24T17:21:41Z" Jan 30 
16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.304379 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.304413 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.304422 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.304438 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.304449 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:37Z","lastTransitionTime":"2026-01-30T16:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.312045 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ab27748-3507-429f-888b-b45b4d17b014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d
7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d979ececdc18afdb1f364feaa3244db42fef8e523e58fc43b94b1775f9530e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"\\\\nI0130 16:54:44.564860 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:54:44.564709 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564904 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564720 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:54:44.564916 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:54:44.565055 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:54:44.565065 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:54:44.564730 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2769129766/tls.crt::/tmp/serving-cert-2769129766/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769792078\\\\\\\\\\\\\\\" (2026-01-30 16:54:37 +0000 UTC to 2026-03-01 16:54:38 +0000 UTC (now=2026-01-30 16:54:44.533361706 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565458 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" 
certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792084\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792084\\\\\\\\\\\\\\\" (2026-01-30 15:54:43 +0000 UTC to 2027-01-30 15:54:43 +0000 UTC (now=2026-01-30 16:54:44.565435832 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565474 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:54:44.565499 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:54:44.565514 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:37Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.328218 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:37Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.339300 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k255f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2482315f-1b5d-4a27-a9d9-97f4780c1869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e7482e63cefa1e8dfa0adec1061ace4e1a482a646844ba1d77038309e904cf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh9k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k255f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:37Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.355423 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://383cb9db140e32a25c872a2355da98c9b6e39191bc10d76b4420e18580464c00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:55:32Z\\\",\\\"message\\\":\\\"2026-01-30T16:54:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ba54d85f-38b5-4a96-b6a2-74f61114ba0c\\\\n2026-01-30T16:54:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ba54d85f-38b5-4a96-b6a2-74f61114ba0c to /host/opt/cni/bin/\\\\n2026-01-30T16:54:47Z [verbose] multus-daemon started\\\\n2026-01-30T16:54:47Z [verbose] Readiness Indicator file check\\\\n2026-01-30T16:55:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:37Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.368287 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a40d940-4f5a-42b6-80cb-fe98c14066c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4750ebab3eaeb8b0c465d2257c417e68692c999f382e05630a3f317f3f9ea65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd316abcb06f9cb980b110261410e1646a36fe9c70e3384aa128b178272fb6d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40670c5fb8ecc02e067cbb1ad22ade50ba2c40d03ff8b3b3eac1c0b7f3e1f599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369458cf36c7825613a5613214a88605b5a6247cbd2465f7bb924facf4d573a8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://369458cf36c7825613a5613214a88605b5a6247cbd2465f7bb924facf4d573a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:37Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.382486 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a4e02635c5d1398f5642a4f78ab59f957e82d7b595048b900f7f57998072b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca104420859d47e9447dba6c7629515ac5d94c76d90029f8b02677814c6cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:37Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.395407 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe68da7f075332bd889ca6351f6c58e8a4e2cb0a14e68dd9a882f15c348ec5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:37Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.405878 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.405909 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.405917 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.405931 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.405939 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:37Z","lastTransitionTime":"2026-01-30T16:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.408026 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:37Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.417838 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7682a4940bfaf61053fc403898dd912a57e1152b7f646f65d8931bbe5a5aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:37Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.430704 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4777970a-81c4-4412-a06b-641a8343a749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d50d143c711318276147641c1904b0f9c28209e0ead64b6bc2578d0914e821a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-69v8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:37Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.441684 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lpb6h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abacbc6e-6514-4db6-80b5-23570952c86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnj8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnj8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lpb6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:37Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.455174 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a59aa188d91c356dc946cef975ab964cb48ed5ad8a7b1b1a97bbdf4f6a463f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:37Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.470265 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:37Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.491180 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93651476-fd00-4a9e-934a-73537f1d103e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12fbba37359e054d06f64b1458a264cbe0c88556
7f33a867929e350a0de0e52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://875be8ec13c88cdb1b731eb4d86874411fd465a907a96b68697829a8ea15427e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:55:11Z\\\",\\\"message\\\":\\\"enshift-kube-controller-manager/kube-controller-manager-crc openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-kube-apiserver/kube-apiserver-crc]\\\\nI0130 16:55:11.607023 6304 obj_retry.go:418] Waiting for all the *v1.Pod retry setup to complete in iterateRetryResources\\\\nI0130 16:55:11.607046 6304 obj_retry.go:303] Retry object setup: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 16:55:11.607061 6304 obj_retry.go:365] Adding new object: *v1.Pod openshift-kube-apiserver/kube-apiserver-crc\\\\nI0130 16:55:11.606886 6304 transact.go:42] Configuring OVN: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-kube-scheduler-operator/metrics]} name:Service_openshift-kube-scheduler-operator/metrics_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.4.233:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {1dc899db-4498-4b7a-8437-861940b962e7}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nF0130 16:55:11.607081 6304 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:55:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:55:36Z\\\",\\\"message\\\":\\\"lector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 16:55:36.712246 6670 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:55:36.713464 6670 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:55:36.713557 6670 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:55:36.713693 6670 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:55:36.714040 6670 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 16:55:36.755214 6670 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0130 16:55:36.755270 6670 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0130 
16:55:36.755403 6670 ovnkube.go:599] Stopped ovnkube\\\\nI0130 16:55:36.755452 6670 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 16:55:36.755594 6670 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:55:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\
"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-228xs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:37Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.508560 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.508625 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.508639 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.508656 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.508667 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:37Z","lastTransitionTime":"2026-01-30T16:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.513840 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dd639c-2367-490e-801b-9042a158c962\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b98b043cb83155f1304d4e3e7afaa43921f5f9bd288e2d50222df3c549a97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5aceb9806198f5a718418c0834056dadafaec6a0d93a3d4253f3d18d3e7f6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a5b6fd14c3399f1335ab849702db5ae6ab412a259215225a3d1b45afdf29c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7424ed8c0152d8e7a206225cef7124928cbd700835cfa9f9c4e523bfe900f3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49662995d818377576fc281d55d5d944d785d5a0a11d45e86e9f08ec7fa2bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T16:54:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:37Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.526158 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f1f355d-892f-40a3-8fae-090e9e69b585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c2d61cc827e2a3a1ff4d597f89532719a52fe5fd353eb2ec9905533cfb8897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0930b443376f024df5c05a35328264b347e4b58fef7c64e9bb50b084c461e19f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b9e52241ae256ff758f4cf4bdd6a588a6d37cdc5185c372a9942d9d2c9e8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:37Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.537276 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e483f7201537775910f8d1a99625c35050e43bebb2947e7c7d076cccd2a3fe1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-30T16:55:37Z is after 2025-08-24T17:21:41Z"
Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.610989 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.611037 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.611053 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.611069 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.611083 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:37Z","lastTransitionTime":"2026-01-30T16:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.714389 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.714424 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.714432 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.714445 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.714455 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:37Z","lastTransitionTime":"2026-01-30T16:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.818371 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.818410 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.818420 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.818443 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.818455 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:37Z","lastTransitionTime":"2026-01-30T16:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.847302 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 16:09:58.241394487 +0000 UTC
Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.920482 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.920547 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.920557 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.920570 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:37 crc kubenswrapper[4712]: I0130 16:55:37.920580 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:37Z","lastTransitionTime":"2026-01-30T16:55:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.022514 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.022564 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.022576 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.022592 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.022940 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:38Z","lastTransitionTime":"2026-01-30T16:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.125375 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.125408 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.125416 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.125428 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.125437 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:38Z","lastTransitionTime":"2026-01-30T16:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.227212 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.227243 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.227251 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.227263 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.227272 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:38Z","lastTransitionTime":"2026-01-30T16:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.283667 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-228xs_93651476-fd00-4a9e-934a-73537f1d103e/ovnkube-controller/3.log" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.287227 4712 scope.go:117] "RemoveContainer" containerID="12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a" Jan 30 16:55:38 crc kubenswrapper[4712]: E0130 16:55:38.287359 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-228xs_openshift-ovn-kubernetes(93651476-fd00-4a9e-934a-73537f1d103e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" podUID="93651476-fd00-4a9e-934a-73537f1d103e" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.301125 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4777970a-81c4-4412-a06b-641a8343a749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d50d143c711318276147641c1904b0f9c28209e0ead64b6bc2578d0914e821a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"term
inated\\\":{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e
360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-69v8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:38Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.312540 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lpb6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abacbc6e-6514-4db6-80b5-23570952c86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnj8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnj8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lpb6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:38Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.326817 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a59aa188d91c356dc946cef975ab964cb48ed5ad8a7b1b1a97bbdf4f6a463f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:38Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.334961 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.335221 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.335367 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.335456 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.335530 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:38Z","lastTransitionTime":"2026-01-30T16:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.338036 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:38Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.350190 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:38Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.364658 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7682a4940bfaf61053fc403898dd912a57e1152b7f646f65d8931bbe5a5aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:38Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.385289 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dd639c-2367-490e-801b-9042a158c962\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b98b043cb83155f1304d4e3e7afaa43921f5f9bd288e2d50222df3c549a97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5aceb9806198f5a718418c0834056dadafaec6a0d93a3d4253f3d18d3e7f6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-di
r\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a5b6fd14c3399f1335ab849702db5ae6ab412a259215225a3d1b45afdf29c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7424ed8c0152d8e7a206225cef7124928cbd700835cfa9f9c4e523bfe900f3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49662995d818377576fc281d55d5d944d785d5a0a11d45e86e9f08ec7fa2bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729
895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:38Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.398205 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f1f355d-892f-40a3-8fae-090e9e69b585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c2d61cc827e2a3a1ff4d597f89532719a52fe5fd353eb2ec9905533cfb8897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0930b443376f024df5c05a35328264b347e4b58fef7c64e9bb50b084c461e19f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b9e52241ae256ff758f4cf4bdd6a588a6d37cdc5185c372a9942d9d2c9e8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:38Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.409494 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e483f7201537775910f8d1a99625c35050e43bebb2947e7c7d076cccd2a3fe1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-30T16:55:38Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.428729 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93651476-fd00-4a9e-934a-73537f1d103e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257
453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:55:36Z\\\",\\\"message\\\":\\\"lector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 16:55:36.712246 6670 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:55:36.713464 6670 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:55:36.713557 6670 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:55:36.713693 6670 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:55:36.714040 6670 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 16:55:36.755214 6670 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0130 16:55:36.755270 6670 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0130 16:55:36.755403 6670 ovnkube.go:599] Stopped ovnkube\\\\nI0130 16:55:36.755452 6670 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 16:55:36.755594 6670 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:55:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=ovnkube-controller pod=ovnkube-node-228xs_openshift-ovn-kubernetes(93651476-fd00-4a9e-934a-73537f1d103e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-228xs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:38Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.437812 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.437859 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.437872 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.437889 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.438161 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:38Z","lastTransitionTime":"2026-01-30T16:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.443165 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ab27748-3507-429f-888b-b45b4d17b014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d979ececdc18afdb1f364feaa3244db42fef8e523e58fc43b94b1775f9530e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"\\\\nI0130 16:54:44.564860 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:54:44.564709 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564904 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564720 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:54:44.564916 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:54:44.565055 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:54:44.565065 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:54:44.564730 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2769129766/tls.crt::/tmp/serving-cert-2769129766/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769792078\\\\\\\\\\\\\\\" (2026-01-30 16:54:37 +0000 UTC to 2026-03-01 16:54:38 +0000 UTC (now=2026-01-30 16:54:44.533361706 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565458 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792084\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792084\\\\\\\\\\\\\\\" (2026-01-30 15:54:43 +0000 UTC to 2027-01-30 15:54:43 +0000 UTC (now=2026-01-30 16:54:44.565435832 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565474 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:54:44.565499 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:54:44.565514 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:38Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.456729 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:38Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.469359 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k255f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2482315f-1b5d-4a27-a9d9-97f4780c1869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e7482e63cefa1e8dfa0adec1061ace4e1a482a646844ba1d77038309e904cf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh9k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k255f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:38Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.481331 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea67b02c-fc08-4a69-8c7f-c8da661a12ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea2b37d1f833ec9e1eba6b034b7178206d4c32d383bdaf40b270c3f7219de4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://529cdf849b64d965fb2b3276a2e033621e7695668b6b05041603ad93659a1c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4f9lf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:38Z is after 2025-08-24T17:21:41Z" Jan 30 
16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.491086 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a40d940-4f5a-42b6-80cb-fe98c14066c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4750ebab3eaeb8b0c465d2257c417e68692c999f382e05630a3f317f3f9ea65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd316abcb06f9cb980b110261410e1646a36fe9c70e3384aa128b178272fb6d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40670c5fb8ecc02e067cbb1ad22ade50ba2c40d03ff8b3b3eac1c0b7f3e1f599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369458cf36c7825613a5613214a88605b5a6247cbd2465f7bb924facf4d573a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://369458cf36c7825613a5613214a88605b5a6247cbd2465f7bb924facf4d573a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:38Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.501875 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a4e02635c5d1398f5642a4f78ab59f957e82d7b595048b900f7f57998072b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca104420859d47e9447dba6c7629515ac5d94c76d90029f8b02677814c6cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\
\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:38Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.513046 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe68da7f075332bd889ca6351f6c58e8a4e2cb0a14e68dd9a882f15c348ec5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c2
8fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:38Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.525833 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://383cb9db140e32a25c872a2355da98c9b6e39191bc10d76b4420e18580464c00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:55:32Z\\\",\\\"message\\\":\\\"2026-01-30T16:54:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ba54d85f-38b5-4a96-b6a2-74f61114ba0c\\\\n2026-01-30T16:54:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ba54d85f-38b5-4a96-b6a2-74f61114ba0c to /host/opt/cni/bin/\\\\n2026-01-30T16:54:47Z [verbose] multus-daemon started\\\\n2026-01-30T16:54:47Z [verbose] Readiness Indicator file check\\\\n2026-01-30T16:55:32Z [error] have 
you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:38Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.541461 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.541495 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.541504 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.541518 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.541528 4712 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:38Z","lastTransitionTime":"2026-01-30T16:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.643545 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.643582 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.643593 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.643609 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.643620 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:38Z","lastTransitionTime":"2026-01-30T16:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.746514 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.746546 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.746558 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.746574 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.746585 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:38Z","lastTransitionTime":"2026-01-30T16:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.799049 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.799080 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:55:38 crc kubenswrapper[4712]: E0130 16:55:38.799173 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.799196 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.799342 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:55:38 crc kubenswrapper[4712]: E0130 16:55:38.799341 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:55:38 crc kubenswrapper[4712]: E0130 16:55:38.799408 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:55:38 crc kubenswrapper[4712]: E0130 16:55:38.799479 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.847591 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 12:15:53.144089773 +0000 UTC Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.849486 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.849628 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.849835 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.849959 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.850061 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:38Z","lastTransitionTime":"2026-01-30T16:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.952424 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.952763 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.953133 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.953290 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:38 crc kubenswrapper[4712]: I0130 16:55:38.953453 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:38Z","lastTransitionTime":"2026-01-30T16:55:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.056427 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.056481 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.056497 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.056520 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.056536 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:39Z","lastTransitionTime":"2026-01-30T16:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.159059 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.159131 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.159153 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.159179 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.159201 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:39Z","lastTransitionTime":"2026-01-30T16:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.261580 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.261841 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.261944 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.262063 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.262144 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:39Z","lastTransitionTime":"2026-01-30T16:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.364526 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.364836 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.364951 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.365076 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.365211 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:39Z","lastTransitionTime":"2026-01-30T16:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.468274 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.468631 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.469125 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.469412 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.469659 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:39Z","lastTransitionTime":"2026-01-30T16:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.573007 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.573082 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.573099 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.573126 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.573143 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:39Z","lastTransitionTime":"2026-01-30T16:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.675430 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.675865 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.676080 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.676279 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.676548 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:39Z","lastTransitionTime":"2026-01-30T16:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.779444 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.779519 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.779543 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.779573 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.779596 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:39Z","lastTransitionTime":"2026-01-30T16:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.848572 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 04:57:20.59674089 +0000 UTC
Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.882524 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.882624 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.882642 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.882667 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.882685 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:39Z","lastTransitionTime":"2026-01-30T16:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.985339 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.985372 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.985380 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.985393 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:39 crc kubenswrapper[4712]: I0130 16:55:39.985402 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:39Z","lastTransitionTime":"2026-01-30T16:55:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.088194 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.088228 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.088237 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.088278 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.088288 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:40Z","lastTransitionTime":"2026-01-30T16:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.190177 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.190214 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.190224 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.190238 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.190248 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:40Z","lastTransitionTime":"2026-01-30T16:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.292117 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.292217 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.292232 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.292264 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.292276 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:40Z","lastTransitionTime":"2026-01-30T16:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.395031 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.395066 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.395074 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.395087 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.395096 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:40Z","lastTransitionTime":"2026-01-30T16:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.498024 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.498061 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.498070 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.498084 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.498092 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:40Z","lastTransitionTime":"2026-01-30T16:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.600408 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.600459 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.600475 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.600494 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.600515 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:40Z","lastTransitionTime":"2026-01-30T16:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.704774 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.704862 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.704884 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.704912 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.704929 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:40Z","lastTransitionTime":"2026-01-30T16:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.799111 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.799144 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h"
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.799233 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:55:40 crc kubenswrapper[4712]: E0130 16:55:40.799406 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.799470 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:55:40 crc kubenswrapper[4712]: E0130 16:55:40.799537 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f"
Jan 30 16:55:40 crc kubenswrapper[4712]: E0130 16:55:40.799672 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 16:55:40 crc kubenswrapper[4712]: E0130 16:55:40.799864 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.807864 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.807920 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.807949 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.807982 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.808023 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:40Z","lastTransitionTime":"2026-01-30T16:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.848737 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 14:44:54.716445951 +0000 UTC
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.911483 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.911558 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.911580 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.911604 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:40 crc kubenswrapper[4712]: I0130 16:55:40.911621 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:40Z","lastTransitionTime":"2026-01-30T16:55:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.014608 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.014671 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.014689 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.014715 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.014734 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:41Z","lastTransitionTime":"2026-01-30T16:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.117852 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.117902 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.117914 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.117931 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.117944 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:41Z","lastTransitionTime":"2026-01-30T16:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.220636 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.220692 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.220708 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.220727 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.220742 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:41Z","lastTransitionTime":"2026-01-30T16:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.224011 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.224084 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.224098 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.224111 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.224121 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:41Z","lastTransitionTime":"2026-01-30T16:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:55:41 crc kubenswrapper[4712]: E0130 16:55:41.240698 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:41Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.244642 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.244687 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.244701 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.244720 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.244734 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:41Z","lastTransitionTime":"2026-01-30T16:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:41 crc kubenswrapper[4712]: E0130 16:55:41.257459 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:41Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.260837 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.260870 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.260887 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.260903 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.260916 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:41Z","lastTransitionTime":"2026-01-30T16:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:41 crc kubenswrapper[4712]: E0130 16:55:41.271443 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:41Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.274346 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.274383 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.274395 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.274434 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.274448 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:41Z","lastTransitionTime":"2026-01-30T16:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:41 crc kubenswrapper[4712]: E0130 16:55:41.284996 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:41Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.288582 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.288616 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.288630 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.288647 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.288659 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:41Z","lastTransitionTime":"2026-01-30T16:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:41 crc kubenswrapper[4712]: E0130 16:55:41.299233 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:41Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:41Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:41Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:41Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:41Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:41 crc kubenswrapper[4712]: E0130 16:55:41.299378 4712 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.323108 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.323137 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.323145 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.323158 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.323173 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:41Z","lastTransitionTime":"2026-01-30T16:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.426069 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.426132 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.426155 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.426182 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.426205 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:41Z","lastTransitionTime":"2026-01-30T16:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.528287 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.528342 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.528356 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.528375 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.528392 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:41Z","lastTransitionTime":"2026-01-30T16:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.630909 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.630943 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.630952 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.630965 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.630973 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:41Z","lastTransitionTime":"2026-01-30T16:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.732942 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.733002 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.733019 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.733041 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.733057 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:41Z","lastTransitionTime":"2026-01-30T16:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.835583 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.835648 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.835665 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.835688 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.835705 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:41Z","lastTransitionTime":"2026-01-30T16:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.848978 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 14:27:28.324190363 +0000 UTC Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.938667 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.938729 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.938762 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.938790 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:41 crc kubenswrapper[4712]: I0130 16:55:41.938849 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:41Z","lastTransitionTime":"2026-01-30T16:55:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.042425 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.042603 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.042636 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.042669 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.042692 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:42Z","lastTransitionTime":"2026-01-30T16:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.145833 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.145881 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.145894 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.145911 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.145921 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:42Z","lastTransitionTime":"2026-01-30T16:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.248034 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.248104 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.248116 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.248133 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.248144 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:42Z","lastTransitionTime":"2026-01-30T16:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.350618 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.350669 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.350681 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.350698 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.350716 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:42Z","lastTransitionTime":"2026-01-30T16:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.452480 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.452539 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.452554 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.452570 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.452582 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:42Z","lastTransitionTime":"2026-01-30T16:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.555072 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.555106 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.555115 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.555129 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.555138 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:42Z","lastTransitionTime":"2026-01-30T16:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.657713 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.657743 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.657752 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.657765 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.657773 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:42Z","lastTransitionTime":"2026-01-30T16:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.760708 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.760777 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.760825 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.760850 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.760869 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:42Z","lastTransitionTime":"2026-01-30T16:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.799158 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.799236 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.799266 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.799355 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:55:42 crc kubenswrapper[4712]: E0130 16:55:42.799531 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f" Jan 30 16:55:42 crc kubenswrapper[4712]: E0130 16:55:42.799725 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:55:42 crc kubenswrapper[4712]: E0130 16:55:42.799945 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:55:42 crc kubenswrapper[4712]: E0130 16:55:42.800102 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.849626 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 18:22:18.692665551 +0000 UTC Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.864432 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.864487 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.864504 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.864529 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.864548 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:42Z","lastTransitionTime":"2026-01-30T16:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.967240 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.967286 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.967299 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.967316 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:42 crc kubenswrapper[4712]: I0130 16:55:42.967329 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:42Z","lastTransitionTime":"2026-01-30T16:55:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.070244 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.070315 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.070351 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.070491 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.070516 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:43Z","lastTransitionTime":"2026-01-30T16:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.173919 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.173981 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.173995 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.174013 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.174025 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:43Z","lastTransitionTime":"2026-01-30T16:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.277247 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.277332 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.277353 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.277378 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.277394 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:43Z","lastTransitionTime":"2026-01-30T16:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.382034 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.382104 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.382125 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.382156 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.382179 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:43Z","lastTransitionTime":"2026-01-30T16:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.484491 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.484536 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.484549 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.484566 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.484578 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:43Z","lastTransitionTime":"2026-01-30T16:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.586762 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.586817 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.586829 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.586844 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.586858 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:43Z","lastTransitionTime":"2026-01-30T16:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.689001 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.689072 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.689090 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.689116 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.689136 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:43Z","lastTransitionTime":"2026-01-30T16:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.791562 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.791612 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.791624 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.791644 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.791656 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:43Z","lastTransitionTime":"2026-01-30T16:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.830319 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dd639c-2367-490e-801b-9042a158c962\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b98b043cb83155f1304d4e3e7afaa43921f5f9bd288e2d50222df3c549a97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5aceb9806198f5a718418c0834056dadafaec6a0d93a3d4253f3d18d3e7f6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a5b6fd14c3399f1335ab849702db5ae6ab412a259215225a3d1b45afdf29c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7424ed8c0152d8e7a206225cef7124928cbd700835cfa9f9c4e523bfe900f3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49662995d818377576fc281d55d5d944d785d5a0a11d45e86e9f08ec7fa2bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T16:54:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.847540 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f1f355d-892f-40a3-8fae-090e9e69b585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c2d61cc827e2a3a1ff4d597f89532719a52fe5fd353eb2ec9905533cfb8897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0930b443376f024df5c05a35328264b347e4b58fef7c64e9bb50b084c461e19f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b9e52241ae256ff758f4cf4bdd6a588a6d37cdc5185c372a9942d9d2c9e8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.849768 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 04:39:11.267776503 +0000 UTC Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.865323 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e483f7201537775910f8d1a99625c35050e43bebb2947e7c7d076cccd2a3fe1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.894180 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.894660 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.895098 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.895420 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.895562 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:43Z","lastTransitionTime":"2026-01-30T16:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.901679 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93651476-fd00-4a9e-934a-73537f1d103e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12fbba37359e054d06f64b1458a264cbe0c88556
7f33a867929e350a0de0e52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:55:36Z\\\",\\\"message\\\":\\\"lector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 16:55:36.712246 6670 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:55:36.713464 6670 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:55:36.713557 6670 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:55:36.713693 6670 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:55:36.714040 6670 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 16:55:36.755214 6670 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0130 16:55:36.755270 6670 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0130 16:55:36.755403 6670 ovnkube.go:599] Stopped ovnkube\\\\nI0130 16:55:36.755452 6670 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 16:55:36.755594 6670 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:55:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-228xs_openshift-ovn-kubernetes(93651476-fd00-4a9e-934a-73537f1d103e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-228xs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.919638 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ab27748-3507-429f-888b-b45b4d17b014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d979ececdc18afdb1f364feaa3244db42fef8e523e58fc43b94b1775f9530e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"\\\\nI0130 16:54:44.564860 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:54:44.564709 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564904 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564720 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:54:44.564916 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:54:44.565055 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:54:44.565065 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:54:44.564730 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2769129766/tls.crt::/tmp/serving-cert-2769129766/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] 
issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769792078\\\\\\\\\\\\\\\" (2026-01-30 16:54:37 +0000 UTC to 2026-03-01 16:54:38 +0000 UTC (now=2026-01-30 16:54:44.533361706 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565458 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792084\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792084\\\\\\\\\\\\\\\" (2026-01-30 15:54:43 +0000 UTC to 2027-01-30 15:54:43 +0000 UTC (now=2026-01-30 16:54:44.565435832 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565474 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:54:44.565499 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:54:44.565514 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.941469 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.957157 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k255f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2482315f-1b5d-4a27-a9d9-97f4780c1869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e7482e63cefa1e8dfa0adec1061ace4e1a482a646844ba1d77038309e904cf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh9k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k255f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.971658 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea67b02c-fc08-4a69-8c7f-c8da661a12ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea2b37d1f833ec9e1eba6b034b7178206d4c32d383bdaf40b270c3f7219de4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://529cdf849b64d965fb2b3276a2e033621e7695668b6b05041603ad93659a1c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4f9lf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:43Z is after 2025-08-24T17:21:41Z" Jan 30 
16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.986072 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a40d940-4f5a-42b6-80cb-fe98c14066c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4750ebab3eaeb8b0c465d2257c417e68692c999f382e05630a3f317f3f9ea65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd316abcb06f9cb980b110261410e1646a36fe9c70e3384aa128b178272fb6d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40670c5fb8ecc02e067cbb1ad22ade50ba2c40d03ff8b3b3eac1c0b7f3e1f599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369458cf36c7825613a5613214a88605b5a6247cbd2465f7bb924facf4d573a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://369458cf36c7825613a5613214a88605b5a6247cbd2465f7bb924facf4d573a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.998048 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.998363 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.998518 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.998634 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:43 crc kubenswrapper[4712]: I0130 16:55:43.998754 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:43Z","lastTransitionTime":"2026-01-30T16:55:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.000990 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a4e02635c5d1398f5642a4f78ab59f957e82d7b595048b900f7f57998072b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca104420859d47e9447dba6c7629515ac5d94c76d90029f8b02677814c6cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:43Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.022247 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe68da7f075332bd889ca6351f6c58e8a4e2cb0a14e68dd9a882f15c348ec5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:44Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.035958 4712 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://383cb9db140e32a25c872a2355da98c9b6e39191bc10d76b4420e18580464c00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:55:32Z\\\",\\\"message\\\":\\\"2026-01-30T16:54:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ba54d85f-38b5-4a96-b6a2-74f61114ba0c\\\\n2026-01-30T16:54:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ba54d85f-38b5-4a96-b6a2-74f61114ba0c to /host/opt/cni/bin/\\\\n2026-01-30T16:54:47Z [verbose] multus-daemon started\\\\n2026-01-30T16:54:47Z [verbose] Readiness Indicator file check\\\\n2026-01-30T16:55:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:44Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.050097 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a59aa188d91c356dc946cef975ab964cb48ed5ad8a7b1b1a97bbdf4f6a463f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:44Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.063199 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:44Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.078773 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:44Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.091990 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7682a4940bfaf61053fc403898dd912a57e1152b7f646f65d8931bbe5a5aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:44Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.100523 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.100552 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.100562 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.100576 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.100584 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:44Z","lastTransitionTime":"2026-01-30T16:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.108118 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4777970a-81c4-4412-a06b-641a8343a749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d50d143c711318276147641c1904b0f9c28209e0ead64b6bc2578d0914e821a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c2
58ae3973dab43515d57d155f128c3ff94b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\"
,\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\
\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-69v8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:44Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.118209 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lpb6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abacbc6e-6514-4db6-80b5-23570952c86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnj8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnj8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lpb6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:44Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.203616 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.203677 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.203691 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.203710 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.203722 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:44Z","lastTransitionTime":"2026-01-30T16:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.306894 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.307311 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.307330 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.307350 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.307365 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:44Z","lastTransitionTime":"2026-01-30T16:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.409508 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.409733 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.409818 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.409894 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.409955 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:44Z","lastTransitionTime":"2026-01-30T16:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.512358 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.512426 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.512438 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.512454 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.512466 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:44Z","lastTransitionTime":"2026-01-30T16:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.614162 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.614409 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.614504 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.614587 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.614668 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:44Z","lastTransitionTime":"2026-01-30T16:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.717112 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.717153 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.717162 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.717174 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.717184 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:44Z","lastTransitionTime":"2026-01-30T16:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.799619 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.799654 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.799777 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:55:44 crc kubenswrapper[4712]: E0130 16:55:44.799928 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:55:44 crc kubenswrapper[4712]: E0130 16:55:44.800092 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:55:44 crc kubenswrapper[4712]: E0130 16:55:44.800197 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.800436 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:55:44 crc kubenswrapper[4712]: E0130 16:55:44.800908 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.819602 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.819665 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.819689 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.819719 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.819740 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:44Z","lastTransitionTime":"2026-01-30T16:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.852185 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 11:49:09.448845983 +0000 UTC Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.922159 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.922221 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.922243 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.922270 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:44 crc kubenswrapper[4712]: I0130 16:55:44.922290 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:44Z","lastTransitionTime":"2026-01-30T16:55:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.024391 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.024499 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.024526 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.024555 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.024576 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:45Z","lastTransitionTime":"2026-01-30T16:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.127215 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.127271 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.127290 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.127313 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.127329 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:45Z","lastTransitionTime":"2026-01-30T16:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.230842 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.230900 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.230916 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.230939 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.230956 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:45Z","lastTransitionTime":"2026-01-30T16:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.334219 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.334275 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.334294 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.334319 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.334344 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:45Z","lastTransitionTime":"2026-01-30T16:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.437210 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.437276 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.437293 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.437317 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.437336 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:45Z","lastTransitionTime":"2026-01-30T16:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.540151 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.540224 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.540243 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.540266 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.540286 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:45Z","lastTransitionTime":"2026-01-30T16:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.642679 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.642720 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.642729 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.642743 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.642755 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:45Z","lastTransitionTime":"2026-01-30T16:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.745840 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.745883 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.745924 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.745942 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.745952 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:45Z","lastTransitionTime":"2026-01-30T16:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.848671 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.848717 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.848728 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.848744 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.848756 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:45Z","lastTransitionTime":"2026-01-30T16:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.853089 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 05:23:28.377918447 +0000 UTC Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.950907 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.950950 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.950962 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.950979 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:45 crc kubenswrapper[4712]: I0130 16:55:45.950992 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:45Z","lastTransitionTime":"2026-01-30T16:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.053292 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.053328 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.053337 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.053350 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.053359 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:46Z","lastTransitionTime":"2026-01-30T16:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.156466 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.156533 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.156543 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.156561 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.156575 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:46Z","lastTransitionTime":"2026-01-30T16:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.259075 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.259129 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.259139 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.259153 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.259164 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:46Z","lastTransitionTime":"2026-01-30T16:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.362274 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.362321 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.362334 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.362354 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.362368 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:46Z","lastTransitionTime":"2026-01-30T16:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.464847 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.464896 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.464906 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.464918 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.464928 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:46Z","lastTransitionTime":"2026-01-30T16:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.568037 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.568089 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.568100 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.568118 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.568134 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:46Z","lastTransitionTime":"2026-01-30T16:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.670499 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.670571 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.670596 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.670627 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.670652 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:46Z","lastTransitionTime":"2026-01-30T16:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.773508 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.773568 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.773580 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.773601 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.773615 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:46Z","lastTransitionTime":"2026-01-30T16:55:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.798869 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.798906 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.798911 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:55:46 crc kubenswrapper[4712]: E0130 16:55:46.798995 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:55:46 crc kubenswrapper[4712]: E0130 16:55:46.799100 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:55:46 crc kubenswrapper[4712]: E0130 16:55:46.799282 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f" Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.799423 4712 util.go:30] "No sandbox for pod can be found. 
Jan 30 16:55:46 crc kubenswrapper[4712]: E0130 16:55:46.799486 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 16:55:46 crc kubenswrapper[4712]: I0130 16:55:46.854078 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 03:26:37.767764958 +0000 UTC
[... identical node-status blocks repeat at ~100 ms intervals, 16:55:46.876 through 16:55:47.186 ...]
Jan 30 16:55:47 crc kubenswrapper[4712]: I0130 16:55:47.244778 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:55:47 crc kubenswrapper[4712]: E0130 16:55:47.244900 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:51.244878619 +0000 UTC m=+148.151888088 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
[... identical node-status blocks repeat, 16:55:47.289 ...]
Jan 30 16:55:47 crc kubenswrapper[4712]: I0130 16:55:47.346117 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:55:47 crc kubenswrapper[4712]: I0130 16:55:47.346177 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:55:47 crc kubenswrapper[4712]: I0130 16:55:47.346206 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:55:47 crc kubenswrapper[4712]: I0130 16:55:47.346229 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:55:47 crc kubenswrapper[4712]: E0130 16:55:47.346256 4712 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 16:55:47 crc kubenswrapper[4712]: E0130 16:55:47.346345 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:56:51.346319005 +0000 UTC m=+148.253328494 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 30 16:55:47 crc kubenswrapper[4712]: E0130 16:55:47.346361 4712 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 16:55:47 crc kubenswrapper[4712]: E0130 16:55:47.346377 4712 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 16:55:47 crc kubenswrapper[4712]: E0130 16:55:47.346390 4712 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 16:55:47 crc kubenswrapper[4712]: E0130 16:55:47.346421 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-30 16:56:51.346411727 +0000 UTC m=+148.253421196 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 16:55:47 crc kubenswrapper[4712]: E0130 16:55:47.346505 4712 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 16:55:47 crc kubenswrapper[4712]: E0130 16:55:47.346603 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-30 16:56:51.34658226 +0000 UTC m=+148.253591789 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 30 16:55:47 crc kubenswrapper[4712]: E0130 16:55:47.346517 4712 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 30 16:55:47 crc kubenswrapper[4712]: E0130 16:55:47.346645 4712 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 30 16:55:47 crc kubenswrapper[4712]: E0130 16:55:47.346658 4712 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 30 16:55:47 crc kubenswrapper[4712]: E0130 16:55:47.346699 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-30 16:56:51.346689131 +0000 UTC m=+148.253698670 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
[... identical node-status blocks repeat at ~100 ms intervals, 16:55:47.392 through 16:55:47.803 ...]
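The repeated `object ... not registered` failures come from kubelet resolving configmaps and secrets for volume mounts through a per-pod cache: an object is only registered once a pod referencing it has been admitted, and lookups do not fall back to the API server. A toy sketch of that failure mode (assumed design, not kubelet's actual cache implementation):

// Sketch: a registry-style object cache that fails lookups for objects no
// admitted pod has registered, mirroring the configmap.go/secret.go errors.
package main

import "fmt"

type objectCache struct {
	registered map[string]bool // keyed "namespace/name"
}

func (c *objectCache) Register(ns, name string) {
	c.registered[ns+"/"+name] = true
}

// Get mirrors the log's failure mode: no API fallback, just an error.
func (c *objectCache) Get(ns, name string) error {
	if !c.registered[ns+"/"+name] {
		return fmt.Errorf("object %q/%q not registered", ns, name)
	}
	return nil
}

func main() {
	c := &objectCache{registered: map[string]bool{}}
	err := c.Get("openshift-network-diagnostics", "kube-root-ca.crt")
	fmt.Println(err) // object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
}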
Has your network provider started?"} Jan 30 16:55:47 crc kubenswrapper[4712]: I0130 16:55:47.496224 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:47 crc kubenswrapper[4712]: I0130 16:55:47.496305 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:47 crc kubenswrapper[4712]: I0130 16:55:47.496324 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:47 crc kubenswrapper[4712]: I0130 16:55:47.496402 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:47 crc kubenswrapper[4712]: I0130 16:55:47.496430 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:47Z","lastTransitionTime":"2026-01-30T16:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:47 crc kubenswrapper[4712]: I0130 16:55:47.598662 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:47 crc kubenswrapper[4712]: I0130 16:55:47.598699 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:47 crc kubenswrapper[4712]: I0130 16:55:47.598710 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:47 crc kubenswrapper[4712]: I0130 16:55:47.598725 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:47 crc kubenswrapper[4712]: I0130 16:55:47.598736 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:47Z","lastTransitionTime":"2026-01-30T16:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:47 crc kubenswrapper[4712]: I0130 16:55:47.701532 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:47 crc kubenswrapper[4712]: I0130 16:55:47.701585 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:47 crc kubenswrapper[4712]: I0130 16:55:47.701597 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:47 crc kubenswrapper[4712]: I0130 16:55:47.701616 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:47 crc kubenswrapper[4712]: I0130 16:55:47.701631 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:47Z","lastTransitionTime":"2026-01-30T16:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:47 crc kubenswrapper[4712]: I0130 16:55:47.803991 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:47 crc kubenswrapper[4712]: I0130 16:55:47.804040 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:47 crc kubenswrapper[4712]: I0130 16:55:47.804056 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:47 crc kubenswrapper[4712]: I0130 16:55:47.804075 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:47 crc kubenswrapper[4712]: I0130 16:55:47.804090 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:47Z","lastTransitionTime":"2026-01-30T16:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:47 crc kubenswrapper[4712]: I0130 16:55:47.854543 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 13:32:51.460716017 +0000 UTC Jan 30 16:55:47 crc kubenswrapper[4712]: I0130 16:55:47.906666 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:47 crc kubenswrapper[4712]: I0130 16:55:47.906731 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:47 crc kubenswrapper[4712]: I0130 16:55:47.906757 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:47 crc kubenswrapper[4712]: I0130 16:55:47.906789 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:47 crc kubenswrapper[4712]: I0130 16:55:47.906849 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:47Z","lastTransitionTime":"2026-01-30T16:55:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.009722 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.009761 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.009772 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.009790 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.009833 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:48Z","lastTransitionTime":"2026-01-30T16:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.112840 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.112882 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.112895 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.112911 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.112924 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:48Z","lastTransitionTime":"2026-01-30T16:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.215947 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.216000 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.216012 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.216031 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.216044 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:48Z","lastTransitionTime":"2026-01-30T16:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.318635 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.318672 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.318684 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.318701 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.318713 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:48Z","lastTransitionTime":"2026-01-30T16:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.422583 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.422728 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.422760 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.422828 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.422858 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:48Z","lastTransitionTime":"2026-01-30T16:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.525654 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.525708 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.525720 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.525815 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.525830 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:48Z","lastTransitionTime":"2026-01-30T16:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.627936 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.627966 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.627974 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.627987 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.628184 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:48Z","lastTransitionTime":"2026-01-30T16:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.731308 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.731371 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.731387 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.731410 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.731429 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:48Z","lastTransitionTime":"2026-01-30T16:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.798897 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.798955 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.798966 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.798917 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:55:48 crc kubenswrapper[4712]: E0130 16:55:48.799071 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:55:48 crc kubenswrapper[4712]: E0130 16:55:48.799204 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:55:48 crc kubenswrapper[4712]: E0130 16:55:48.799367 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f" Jan 30 16:55:48 crc kubenswrapper[4712]: E0130 16:55:48.799428 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.834026 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.834109 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.834132 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.834158 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.834179 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:48Z","lastTransitionTime":"2026-01-30T16:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.855170 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 23:39:14.551134575 +0000 UTC Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.936638 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.936687 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.936701 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.936716 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:48 crc kubenswrapper[4712]: I0130 16:55:48.936728 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:48Z","lastTransitionTime":"2026-01-30T16:55:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.039170 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.039240 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.039259 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.039284 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.039303 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:49Z","lastTransitionTime":"2026-01-30T16:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.142436 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.142757 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.142896 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.142989 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.143091 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:49Z","lastTransitionTime":"2026-01-30T16:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.245664 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.245723 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.245740 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.245763 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.245778 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:49Z","lastTransitionTime":"2026-01-30T16:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.348298 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.348349 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.348361 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.348379 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.348391 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:49Z","lastTransitionTime":"2026-01-30T16:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.450904 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.450975 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.450996 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.451042 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.451060 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:49Z","lastTransitionTime":"2026-01-30T16:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.554193 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.554463 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.554558 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.554637 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.554723 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:49Z","lastTransitionTime":"2026-01-30T16:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.657246 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.657339 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.657361 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.657389 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.657403 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:49Z","lastTransitionTime":"2026-01-30T16:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.759999 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.760044 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.760060 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.760081 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.760097 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:49Z","lastTransitionTime":"2026-01-30T16:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.800518 4712 scope.go:117] "RemoveContainer" containerID="12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a" Jan 30 16:55:49 crc kubenswrapper[4712]: E0130 16:55:49.801047 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-228xs_openshift-ovn-kubernetes(93651476-fd00-4a9e-934a-73537f1d103e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" podUID="93651476-fd00-4a9e-934a-73537f1d103e" Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.855551 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 05:22:01.86522405 +0000 UTC Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.862818 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.862980 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.863043 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.863131 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.863204 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:49Z","lastTransitionTime":"2026-01-30T16:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.965728 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.966167 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.966312 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.966444 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:49 crc kubenswrapper[4712]: I0130 16:55:49.966563 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:49Z","lastTransitionTime":"2026-01-30T16:55:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.069086 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.069345 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.069424 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.069488 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.069584 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:50Z","lastTransitionTime":"2026-01-30T16:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.172191 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.172241 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.172256 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.172274 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.172285 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:50Z","lastTransitionTime":"2026-01-30T16:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.275341 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.275377 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.275385 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.275404 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.275415 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:50Z","lastTransitionTime":"2026-01-30T16:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.378991 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.379058 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.379076 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.379099 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.379117 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:50Z","lastTransitionTime":"2026-01-30T16:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.482022 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.482418 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.482524 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.482646 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.482741 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:50Z","lastTransitionTime":"2026-01-30T16:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.586213 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.586498 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.586582 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.586667 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.586777 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:50Z","lastTransitionTime":"2026-01-30T16:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.689838 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.689909 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.689923 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.689943 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.689955 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:50Z","lastTransitionTime":"2026-01-30T16:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.793146 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.793237 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.793259 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.793297 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.793322 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:50Z","lastTransitionTime":"2026-01-30T16:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.799230 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.799285 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.799400 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:55:50 crc kubenswrapper[4712]: E0130 16:55:50.799688 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.799785 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:55:50 crc kubenswrapper[4712]: E0130 16:55:50.800029 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:55:50 crc kubenswrapper[4712]: E0130 16:55:50.800653 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f" Jan 30 16:55:50 crc kubenswrapper[4712]: E0130 16:55:50.800467 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.856617 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 12:38:53.704383141 +0000 UTC Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.896089 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.896440 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.896525 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.896669 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:50 crc kubenswrapper[4712]: I0130 16:55:50.896776 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:50Z","lastTransitionTime":"2026-01-30T16:55:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.000829 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.000892 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.000902 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.000924 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.000938 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:51Z","lastTransitionTime":"2026-01-30T16:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.104087 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.104146 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.104164 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.104188 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.104201 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:51Z","lastTransitionTime":"2026-01-30T16:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.208002 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.208082 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.208099 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.208121 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.208137 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:51Z","lastTransitionTime":"2026-01-30T16:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.311976 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.312034 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.312046 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.312066 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.312079 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:51Z","lastTransitionTime":"2026-01-30T16:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.421664 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.421710 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.421727 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.421747 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.421761 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:51Z","lastTransitionTime":"2026-01-30T16:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.524884 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.524937 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.524956 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.524979 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.524996 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:51Z","lastTransitionTime":"2026-01-30T16:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.546146 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.546215 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.546227 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.546248 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.546260 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:51Z","lastTransitionTime":"2026-01-30T16:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:51 crc kubenswrapper[4712]: E0130 16:55:51.563855 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.568950 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.569094 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
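The status-patch failure above is ultimately a clock-versus-certificate check, not a network fault: the node.network-node-identity.openshift.io webhook serving 127.0.0.1:9743 presents a certificate that expired 2025-08-24T17:21:41Z, while the node clock reads 2026-01-30. A sketch of reproducing the kubelet client's comparison from the node, assuming the endpoint is reachable; InsecureSkipVerify is used deliberately, since verification is exactly what fails.

    package main

    import (
        "crypto/tls"
        "fmt"
        "time"
    )

    func main() {
        // Dial the webhook endpoint named in the error above. Skipping
        // verification lets us read the certificate even though it is expired.
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            fmt.Println("dial:", err)
            return
        }
        defer conn.Close()

        cert := conn.ConnectionState().PeerCertificates[0]
        fmt.Println("NotBefore:", cert.NotBefore)
        fmt.Println("NotAfter: ", cert.NotAfter)
        // The same comparison the TLS client makes before rejecting the handshake.
        if time.Now().After(cert.NotAfter) {
            fmt.Println("certificate has expired")
        }
    }

Until that certificate is renewed (or the clock corrected), every retry of the node-status patch fails the same way, which is what the repeated entries below show.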
event="NodeHasNoDiskPressure" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.569195 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.569288 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.569377 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:51Z","lastTransitionTime":"2026-01-30T16:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:51 crc kubenswrapper[4712]: E0130 16:55:51.586724 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.592135 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.592531 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
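The condition object that setters.go attaches to each "Node became not ready" record is plain JSON and can be decoded directly. A sketch assuming Go's encoding/json and a struct mirroring only the fields present in these records (a subset of the Kubernetes NodeCondition type); the message value is shortened from the original.

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // nodeCondition mirrors the condition fields logged by setters.go above.
    type nodeCondition struct {
        Type               string `json:"type"`
        Status             string `json:"status"`
        LastHeartbeatTime  string `json:"lastHeartbeatTime"`
        LastTransitionTime string `json:"lastTransitionTime"`
        Reason             string `json:"reason"`
        Message            string `json:"message"`
    }

    func main() {
        // Payload copied from a "Node became not ready" record, message shortened.
        raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:51Z","lastTransitionTime":"2026-01-30T16:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready"}`
        var c nodeCondition
        if err := json.Unmarshal([]byte(raw), &c); err != nil {
            fmt.Println("unmarshal:", err)
            return
        }
        fmt.Printf("%s=%s reason=%s\n", c.Type, c.Status, c.Reason)
    }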
event="NodeHasNoDiskPressure" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.592665 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.592746 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.592850 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:51Z","lastTransitionTime":"2026-01-30T16:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:51 crc kubenswrapper[4712]: E0130 16:55:51.606966 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.611098 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.611183 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
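The two certificate_manager entries above quote the same expiration (2026-02-24 05:53:03 UTC) but different rotation deadlines (2025-11-11 and 2025-11-25). As I understand the upstream kubelet certificate manager, the deadline is re-drawn on each pass at a jittered point between roughly 70% and 90% of the certificate's validity window, which is why it moves between otherwise identical records. A sketch under that assumption; the NotBefore value is hypothetical (a one-year window consistent with the deadlines seen here), since the log prints only the expiration.

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // rotationDeadline draws a random point in [70%, 90%) of the certificate's
    // validity window. Assumption: this mirrors the jitter described above.
    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
        notBefore := notAfter.AddDate(-1, 0, 0) // hypothetical one-year validity
        for i := 0; i < 2; i++ {
            fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
        }
    }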
event="NodeHasNoDiskPressure" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.611240 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.611392 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.611457 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:51Z","lastTransitionTime":"2026-01-30T16:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:51 crc kubenswrapper[4712]: E0130 16:55:51.624923 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.628699 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.628744 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.628757 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.628773 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.628785 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:51Z","lastTransitionTime":"2026-01-30T16:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:51 crc kubenswrapper[4712]: E0130 16:55:51.648196 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:55:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:51Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:51 crc kubenswrapper[4712]: E0130 16:55:51.648499 4712 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.650092 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
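All three patch attempts above die at the same point: the API server cannot complete the call to the node.network-node-identity.openshift.io admission webhook on https://127.0.0.1:9743 because the webhook's serving certificate expired on 2025-08-24T17:21:41Z, months before the log's clock time of 2026-01-30, and after the retries the kubelet gives up for this status cycle ("update node status exceeds retry count"). The error string comes from Go's crypto/x509 validity-window check. A minimal sketch of that check, with a hypothetical certificate path:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical path; the log does not say where the webhook cert lives.
	pemBytes, err := os.ReadFile("/path/to/webhook-serving-cert.pem")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	now := time.Now()
	// The same window test that yields "x509: certificate has expired or
	// is not yet valid: current time ... is after ..." in the log above.
	if now.Before(cert.NotBefore) || now.After(cert.NotAfter) {
		fmt.Printf("invalid: current time %s is outside [%s, %s]\n",
			now.UTC().Format(time.RFC3339),
			cert.NotBefore.UTC().Format(time.RFC3339),
			cert.NotAfter.UTC().Format(time.RFC3339))
		return
	}
	fmt.Println("certificate is within its validity window")
}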
event="NodeHasSufficientMemory" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.650114 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.650122 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.650135 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.650143 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:51Z","lastTransitionTime":"2026-01-30T16:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.752147 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.752209 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.752228 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.752251 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.752264 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:51Z","lastTransitionTime":"2026-01-30T16:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.855392 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.855453 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.855470 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.855492 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.855509 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:51Z","lastTransitionTime":"2026-01-30T16:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.857520 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 03:50:44.241698467 +0000 UTC Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.957554 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.957602 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.957616 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.957631 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:51 crc kubenswrapper[4712]: I0130 16:55:51.957643 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:51Z","lastTransitionTime":"2026-01-30T16:55:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.060162 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.060222 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.060234 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.060251 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.060263 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:52Z","lastTransitionTime":"2026-01-30T16:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
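The certificate_manager line above reports a kubelet-serving certificate that is still valid (it expires 2026-02-24) but whose jittered rotation deadline already lies in the past; each pass through the rotation loop draws a fresh deadline, which is why later occurrences of this line print different dates (2025-12-01, 2025-12-21). A sketch of that computation, assuming client-go's usual deadline window of 70-90% of the certificate lifetime and, since NotBefore is not logged, an illustrative one-year lifetime:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// nextRotationDeadline places the deadline at a uniformly random point
// between 70% and 90% of the certificate's lifetime. The 0.7/0.2 factors
// are an assumption about client-go's certificate manager, not a quote
// of the kubelet source.
func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := float64(notAfter.Sub(notBefore))
	jitter := 0.7 + 0.2*rand.Float64()
	return notBefore.Add(time.Duration(total * jitter))
}

func main() {
	// Expiry taken from the log; NotBefore is not logged, so a one-year
	// lifetime is assumed purely for illustration.
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)
	notBefore := notAfter.AddDate(-1, 0, 0)
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline:", nextRotationDeadline(notBefore, notAfter).UTC())
	}
}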
Has your network provider started?"} Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.163187 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.163455 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.163589 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.163711 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.163850 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:52Z","lastTransitionTime":"2026-01-30T16:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.266723 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.267002 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.267128 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.267232 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.267325 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:52Z","lastTransitionTime":"2026-01-30T16:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.369575 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.369624 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.369636 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.369652 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.369668 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:52Z","lastTransitionTime":"2026-01-30T16:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.473477 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.473536 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.473554 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.473577 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.473591 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:52Z","lastTransitionTime":"2026-01-30T16:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.576785 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.576841 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.576853 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.576869 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.576881 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:52Z","lastTransitionTime":"2026-01-30T16:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.678675 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.678718 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.678729 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.678746 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.678757 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:52Z","lastTransitionTime":"2026-01-30T16:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.781642 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.781727 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.781744 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.781761 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.781773 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:52Z","lastTransitionTime":"2026-01-30T16:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.799109 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.799160 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.799131 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.799128 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:55:52 crc kubenswrapper[4712]: E0130 16:55:52.799274 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:55:52 crc kubenswrapper[4712]: E0130 16:55:52.799697 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:55:52 crc kubenswrapper[4712]: E0130 16:55:52.799841 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f" Jan 30 16:55:52 crc kubenswrapper[4712]: E0130 16:55:52.799929 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.858203 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 00:04:14.816445064 +0000 UTC Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.884392 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.884456 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.884474 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.884536 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.884553 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:52Z","lastTransitionTime":"2026-01-30T16:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.987003 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.987046 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.987057 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.987075 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:52 crc kubenswrapper[4712]: I0130 16:55:52.987086 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:52Z","lastTransitionTime":"2026-01-30T16:55:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.090088 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.090150 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.090167 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.090190 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.090209 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:53Z","lastTransitionTime":"2026-01-30T16:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.192553 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.192602 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.192615 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.192701 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.192717 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:53Z","lastTransitionTime":"2026-01-30T16:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.294878 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.294943 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.294966 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.294980 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.294988 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:53Z","lastTransitionTime":"2026-01-30T16:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.401781 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.401902 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.401929 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.402264 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.402316 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:53Z","lastTransitionTime":"2026-01-30T16:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.505408 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.505463 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.505480 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.505503 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.505519 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:53Z","lastTransitionTime":"2026-01-30T16:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.608606 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.608660 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.608676 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.608698 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.608718 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:53Z","lastTransitionTime":"2026-01-30T16:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
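The five-entry cycle that fills this stretch of the log (four "Recording event message" lines plus one "Node became not ready" line) repeats roughly every 100 ms while the node stays NotReady. When reading a capture like this one, a small counter makes the repetition easier to see; this is a reader-side tool written for this log's line format, not kubelet code:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches the event attribute of the kubelet's node-event log lines.
var eventRe = regexp.MustCompile(`Recording event message for node" node="[^"]+" event="([^"]+)"`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	// Status-patch entries run to several kilobytes per line.
	sc.Buffer(make([]byte, 0, 64*1024), 4*1024*1024)
	counts := map[string]int{}
	for sc.Scan() {
		for _, m := range eventRe.FindAllStringSubmatch(sc.Text(), -1) {
			counts[m[1]]++
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "scan:", err)
	}
	for event, n := range counts {
		fmt.Printf("%-26s %d\n", event, n)
	}
}

Fed this capture on stdin, the counter reports near-identical tallies for NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, and NodeNotReady, which is the signature of the stuck status loop rather than of distinct state changes.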
Has your network provider started?"} Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.711321 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.711371 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.711388 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.711411 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.711428 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:53Z","lastTransitionTime":"2026-01-30T16:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.813982 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.814040 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.814062 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.814087 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.814108 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:53Z","lastTransitionTime":"2026-01-30T16:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
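The pod status updates that begin just below are strategic merge patches; the "$setElementOrder/conditions" key is the directive that pins the ordering of the merged conditions list, and each patch then fails at the same expired certificate, this time via the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743/pod. A minimal sketch of the same payload shape, with made-up condition values:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Same shape as the kubelet's status patches in the log, with made-up
	// values. "$setElementOrder/conditions" is the strategic merge patch
	// directive that fixes the ordering of the merged "conditions" list.
	patch := map[string]any{
		"status": map[string]any{
			"$setElementOrder/conditions": []map[string]string{
				{"type": "MemoryPressure"},
				{"type": "DiskPressure"},
				{"type": "PIDPressure"},
				{"type": "Ready"},
			},
			"conditions": []map[string]string{
				{"type": "Ready", "status": "False", "reason": "KubeletNotReady"},
			},
		},
	}
	b, err := json.MarshalIndent(patch, "", "  ")
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
}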
Has your network provider started?"} Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.821012 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.838318 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7682a4940bfaf61053fc403898dd912a57e1152b7f646f65d8931bbe5a5aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.858352 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 16:25:42.573761777 +0000 UTC Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.862978 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4777970a-81c4-4412-a06b-641a8343a749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d50d143c711318276147641c1904b0f9c28209e0ead64b6bc2578d0914e821a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-69v8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.881495 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lpb6h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abacbc6e-6514-4db6-80b5-23570952c86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnj8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnj8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lpb6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.907233 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a59aa188d91c356dc946cef975ab964cb48ed5ad8a7b1b1a97bbdf4f6a463f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.917147 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.917192 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.917204 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.917220 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.917233 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:53Z","lastTransitionTime":"2026-01-30T16:55:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.922411 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.943257 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93651476-fd00-4a9e-934a-73537f1d103e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:55:36Z\\\",\\\"message\\\":\\\"lector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 16:55:36.712246 6670 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:55:36.713464 6670 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:55:36.713557 6670 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:55:36.713693 6670 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:55:36.714040 6670 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 16:55:36.755214 6670 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0130 16:55:36.755270 6670 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0130 16:55:36.755403 6670 ovnkube.go:599] Stopped ovnkube\\\\nI0130 16:55:36.755452 6670 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 16:55:36.755594 6670 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:55:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-228xs_openshift-ovn-kubernetes(93651476-fd00-4a9e-934a-73537f1d103e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-228xs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.965416 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dd639c-2367-490e-801b-9042a158c962\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b98b043cb83155f1304d4e3e7afaa43921f5f9bd288e2d50222df3c549a97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5aceb9806198f5a718418c0
834056dadafaec6a0d93a3d4253f3d18d3e7f6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a5b6fd14c3399f1335ab849702db5ae6ab412a259215225a3d1b45afdf29c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7424ed8c0152d8e7a206225cef7124928cbd700835cfa9f9c4e523bfe900f3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49662995d818377576fc281d55d5d944d785d5a0a11d45e86e9f08ec7fa2bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.981268 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
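The ovnkube-controller status a few records above ("back-off 40s restarting failed container", restartCount 3) is consistent with the kubelet's default crash-loop backoff: the delay starts at 10s and doubles per restart, capped at 5 minutes, so the fourth start attempt waits 40s after delays of 10s and 20s. A sketch of that progression (defaults hardcoded here for illustration; the kubelet's actual values come from its backoff configuration):

    def crashloop_delays(restarts: int, base: float = 10.0, cap: float = 300.0) -> list[float]:
        """Successive CrashLoopBackOff delays: base * 2**n, capped at 5 minutes."""
        return [min(base * 2 ** n, cap) for n in range(restarts)]

    print(crashloop_delays(3))   # [10.0, 20.0, 40.0]  -> matches "back-off 40s"
    print(crashloop_delays(7))   # hits the 300s (5m) cap from the sixth restart on
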
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f1f355d-892f-40a3-8fae-090e9e69b585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c2d61cc827e2a3a1ff4d597f89532719a52fe5fd353eb2ec9905533cfb8897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0930b443376f024df5c05a35328264b347e4b58fef7c64e9bb50b084c461e19f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b9e52241ae256ff758f4cf4bdd6a588a6d37cdc5185c372a9942d9d2c9e8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:53 crc kubenswrapper[4712]: I0130 16:55:53.995768 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e483f7201537775910f8d1a99625c35050e43bebb2947e7c7d076cccd2a3fe1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-30T16:55:53Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.013638 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea67b02c-fc08-4a69-8c7f-c8da661a12ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea2b37d1f833ec9e1eba6b034b7178206d4c32d383bdaf40b270c3f7219de4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://529cdf849b64d965fb2b3276a2e033621e7695668b6b05041603ad93659a1c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4f9lf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.019461 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.019494 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.019505 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.019522 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.019533 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:54Z","lastTransitionTime":"2026-01-30T16:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.034599 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ab27748-3507-429f-888b-b45b4d17b014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb6
26e86f2ab19aac5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d979ececdc18afdb1f364feaa3244db42fef8e523e58fc43b94b1775f9530e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"\\\\nI0130 16:54:44.564860 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:54:44.564709 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564904 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564720 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:54:44.564916 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:54:44.565055 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:54:44.565065 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:54:44.564730 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2769129766/tls.crt::/tmp/serving-cert-2769129766/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] 
issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769792078\\\\\\\\\\\\\\\" (2026-01-30 16:54:37 +0000 UTC to 2026-03-01 16:54:38 +0000 UTC (now=2026-01-30 16:54:44.533361706 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565458 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792084\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792084\\\\\\\\\\\\\\\" (2026-01-30 15:54:43 +0000 UTC to 2027-01-30 15:54:43 +0000 UTC (now=2026-01-30 16:54:44.565435832 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565474 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:54:44.565499 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:54:44.565514 1 tlsconfig.go:243] \\\\\\\"Starting DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.047475 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.058290 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k255f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2482315f-1b5d-4a27-a9d9-97f4780c1869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e7482e63cefa1e8dfa0adec1061ace4e1a482a646844ba1d77038309e904cf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh9k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k255f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.070235 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://383cb9db140e32a25c872a2355da98c9b6e39191bc10d76b4420e18580464c00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:55:32Z\\\",\\\"message\\\":\\\"2026-01-30T16:54:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ba54d85f-38b5-4a96-b6a2-74f61114ba0c\\\\n2026-01-30T16:54:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ba54d85f-38b5-4a96-b6a2-74f61114ba0c to /host/opt/cni/bin/\\\\n2026-01-30T16:54:47Z [verbose] multus-daemon started\\\\n2026-01-30T16:54:47Z [verbose] Readiness Indicator file check\\\\n2026-01-30T16:55:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.083471 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a40d940-4f5a-42b6-80cb-fe98c14066c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4750ebab3eaeb8b0c465d2257c417e68692c999f382e05630a3f317f3f9ea65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd316abcb06f9cb980b110261410e1646a36fe9c70e3384aa128b178272fb6d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40670c5fb8ecc02e067cbb1ad22ade50ba2c40d03ff8b3b3eac1c0b7f3e1f599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369458cf36c7825613a5613214a88605b5a6247cbd2465f7bb924facf4d573a8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://369458cf36c7825613a5613214a88605b5a6247cbd2465f7bb924facf4d573a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.095191 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a4e02635c5d1398f5642a4f78ab59f957e82d7b595048b900f7f57998072b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca104420859d47e9447dba6c7629515ac5d94c76d90029f8b02677814c6cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.109470 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe68da7f075332bd889ca6351f6c58e8a4e2cb0a14e68dd9a882f15c348ec5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\
"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:55:54Z is after 2025-08-24T17:21:41Z" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.122375 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.122424 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.122435 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.122453 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.122465 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:54Z","lastTransitionTime":"2026-01-30T16:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.224388 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.224506 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.224521 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.224541 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.224554 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:54Z","lastTransitionTime":"2026-01-30T16:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.326498 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.326574 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.326587 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.326606 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.326620 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:54Z","lastTransitionTime":"2026-01-30T16:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.429238 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.429268 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.429293 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.429305 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.429313 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:54Z","lastTransitionTime":"2026-01-30T16:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.531443 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.531484 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.531493 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.531508 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.531520 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:54Z","lastTransitionTime":"2026-01-30T16:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.633950 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.633998 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.634007 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.634023 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.634034 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:54Z","lastTransitionTime":"2026-01-30T16:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.739030 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.739111 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.739130 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.739152 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.739170 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:54Z","lastTransitionTime":"2026-01-30T16:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.799615 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.799666 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.799689 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.799616 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:55:54 crc kubenswrapper[4712]: E0130 16:55:54.799888 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:55:54 crc kubenswrapper[4712]: E0130 16:55:54.799993 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f" Jan 30 16:55:54 crc kubenswrapper[4712]: E0130 16:55:54.800144 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:55:54 crc kubenswrapper[4712]: E0130 16:55:54.800252 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.843125 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.843181 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.843203 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.843231 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.843255 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:54Z","lastTransitionTime":"2026-01-30T16:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.858531 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 15:30:21.33173653 +0000 UTC Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.945949 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.946016 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.946050 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.946079 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:54 crc kubenswrapper[4712]: I0130 16:55:54.946099 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:54Z","lastTransitionTime":"2026-01-30T16:55:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.050326 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.050390 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.050416 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.050449 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.050472 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:55Z","lastTransitionTime":"2026-01-30T16:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.153535 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.153577 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.153593 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.153619 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.153636 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:55Z","lastTransitionTime":"2026-01-30T16:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.256882 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.257064 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.257093 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.257122 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.257142 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:55Z","lastTransitionTime":"2026-01-30T16:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.359721 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.360037 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.360054 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.360074 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.360085 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:55Z","lastTransitionTime":"2026-01-30T16:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.462479 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.462560 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.462578 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.462608 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.462625 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:55Z","lastTransitionTime":"2026-01-30T16:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.565330 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.565381 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.565394 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.565413 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.565426 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:55Z","lastTransitionTime":"2026-01-30T16:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.669013 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.669096 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.669115 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.669139 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.669156 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:55Z","lastTransitionTime":"2026-01-30T16:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.772174 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.772227 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.772246 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.772269 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.772287 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:55Z","lastTransitionTime":"2026-01-30T16:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.812190 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.858674 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 17:14:45.388950041 +0000 UTC Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.875532 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.875596 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.875613 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.875639 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.875658 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:55Z","lastTransitionTime":"2026-01-30T16:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.979098 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.979274 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.979295 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.979323 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:55 crc kubenswrapper[4712]: I0130 16:55:55.979340 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:55Z","lastTransitionTime":"2026-01-30T16:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.082153 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.082214 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.082230 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.082252 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.082268 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:56Z","lastTransitionTime":"2026-01-30T16:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.184627 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.184717 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.184732 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.184749 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.184761 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:56Z","lastTransitionTime":"2026-01-30T16:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.287694 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.287751 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.287762 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.287775 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.287783 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:56Z","lastTransitionTime":"2026-01-30T16:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.390041 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.390076 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.390084 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.390097 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.390128 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:56Z","lastTransitionTime":"2026-01-30T16:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.494170 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.494216 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.494233 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.494258 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.494275 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:56Z","lastTransitionTime":"2026-01-30T16:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.597923 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.597986 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.598007 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.598036 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.598059 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:56Z","lastTransitionTime":"2026-01-30T16:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.701049 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.701182 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.701194 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.701243 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.701259 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:56Z","lastTransitionTime":"2026-01-30T16:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.799544 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:55:56 crc kubenswrapper[4712]: E0130 16:55:56.799684 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.799784 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.799872 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.799882 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:55:56 crc kubenswrapper[4712]: E0130 16:55:56.799989 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:55:56 crc kubenswrapper[4712]: E0130 16:55:56.800231 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f" Jan 30 16:55:56 crc kubenswrapper[4712]: E0130 16:55:56.800382 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.803703 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.803748 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.803760 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.803774 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.803785 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:56Z","lastTransitionTime":"2026-01-30T16:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.859039 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 07:03:53.248927139 +0000 UTC Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.906174 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.906225 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.906239 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.906257 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:56 crc kubenswrapper[4712]: I0130 16:55:56.906271 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:56Z","lastTransitionTime":"2026-01-30T16:55:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.008445 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.008499 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.008509 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.008521 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.008530 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:57Z","lastTransitionTime":"2026-01-30T16:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.110258 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.110317 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.110327 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.110341 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.110352 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:57Z","lastTransitionTime":"2026-01-30T16:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.213057 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.213102 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.213115 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.213136 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.213148 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:57Z","lastTransitionTime":"2026-01-30T16:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.315749 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.315806 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.315820 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.315836 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.315850 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:57Z","lastTransitionTime":"2026-01-30T16:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.418024 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.418106 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.418122 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.418180 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.418194 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:57Z","lastTransitionTime":"2026-01-30T16:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.520271 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.520360 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.520380 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.520402 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.520420 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:57Z","lastTransitionTime":"2026-01-30T16:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.623719 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.623784 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.623848 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.623875 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.623893 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:57Z","lastTransitionTime":"2026-01-30T16:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.727490 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.727571 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.727598 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.727628 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.727651 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:57Z","lastTransitionTime":"2026-01-30T16:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.830089 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.830132 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.830142 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.830159 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.830171 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:57Z","lastTransitionTime":"2026-01-30T16:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.859201 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 16:05:23.838581201 +0000 UTC Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.932602 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.932702 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.932718 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.932745 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:57 crc kubenswrapper[4712]: I0130 16:55:57.932763 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:57Z","lastTransitionTime":"2026-01-30T16:55:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.035552 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.035615 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.035627 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.035643 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.035653 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:58Z","lastTransitionTime":"2026-01-30T16:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.138507 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.138586 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.138608 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.138635 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.138652 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:58Z","lastTransitionTime":"2026-01-30T16:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.241062 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.241117 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.241125 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.241138 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.241148 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:58Z","lastTransitionTime":"2026-01-30T16:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.343662 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.343701 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.343711 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.343726 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.343737 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:58Z","lastTransitionTime":"2026-01-30T16:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.446675 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.446729 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.446743 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.446760 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.446772 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:58Z","lastTransitionTime":"2026-01-30T16:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.549478 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.549517 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.549528 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.549545 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.549557 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:58Z","lastTransitionTime":"2026-01-30T16:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.652264 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.652341 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.652374 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.652402 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.652422 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:58Z","lastTransitionTime":"2026-01-30T16:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.755783 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.755930 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.755951 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.755979 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.756000 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:58Z","lastTransitionTime":"2026-01-30T16:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.798625 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.798625 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.798869 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.798965 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:55:58 crc kubenswrapper[4712]: E0130 16:55:58.799128 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:55:58 crc kubenswrapper[4712]: E0130 16:55:58.799228 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:55:58 crc kubenswrapper[4712]: E0130 16:55:58.799304 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:55:58 crc kubenswrapper[4712]: E0130 16:55:58.799345 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.859036 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.859108 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.859124 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.859156 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.859174 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:58Z","lastTransitionTime":"2026-01-30T16:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.859369 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 09:47:04.623579269 +0000 UTC Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.961959 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.962034 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.962050 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.962072 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:58 crc kubenswrapper[4712]: I0130 16:55:58.962086 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:58Z","lastTransitionTime":"2026-01-30T16:55:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.064787 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.064867 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.064877 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.064892 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.064904 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:59Z","lastTransitionTime":"2026-01-30T16:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.166862 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.166907 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.166920 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.166936 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.166948 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:59Z","lastTransitionTime":"2026-01-30T16:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.270094 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.270153 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.270169 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.270193 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.270210 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:59Z","lastTransitionTime":"2026-01-30T16:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.373734 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.374056 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.374067 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.374083 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.374092 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:59Z","lastTransitionTime":"2026-01-30T16:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.477161 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.477209 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.477219 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.477236 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.477246 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:59Z","lastTransitionTime":"2026-01-30T16:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.579350 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.579379 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.579390 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.579405 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.579415 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:59Z","lastTransitionTime":"2026-01-30T16:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.681760 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.681789 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.681812 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.681825 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.681833 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:59Z","lastTransitionTime":"2026-01-30T16:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.784354 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.784465 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.784491 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.784518 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.784538 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:59Z","lastTransitionTime":"2026-01-30T16:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.859879 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 02:29:04.803552384 +0000 UTC Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.886945 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.887042 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.887058 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.887080 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.887096 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:59Z","lastTransitionTime":"2026-01-30T16:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.989215 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.989252 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.989261 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.989273 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:55:59 crc kubenswrapper[4712]: I0130 16:55:59.989283 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:55:59Z","lastTransitionTime":"2026-01-30T16:55:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.091978 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.092021 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.092034 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.092053 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.092067 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:00Z","lastTransitionTime":"2026-01-30T16:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.194987 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.195062 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.195075 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.195092 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.195127 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:00Z","lastTransitionTime":"2026-01-30T16:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.297492 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.297538 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.297550 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.297573 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.297584 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:00Z","lastTransitionTime":"2026-01-30T16:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.400654 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.400695 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.400705 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.400721 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.400732 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:00Z","lastTransitionTime":"2026-01-30T16:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.502419 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.502492 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.502507 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.502525 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.502537 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:00Z","lastTransitionTime":"2026-01-30T16:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.605041 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.605092 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.605103 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.605119 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.605131 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:00Z","lastTransitionTime":"2026-01-30T16:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.706891 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.706929 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.706938 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.706951 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.706960 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:00Z","lastTransitionTime":"2026-01-30T16:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.799145 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.799251 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.799324 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:56:00 crc kubenswrapper[4712]: E0130 16:56:00.799447 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.799371 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:56:00 crc kubenswrapper[4712]: E0130 16:56:00.799346 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:56:00 crc kubenswrapper[4712]: E0130 16:56:00.799582 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f" Jan 30 16:56:00 crc kubenswrapper[4712]: E0130 16:56:00.799621 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.809403 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.809454 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.809471 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.809493 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.809513 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:00Z","lastTransitionTime":"2026-01-30T16:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.860944 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 00:02:51.128662286 +0000 UTC Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.911947 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.911990 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.912000 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.912014 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:00 crc kubenswrapper[4712]: I0130 16:56:00.912024 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:00Z","lastTransitionTime":"2026-01-30T16:56:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.014575 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.014655 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.014665 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.014679 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.014688 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:01Z","lastTransitionTime":"2026-01-30T16:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.117842 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.117888 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.117900 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.117918 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.117931 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:01Z","lastTransitionTime":"2026-01-30T16:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.220019 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.220083 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.220096 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.220113 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.220126 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:01Z","lastTransitionTime":"2026-01-30T16:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.323718 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.323749 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.323758 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.323770 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.323778 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:01Z","lastTransitionTime":"2026-01-30T16:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.426561 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.426627 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.426672 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.426704 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.426721 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:01Z","lastTransitionTime":"2026-01-30T16:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.528893 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.528977 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.528990 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.529008 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.529021 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:01Z","lastTransitionTime":"2026-01-30T16:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.631433 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.631480 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.631492 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.631508 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.631518 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:01Z","lastTransitionTime":"2026-01-30T16:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.734062 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.734097 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.734105 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.734119 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.734128 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:01Z","lastTransitionTime":"2026-01-30T16:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.836477 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.836517 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.836528 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.836543 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.836551 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:01Z","lastTransitionTime":"2026-01-30T16:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.862059 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 17:26:56.872225038 +0000 UTC Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.938590 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.938614 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.938622 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.938634 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.938642 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:01Z","lastTransitionTime":"2026-01-30T16:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.988413 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.988448 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.988456 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.988469 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:01 crc kubenswrapper[4712]: I0130 16:56:01.988479 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:01Z","lastTransitionTime":"2026-01-30T16:56:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:02 crc kubenswrapper[4712]: E0130 16:56:02.002692 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:02Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.006083 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.006236 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.006318 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.006401 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.006480 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:02Z","lastTransitionTime":"2026-01-30T16:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:02 crc kubenswrapper[4712]: E0130 16:56:02.017952 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:02Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.021008 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.021058 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.021085 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.021098 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.021107 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:02Z","lastTransitionTime":"2026-01-30T16:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:02 crc kubenswrapper[4712]: E0130 16:56:02.033433 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:02Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.037155 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.037228 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.037242 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.037260 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.037272 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:02Z","lastTransitionTime":"2026-01-30T16:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:02 crc kubenswrapper[4712]: E0130 16:56:02.052634 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:02Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.055976 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.056001 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.056010 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.056022 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.056031 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:02Z","lastTransitionTime":"2026-01-30T16:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:02 crc kubenswrapper[4712]: E0130 16:56:02.067076 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:02Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:02 crc kubenswrapper[4712]: E0130 16:56:02.067195 4712 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.068683 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.068715 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.068727 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.068743 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.068754 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:02Z","lastTransitionTime":"2026-01-30T16:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.171011 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.171070 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.171082 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.171100 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.171118 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:02Z","lastTransitionTime":"2026-01-30T16:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.273481 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.273535 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.273545 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.273558 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.273568 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:02Z","lastTransitionTime":"2026-01-30T16:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.375052 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.375084 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.375092 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.375104 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.375114 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:02Z","lastTransitionTime":"2026-01-30T16:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.478283 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.478320 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.478329 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.478342 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.478350 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:02Z","lastTransitionTime":"2026-01-30T16:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.510823 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/abacbc6e-6514-4db6-80b5-23570952c86f-metrics-certs\") pod \"network-metrics-daemon-lpb6h\" (UID: \"abacbc6e-6514-4db6-80b5-23570952c86f\") " pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:56:02 crc kubenswrapper[4712]: E0130 16:56:02.511014 4712 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:56:02 crc kubenswrapper[4712]: E0130 16:56:02.511083 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/abacbc6e-6514-4db6-80b5-23570952c86f-metrics-certs podName:abacbc6e-6514-4db6-80b5-23570952c86f nodeName:}" failed. No retries permitted until 2026-01-30 16:57:06.511066248 +0000 UTC m=+163.418075717 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/abacbc6e-6514-4db6-80b5-23570952c86f-metrics-certs") pod "network-metrics-daemon-lpb6h" (UID: "abacbc6e-6514-4db6-80b5-23570952c86f") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.580907 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.580935 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.580943 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.580956 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.580964 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:02Z","lastTransitionTime":"2026-01-30T16:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.683657 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.683723 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.683740 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.683764 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.683781 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:02Z","lastTransitionTime":"2026-01-30T16:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.786623 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.786670 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.786681 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.786697 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.786711 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:02Z","lastTransitionTime":"2026-01-30T16:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.799178 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.799244 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.799295 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:56:02 crc kubenswrapper[4712]: E0130 16:56:02.799337 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.799272 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:56:02 crc kubenswrapper[4712]: E0130 16:56:02.799504 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f" Jan 30 16:56:02 crc kubenswrapper[4712]: E0130 16:56:02.799552 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:56:02 crc kubenswrapper[4712]: E0130 16:56:02.799619 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.862204 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 16:17:21.692727889 +0000 UTC Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.889270 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.889298 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.889308 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.889322 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.889333 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:02Z","lastTransitionTime":"2026-01-30T16:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.991328 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.991391 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.991405 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.991635 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:02 crc kubenswrapper[4712]: I0130 16:56:02.991654 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:02Z","lastTransitionTime":"2026-01-30T16:56:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.093469 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.093522 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.093534 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.093552 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.093563 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:03Z","lastTransitionTime":"2026-01-30T16:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.195702 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.195739 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.195750 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.195768 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.195780 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:03Z","lastTransitionTime":"2026-01-30T16:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.298125 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.298169 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.298182 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.298196 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.298208 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:03Z","lastTransitionTime":"2026-01-30T16:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.400188 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.400291 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.400305 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.400326 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.400340 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:03Z","lastTransitionTime":"2026-01-30T16:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.503844 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.503891 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.503903 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.503926 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.503938 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:03Z","lastTransitionTime":"2026-01-30T16:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.606396 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.606435 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.606445 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.606478 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.606492 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:03Z","lastTransitionTime":"2026-01-30T16:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.708321 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.708357 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.708365 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.708378 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.708387 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:03Z","lastTransitionTime":"2026-01-30T16:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.810778 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.810852 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.810866 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.810882 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.810894 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:03Z","lastTransitionTime":"2026-01-30T16:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.816278 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.828403 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k255f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2482315f-1b5d-4a27-a9d9-97f4780c1869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e7482e63cefa1e8dfa0adec1061ace4e1a482a646844ba1d77038309e904cf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh9k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k255f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.843936 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea67b02c-fc08-4a69-8c7f-c8da661a12ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea2b37d1f833ec9e1eba6b034b7178206d4c32d383bdaf40b270c3f7219de4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://529cdf849b64d965fb2b3276a2e033621e7695668b6b05041603ad93659a1c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4f9lf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:03Z is after 2025-08-24T17:21:41Z" Jan 30 
16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.859407 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ab27748-3507-429f-888b-b45b4d17b014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d979ececdc18afdb1f364feaa3244db42fef8e523e58fc43b94b1775f9530e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"\\\\nI0130 16:54:44.564860 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:54:44.564709 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564904 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564720 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:54:44.564916 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:54:44.565055 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:54:44.565065 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:54:44.564730 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2769129766/tls.crt::/tmp/serving-cert-2769129766/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769792078\\\\\\\\\\\\\\\" (2026-01-30 16:54:37 +0000 UTC to 2026-03-01 16:54:38 +0000 UTC (now=2026-01-30 16:54:44.533361706 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565458 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792084\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792084\\\\\\\\\\\\\\\" (2026-01-30 15:54:43 +0000 UTC to 2027-01-30 15:54:43 +0000 UTC (now=2026-01-30 16:54:44.565435832 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565474 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:54:44.565499 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:54:44.565514 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.862896 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 14:28:57.445907214 +0000 UTC Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.872832 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
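
The certificate_manager.go line above records a separate certificate concern: the kubelet's own serving certificate is valid until 2026-02-24, but its rotation deadline of 2025-12-10 already lies in the past relative to the node clock, so rotation is overdue and will be attempted immediately. client-go's certificate manager picks that deadline at a jittered point roughly 70-90% into the certificate's validity window; the sketch below only illustrates the idea, with an assumed one-year validity period and illustrative constants, and is not the actual client-go implementation.

    // rotation_deadline.go - illustrative sketch of a jittered rotation deadline
    // like the one certificate_manager.go logs above. The 70-90% window and the
    // one-year validity are assumptions, not values copied from client-go.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func rotationDeadline(notBefore, notAfter time.Time) time.Time {
        total := notAfter.Sub(notBefore)
        jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
        return notBefore.Add(jittered)
    }

    func main() {
        notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z") // from the log line above
        notBefore := notAfter.Add(-365 * 24 * time.Hour)                // assumed validity period
        deadline := rotationDeadline(notBefore, notAfter)
        fmt.Println("rotation deadline:", deadline.UTC().Format(time.RFC3339))
        fmt.Println("overdue:", time.Now().After(deadline))
    }
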
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a4e02635c5d1398f5642a4f78ab59f957e82d7b595048b900f7f57998072b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca104420859d47e9447dba6c7629515ac5d94c76d90029f8b02677814c6cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.885962 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe68da7f075332bd889ca6351f6c58e8a4e2cb0a14e68dd9a882f15c348ec5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.902761 4712 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://383cb9db140e32a25c872a2355da98c9b6e39191bc10d76b4420e18580464c00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:55:32Z\\\",\\\"message\\\":\\\"2026-01-30T16:54:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ba54d85f-38b5-4a96-b6a2-74f61114ba0c\\\\n2026-01-30T16:54:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ba54d85f-38b5-4a96-b6a2-74f61114ba0c to /host/opt/cni/bin/\\\\n2026-01-30T16:54:47Z [verbose] multus-daemon started\\\\n2026-01-30T16:54:47Z [verbose] Readiness Indicator file check\\\\n2026-01-30T16:55:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.914532 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
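
The lastState message in the multus-9vnxv entry above explains that container's restart: kube-multus waited for the readiness indicator file /host/run/multus/cni/net.d/10-ovn-kubernetes.conf, which ovn-kubernetes had not yet written, and gave up after roughly 45 seconds ("pollimmediate error" is the signature of a PollImmediate-style wait loop timing out). Below is a stdlib-only sketch of that wait; the one-second interval and 45-second timeout are assumptions for illustration, not multus's actual settings.

    // readiness_poll.go - stdlib sketch of the readiness-indicator wait that
    // kube-multus logs above. Interval and timeout are illustrative assumptions.
    package main

    import (
        "errors"
        "fmt"
        "os"
        "time"
    )

    func waitForFile(path string, interval, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil // indicator file exists: the default network is ready
            }
            if time.Now().After(deadline) {
                return errors.New("timed out waiting for the condition")
            }
            time.Sleep(interval)
        }
    }

    func main() {
        const indicator = "/host/run/multus/cni/net.d/10-ovn-kubernetes.conf"
        if err := waitForFile(indicator, time.Second, 45*time.Second); err != nil {
            fmt.Println("still waiting for readinessindicatorfile @", indicator, "-", err)
            os.Exit(1)
        }
        fmt.Println("default network ready")
    }
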
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a40d940-4f5a-42b6-80cb-fe98c14066c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4750ebab3eaeb8b0c465d2257c417e68692c999f382e05630a3f317f3f9ea65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd316abcb06f9cb980b110261410e1646a36fe9c70e3384aa128b178272fb6d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40670c5fb8ecc02e067cbb1ad22ade50ba2c40d03ff8b3b3eac1c0b7f3e1f599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369458cf36c7825613a5613214a88605b5a6247cbd2465f7bb924facf4d573a8\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://369458cf36c7825613a5613214a88605b5a6247cbd2465f7bb924facf4d573a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.914699 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.914734 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.914741 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.914756 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.914765 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:03Z","lastTransitionTime":"2026-01-30T16:56:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.927316 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a59aa188d91c356dc946cef975ab964cb48ed5ad8a7b1b1a97bbdf4f6a463f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.940114 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.950344 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.961271 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7682a4940bfaf61053fc403898dd912a57e1152b7f646f65d8931bbe5a5aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.977357 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4777970a-81c4-4412-a06b-641a8343a749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d50d143c711318276147641c1904b0f9c28209e0ead64b6bc2578d0914e821a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-69v8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:03Z is after 
2025-08-24T17:21:41Z" Jan 30 16:56:03 crc kubenswrapper[4712]: I0130 16:56:03.987814 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lpb6h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"abacbc6e-6514-4db6-80b5-23570952c86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnj8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnj8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lpb6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.000610 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
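
A note on reading these payloads: each patch body is a strategic merge patch, and the $setElementOrder/conditions directive pins the ordering of the conditions list, which the API server merges by its type key. The JSON arrives with doubled and tripled backslashes because it is quoted inside the klog message, which is quoted again in the journal line. The helper below is a convenience sketch, not part of any OpenShift tool: paste one escaped payload into a file, pass the filename as the argument, and it peels the quoting layers back to readable JSON.

    // unescape_patch.go - convenience sketch for reading the escaped patch
    // bodies above. It repeatedly unquotes until a bare JSON object remains,
    // then pretty-prints it.
    package main

    import (
        "encoding/json"
        "fmt"
        "log"
        "os"
        "strconv"
        "strings"
    )

    func main() {
        raw, err := os.ReadFile(os.Args[1])
        if err != nil {
            log.Fatal(err)
        }
        s := strings.TrimSpace(string(raw))
        for !strings.HasPrefix(s, "{") {
            if !strings.HasPrefix(s, `"`) {
                s = `"` + s + `"` // re-wrap a bare escaped fragment so Unquote accepts it
            }
            u, err := strconv.Unquote(s)
            if err != nil {
                log.Fatalf("cannot unquote further: %v", err)
            }
            s = u
        }
        var obj map[string]any
        if err := json.Unmarshal([]byte(s), &obj); err != nil {
            log.Fatal(err)
        }
        pretty, _ := json.MarshalIndent(obj, "", "  ")
        fmt.Println(string(pretty))
    }
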
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ee3a199-4fce-4e8b-bf6d-a8a4a31e6592\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a77d9ecb01962b110c243f6cbe7afa7e35ff46587ae5f521e5c0b7d833fe84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9cf1694ebc230620e715e416388ffe9e9224ba48349257de31e4f68c535b99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9cf1694ebc230620e715e416388ffe9e9224ba48349257de31e4f68c535b99b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:03Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.014161 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f1f355d-892f-40a3-8fae-090e9e69b585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c2d61cc827e2a3a1ff4d597f89532719a52fe5fd353eb2ec9905533cfb8897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0930b443376f024df5c05a35328264b347e4b58fef7c64e9bb50b084c461e19f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b9e52241ae256ff758f4cf4bdd6a588a6d37cdc5185c372a9942d9d2c9e8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:04Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.016572 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.016608 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.016622 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.016636 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.016648 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:04Z","lastTransitionTime":"2026-01-30T16:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.026918 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e483f7201537775910f8d1a99625c35050e43bebb2947e7c7d076cccd2a3fe1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:04Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.047291 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93651476-fd00-4a9e-934a-73537f1d103e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:55:36Z\\\",\\\"message\\\":\\\"lector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 16:55:36.712246 6670 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:55:36.713464 6670 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:55:36.713557 6670 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:55:36.713693 6670 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:55:36.714040 6670 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 16:55:36.755214 6670 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0130 16:55:36.755270 6670 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0130 16:55:36.755403 6670 ovnkube.go:599] Stopped ovnkube\\\\nI0130 16:55:36.755452 6670 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 16:55:36.755594 6670 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:55:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-228xs_openshift-ovn-kubernetes(93651476-fd00-4a9e-934a-73537f1d103e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-228xs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:04Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.064992 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dd639c-2367-490e-801b-9042a158c962\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b98b043cb83155f1304d4e3e7afaa43921f5f9bd288e2d50222df3c549a97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5aceb9806198f5a718418c0
834056dadafaec6a0d93a3d4253f3d18d3e7f6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a5b6fd14c3399f1335ab849702db5ae6ab412a259215225a3d1b45afdf29c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7424ed8c0152d8e7a206225cef7124928cbd700835cfa9f9c4e523bfe900f3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49662995d818377576fc281d55d5d944d785d5a0a11d45e86e9f08ec7fa2bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:04Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.119119 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.119169 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:04 crc 
kubenswrapper[4712]: I0130 16:56:04.119183 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.119213 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.119228 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:04Z","lastTransitionTime":"2026-01-30T16:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.221737 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.221834 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.221850 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.221873 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.221913 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:04Z","lastTransitionTime":"2026-01-30T16:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.325243 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.325282 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.325292 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.325307 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.325317 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:04Z","lastTransitionTime":"2026-01-30T16:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.427526 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.427827 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.427939 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.428045 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.428124 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:04Z","lastTransitionTime":"2026-01-30T16:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.532974 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.533014 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.533025 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.533043 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.533057 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:04Z","lastTransitionTime":"2026-01-30T16:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.636110 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.636177 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.636187 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.636222 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.636234 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:04Z","lastTransitionTime":"2026-01-30T16:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.737666 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.737704 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.737714 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.737729 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.737740 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:04Z","lastTransitionTime":"2026-01-30T16:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.799191 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.799223 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.799231 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.799304 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:56:04 crc kubenswrapper[4712]: E0130 16:56:04.799730 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:56:04 crc kubenswrapper[4712]: E0130 16:56:04.799887 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f" Jan 30 16:56:04 crc kubenswrapper[4712]: E0130 16:56:04.799982 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.800023 4712 scope.go:117] "RemoveContainer" containerID="12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a" Jan 30 16:56:04 crc kubenswrapper[4712]: E0130 16:56:04.800139 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:56:04 crc kubenswrapper[4712]: E0130 16:56:04.800164 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-228xs_openshift-ovn-kubernetes(93651476-fd00-4a9e-934a-73537f1d103e)\"" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" podUID="93651476-fd00-4a9e-934a-73537f1d103e" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.839531 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.839589 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.839606 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.839629 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.839647 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:04Z","lastTransitionTime":"2026-01-30T16:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.863029 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 14:38:31.472014184 +0000 UTC Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.941367 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.941432 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.941457 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.941486 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:04 crc kubenswrapper[4712]: I0130 16:56:04.941511 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:04Z","lastTransitionTime":"2026-01-30T16:56:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:05 crc kubenswrapper[4712]: I0130 16:56:05.044457 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:05 crc kubenswrapper[4712]: I0130 16:56:05.044498 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:05 crc kubenswrapper[4712]: I0130 16:56:05.044511 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:05 crc kubenswrapper[4712]: I0130 16:56:05.044527 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:05 crc kubenswrapper[4712]: I0130 16:56:05.044539 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:05Z","lastTransitionTime":"2026-01-30T16:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 30 16:56:05 crc kubenswrapper[4712]: I0130 16:56:05.147274 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:56:05 crc kubenswrapper[4712]: I0130 16:56:05.147315 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:56:05 crc kubenswrapper[4712]: I0130 16:56:05.147324 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:56:05 crc kubenswrapper[4712]: I0130 16:56:05.147337 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:56:05 crc kubenswrapper[4712]: I0130 16:56:05.147345 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:05Z","lastTransitionTime":"2026-01-30T16:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[... the same five-entry status cycle (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, "Node became not ready") repeats roughly every 100 ms from 16:56:05.250 through 16:56:10.705, differing only in timestamps; the distinct entries interleaved with those cycles are kept below ...]
Jan 30 16:56:05 crc kubenswrapper[4712]: I0130 16:56:05.863860 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 05:08:14.819270445 +0000 UTC
Jan 30 16:56:06 crc kubenswrapper[4712]: I0130 16:56:06.798946 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:56:06 crc kubenswrapper[4712]: I0130 16:56:06.799020 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:56:06 crc kubenswrapper[4712]: I0130 16:56:06.798943 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:56:06 crc kubenswrapper[4712]: E0130 16:56:06.799077 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 16:56:06 crc kubenswrapper[4712]: E0130 16:56:06.799166 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 16:56:06 crc kubenswrapper[4712]: E0130 16:56:06.799422 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 16:56:06 crc kubenswrapper[4712]: I0130 16:56:06.799454 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h"
Jan 30 16:56:06 crc kubenswrapper[4712]: E0130 16:56:06.799955 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f"
Jan 30 16:56:06 crc kubenswrapper[4712]: I0130 16:56:06.864676 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 07:05:18.348663409 +0000 UTC
Jan 30 16:56:07 crc kubenswrapper[4712]: I0130 16:56:07.865857 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 07:33:32.166255595 +0000 UTC
Jan 30 16:56:08 crc kubenswrapper[4712]: I0130 16:56:08.799262 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h"
Jan 30 16:56:08 crc kubenswrapper[4712]: I0130 16:56:08.799314 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:56:08 crc kubenswrapper[4712]: I0130 16:56:08.799385 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:56:08 crc kubenswrapper[4712]: E0130 16:56:08.799407 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f"
Jan 30 16:56:08 crc kubenswrapper[4712]: I0130 16:56:08.799275 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:56:08 crc kubenswrapper[4712]: E0130 16:56:08.799529 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 16:56:08 crc kubenswrapper[4712]: E0130 16:56:08.799685 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 30 16:56:08 crc kubenswrapper[4712]: E0130 16:56:08.799757 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 16:56:08 crc kubenswrapper[4712]: I0130 16:56:08.866921 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 22:25:23.515584303 +0000 UTC
Jan 30 16:56:09 crc kubenswrapper[4712]: I0130 16:56:09.867322 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 10:28:36.335056983 +0000 UTC
Jan 30 16:56:10 crc kubenswrapper[4712]: I0130 16:56:10.799603 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h"
Jan 30 16:56:10 crc kubenswrapper[4712]: I0130 16:56:10.799624 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 30 16:56:10 crc kubenswrapper[4712]: I0130 16:56:10.799713 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:56:10 crc kubenswrapper[4712]: I0130 16:56:10.799718 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:56:10 crc kubenswrapper[4712]: E0130 16:56:10.799832 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f"
Jan 30 16:56:10 crc kubenswrapper[4712]: E0130 16:56:10.799935 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 30 16:56:10 crc kubenswrapper[4712]: E0130 16:56:10.800062 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:56:10 crc kubenswrapper[4712]: E0130 16:56:10.800125 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:56:10 crc kubenswrapper[4712]: I0130 16:56:10.808937 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:10 crc kubenswrapper[4712]: I0130 16:56:10.809002 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:10 crc kubenswrapper[4712]: I0130 16:56:10.809015 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:10 crc kubenswrapper[4712]: I0130 16:56:10.809033 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:10 crc kubenswrapper[4712]: I0130 16:56:10.809046 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:10Z","lastTransitionTime":"2026-01-30T16:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:10 crc kubenswrapper[4712]: I0130 16:56:10.867824 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 12:17:38.94599706 +0000 UTC Jan 30 16:56:10 crc kubenswrapper[4712]: I0130 16:56:10.911896 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:10 crc kubenswrapper[4712]: I0130 16:56:10.911949 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:10 crc kubenswrapper[4712]: I0130 16:56:10.911963 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:10 crc kubenswrapper[4712]: I0130 16:56:10.911979 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:10 crc kubenswrapper[4712]: I0130 16:56:10.911991 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:10Z","lastTransitionTime":"2026-01-30T16:56:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.014455 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.014495 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.014504 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.014519 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.014529 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:11Z","lastTransitionTime":"2026-01-30T16:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.116534 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.116565 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.116576 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.116591 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.116601 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:11Z","lastTransitionTime":"2026-01-30T16:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.220181 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.220236 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.220248 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.220268 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.220280 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:11Z","lastTransitionTime":"2026-01-30T16:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.322889 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.322945 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.322977 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.323000 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.323016 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:11Z","lastTransitionTime":"2026-01-30T16:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.425191 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.425261 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.425298 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.425331 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.425356 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:11Z","lastTransitionTime":"2026-01-30T16:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.527680 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.527716 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.527727 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.527743 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.527753 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:11Z","lastTransitionTime":"2026-01-30T16:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.629788 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.629859 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.629869 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.629884 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.629893 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:11Z","lastTransitionTime":"2026-01-30T16:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.732551 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.732608 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.732626 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.732649 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.732665 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:11Z","lastTransitionTime":"2026-01-30T16:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.836055 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.836084 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.836092 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.836104 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.836114 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:11Z","lastTransitionTime":"2026-01-30T16:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.868184 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 02:46:41.542668306 +0000 UTC Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.938728 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.938766 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.938776 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.938789 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:11 crc kubenswrapper[4712]: I0130 16:56:11.938815 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:11Z","lastTransitionTime":"2026-01-30T16:56:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.042286 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.042371 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.042389 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.042408 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.042420 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:12Z","lastTransitionTime":"2026-01-30T16:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.145061 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.145108 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.145121 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.145136 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.145148 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:12Z","lastTransitionTime":"2026-01-30T16:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.248206 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.248248 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.248614 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.248642 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.248659 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:12Z","lastTransitionTime":"2026-01-30T16:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.325421 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.325463 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.325475 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.325497 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.325510 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:12Z","lastTransitionTime":"2026-01-30T16:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:12 crc kubenswrapper[4712]: E0130 16:56:12.343084 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:12Z is after 2025-08-24T17:21:41Z"
Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.348302 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.348360 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
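[editor's note, not part of the log] The patch failure above reduces to a TLS validity-window check: the node.network-node-identity.openshift.io webhook presents a serving certificate whose NotAfter is 2025-08-24T17:21:41Z, while the node clock reads 2026-01-30, so every status patch through it is rejected. A minimal Go sketch of that same check, assuming a hypothetical PEM file path (the path is illustrative, not taken from this log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	certPath := "/path/to/webhook-serving-cert.pem" // hypothetical path
	data, err := os.ReadFile(certPath)
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		log.Fatal("no CERTIFICATE block found in PEM data")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	now := time.Now().UTC()
	// Mirrors the check that fails in the log: the current time must fall
	// inside [NotBefore, NotAfter] for certificate verification to succeed.
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("certificate not yet valid: current time %s is before %s\n",
			now.Format(time.RFC3339), cert.NotBefore.Format(time.RFC3339))
	case now.After(cert.NotAfter):
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.Format(time.RFC3339), cert.NotAfter.Format(time.RFC3339))
	default:
		fmt.Printf("certificate valid until %s\n", cert.NotAfter.Format(time.RFC3339))
	}
}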
event="NodeHasNoDiskPressure" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.348378 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.348400 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.348418 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:12Z","lastTransitionTime":"2026-01-30T16:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:12 crc kubenswrapper[4712]: E0130 16:56:12.365992 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:12Z is after 2025-08-24T17:21:41Z"
Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.369588 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.369646 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
event="NodeHasNoDiskPressure" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.369663 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.369686 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.369703 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:12Z","lastTransitionTime":"2026-01-30T16:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:12 crc kubenswrapper[4712]: E0130 16:56:12.389004 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.427517 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.427645 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.427665 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.427690 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.427875 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:12Z","lastTransitionTime":"2026-01-30T16:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:12 crc kubenswrapper[4712]: E0130 16:56:12.445225 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.448936 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.448975 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.448988 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.449003 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.449015 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:12Z","lastTransitionTime":"2026-01-30T16:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:12 crc kubenswrapper[4712]: E0130 16:56:12.466130 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24148068Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24608868Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:56:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-30T16:56:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"186bde97-c593-497a-8d99-0cd60600c22e\\\",\\\"systemUUID\\\":\\\"096c9b47-6024-413f-8880-1431e038a7d7\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:12Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:12 crc kubenswrapper[4712]: E0130 16:56:12.466242 4712 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.467877 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.467909 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.467918 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.467934 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.467943 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:12Z","lastTransitionTime":"2026-01-30T16:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.571344 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.571398 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.571415 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.571435 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.571450 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:12Z","lastTransitionTime":"2026-01-30T16:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.674154 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.674190 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.674201 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.674216 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.674226 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:12Z","lastTransitionTime":"2026-01-30T16:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.777516 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.777559 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.777571 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.777587 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.777599 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:12Z","lastTransitionTime":"2026-01-30T16:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.799257 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.799327 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.799374 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:56:12 crc kubenswrapper[4712]: E0130 16:56:12.799379 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.799441 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:56:12 crc kubenswrapper[4712]: E0130 16:56:12.799507 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:56:12 crc kubenswrapper[4712]: E0130 16:56:12.799614 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f" Jan 30 16:56:12 crc kubenswrapper[4712]: E0130 16:56:12.799666 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.868847 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 02:22:08.204770451 +0000 UTC Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.879693 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.879733 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.879744 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.879763 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.879773 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:12Z","lastTransitionTime":"2026-01-30T16:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.981745 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.981788 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.981842 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.981857 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:12 crc kubenswrapper[4712]: I0130 16:56:12.981867 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:12Z","lastTransitionTime":"2026-01-30T16:56:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.083997 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.084052 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.084061 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.084073 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.084084 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:13Z","lastTransitionTime":"2026-01-30T16:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.186708 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.186746 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.186758 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.186772 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.186783 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:13Z","lastTransitionTime":"2026-01-30T16:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.288442 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.288702 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.288789 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.288898 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.288956 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:13Z","lastTransitionTime":"2026-01-30T16:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.392231 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.392433 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.392524 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.392591 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.392647 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:13Z","lastTransitionTime":"2026-01-30T16:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.495436 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.495480 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.495492 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.495507 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.495518 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:13Z","lastTransitionTime":"2026-01-30T16:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.598014 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.598060 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.598073 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.598088 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.598097 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:13Z","lastTransitionTime":"2026-01-30T16:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.700445 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.700494 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.700510 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.700526 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.700540 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:13Z","lastTransitionTime":"2026-01-30T16:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.804575 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.804628 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.804645 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.804667 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.804684 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:13Z","lastTransitionTime":"2026-01-30T16:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.819494 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8ab27748-3507-429f-888b-b45b4d17b014\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://65d979ececdc18afdb1f364feaa3244db42fef8e523e58fc43b94b1775f9530e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"\\\\nI0130 16:54:44.564860 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file\\\\nI0130 16:54:44.564709 1 requestheader_controller.go:172] Starting RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564904 1 shared_informer.go:313] Waiting for caches to sync for RequestHeaderAuthRequestController\\\\nI0130 16:54:44.564720 1 configmap_cafile_content.go:205] \\\\\\\"Starting controller\\\\\\\" name=\\\\\\\"client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\\\\"\\\\nI0130 16:54:44.564916 1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file\\\\nI0130 16:54:44.565055 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"WatchListClient\\\\\\\" enabled=false\\\\nI0130 16:54:44.565065 1 envvar.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nI0130 16:54:44.564730 1 tlsconfig.go:203] \\\\\\\"Loaded serving cert\\\\\\\" certName=\\\\\\\"serving-cert::/tmp/serving-cert-2769129766/tls.crt::/tmp/serving-cert-2769129766/tls.key\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"localhost\\\\\\\\\\\\\\\" [serving] validServingFor=[localhost] issuer=\\\\\\\\\\\\\\\"check-endpoints-signer@1769792078\\\\\\\\\\\\\\\" (2026-01-30 16:54:37 +0000 UTC to 2026-03-01 16:54:38 +0000 UTC (now=2026-01-30 16:54:44.533361706 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565458 1 named_certificates.go:53] \\\\\\\"Loaded SNI cert\\\\\\\" index=0 certName=\\\\\\\"self-signed loopback\\\\\\\" certDetail=\\\\\\\"\\\\\\\\\\\\\\\"apiserver-loopback-client@1769792084\\\\\\\\\\\\\\\" [serving] validServingFor=[apiserver-loopback-client] issuer=\\\\\\\\\\\\\\\"apiserver-loopback-client-ca@1769792084\\\\\\\\\\\\\\\" (2026-01-30 15:54:43 +0000 UTC to 2027-01-30 15:54:43 +0000 UTC (now=2026-01-30 16:54:44.565435832 +0000 UTC))\\\\\\\"\\\\nI0130 16:54:44.565474 1 secure_serving.go:213] Serving securely on [::]:17697\\\\nI0130 16:54:44.565499 1 genericapiserver.go:683] [graceful-termination] waiting for shutdown to be initiated\\\\nI0130 16:54:44.565514 1 tlsconfig.go:243] \\\\\\\"Starting 
DynamicServingCertificateController\\\\\\\"\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:38Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:55:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.837724 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.848377 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-k255f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2482315f-1b5d-4a27-a9d9-97f4780c1869\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6e7482e63cefa1e8dfa0adec1061ace4e1a482a646844ba1d77038309e904cf7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh9k5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:47Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-k255f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.860044 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ea67b02c-fc08-4a69-8c7f-c8da661a12ea\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dea2b37d1f833ec9e1eba6b034b7178206d4c32d383bdaf40b270c3f7219de4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://529cdf849b64d965fb2b3276a2e033621e7695668b6b05041603ad93659a1c73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vfg8q\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:56Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4f9lf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:13Z is after 2025-08-24T17:21:41Z" Jan 30 
16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.869042 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 12:15:35.454716825 +0000 UTC Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.873821 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9a40d940-4f5a-42b6-80cb-fe98c14066c9\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c4750ebab3eaeb8b0c465d2257c417e68692c999f382e05630a3f317f3f9ea65\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cd316abcb06f9cb980b110261410e1646a36fe9c70e3384aa128b178272fb6d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://40670c5fb8ecc02e067cbb1ad22ade50ba2c40d03ff8b3b3eac1c0b7f3e1f599\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"
}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://369458cf36c7825613a5613214a88605b5a6247cbd2465f7bb924facf4d573a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://369458cf36c7825613a5613214a88605b5a6247cbd2465f7bb924facf4d573a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.891144 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8a4e02635c5d1398f5642a4f78ab59f957e82d7b595048b900f7f57998072b34\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://9ca104420859d47e9447dba6c7629515ac5d94c76d90029f8b02677814c6cf88\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.902601 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"75ff6334-72a0-4748-bba6-0efb493c8033\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fe68da7f075332bd889ca6351f6c58e8a4e2cb0a14e68dd9a882f15c348ec5bc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c5zwb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-dwnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.906170 4712 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.906199 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.906210 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.906225 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.906236 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:13Z","lastTransitionTime":"2026-01-30T16:56:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.913971 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-9vnxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dcd71c7c-942c-4c29-969e-45d946f356c8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:55:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://383cb9db140e32a25c872a2355da98c9b6e39191bc10d76b4420e18580464c00\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:55:32Z\\\",\\\"message\\\":\\\"2026-01-30T16:54:46+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ba54d85f-38b5-4a96-b6a2-74f61114ba0c\\\\n2026-01-30T16:54:46+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ba54d85f-38b5-4a96-b6a2-74f61114ba0c to /host/opt/cni/bin/\\\\n2026-01-30T16:54:47Z [verbose] multus-daemon started\\\\n2026-01-30T16:54:47Z [verbose] Readiness Indicator file check\\\\n2026-01-30T16:55:32Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:44Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:55:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8sfdn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-9vnxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.923512 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-2mlzr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1bac1dc0-d552-4864-b805-fc92981ae4c0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d7682a4940bfaf61053fc403898dd912a57e1152b7f646f65d8931bbe5a5aee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gtwpm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-2mlzr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.939470 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-69v8h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4777970a-81c4-4412-a06b-641a8343a749\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d50d143c711318276147641c1904b0f9c28209e0ead64b6bc2578d0914e821a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:51Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://359810c74d3f87e0d5e1d999de2c258ae3973dab43515d57d155f128c3ff94b2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27c4349c10695721b7cf02ef9692111df5c7e1d4a979efb27656c1f1d7bf4a50\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:46Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1bb24fa4abde02b85b25ba7d6fcc18bbd583b180c7c685deeb40fb3dc52c9d72\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://65e360c0be7cc41e975f83cb90a7dcf06502320f778e1f1b1e2e2935932c4a5d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:48Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://291cdb5f4ee893c5cddf039271822fc0b83ca83678ee2529d9e7e727aee084cc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f8b0111485e4f6d9d343b3f14f155b43c8e9e95be8e9d10ef69dfd5642a75726\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:50Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wsrqg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-69v8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.950837 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-lpb6h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"abacbc6e-6514-4db6-80b5-23570952c86f\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:58Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnj8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jnj8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:58Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-lpb6h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.959985 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5ee3a199-4fce-4e8b-bf6d-a8a4a31e6592\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a77d9ecb01962b110c243f6cbe7afa7e35ff46587ae5f521e5c0b7d833fe84e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c9cf1694ebc230620e715e416388ffe9e9224ba48349257de31e4f68c535b99b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c9cf1694ebc230620e715e416388ffe9e9224ba48349257de31e4f68c535b99b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.970815 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0a59aa188d91c356dc946cef975ab964cb48ed5ad8a7b1b1a97bbdf4f6a463f8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.983813 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:13 crc kubenswrapper[4712]: I0130 16:56:13.996508 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:42Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:13Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.008700 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.008740 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.008752 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.008769 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.008784 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:14Z","lastTransitionTime":"2026-01-30T16:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.015816 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d4dd639c-2367-490e-801b-9042a158c962\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b1b98b043cb83155f1304d4e3e7afaa43921f5f9bd288e2d50222df3c549a97c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d5aceb9806198f5a718418c0834056dadafaec6a0d93a3d4253f3d18d3e7f6b7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0a5b6fd14c3399f1335ab849702db5ae6ab412a259215225a3d1b45afdf29c21\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7424ed8c0152d8e7a206225cef7124928cbd700835cfa9f9c4e523bfe900f3c3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e49662995d818377576fc281d55d5d944d785d5a0a11d45e86e9f08ec7fa2bba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b8260dff836395317df49f63cce948036b59d5a65b15d311a9aca613ccd31991\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f5d77ab2700f729895bf992763e95ec4a5a7d719450b0691caac9e3d565239cd\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-30T16:54:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:26Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0d2115368aa9d3b01b8ff03a51c1c6d93df8c01ae147994ae9fa09401e42ff16\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.027509 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f1f355d-892f-40a3-8fae-090e9e69b585\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://14c2d61cc827e2a3a1ff4d597f89532719a52fe5fd353eb2ec9905533cfb8897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0930b443376f024df5c05a35328264b347e4b58fef7c64e9bb50b084c461e19f\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b9e52241ae256ff758f4cf4bdd6a588a6d37cdc5185c372a9942d9d2c9e8db\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:23Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.037344 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e483f7201537775910f8d1a99625c35050e43bebb2947e7c7d076cccd2a3fe1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-30T16:56:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.056049 4712 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"93651476-fd00-4a9e-934a-73537f1d103e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-30T16:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257
453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-30T16:55:36Z\\\",\\\"message\\\":\\\"lector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 16:55:36.712246 6670 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:55:36.713464 6670 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:55:36.713557 6670 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:55:36.713693 6670 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0130 16:55:36.714040 6670 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0130 16:55:36.755214 6670 shared_informer.go:320] Caches are synced for node-tracker-controller\\\\nI0130 16:55:36.755270 6670 services_controller.go:204] Setting up event handlers for services for network=default\\\\nI0130 16:55:36.755403 6670 ovnkube.go:599] Stopped ovnkube\\\\nI0130 16:55:36.755452 6670 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0130 16:55:36.755594 6670 ovnkube.go:137] failed to run ov\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-30T16:55:35Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=ovnkube-controller pod=ovnkube-node-228xs_openshift-ovn-kubernetes(93651476-fd00-4a9e-934a-73537f1d103e)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-30T16:54:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-30T16:54:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-30T16:54:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rxzgm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-30T16:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-228xs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-30T16:56:14Z is after 2025-08-24T17:21:41Z" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.111691 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.111774 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.111785 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.111833 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.111846 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:14Z","lastTransitionTime":"2026-01-30T16:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.214632 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.214672 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.214690 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.214720 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.214737 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:14Z","lastTransitionTime":"2026-01-30T16:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.316756 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.316829 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.316841 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.316859 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.316873 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:14Z","lastTransitionTime":"2026-01-30T16:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.419031 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.419076 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.419090 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.419106 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.419115 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:14Z","lastTransitionTime":"2026-01-30T16:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.521513 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.521557 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.521571 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.521588 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.521602 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:14Z","lastTransitionTime":"2026-01-30T16:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.624032 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.624103 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.624114 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.624132 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.624144 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:14Z","lastTransitionTime":"2026-01-30T16:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.726775 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.726865 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.726884 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.726908 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.726924 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:14Z","lastTransitionTime":"2026-01-30T16:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.798913 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.798955 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.798943 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.798913 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:56:14 crc kubenswrapper[4712]: E0130 16:56:14.799055 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:56:14 crc kubenswrapper[4712]: E0130 16:56:14.799178 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:56:14 crc kubenswrapper[4712]: E0130 16:56:14.799340 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:56:14 crc kubenswrapper[4712]: E0130 16:56:14.799401 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.830341 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.830389 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.830400 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.830415 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.830426 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:14Z","lastTransitionTime":"2026-01-30T16:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.869529 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 21:30:49.688989532 +0000 UTC Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.932929 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.932970 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.932982 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.932996 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:14 crc kubenswrapper[4712]: I0130 16:56:14.933009 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:14Z","lastTransitionTime":"2026-01-30T16:56:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.036026 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.036072 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.036090 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.036117 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.036140 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:15Z","lastTransitionTime":"2026-01-30T16:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.138594 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.138641 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.138655 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.138672 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.138684 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:15Z","lastTransitionTime":"2026-01-30T16:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.241238 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.241288 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.241300 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.241317 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.241328 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:15Z","lastTransitionTime":"2026-01-30T16:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.343842 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.343884 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.343896 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.343910 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.343920 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:15Z","lastTransitionTime":"2026-01-30T16:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.446273 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.446324 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.446337 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.446354 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.446364 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:15Z","lastTransitionTime":"2026-01-30T16:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.548643 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.548674 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.548682 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.548720 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.548730 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:15Z","lastTransitionTime":"2026-01-30T16:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.651130 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.651402 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.651560 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.651655 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.651744 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:15Z","lastTransitionTime":"2026-01-30T16:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.754633 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.754966 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.755056 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.755153 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.755244 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:15Z","lastTransitionTime":"2026-01-30T16:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.858671 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.858707 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.858716 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.858730 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.858739 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:15Z","lastTransitionTime":"2026-01-30T16:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.870578 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 09:36:10.56396294 +0000 UTC Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.960789 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.960849 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.960858 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.960871 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:15 crc kubenswrapper[4712]: I0130 16:56:15.960879 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:15Z","lastTransitionTime":"2026-01-30T16:56:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.062964 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.063230 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.063299 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.063406 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.063479 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:16Z","lastTransitionTime":"2026-01-30T16:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.166298 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.166348 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.166358 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.166374 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.166386 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:16Z","lastTransitionTime":"2026-01-30T16:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.268588 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.268637 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.268646 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.268664 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.268675 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:16Z","lastTransitionTime":"2026-01-30T16:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.370960 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.371038 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.371069 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.371111 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.371128 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:16Z","lastTransitionTime":"2026-01-30T16:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.473465 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.473508 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.473520 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.473538 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.473552 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:16Z","lastTransitionTime":"2026-01-30T16:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.575437 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.575476 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.575487 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.575503 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.575515 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:16Z","lastTransitionTime":"2026-01-30T16:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.677066 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.677134 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.677145 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.677162 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.677173 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:16Z","lastTransitionTime":"2026-01-30T16:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.780332 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.780369 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.780382 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.780399 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.780410 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:16Z","lastTransitionTime":"2026-01-30T16:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.799059 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.799090 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.799144 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:56:16 crc kubenswrapper[4712]: E0130 16:56:16.799238 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.799289 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:56:16 crc kubenswrapper[4712]: E0130 16:56:16.799459 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:56:16 crc kubenswrapper[4712]: E0130 16:56:16.799550 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:56:16 crc kubenswrapper[4712]: E0130 16:56:16.799664 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.871855 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 18:34:08.401928509 +0000 UTC Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.883036 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.883090 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.883111 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.883137 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.883155 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:16Z","lastTransitionTime":"2026-01-30T16:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.985507 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.985855 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.985990 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.986116 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:16 crc kubenswrapper[4712]: I0130 16:56:16.986212 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:16Z","lastTransitionTime":"2026-01-30T16:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.090647 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.090988 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.091099 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.091235 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.091476 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:17Z","lastTransitionTime":"2026-01-30T16:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.193562 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.193609 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.193621 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.193639 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.193651 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:17Z","lastTransitionTime":"2026-01-30T16:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.296028 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.296085 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.296095 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.296112 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.296126 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:17Z","lastTransitionTime":"2026-01-30T16:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.399318 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.399583 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.399674 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.399851 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.399939 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:17Z","lastTransitionTime":"2026-01-30T16:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.503642 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.503679 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.503688 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.503702 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.503713 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:17Z","lastTransitionTime":"2026-01-30T16:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.605532 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.605589 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.605598 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.605616 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.605625 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:17Z","lastTransitionTime":"2026-01-30T16:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.707597 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.707622 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.707630 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.707642 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.707651 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:17Z","lastTransitionTime":"2026-01-30T16:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.809320 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.809368 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.809379 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.809396 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.809412 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:17Z","lastTransitionTime":"2026-01-30T16:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.873044 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 07:55:46.144014564 +0000 UTC Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.912471 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.912534 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.912551 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.912585 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:17 crc kubenswrapper[4712]: I0130 16:56:17.912605 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:17Z","lastTransitionTime":"2026-01-30T16:56:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.015055 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.015108 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.015121 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.015137 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.015150 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:18Z","lastTransitionTime":"2026-01-30T16:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.117745 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.117784 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.117818 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.117834 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.117844 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:18Z","lastTransitionTime":"2026-01-30T16:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.220195 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.220233 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.220243 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.220261 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.220272 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:18Z","lastTransitionTime":"2026-01-30T16:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.322926 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.322983 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.322999 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.323019 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.323035 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:18Z","lastTransitionTime":"2026-01-30T16:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.425830 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.425872 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.425882 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.425896 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.425906 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:18Z","lastTransitionTime":"2026-01-30T16:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.451115 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9vnxv_dcd71c7c-942c-4c29-969e-45d946f356c8/kube-multus/1.log" Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.451702 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9vnxv_dcd71c7c-942c-4c29-969e-45d946f356c8/kube-multus/0.log" Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.451841 4712 generic.go:334] "Generic (PLEG): container finished" podID="dcd71c7c-942c-4c29-969e-45d946f356c8" containerID="383cb9db140e32a25c872a2355da98c9b6e39191bc10d76b4420e18580464c00" exitCode=1 Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.451989 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9vnxv" event={"ID":"dcd71c7c-942c-4c29-969e-45d946f356c8","Type":"ContainerDied","Data":"383cb9db140e32a25c872a2355da98c9b6e39191bc10d76b4420e18580464c00"} Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.452044 4712 scope.go:117] "RemoveContainer" containerID="93d46d2db74d4f7db1054e9831ab25bbc64a9158daf0638e6b21fc75ba0341b4" Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.452615 4712 scope.go:117] "RemoveContainer" containerID="383cb9db140e32a25c872a2355da98c9b6e39191bc10d76b4420e18580464c00" Jan 30 16:56:18 crc kubenswrapper[4712]: E0130 16:56:18.452950 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-9vnxv_openshift-multus(dcd71c7c-942c-4c29-969e-45d946f356c8)\"" pod="openshift-multus/multus-9vnxv" podUID="dcd71c7c-942c-4c29-969e-45d946f356c8" Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.490834 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=94.490815753 podStartE2EDuration="1m34.490815753s" podCreationTimestamp="2026-01-30 16:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:18.476170808 +0000 UTC m=+115.383180317" watchObservedRunningTime="2026-01-30 16:56:18.490815753 +0000 UTC m=+115.397825222" Jan 
Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.502559 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-k255f" podStartSLOduration=95.502541125 podStartE2EDuration="1m35.502541125s" podCreationTimestamp="2026-01-30 16:54:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:18.50220773 +0000 UTC m=+115.409217189" watchObservedRunningTime="2026-01-30 16:56:18.502541125 +0000 UTC m=+115.409550594"
Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.527786 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.527836 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.527845 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.527862 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.527871 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:18Z","lastTransitionTime":"2026-01-30T16:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.531421 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4f9lf" podStartSLOduration=94.531400157 podStartE2EDuration="1m34.531400157s" podCreationTimestamp="2026-01-30 16:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:18.518387047 +0000 UTC m=+115.425396516" watchObservedRunningTime="2026-01-30 16:56:18.531400157 +0000 UTC m=+115.438409626"
Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.531745 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=64.531740733 podStartE2EDuration="1m4.531740733s" podCreationTimestamp="2026-01-30 16:55:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:18.531144984 +0000 UTC m=+115.438154453" watchObservedRunningTime="2026-01-30 16:56:18.531740733 +0000 UTC m=+115.438750202"
Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.554963 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podStartSLOduration=94.554943882 podStartE2EDuration="1m34.554943882s" podCreationTimestamp="2026-01-30 16:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:18.554945412 +0000 UTC m=+115.461954881" watchObservedRunningTime="2026-01-30 16:56:18.554943882 +0000 UTC m=+115.461953361"
Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.580716 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=23.58069883 podStartE2EDuration="23.58069883s" podCreationTimestamp="2026-01-30 16:55:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:18.580421935 +0000 UTC m=+115.487431404" watchObservedRunningTime="2026-01-30 16:56:18.58069883 +0000 UTC m=+115.487708299"
Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.631262 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.631295 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.631307 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.631321 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.631334 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:18Z","lastTransitionTime":"2026-01-30T16:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.646464 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-2mlzr" podStartSLOduration=95.646440273 podStartE2EDuration="1m35.646440273s" podCreationTimestamp="2026-01-30 16:54:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:18.631349132 +0000 UTC m=+115.538358611" watchObservedRunningTime="2026-01-30 16:56:18.646440273 +0000 UTC m=+115.553449742"
Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.646576 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-69v8h" podStartSLOduration=94.646571505 podStartE2EDuration="1m34.646571505s" podCreationTimestamp="2026-01-30 16:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:18.646371002 +0000 UTC m=+115.553380471" watchObservedRunningTime="2026-01-30 16:56:18.646571505 +0000 UTC m=+115.553580984"
Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.693057 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=93.693037445 podStartE2EDuration="1m33.693037445s" podCreationTimestamp="2026-01-30 16:54:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:18.691606775 +0000 UTC m=+115.598616244" watchObservedRunningTime="2026-01-30 16:56:18.693037445 +0000 UTC m=+115.600046924"
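The pod_startup_latency_tracker entries above and below reduce to plain timestamp arithmetic: with no image pulls recorded (firstStartedPulling and lastFinishedPulling are the zero time), podStartSLOduration is observedRunningTime minus podCreationTimestamp. A small Go sketch that reproduces the etcd-crc figure from the values in its entry (the layout string is Go's reference-time format; error handling elided for brevity):

    // slo.go - reproduce podStartSLOduration for etcd-crc from its entry.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        layout := "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2026-01-30 16:54:45 +0000 UTC")
        running, _ := time.Parse(layout, "2026-01-30 16:56:18.693037445 +0000 UTC")
        // Prints 1m33.693037445s, matching podStartE2EDuration in the log.
        fmt.Println(running.Sub(created))
    }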
Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.716679 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=94.716661322 podStartE2EDuration="1m34.716661322s" podCreationTimestamp="2026-01-30 16:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:18.705334846 +0000 UTC m=+115.612344315" watchObservedRunningTime="2026-01-30 16:56:18.716661322 +0000 UTC m=+115.623670791"
Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.733324 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.733371 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.733380 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.733393 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.733402 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:18Z","lastTransitionTime":"2026-01-30T16:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.799569 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.799584 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.799919 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h"
Jan 30 16:56:18 crc kubenswrapper[4712]: E0130 16:56:18.800004 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 30 16:56:18 crc kubenswrapper[4712]: E0130 16:56:18.800058 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f"
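The "No sandbox for pod can be found" / "network is not ready" pairs above all trace back to one check: the container runtime reports NetworkPluginNotReady until a CNI network configuration appears in the directory named in the message. A hedged Go sketch of such a probe (the path is taken from the log; the .conf/.conflist/.json extension list mirrors common libcni defaults and is an assumption here, not the kubelet's code):

    // cnicheck.go - look for a CNI network config, as the readiness
    // message "no CNI configuration file in /etc/kubernetes/cni/net.d/"
    // implies the runtime does.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/kubernetes/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("cannot read CNI conf dir:", err)
            return
        }
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                fmt.Println("found CNI config:", e.Name())
                return
            }
        }
        fmt.Println("no CNI configuration file in", dir, "- network not ready")
    }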
pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f" Jan 30 16:56:18 crc kubenswrapper[4712]: E0130 16:56:18.800102 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.800311 4712 scope.go:117] "RemoveContainer" containerID="12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a" Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.800684 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:56:18 crc kubenswrapper[4712]: E0130 16:56:18.800769 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.834954 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.834992 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.835002 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.835017 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.835029 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:18Z","lastTransitionTime":"2026-01-30T16:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.873966 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 08:15:00.275657353 +0000 UTC Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.937335 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.937357 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.937366 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.937378 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:18 crc kubenswrapper[4712]: I0130 16:56:18.937442 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:18Z","lastTransitionTime":"2026-01-30T16:56:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:19 crc kubenswrapper[4712]: I0130 16:56:19.044156 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:19 crc kubenswrapper[4712]: I0130 16:56:19.044232 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:19 crc kubenswrapper[4712]: I0130 16:56:19.044242 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:19 crc kubenswrapper[4712]: I0130 16:56:19.044260 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:19 crc kubenswrapper[4712]: I0130 16:56:19.044270 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:19Z","lastTransitionTime":"2026-01-30T16:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:19 crc kubenswrapper[4712]: I0130 16:56:19.146609 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:19 crc kubenswrapper[4712]: I0130 16:56:19.146665 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:19 crc kubenswrapper[4712]: I0130 16:56:19.146678 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:19 crc kubenswrapper[4712]: I0130 16:56:19.146695 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:19 crc kubenswrapper[4712]: I0130 16:56:19.146705 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:19Z","lastTransitionTime":"2026-01-30T16:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:19 crc kubenswrapper[4712]: I0130 16:56:19.248827 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:19 crc kubenswrapper[4712]: I0130 16:56:19.248870 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:19 crc kubenswrapper[4712]: I0130 16:56:19.248882 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:19 crc kubenswrapper[4712]: I0130 16:56:19.248899 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:19 crc kubenswrapper[4712]: I0130 16:56:19.248912 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:19Z","lastTransitionTime":"2026-01-30T16:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:19 crc kubenswrapper[4712]: I0130 16:56:19.351497 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:19 crc kubenswrapper[4712]: I0130 16:56:19.351539 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:19 crc kubenswrapper[4712]: I0130 16:56:19.351552 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:19 crc kubenswrapper[4712]: I0130 16:56:19.351568 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:19 crc kubenswrapper[4712]: I0130 16:56:19.351581 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:19Z","lastTransitionTime":"2026-01-30T16:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:19 crc kubenswrapper[4712]: I0130 16:56:19.454671 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:19 crc kubenswrapper[4712]: I0130 16:56:19.454716 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:19 crc kubenswrapper[4712]: I0130 16:56:19.454727 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:19 crc kubenswrapper[4712]: I0130 16:56:19.454746 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:19 crc kubenswrapper[4712]: I0130 16:56:19.454757 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:19Z","lastTransitionTime":"2026-01-30T16:56:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:19 crc kubenswrapper[4712]: I0130 16:56:19.455990 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9vnxv_dcd71c7c-942c-4c29-969e-45d946f356c8/kube-multus/1.log" Jan 30 16:56:19 crc kubenswrapper[4712]: I0130 16:56:19.457908 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-228xs_93651476-fd00-4a9e-934a-73537f1d103e/ovnkube-controller/3.log" Jan 30 16:56:19 crc kubenswrapper[4712]: I0130 16:56:19.460266 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" event={"ID":"93651476-fd00-4a9e-934a-73537f1d103e","Type":"ContainerStarted","Data":"7c646b9dd5f5e5ac69308e3a6eb8f53ffda989232f3787fb92f73a3dd5aafc19"} Jan 30 16:56:19 crc kubenswrapper[4712]: I0130 16:56:19.460709 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" Jan 30 16:56:19 crc kubenswrapper[4712]: I0130 16:56:19.490325 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" podStartSLOduration=95.490307144 podStartE2EDuration="1m35.490307144s" podCreationTimestamp="2026-01-30 16:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:19.487211469 +0000 UTC m=+116.394220938" watchObservedRunningTime="2026-01-30 16:56:19.490307144 +0000 UTC m=+116.397316633" Jan 30 16:56:19 crc kubenswrapper[4712]: I0130 16:56:19.874292 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 04:03:52.308234834 +0000 UTC Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.199428 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.199482 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.199494 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:20 crc kubenswrapper[4712]: 
I0130 16:56:20.199510 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.199521 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:20Z","lastTransitionTime":"2026-01-30T16:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.208173 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-lpb6h"] Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.208338 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:56:20 crc kubenswrapper[4712]: E0130 16:56:20.208448 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f" Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.302175 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.302215 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.302238 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.302253 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.302263 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:20Z","lastTransitionTime":"2026-01-30T16:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.404404 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.404447 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.404457 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.404470 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.404480 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:20Z","lastTransitionTime":"2026-01-30T16:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.507160 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.507568 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.507579 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.507595 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.507606 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:20Z","lastTransitionTime":"2026-01-30T16:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.610289 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.610341 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.610352 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.610367 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.610378 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:20Z","lastTransitionTime":"2026-01-30T16:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.712424 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.712458 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.712469 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.712483 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.712493 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:20Z","lastTransitionTime":"2026-01-30T16:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.798849 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.798887 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.798923 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:56:20 crc kubenswrapper[4712]: E0130 16:56:20.798989 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:56:20 crc kubenswrapper[4712]: E0130 16:56:20.799151 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:56:20 crc kubenswrapper[4712]: E0130 16:56:20.799207 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.815440 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.815506 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.815520 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.815543 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.815558 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:20Z","lastTransitionTime":"2026-01-30T16:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.874632 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 04:24:42.706209352 +0000 UTC Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.918238 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.918270 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.918283 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.918303 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:20 crc kubenswrapper[4712]: I0130 16:56:20.918317 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:20Z","lastTransitionTime":"2026-01-30T16:56:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.021049 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.021116 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.021146 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.021161 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.021171 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:21Z","lastTransitionTime":"2026-01-30T16:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.123963 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.124012 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.124022 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.124039 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.124050 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:21Z","lastTransitionTime":"2026-01-30T16:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.226701 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.226783 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.226841 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.226874 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.226899 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:21Z","lastTransitionTime":"2026-01-30T16:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.329607 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.329689 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.329711 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.329737 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.329754 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:21Z","lastTransitionTime":"2026-01-30T16:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.432318 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.432362 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.432373 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.432387 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.432397 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:21Z","lastTransitionTime":"2026-01-30T16:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.534682 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.534767 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.534784 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.534838 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.534850 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:21Z","lastTransitionTime":"2026-01-30T16:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.637359 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.637394 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.637404 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.637419 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.637428 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:21Z","lastTransitionTime":"2026-01-30T16:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.739682 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.739751 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.739774 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.739841 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.739867 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:21Z","lastTransitionTime":"2026-01-30T16:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.798628 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:56:21 crc kubenswrapper[4712]: E0130 16:56:21.798818 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f" Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.842393 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.842438 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.842450 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.842470 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.842483 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:21Z","lastTransitionTime":"2026-01-30T16:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.875186 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 11:54:58.24290163 +0000 UTC Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.945321 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.945367 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.945381 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.945397 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:21 crc kubenswrapper[4712]: I0130 16:56:21.945410 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:21Z","lastTransitionTime":"2026-01-30T16:56:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.048751 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.048866 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.048892 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.048921 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.048943 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:22Z","lastTransitionTime":"2026-01-30T16:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.152033 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.152079 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.152088 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.152103 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.152114 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:22Z","lastTransitionTime":"2026-01-30T16:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.254428 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.254477 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.254493 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.254513 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.254529 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:22Z","lastTransitionTime":"2026-01-30T16:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.357141 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.357189 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.357202 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.357220 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.357233 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:22Z","lastTransitionTime":"2026-01-30T16:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.460070 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.460102 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.460112 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.460127 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.460138 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:22Z","lastTransitionTime":"2026-01-30T16:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.562412 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.562462 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.562475 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.562494 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.562506 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:22Z","lastTransitionTime":"2026-01-30T16:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.571974 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.572012 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.572021 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.572035 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.572045 4712 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-30T16:56:22Z","lastTransitionTime":"2026-01-30T16:56:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.612643 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-sr2qk"] Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.612969 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sr2qk" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.617158 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.617371 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.617536 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.618254 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.715694 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8bdf329a-af7f-4572-b4b7-da0a666115db-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-sr2qk\" (UID: \"8bdf329a-af7f-4572-b4b7-da0a666115db\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sr2qk" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.715859 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8bdf329a-af7f-4572-b4b7-da0a666115db-service-ca\") pod \"cluster-version-operator-5c965bbfc6-sr2qk\" (UID: \"8bdf329a-af7f-4572-b4b7-da0a666115db\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sr2qk" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.715891 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8bdf329a-af7f-4572-b4b7-da0a666115db-kube-api-access\") pod 
\"cluster-version-operator-5c965bbfc6-sr2qk\" (UID: \"8bdf329a-af7f-4572-b4b7-da0a666115db\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sr2qk" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.715926 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8bdf329a-af7f-4572-b4b7-da0a666115db-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-sr2qk\" (UID: \"8bdf329a-af7f-4572-b4b7-da0a666115db\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sr2qk" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.716042 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bdf329a-af7f-4572-b4b7-da0a666115db-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-sr2qk\" (UID: \"8bdf329a-af7f-4572-b4b7-da0a666115db\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sr2qk" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.798851 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.798931 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:56:22 crc kubenswrapper[4712]: E0130 16:56:22.798988 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.798853 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:56:22 crc kubenswrapper[4712]: E0130 16:56:22.799079 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:56:22 crc kubenswrapper[4712]: E0130 16:56:22.799186 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.817508 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8bdf329a-af7f-4572-b4b7-da0a666115db-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-sr2qk\" (UID: \"8bdf329a-af7f-4572-b4b7-da0a666115db\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sr2qk" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.817590 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bdf329a-af7f-4572-b4b7-da0a666115db-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-sr2qk\" (UID: \"8bdf329a-af7f-4572-b4b7-da0a666115db\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sr2qk" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.817642 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8bdf329a-af7f-4572-b4b7-da0a666115db-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-sr2qk\" (UID: \"8bdf329a-af7f-4572-b4b7-da0a666115db\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sr2qk" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.817668 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8bdf329a-af7f-4572-b4b7-da0a666115db-service-ca\") pod \"cluster-version-operator-5c965bbfc6-sr2qk\" (UID: \"8bdf329a-af7f-4572-b4b7-da0a666115db\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sr2qk" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.817660 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/8bdf329a-af7f-4572-b4b7-da0a666115db-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-sr2qk\" (UID: \"8bdf329a-af7f-4572-b4b7-da0a666115db\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sr2qk" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.817694 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8bdf329a-af7f-4572-b4b7-da0a666115db-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-sr2qk\" (UID: \"8bdf329a-af7f-4572-b4b7-da0a666115db\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sr2qk" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.817845 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/8bdf329a-af7f-4572-b4b7-da0a666115db-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-sr2qk\" (UID: \"8bdf329a-af7f-4572-b4b7-da0a666115db\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sr2qk" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.818604 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8bdf329a-af7f-4572-b4b7-da0a666115db-service-ca\") pod \"cluster-version-operator-5c965bbfc6-sr2qk\" (UID: \"8bdf329a-af7f-4572-b4b7-da0a666115db\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sr2qk" Jan 30 
16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.827368 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8bdf329a-af7f-4572-b4b7-da0a666115db-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-sr2qk\" (UID: \"8bdf329a-af7f-4572-b4b7-da0a666115db\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sr2qk" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.839187 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8bdf329a-af7f-4572-b4b7-da0a666115db-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-sr2qk\" (UID: \"8bdf329a-af7f-4572-b4b7-da0a666115db\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sr2qk" Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.876307 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 08:19:23.187945947 +0000 UTC Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.876377 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.886110 4712 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 30 16:56:22 crc kubenswrapper[4712]: I0130 16:56:22.927211 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sr2qk" Jan 30 16:56:22 crc kubenswrapper[4712]: W0130 16:56:22.946297 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8bdf329a_af7f_4572_b4b7_da0a666115db.slice/crio-e0dabc989f5fbf04c8b7c9153240edd201a5b24ad7c30d6455588af6a9cc3146 WatchSource:0}: Error finding container e0dabc989f5fbf04c8b7c9153240edd201a5b24ad7c30d6455588af6a9cc3146: Status 404 returned error can't find the container with id e0dabc989f5fbf04c8b7c9153240edd201a5b24ad7c30d6455588af6a9cc3146 Jan 30 16:56:23 crc kubenswrapper[4712]: I0130 16:56:23.476256 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sr2qk" event={"ID":"8bdf329a-af7f-4572-b4b7-da0a666115db","Type":"ContainerStarted","Data":"48a4c6fdf10db2b3621a21c2fb08335a57376081d05792d6f8d40d9a8d1fbd88"} Jan 30 16:56:23 crc kubenswrapper[4712]: I0130 16:56:23.476324 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sr2qk" event={"ID":"8bdf329a-af7f-4572-b4b7-da0a666115db","Type":"ContainerStarted","Data":"e0dabc989f5fbf04c8b7c9153240edd201a5b24ad7c30d6455588af6a9cc3146"} Jan 30 16:56:23 crc kubenswrapper[4712]: I0130 16:56:23.494048 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-sr2qk" podStartSLOduration=99.494020885 podStartE2EDuration="1m39.494020885s" podCreationTimestamp="2026-01-30 16:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:23.493901273 +0000 UTC m=+120.400910792" watchObservedRunningTime="2026-01-30 16:56:23.494020885 +0000 UTC m=+120.401030354" Jan 30 16:56:23 crc kubenswrapper[4712]: E0130 16:56:23.796830 4712 
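
Note how the rotation deadline differs on every certificate_manager.go:356 line (2025-11-22, 2026-01-16, 2025-11-12, 2025-11-17) while the expiration stays fixed at 2026-02-24: client-go's certificate manager re-derives a randomized deadline within the certificate's validity window each time it evaluates, and a deadline already in the past triggers the "Rotating certificates" path seen at 16:56:22.876. A sketch of that jitter; the 70-90% window and the one-year validity are assumptions for illustration, not quotes of the client-go source:

```go
// Sketch of a jittered certificate-rotation deadline. Window fractions are
// assumed for illustration, not taken from client-go.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	lifetime := notAfter.Sub(notBefore)
	// Uniformly random point in [70%, 90%] of the certificate lifetime.
	j := time.Duration(float64(lifetime) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(j)
}

func main() {
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC) // expiration from the log
	notBefore := notAfter.AddDate(-1, 0, 0)                   // assumed one-year validity
	for i := 0; i < 4; i++ {
		// Deadlines landing before "now" (2026-01-30 in this log) would
		// trigger immediate rotation, as in the entry above.
		fmt.Println(rotationDeadline(notBefore, notAfter).Format(time.RFC3339))
	}
}
```
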
kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 30 16:56:23 crc kubenswrapper[4712]: I0130 16:56:23.800073 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:56:23 crc kubenswrapper[4712]: E0130 16:56:23.802495 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f" Jan 30 16:56:23 crc kubenswrapper[4712]: E0130 16:56:23.880130 4712 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 16:56:24 crc kubenswrapper[4712]: I0130 16:56:24.798644 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:56:24 crc kubenswrapper[4712]: I0130 16:56:24.799054 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:56:24 crc kubenswrapper[4712]: I0130 16:56:24.799009 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:56:24 crc kubenswrapper[4712]: E0130 16:56:24.799168 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:56:24 crc kubenswrapper[4712]: E0130 16:56:24.799275 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:56:24 crc kubenswrapper[4712]: E0130 16:56:24.799374 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:56:25 crc kubenswrapper[4712]: I0130 16:56:25.799285 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:56:25 crc kubenswrapper[4712]: E0130 16:56:25.799538 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f" Jan 30 16:56:26 crc kubenswrapper[4712]: I0130 16:56:26.798909 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:56:26 crc kubenswrapper[4712]: I0130 16:56:26.798909 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:56:26 crc kubenswrapper[4712]: E0130 16:56:26.799050 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:56:26 crc kubenswrapper[4712]: E0130 16:56:26.799143 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:56:26 crc kubenswrapper[4712]: I0130 16:56:26.798909 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:56:26 crc kubenswrapper[4712]: E0130 16:56:26.799281 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:56:27 crc kubenswrapper[4712]: I0130 16:56:27.799297 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:56:27 crc kubenswrapper[4712]: E0130 16:56:27.799431 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f" Jan 30 16:56:28 crc kubenswrapper[4712]: I0130 16:56:28.799587 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:56:28 crc kubenswrapper[4712]: I0130 16:56:28.799613 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:56:28 crc kubenswrapper[4712]: E0130 16:56:28.799729 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:56:28 crc kubenswrapper[4712]: I0130 16:56:28.799786 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:56:28 crc kubenswrapper[4712]: E0130 16:56:28.799949 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:56:28 crc kubenswrapper[4712]: E0130 16:56:28.800070 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:56:28 crc kubenswrapper[4712]: E0130 16:56:28.881876 4712 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 30 16:56:29 crc kubenswrapper[4712]: I0130 16:56:29.799041 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:56:29 crc kubenswrapper[4712]: E0130 16:56:29.799270 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f" Jan 30 16:56:30 crc kubenswrapper[4712]: I0130 16:56:30.798910 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:56:30 crc kubenswrapper[4712]: I0130 16:56:30.798968 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:56:30 crc kubenswrapper[4712]: E0130 16:56:30.799036 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:56:30 crc kubenswrapper[4712]: I0130 16:56:30.799187 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:56:30 crc kubenswrapper[4712]: I0130 16:56:30.799451 4712 scope.go:117] "RemoveContainer" containerID="383cb9db140e32a25c872a2355da98c9b6e39191bc10d76b4420e18580464c00" Jan 30 16:56:30 crc kubenswrapper[4712]: E0130 16:56:30.799745 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:56:30 crc kubenswrapper[4712]: E0130 16:56:30.800063 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:56:31 crc kubenswrapper[4712]: I0130 16:56:31.502178 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9vnxv_dcd71c7c-942c-4c29-969e-45d946f356c8/kube-multus/1.log" Jan 30 16:56:31 crc kubenswrapper[4712]: I0130 16:56:31.502237 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9vnxv" event={"ID":"dcd71c7c-942c-4c29-969e-45d946f356c8","Type":"ContainerStarted","Data":"58d9e6895e721bf0d4cfb7b391d4273fbf98d44cab746f53b51c0dab20ad4c4b"} Jan 30 16:56:31 crc kubenswrapper[4712]: I0130 16:56:31.521348 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-9vnxv" podStartSLOduration=107.521327966 podStartE2EDuration="1m47.521327966s" podCreationTimestamp="2026-01-30 16:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:31.520327881 +0000 UTC m=+128.427337370" watchObservedRunningTime="2026-01-30 16:56:31.521327966 +0000 UTC m=+128.428337455" Jan 30 16:56:31 crc kubenswrapper[4712]: I0130 16:56:31.799612 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:56:31 crc kubenswrapper[4712]: E0130 16:56:31.799751 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f" Jan 30 16:56:32 crc kubenswrapper[4712]: I0130 16:56:32.799006 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:56:32 crc kubenswrapper[4712]: I0130 16:56:32.799094 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:56:32 crc kubenswrapper[4712]: E0130 16:56:32.799170 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 30 16:56:32 crc kubenswrapper[4712]: E0130 16:56:32.799296 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 30 16:56:32 crc kubenswrapper[4712]: I0130 16:56:32.799033 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:56:32 crc kubenswrapper[4712]: E0130 16:56:32.799453 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 30 16:56:33 crc kubenswrapper[4712]: I0130 16:56:33.799230 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:56:33 crc kubenswrapper[4712]: E0130 16:56:33.800884 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-lpb6h" podUID="abacbc6e-6514-4db6-80b5-23570952c86f" Jan 30 16:56:34 crc kubenswrapper[4712]: I0130 16:56:34.799287 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:56:34 crc kubenswrapper[4712]: I0130 16:56:34.799369 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:56:34 crc kubenswrapper[4712]: I0130 16:56:34.799381 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:56:34 crc kubenswrapper[4712]: I0130 16:56:34.803857 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 30 16:56:34 crc kubenswrapper[4712]: I0130 16:56:34.804383 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 30 16:56:34 crc kubenswrapper[4712]: I0130 16:56:34.804582 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 30 16:56:34 crc kubenswrapper[4712]: I0130 16:56:34.805294 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 30 16:56:35 crc kubenswrapper[4712]: I0130 16:56:35.800142 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h" Jan 30 16:56:35 crc kubenswrapper[4712]: I0130 16:56:35.803043 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 30 16:56:35 crc kubenswrapper[4712]: I0130 16:56:35.804340 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.020665 4712 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.085062 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5xwgj"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.085678 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-t6xlq"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.086058 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.086564 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-5xwgj" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.087031 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tffxt"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.087725 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tffxt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.090337 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.090467 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.091632 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-m96vb"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.092124 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-m96vb" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.094118 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.094158 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.095544 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4wk6n"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.095791 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.095955 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4wk6n" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.096658 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.096934 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.097598 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.097854 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.097961 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.099525 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-r5sv7"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.100367 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r5sv7" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.100528 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.100529 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.100813 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.101995 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gzvld"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.102477 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-gzvld" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.106884 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-dktxv"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.107964 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dktxv" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.111024 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-t468b"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.111528 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8gsmk"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.112080 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8gsmk" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.112555 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-t468b" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.114733 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.115086 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.115483 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-jx2s9"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.115863 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-jx2s9" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.115868 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.116281 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.116714 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.116868 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.117245 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.117385 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.117945 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.118116 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.118314 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.118480 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.118421 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.120701 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-59crs"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.121531 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-59crs" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.128567 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-glzbp"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.129150 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.129292 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-glzbp" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.129901 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.131570 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpncz"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.132191 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpncz" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.134444 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.134706 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqlzw\" (UniqueName: \"kubernetes.io/projected/b23672ef-c640-4ba4-9303-26955cec21d6-kube-api-access-nqlzw\") pod \"oauth-openshift-558db77b4-t6xlq\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.134764 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24-encryption-config\") pod \"apiserver-7bbb656c7d-r5sv7\" (UID: \"28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r5sv7" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.134814 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43a0a350-8151-4bcd-8d1e-1c534e291152-console-serving-cert\") pod \"console-f9d7485db-jx2s9\" (UID: \"43a0a350-8151-4bcd-8d1e-1c534e291152\") " pod="openshift-console/console-f9d7485db-jx2s9" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.134849 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qdvf\" (UniqueName: \"kubernetes.io/projected/76eb6c29-c75b-4e3a-9c21-04b0a6080fe8-kube-api-access-5qdvf\") pod \"console-operator-58897d9998-t468b\" (UID: \"76eb6c29-c75b-4e3a-9c21-04b0a6080fe8\") " pod="openshift-console-operator/console-operator-58897d9998-t468b" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.134870 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-t6xlq\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.134917 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrr6t\" (UniqueName: \"kubernetes.io/projected/0b4d1852-9507-412e-842e-d9dbd886e79d-kube-api-access-xrr6t\") pod \"machine-api-operator-5694c8668f-5xwgj\" (UID: \"0b4d1852-9507-412e-842e-d9dbd886e79d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5xwgj" Jan 30 16:56:43 crc 
kubenswrapper[4712]: I0130 16:56:43.134947 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vbnp\" (UniqueName: \"kubernetes.io/projected/29e89539-b787-4a7e-a75a-9dd9216b3649-kube-api-access-7vbnp\") pod \"authentication-operator-69f744f599-gzvld\" (UID: \"29e89539-b787-4a7e-a75a-9dd9216b3649\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gzvld" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.134987 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43a0a350-8151-4bcd-8d1e-1c534e291152-console-oauth-config\") pod \"console-f9d7485db-jx2s9\" (UID: \"43a0a350-8151-4bcd-8d1e-1c534e291152\") " pod="openshift-console/console-f9d7485db-jx2s9" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.135007 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43a0a350-8151-4bcd-8d1e-1c534e291152-trusted-ca-bundle\") pod \"console-f9d7485db-jx2s9\" (UID: \"43a0a350-8151-4bcd-8d1e-1c534e291152\") " pod="openshift-console/console-f9d7485db-jx2s9" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.135027 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76eb6c29-c75b-4e3a-9c21-04b0a6080fe8-serving-cert\") pod \"console-operator-58897d9998-t468b\" (UID: \"76eb6c29-c75b-4e3a-9c21-04b0a6080fe8\") " pod="openshift-console-operator/console-operator-58897d9998-t468b" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.135075 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43a0a350-8151-4bcd-8d1e-1c534e291152-service-ca\") pod \"console-f9d7485db-jx2s9\" (UID: \"43a0a350-8151-4bcd-8d1e-1c534e291152\") " pod="openshift-console/console-f9d7485db-jx2s9" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.135099 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/49e70831-c29b-4e74-bdda-aa83c22c6527-machine-approver-tls\") pod \"machine-approver-56656f9798-dktxv\" (UID: \"49e70831-c29b-4e74-bdda-aa83c22c6527\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dktxv" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.135140 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69f69514-00d4-42fd-b010-2b6e4bc7b2fe-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-m96vb\" (UID: \"69f69514-00d4-42fd-b010-2b6e4bc7b2fe\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m96vb" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.135161 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29e89539-b787-4a7e-a75a-9dd9216b3649-config\") pod \"authentication-operator-69f744f599-gzvld\" (UID: \"29e89539-b787-4a7e-a75a-9dd9216b3649\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gzvld" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.135183 4712 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/49e70831-c29b-4e74-bdda-aa83c22c6527-auth-proxy-config\") pod \"machine-approver-56656f9798-dktxv\" (UID: \"49e70831-c29b-4e74-bdda-aa83c22c6527\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dktxv" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.135225 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whmhl\" (UniqueName: \"kubernetes.io/projected/49e70831-c29b-4e74-bdda-aa83c22c6527-kube-api-access-whmhl\") pod \"machine-approver-56656f9798-dktxv\" (UID: \"49e70831-c29b-4e74-bdda-aa83c22c6527\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dktxv" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.135247 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r4b4\" (UniqueName: \"kubernetes.io/projected/2eaa69c4-271a-48de-a917-4ab79dcb2ae4-kube-api-access-6r4b4\") pod \"openshift-apiserver-operator-796bbdcf4f-4wk6n\" (UID: \"2eaa69c4-271a-48de-a917-4ab79dcb2ae4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4wk6n" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.135287 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0b4d1852-9507-412e-842e-d9dbd886e79d-images\") pod \"machine-api-operator-5694c8668f-5xwgj\" (UID: \"0b4d1852-9507-412e-842e-d9dbd886e79d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5xwgj" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.135313 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69f69514-00d4-42fd-b010-2b6e4bc7b2fe-config\") pod \"controller-manager-879f6c89f-m96vb\" (UID: \"69f69514-00d4-42fd-b010-2b6e4bc7b2fe\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m96vb" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.135335 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkknm\" (UniqueName: \"kubernetes.io/projected/43a0a350-8151-4bcd-8d1e-1c534e291152-kube-api-access-hkknm\") pod \"console-f9d7485db-jx2s9\" (UID: \"43a0a350-8151-4bcd-8d1e-1c534e291152\") " pod="openshift-console/console-f9d7485db-jx2s9" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.135378 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-t6xlq\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.135403 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-r5sv7\" (UID: \"28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r5sv7" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.135424 4712 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24-audit-dir\") pod \"apiserver-7bbb656c7d-r5sv7\" (UID: \"28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r5sv7" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.135469 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/0b4d1852-9507-412e-842e-d9dbd886e79d-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5xwgj\" (UID: \"0b4d1852-9507-412e-842e-d9dbd886e79d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5xwgj" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.135489 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76eb6c29-c75b-4e3a-9c21-04b0a6080fe8-config\") pod \"console-operator-58897d9998-t468b\" (UID: \"76eb6c29-c75b-4e3a-9c21-04b0a6080fe8\") " pod="openshift-console-operator/console-operator-58897d9998-t468b" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.135530 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/412dac4c-e4f0-4678-a113-9a241c6a9723-etcd-client\") pod \"etcd-operator-b45778765-59crs\" (UID: \"412dac4c-e4f0-4678-a113-9a241c6a9723\") " pod="openshift-etcd-operator/etcd-operator-b45778765-59crs" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.135551 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b23672ef-c640-4ba4-9303-26955cec21d6-audit-dir\") pod \"oauth-openshift-558db77b4-t6xlq\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.135571 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b4d1852-9507-412e-842e-d9dbd886e79d-config\") pod \"machine-api-operator-5694c8668f-5xwgj\" (UID: \"0b4d1852-9507-412e-842e-d9dbd886e79d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5xwgj" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.135609 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43a0a350-8151-4bcd-8d1e-1c534e291152-oauth-serving-cert\") pod \"console-f9d7485db-jx2s9\" (UID: \"43a0a350-8151-4bcd-8d1e-1c534e291152\") " pod="openshift-console/console-f9d7485db-jx2s9" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.135630 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b23672ef-c640-4ba4-9303-26955cec21d6-audit-policies\") pod \"oauth-openshift-558db77b4-t6xlq\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.135654 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-t6xlq\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.135696 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-t6xlq\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.135718 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-t6xlq\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.135740 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/412dac4c-e4f0-4678-a113-9a241c6a9723-config\") pod \"etcd-operator-b45778765-59crs\" (UID: \"412dac4c-e4f0-4678-a113-9a241c6a9723\") " pod="openshift-etcd-operator/etcd-operator-b45778765-59crs" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.135780 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-r5sv7\" (UID: \"28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r5sv7" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.135824 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdjzd\" (UniqueName: \"kubernetes.io/projected/7a35d8b7-3b76-473d-b380-9db623f234f2-kube-api-access-zdjzd\") pod \"cluster-samples-operator-665b6dd947-8gsmk\" (UID: \"7a35d8b7-3b76-473d-b380-9db623f234f2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8gsmk" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.135847 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-t6xlq\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.135871 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-t6xlq\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.135911 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-t6xlq\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.135934 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29e89539-b787-4a7e-a75a-9dd9216b3649-serving-cert\") pod \"authentication-operator-69f744f599-gzvld\" (UID: \"29e89539-b787-4a7e-a75a-9dd9216b3649\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gzvld" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.135973 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx8g8\" (UniqueName: \"kubernetes.io/projected/28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24-kube-api-access-wx8g8\") pod \"apiserver-7bbb656c7d-r5sv7\" (UID: \"28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r5sv7" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.135996 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghsjc\" (UniqueName: \"kubernetes.io/projected/b34e60ff-e00e-485a-b7e0-1dded6c68091-kube-api-access-ghsjc\") pod \"route-controller-manager-6576b87f9c-tffxt\" (UID: \"b34e60ff-e00e-485a-b7e0-1dded6c68091\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tffxt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.136017 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43a0a350-8151-4bcd-8d1e-1c534e291152-console-config\") pod \"console-f9d7485db-jx2s9\" (UID: \"43a0a350-8151-4bcd-8d1e-1c534e291152\") " pod="openshift-console/console-f9d7485db-jx2s9" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.136067 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7a35d8b7-3b76-473d-b380-9db623f234f2-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-8gsmk\" (UID: \"7a35d8b7-3b76-473d-b380-9db623f234f2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8gsmk" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.136090 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24-etcd-client\") pod \"apiserver-7bbb656c7d-r5sv7\" (UID: \"28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r5sv7" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.136129 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2eaa69c4-271a-48de-a917-4ab79dcb2ae4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-4wk6n\" (UID: \"2eaa69c4-271a-48de-a917-4ab79dcb2ae4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4wk6n" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.136153 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29e89539-b787-4a7e-a75a-9dd9216b3649-service-ca-bundle\") pod \"authentication-operator-69f744f599-gzvld\" (UID: \"29e89539-b787-4a7e-a75a-9dd9216b3649\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gzvld" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.136174 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-t6xlq\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.136215 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49e70831-c29b-4e74-bdda-aa83c22c6527-config\") pod \"machine-approver-56656f9798-dktxv\" (UID: \"49e70831-c29b-4e74-bdda-aa83c22c6527\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dktxv" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.136236 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/412dac4c-e4f0-4678-a113-9a241c6a9723-etcd-service-ca\") pod \"etcd-operator-b45778765-59crs\" (UID: \"412dac4c-e4f0-4678-a113-9a241c6a9723\") " pod="openshift-etcd-operator/etcd-operator-b45778765-59crs" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.136290 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b34e60ff-e00e-485a-b7e0-1dded6c68091-client-ca\") pod \"route-controller-manager-6576b87f9c-tffxt\" (UID: \"b34e60ff-e00e-485a-b7e0-1dded6c68091\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tffxt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.136311 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-t6xlq\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.136332 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/412dac4c-e4f0-4678-a113-9a241c6a9723-serving-cert\") pod \"etcd-operator-b45778765-59crs\" (UID: \"412dac4c-e4f0-4678-a113-9a241c6a9723\") " pod="openshift-etcd-operator/etcd-operator-b45778765-59crs" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.136378 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8zfw\" (UniqueName: \"kubernetes.io/projected/69f69514-00d4-42fd-b010-2b6e4bc7b2fe-kube-api-access-r8zfw\") pod \"controller-manager-879f6c89f-m96vb\" (UID: \"69f69514-00d4-42fd-b010-2b6e4bc7b2fe\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m96vb" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.136401 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69f69514-00d4-42fd-b010-2b6e4bc7b2fe-client-ca\") pod \"controller-manager-879f6c89f-m96vb\" (UID: \"69f69514-00d4-42fd-b010-2b6e4bc7b2fe\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m96vb" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.136424 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2eaa69c4-271a-48de-a917-4ab79dcb2ae4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-4wk6n\" (UID: \"2eaa69c4-271a-48de-a917-4ab79dcb2ae4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4wk6n" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.136466 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24-serving-cert\") pod \"apiserver-7bbb656c7d-r5sv7\" (UID: \"28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r5sv7" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.136484 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69f69514-00d4-42fd-b010-2b6e4bc7b2fe-serving-cert\") pod \"controller-manager-879f6c89f-m96vb\" (UID: \"69f69514-00d4-42fd-b010-2b6e4bc7b2fe\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m96vb" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.136502 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b34e60ff-e00e-485a-b7e0-1dded6c68091-serving-cert\") pod \"route-controller-manager-6576b87f9c-tffxt\" (UID: \"b34e60ff-e00e-485a-b7e0-1dded6c68091\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tffxt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.136548 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-t6xlq\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.136571 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29e89539-b787-4a7e-a75a-9dd9216b3649-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gzvld\" (UID: \"29e89539-b787-4a7e-a75a-9dd9216b3649\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gzvld" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.136613 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b34e60ff-e00e-485a-b7e0-1dded6c68091-config\") pod \"route-controller-manager-6576b87f9c-tffxt\" (UID: \"b34e60ff-e00e-485a-b7e0-1dded6c68091\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tffxt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.136634 4712 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/76eb6c29-c75b-4e3a-9c21-04b0a6080fe8-trusted-ca\") pod \"console-operator-58897d9998-t468b\" (UID: \"76eb6c29-c75b-4e3a-9c21-04b0a6080fe8\") " pod="openshift-console-operator/console-operator-58897d9998-t468b" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.136655 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqw4n\" (UniqueName: \"kubernetes.io/projected/412dac4c-e4f0-4678-a113-9a241c6a9723-kube-api-access-qqw4n\") pod \"etcd-operator-b45778765-59crs\" (UID: \"412dac4c-e4f0-4678-a113-9a241c6a9723\") " pod="openshift-etcd-operator/etcd-operator-b45778765-59crs" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.136697 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/412dac4c-e4f0-4678-a113-9a241c6a9723-etcd-ca\") pod \"etcd-operator-b45778765-59crs\" (UID: \"412dac4c-e4f0-4678-a113-9a241c6a9723\") " pod="openshift-etcd-operator/etcd-operator-b45778765-59crs" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.136718 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24-audit-policies\") pod \"apiserver-7bbb656c7d-r5sv7\" (UID: \"28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r5sv7" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.149590 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.150645 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.156681 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.158937 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.159473 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.159486 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.159986 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.163958 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lnrqz"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.184906 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lnrqz" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.163964 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.188291 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.188691 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.189147 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.164278 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.164363 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.164395 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.182083 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.185243 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.195465 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.202345 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.202480 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.202749 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.202860 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.202894 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.202983 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.203080 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.203166 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 30 16:56:43 crc 
kubenswrapper[4712]: I0130 16:56:43.203228 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.203295 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.203371 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.203434 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.203504 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.203547 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.203607 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.202246 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.203771 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.203862 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.203950 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.204020 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.203703 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.204173 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.204254 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.203727 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.204141 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.204346 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.204546 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.207087 4712 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.207225 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.208601 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-27wq6"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.209029 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-27wq6" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.222975 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.223136 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.223179 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.223355 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.223385 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.223633 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.223737 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.223854 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.224020 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.224160 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.224406 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.233381 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.234685 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.234828 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.237317 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrr6t\" (UniqueName: \"kubernetes.io/projected/0b4d1852-9507-412e-842e-d9dbd886e79d-kube-api-access-xrr6t\") pod \"machine-api-operator-5694c8668f-5xwgj\" (UID: 
\"0b4d1852-9507-412e-842e-d9dbd886e79d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5xwgj" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.237379 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vbnp\" (UniqueName: \"kubernetes.io/projected/29e89539-b787-4a7e-a75a-9dd9216b3649-kube-api-access-7vbnp\") pod \"authentication-operator-69f744f599-gzvld\" (UID: \"29e89539-b787-4a7e-a75a-9dd9216b3649\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gzvld" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.237406 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43a0a350-8151-4bcd-8d1e-1c534e291152-console-oauth-config\") pod \"console-f9d7485db-jx2s9\" (UID: \"43a0a350-8151-4bcd-8d1e-1c534e291152\") " pod="openshift-console/console-f9d7485db-jx2s9" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.237445 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-56p67"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.238056 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-56p67" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.238419 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.237452 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43a0a350-8151-4bcd-8d1e-1c534e291152-trusted-ca-bundle\") pod \"console-f9d7485db-jx2s9\" (UID: \"43a0a350-8151-4bcd-8d1e-1c534e291152\") " pod="openshift-console/console-f9d7485db-jx2s9" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.239393 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76eb6c29-c75b-4e3a-9c21-04b0a6080fe8-serving-cert\") pod \"console-operator-58897d9998-t468b\" (UID: \"76eb6c29-c75b-4e3a-9c21-04b0a6080fe8\") " pod="openshift-console-operator/console-operator-58897d9998-t468b" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.239441 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69f69514-00d4-42fd-b010-2b6e4bc7b2fe-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-m96vb\" (UID: \"69f69514-00d4-42fd-b010-2b6e4bc7b2fe\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m96vb" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.239467 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43a0a350-8151-4bcd-8d1e-1c534e291152-service-ca\") pod \"console-f9d7485db-jx2s9\" (UID: \"43a0a350-8151-4bcd-8d1e-1c534e291152\") " pod="openshift-console/console-f9d7485db-jx2s9" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.239495 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/49e70831-c29b-4e74-bdda-aa83c22c6527-machine-approver-tls\") pod \"machine-approver-56656f9798-dktxv\" (UID: \"49e70831-c29b-4e74-bdda-aa83c22c6527\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dktxv" 
Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.239520 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29e89539-b787-4a7e-a75a-9dd9216b3649-config\") pod \"authentication-operator-69f744f599-gzvld\" (UID: \"29e89539-b787-4a7e-a75a-9dd9216b3649\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gzvld" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.239546 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/49e70831-c29b-4e74-bdda-aa83c22c6527-auth-proxy-config\") pod \"machine-approver-56656f9798-dktxv\" (UID: \"49e70831-c29b-4e74-bdda-aa83c22c6527\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dktxv" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.239564 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whmhl\" (UniqueName: \"kubernetes.io/projected/49e70831-c29b-4e74-bdda-aa83c22c6527-kube-api-access-whmhl\") pod \"machine-approver-56656f9798-dktxv\" (UID: \"49e70831-c29b-4e74-bdda-aa83c22c6527\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dktxv" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.239585 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r4b4\" (UniqueName: \"kubernetes.io/projected/2eaa69c4-271a-48de-a917-4ab79dcb2ae4-kube-api-access-6r4b4\") pod \"openshift-apiserver-operator-796bbdcf4f-4wk6n\" (UID: \"2eaa69c4-271a-48de-a917-4ab79dcb2ae4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4wk6n" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.239607 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0b4d1852-9507-412e-842e-d9dbd886e79d-images\") pod \"machine-api-operator-5694c8668f-5xwgj\" (UID: \"0b4d1852-9507-412e-842e-d9dbd886e79d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5xwgj" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.239626 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/587ef6e0-541f-4139-be7d-d6d4a9e8244b-metrics-tls\") pod \"dns-operator-744455d44c-glzbp\" (UID: \"587ef6e0-541f-4139-be7d-d6d4a9e8244b\") " pod="openshift-dns-operator/dns-operator-744455d44c-glzbp" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.239650 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69f69514-00d4-42fd-b010-2b6e4bc7b2fe-config\") pod \"controller-manager-879f6c89f-m96vb\" (UID: \"69f69514-00d4-42fd-b010-2b6e4bc7b2fe\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m96vb" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.239641 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.238741 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.239668 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkknm\" (UniqueName: 
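Every record in this burst has the same shape: a syslog prefix (Jan 30 16:56:43 crc kubenswrapper[4712]:) followed by a klog header, i.e. a severity letter fused with MMDD (I0130), wall-clock time with microseconds, the kubelet PID, and the file:line call site, and then the structured message. A minimal reader-side sketch in Go that splits one such record into those fields; the regexp and field names are my assumptions for log analysis, not kubelet code:

package main

import (
	"fmt"
	"regexp"
)

// klogHeader captures: the syslog-tag PID, severity letter, MMDD date,
// HH:MM:SS.micros time, the PID klog repeats, the file:line call site, and
// the message tail. An illustrative assumption, not a kubelet-defined format.
var klogHeader = regexp.MustCompile(
	`kubenswrapper\[(\d+)\]: ([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6}) +(\d+) ([\w.]+:\d+)\] (.*)$`)

func main() {
	rec := `Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.208601 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-27wq6"]`
	if m := klogHeader.FindStringSubmatch(rec); m != nil {
		fmt.Printf("severity=%s date=%s time=%s pid=%s site=%s\n", m[2], m[3], m[4], m[5], m[6])
		fmt.Println("message:", m[7])
	}
}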
\"kubernetes.io/projected/43a0a350-8151-4bcd-8d1e-1c534e291152-kube-api-access-hkknm\") pod \"console-f9d7485db-jx2s9\" (UID: \"43a0a350-8151-4bcd-8d1e-1c534e291152\") " pod="openshift-console/console-f9d7485db-jx2s9" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.239933 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-t6xlq\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.239974 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d5152b6-8b35-4afc-ad62-9e3d063adf4e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-tpncz\" (UID: \"1d5152b6-8b35-4afc-ad62-9e3d063adf4e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpncz" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.240011 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-r5sv7\" (UID: \"28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r5sv7" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.240031 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24-audit-dir\") pod \"apiserver-7bbb656c7d-r5sv7\" (UID: \"28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r5sv7" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.240052 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/0b4d1852-9507-412e-842e-d9dbd886e79d-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5xwgj\" (UID: \"0b4d1852-9507-412e-842e-d9dbd886e79d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5xwgj" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.240070 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76eb6c29-c75b-4e3a-9c21-04b0a6080fe8-config\") pod \"console-operator-58897d9998-t468b\" (UID: \"76eb6c29-c75b-4e3a-9c21-04b0a6080fe8\") " pod="openshift-console-operator/console-operator-58897d9998-t468b" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.240092 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxjjv\" (UniqueName: \"kubernetes.io/projected/587ef6e0-541f-4139-be7d-d6d4a9e8244b-kube-api-access-qxjjv\") pod \"dns-operator-744455d44c-glzbp\" (UID: \"587ef6e0-541f-4139-be7d-d6d4a9e8244b\") " pod="openshift-dns-operator/dns-operator-744455d44c-glzbp" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.240114 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/412dac4c-e4f0-4678-a113-9a241c6a9723-etcd-client\") pod \"etcd-operator-b45778765-59crs\" (UID: \"412dac4c-e4f0-4678-a113-9a241c6a9723\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-59crs" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.240136 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b23672ef-c640-4ba4-9303-26955cec21d6-audit-dir\") pod \"oauth-openshift-558db77b4-t6xlq\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.240156 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e443a952-c1e3-42b2-8a58-f29416ff11dd-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lnrqz\" (UID: \"e443a952-c1e3-42b2-8a58-f29416ff11dd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lnrqz" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.240178 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b4d1852-9507-412e-842e-d9dbd886e79d-config\") pod \"machine-api-operator-5694c8668f-5xwgj\" (UID: \"0b4d1852-9507-412e-842e-d9dbd886e79d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5xwgj" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.240195 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43a0a350-8151-4bcd-8d1e-1c534e291152-oauth-serving-cert\") pod \"console-f9d7485db-jx2s9\" (UID: \"43a0a350-8151-4bcd-8d1e-1c534e291152\") " pod="openshift-console/console-f9d7485db-jx2s9" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.240214 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b23672ef-c640-4ba4-9303-26955cec21d6-audit-policies\") pod \"oauth-openshift-558db77b4-t6xlq\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.240231 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-t6xlq\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.240247 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1d5152b6-8b35-4afc-ad62-9e3d063adf4e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-tpncz\" (UID: \"1d5152b6-8b35-4afc-ad62-9e3d063adf4e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpncz" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.240273 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v4d6\" (UniqueName: \"kubernetes.io/projected/1d5152b6-8b35-4afc-ad62-9e3d063adf4e-kube-api-access-4v4d6\") pod \"cluster-image-registry-operator-dc59b4c8b-tpncz\" (UID: \"1d5152b6-8b35-4afc-ad62-9e3d063adf4e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpncz" Jan 30 16:56:43 crc 
kubenswrapper[4712]: I0130 16:56:43.240300 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-t6xlq\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.240318 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-t6xlq\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.240335 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/412dac4c-e4f0-4678-a113-9a241c6a9723-config\") pod \"etcd-operator-b45778765-59crs\" (UID: \"412dac4c-e4f0-4678-a113-9a241c6a9723\") " pod="openshift-etcd-operator/etcd-operator-b45778765-59crs" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.240355 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a5836457-3db5-41ec-b036-057186d44de8-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6lnp9\" (UID: \"a5836457-3db5-41ec-b036-057186d44de8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.240380 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-r5sv7\" (UID: \"28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r5sv7" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.240399 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdjzd\" (UniqueName: \"kubernetes.io/projected/7a35d8b7-3b76-473d-b380-9db623f234f2-kube-api-access-zdjzd\") pod \"cluster-samples-operator-665b6dd947-8gsmk\" (UID: \"7a35d8b7-3b76-473d-b380-9db623f234f2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8gsmk" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.240416 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvsw8\" (UniqueName: \"kubernetes.io/projected/a5836457-3db5-41ec-b036-057186d44de8-kube-api-access-jvsw8\") pod \"openshift-config-operator-7777fb866f-6lnp9\" (UID: \"a5836457-3db5-41ec-b036-057186d44de8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.240436 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-t6xlq\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.240463 4712 
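The reconciler_common.go records above quote three identifiers per volume: the in-pod volume name, the UniqueName (plugin path plus pod UID), and the namespaced pod. A sketch that pulls those fields out of one record, assuming the quoting seen above stays stable; the pattern is illustrative, not an official format:

package main

import (
	"fmt"
	"regexp"
)

// mountRec matches the escaped-quote fields exactly as they appear in these
// records: volume name, UniqueName, then the trailing pod="ns/name" label.
var mountRec = regexp.MustCompile(
	`MountVolume started for volume \\"([^"\\]+)\\" \(UniqueName: \\"([^"\\]+)\\"\).* pod="([^"]+)"`)

func main() {
	rec := `Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.240070 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76eb6c29-c75b-4e3a-9c21-04b0a6080fe8-config\") pod \"console-operator-58897d9998-t468b\" (UID: \"76eb6c29-c75b-4e3a-9c21-04b0a6080fe8\") " pod="openshift-console-operator/console-operator-58897d9998-t468b"`
	if m := mountRec.FindStringSubmatch(rec); m != nil {
		fmt.Printf("volume=%s\nuniqueName=%s\npod=%s\n", m[1], m[2], m[3])
	}
}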
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-t6xlq\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.240482 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/1d5152b6-8b35-4afc-ad62-9e3d063adf4e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-tpncz\" (UID: \"1d5152b6-8b35-4afc-ad62-9e3d063adf4e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpncz" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.240503 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-t6xlq\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.240523 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29e89539-b787-4a7e-a75a-9dd9216b3649-serving-cert\") pod \"authentication-operator-69f744f599-gzvld\" (UID: \"29e89539-b787-4a7e-a75a-9dd9216b3649\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gzvld" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.240559 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wx8g8\" (UniqueName: \"kubernetes.io/projected/28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24-kube-api-access-wx8g8\") pod \"apiserver-7bbb656c7d-r5sv7\" (UID: \"28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r5sv7" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.240577 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghsjc\" (UniqueName: \"kubernetes.io/projected/b34e60ff-e00e-485a-b7e0-1dded6c68091-kube-api-access-ghsjc\") pod \"route-controller-manager-6576b87f9c-tffxt\" (UID: \"b34e60ff-e00e-485a-b7e0-1dded6c68091\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tffxt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.240596 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43a0a350-8151-4bcd-8d1e-1c534e291152-console-config\") pod \"console-f9d7485db-jx2s9\" (UID: \"43a0a350-8151-4bcd-8d1e-1c534e291152\") " pod="openshift-console/console-f9d7485db-jx2s9" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.240617 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7a35d8b7-3b76-473d-b380-9db623f234f2-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-8gsmk\" (UID: \"7a35d8b7-3b76-473d-b380-9db623f234f2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8gsmk" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.240639 4712 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24-etcd-client\") pod \"apiserver-7bbb656c7d-r5sv7\" (UID: \"28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r5sv7" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.240662 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2eaa69c4-271a-48de-a917-4ab79dcb2ae4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-4wk6n\" (UID: \"2eaa69c4-271a-48de-a917-4ab79dcb2ae4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4wk6n" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.240682 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29e89539-b787-4a7e-a75a-9dd9216b3649-service-ca-bundle\") pod \"authentication-operator-69f744f599-gzvld\" (UID: \"29e89539-b787-4a7e-a75a-9dd9216b3649\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gzvld" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.240685 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/0b4d1852-9507-412e-842e-d9dbd886e79d-images\") pod \"machine-api-operator-5694c8668f-5xwgj\" (UID: \"0b4d1852-9507-412e-842e-d9dbd886e79d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5xwgj" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.240700 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-t6xlq\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.240719 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75jcb\" (UniqueName: \"kubernetes.io/projected/48626025-5e2a-47c8-b317-bcbada105e87-kube-api-access-75jcb\") pod \"downloads-7954f5f757-27wq6\" (UID: \"48626025-5e2a-47c8-b317-bcbada105e87\") " pod="openshift-console/downloads-7954f5f757-27wq6" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.240723 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/49e70831-c29b-4e74-bdda-aa83c22c6527-auth-proxy-config\") pod \"machine-approver-56656f9798-dktxv\" (UID: \"49e70831-c29b-4e74-bdda-aa83c22c6527\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dktxv" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.238839 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.238883 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.238919 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.242469 4712 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29e89539-b787-4a7e-a75a-9dd9216b3649-config\") pod \"authentication-operator-69f744f599-gzvld\" (UID: \"29e89539-b787-4a7e-a75a-9dd9216b3649\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gzvld" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.244097 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24-audit-dir\") pod \"apiserver-7bbb656c7d-r5sv7\" (UID: \"28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r5sv7" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.245918 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69f69514-00d4-42fd-b010-2b6e4bc7b2fe-config\") pod \"controller-manager-879f6c89f-m96vb\" (UID: \"69f69514-00d4-42fd-b010-2b6e4bc7b2fe\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m96vb" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.246983 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.253518 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.257093 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.259073 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7a35d8b7-3b76-473d-b380-9db623f234f2-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-8gsmk\" (UID: \"7a35d8b7-3b76-473d-b380-9db623f234f2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8gsmk" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.259956 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-t6xlq\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.260937 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2eaa69c4-271a-48de-a917-4ab79dcb2ae4-config\") pod \"openshift-apiserver-operator-796bbdcf4f-4wk6n\" (UID: \"2eaa69c4-271a-48de-a917-4ab79dcb2ae4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4wk6n" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.261286 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76eb6c29-c75b-4e3a-9c21-04b0a6080fe8-config\") pod \"console-operator-58897d9998-t468b\" (UID: \"76eb6c29-c75b-4e3a-9c21-04b0a6080fe8\") " pod="openshift-console-operator/console-operator-58897d9998-t468b" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.261405 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/29e89539-b787-4a7e-a75a-9dd9216b3649-service-ca-bundle\") pod \"authentication-operator-69f744f599-gzvld\" (UID: \"29e89539-b787-4a7e-a75a-9dd9216b3649\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gzvld" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.261681 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43a0a350-8151-4bcd-8d1e-1c534e291152-console-config\") pod \"console-f9d7485db-jx2s9\" (UID: \"43a0a350-8151-4bcd-8d1e-1c534e291152\") " pod="openshift-console/console-f9d7485db-jx2s9" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.261929 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-t6xlq\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.262138 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43a0a350-8151-4bcd-8d1e-1c534e291152-trusted-ca-bundle\") pod \"console-f9d7485db-jx2s9\" (UID: \"43a0a350-8151-4bcd-8d1e-1c534e291152\") " pod="openshift-console/console-f9d7485db-jx2s9" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.262348 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-t6xlq\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.262348 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b23672ef-c640-4ba4-9303-26955cec21d6-audit-dir\") pod \"oauth-openshift-558db77b4-t6xlq\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.262670 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43a0a350-8151-4bcd-8d1e-1c534e291152-service-ca\") pod \"console-f9d7485db-jx2s9\" (UID: \"43a0a350-8151-4bcd-8d1e-1c534e291152\") " pod="openshift-console/console-f9d7485db-jx2s9" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.262731 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49e70831-c29b-4e74-bdda-aa83c22c6527-config\") pod \"machine-approver-56656f9798-dktxv\" (UID: \"49e70831-c29b-4e74-bdda-aa83c22c6527\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dktxv" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.262758 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/412dac4c-e4f0-4678-a113-9a241c6a9723-etcd-service-ca\") pod \"etcd-operator-b45778765-59crs\" (UID: \"412dac4c-e4f0-4678-a113-9a241c6a9723\") " pod="openshift-etcd-operator/etcd-operator-b45778765-59crs" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.262788 4712 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5836457-3db5-41ec-b036-057186d44de8-serving-cert\") pod \"openshift-config-operator-7777fb866f-6lnp9\" (UID: \"a5836457-3db5-41ec-b036-057186d44de8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.262835 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8zfw\" (UniqueName: \"kubernetes.io/projected/69f69514-00d4-42fd-b010-2b6e4bc7b2fe-kube-api-access-r8zfw\") pod \"controller-manager-879f6c89f-m96vb\" (UID: \"69f69514-00d4-42fd-b010-2b6e4bc7b2fe\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m96vb" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.262858 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b34e60ff-e00e-485a-b7e0-1dded6c68091-client-ca\") pod \"route-controller-manager-6576b87f9c-tffxt\" (UID: \"b34e60ff-e00e-485a-b7e0-1dded6c68091\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tffxt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.262882 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-t6xlq\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.266790 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.267540 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.271280 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ddc2j"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.272028 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-njkcm"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.272326 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-t6xlq"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.272467 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-njkcm" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.273222 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.273761 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.274144 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.284757 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.286108 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gx5jb"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.292082 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gx5jb" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.287171 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.287357 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/412dac4c-e4f0-4678-a113-9a241c6a9723-serving-cert\") pod \"etcd-operator-b45778765-59crs\" (UID: \"412dac4c-e4f0-4678-a113-9a241c6a9723\") " pod="openshift-etcd-operator/etcd-operator-b45778765-59crs" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.292783 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69f69514-00d4-42fd-b010-2b6e4bc7b2fe-client-ca\") pod \"controller-manager-879f6c89f-m96vb\" (UID: \"69f69514-00d4-42fd-b010-2b6e4bc7b2fe\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m96vb" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.292822 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2eaa69c4-271a-48de-a917-4ab79dcb2ae4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-4wk6n\" (UID: \"2eaa69c4-271a-48de-a917-4ab79dcb2ae4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4wk6n" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.292938 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr5sr\" (UniqueName: \"kubernetes.io/projected/e443a952-c1e3-42b2-8a58-f29416ff11dd-kube-api-access-sr5sr\") pod \"openshift-controller-manager-operator-756b6f6bc6-lnrqz\" (UID: \"e443a952-c1e3-42b2-8a58-f29416ff11dd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lnrqz" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.292960 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24-serving-cert\") pod \"apiserver-7bbb656c7d-r5sv7\" (UID: \"28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r5sv7" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.292980 4712 reconciler_common.go:218] 
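Beyond volume bookkeeping, this burst interleaves informer cache population (reflector.go:368), pod additions and updates from the API source ("SyncLoop ADD"/"SyncLoop UPDATE" at kubelet.go:2421/:2428), and first-time sandbox decisions (util.go:30). A sketch that tallies those kinds by their literal message substrings; feeding the log on stdin is an assumption, and the category list is just the strings seen above:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// The literal substrings that distinguish the event kinds in this log.
	kinds := []string{
		`"SyncLoop ADD"`,
		`"SyncLoop UPDATE"`,
		"Caches populated",
		"MountVolume started",
		"MountVolume.SetUp succeeded",
		"No sandbox for pod can be found",
	}
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // some records are long
	for sc.Scan() {
		for _, k := range kinds {
			if strings.Contains(sc.Text(), k) {
				counts[k]++
			}
		}
	}
	for _, k := range kinds {
		fmt.Printf("%6d  %s\n", counts[k], k)
	}
}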
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69f69514-00d4-42fd-b010-2b6e4bc7b2fe-serving-cert\") pod \"controller-manager-879f6c89f-m96vb\" (UID: \"69f69514-00d4-42fd-b010-2b6e4bc7b2fe\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m96vb" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.293001 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b34e60ff-e00e-485a-b7e0-1dded6c68091-serving-cert\") pod \"route-controller-manager-6576b87f9c-tffxt\" (UID: \"b34e60ff-e00e-485a-b7e0-1dded6c68091\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tffxt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.295306 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.296147 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43a0a350-8151-4bcd-8d1e-1c534e291152-oauth-serving-cert\") pod \"console-f9d7485db-jx2s9\" (UID: \"43a0a350-8151-4bcd-8d1e-1c534e291152\") " pod="openshift-console/console-f9d7485db-jx2s9" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.302564 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/412dac4c-e4f0-4678-a113-9a241c6a9723-etcd-client\") pod \"etcd-operator-b45778765-59crs\" (UID: \"412dac4c-e4f0-4678-a113-9a241c6a9723\") " pod="openshift-etcd-operator/etcd-operator-b45778765-59crs" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.302914 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29e89539-b787-4a7e-a75a-9dd9216b3649-serving-cert\") pod \"authentication-operator-69f744f599-gzvld\" (UID: \"29e89539-b787-4a7e-a75a-9dd9216b3649\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gzvld" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.303093 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.303298 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.303508 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-t6xlq\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.303626 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29e89539-b787-4a7e-a75a-9dd9216b3649-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-gzvld\" (UID: \"29e89539-b787-4a7e-a75a-9dd9216b3649\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gzvld" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.303725 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/b34e60ff-e00e-485a-b7e0-1dded6c68091-config\") pod \"route-controller-manager-6576b87f9c-tffxt\" (UID: \"b34e60ff-e00e-485a-b7e0-1dded6c68091\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tffxt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.304506 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69f69514-00d4-42fd-b010-2b6e4bc7b2fe-client-ca\") pod \"controller-manager-879f6c89f-m96vb\" (UID: \"69f69514-00d4-42fd-b010-2b6e4bc7b2fe\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m96vb" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.308613 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b23672ef-c640-4ba4-9303-26955cec21d6-audit-policies\") pod \"oauth-openshift-558db77b4-t6xlq\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.309385 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b34e60ff-e00e-485a-b7e0-1dded6c68091-client-ca\") pod \"route-controller-manager-6576b87f9c-tffxt\" (UID: \"b34e60ff-e00e-485a-b7e0-1dded6c68091\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tffxt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.309471 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/76eb6c29-c75b-4e3a-9c21-04b0a6080fe8-trusted-ca\") pod \"console-operator-58897d9998-t468b\" (UID: \"76eb6c29-c75b-4e3a-9c21-04b0a6080fe8\") " pod="openshift-console-operator/console-operator-58897d9998-t468b" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.310008 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-t6xlq\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.310439 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.311745 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqw4n\" (UniqueName: \"kubernetes.io/projected/412dac4c-e4f0-4678-a113-9a241c6a9723-kube-api-access-qqw4n\") pod \"etcd-operator-b45778765-59crs\" (UID: \"412dac4c-e4f0-4678-a113-9a241c6a9723\") " pod="openshift-etcd-operator/etcd-operator-b45778765-59crs" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.311809 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24-audit-policies\") pod \"apiserver-7bbb656c7d-r5sv7\" (UID: \"28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r5sv7" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.311907 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/412dac4c-e4f0-4678-a113-9a241c6a9723-etcd-ca\") pod \"etcd-operator-b45778765-59crs\" (UID: \"412dac4c-e4f0-4678-a113-9a241c6a9723\") " pod="openshift-etcd-operator/etcd-operator-b45778765-59crs" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.327505 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.327932 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.328006 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/0b4d1852-9507-412e-842e-d9dbd886e79d-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-5xwgj\" (UID: \"0b4d1852-9507-412e-842e-d9dbd886e79d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5xwgj" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.304970 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-t6xlq\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.328600 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-t6xlq\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.328928 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-t6xlq\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.332665 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.333162 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-t6xlq\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.334691 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-t6xlq\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.334783 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/412dac4c-e4f0-4678-a113-9a241c6a9723-etcd-service-ca\") pod \"etcd-operator-b45778765-59crs\" (UID: \"412dac4c-e4f0-4678-a113-9a241c6a9723\") " pod="openshift-etcd-operator/etcd-operator-b45778765-59crs" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.335099 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/49e70831-c29b-4e74-bdda-aa83c22c6527-config\") pod \"machine-approver-56656f9798-dktxv\" (UID: \"49e70831-c29b-4e74-bdda-aa83c22c6527\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dktxv" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.335500 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b4d1852-9507-412e-842e-d9dbd886e79d-config\") pod \"machine-api-operator-5694c8668f-5xwgj\" (UID: \"0b4d1852-9507-412e-842e-d9dbd886e79d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5xwgj" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.335913 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/412dac4c-e4f0-4678-a113-9a241c6a9723-config\") pod \"etcd-operator-b45778765-59crs\" (UID: \"412dac4c-e4f0-4678-a113-9a241c6a9723\") " pod="openshift-etcd-operator/etcd-operator-b45778765-59crs" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.314147 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqlzw\" (UniqueName: \"kubernetes.io/projected/b23672ef-c640-4ba4-9303-26955cec21d6-kube-api-access-nqlzw\") pod \"oauth-openshift-558db77b4-t6xlq\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.336996 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-r5sv7\" (UID: \"28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r5sv7" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.337001 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b34e60ff-e00e-485a-b7e0-1dded6c68091-config\") pod \"route-controller-manager-6576b87f9c-tffxt\" (UID: \"b34e60ff-e00e-485a-b7e0-1dded6c68091\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tffxt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.337486 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/412dac4c-e4f0-4678-a113-9a241c6a9723-etcd-ca\") pod \"etcd-operator-b45778765-59crs\" (UID: \"412dac4c-e4f0-4678-a113-9a241c6a9723\") " pod="openshift-etcd-operator/etcd-operator-b45778765-59crs" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.338367 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/49e70831-c29b-4e74-bdda-aa83c22c6527-machine-approver-tls\") pod \"machine-approver-56656f9798-dktxv\" (UID: \"49e70831-c29b-4e74-bdda-aa83c22c6527\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dktxv" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.338368 4712 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/76eb6c29-c75b-4e3a-9c21-04b0a6080fe8-serving-cert\") pod \"console-operator-58897d9998-t468b\" (UID: \"76eb6c29-c75b-4e3a-9c21-04b0a6080fe8\") " pod="openshift-console-operator/console-operator-58897d9998-t468b" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.338585 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69f69514-00d4-42fd-b010-2b6e4bc7b2fe-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-m96vb\" (UID: \"69f69514-00d4-42fd-b010-2b6e4bc7b2fe\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m96vb" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.338772 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.338867 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-r5sv7\" (UID: \"28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r5sv7" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.339179 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24-audit-policies\") pod \"apiserver-7bbb656c7d-r5sv7\" (UID: \"28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r5sv7" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.339556 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24-etcd-client\") pod \"apiserver-7bbb656c7d-r5sv7\" (UID: \"28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r5sv7" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.339699 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/76eb6c29-c75b-4e3a-9c21-04b0a6080fe8-trusted-ca\") pod \"console-operator-58897d9998-t468b\" (UID: \"76eb6c29-c75b-4e3a-9c21-04b0a6080fe8\") " pod="openshift-console-operator/console-operator-58897d9998-t468b" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.339735 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8gsmk"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.339817 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24-encryption-config\") pod \"apiserver-7bbb656c7d-r5sv7\" (UID: \"28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r5sv7" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.340348 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rwrnm"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.340458 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29e89539-b787-4a7e-a75a-9dd9216b3649-trusted-ca-bundle\") pod 
\"authentication-operator-69f744f599-gzvld\" (UID: \"29e89539-b787-4a7e-a75a-9dd9216b3649\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gzvld" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.341309 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b34e60ff-e00e-485a-b7e0-1dded6c68091-serving-cert\") pod \"route-controller-manager-6576b87f9c-tffxt\" (UID: \"b34e60ff-e00e-485a-b7e0-1dded6c68091\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tffxt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.341593 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24-serving-cert\") pod \"apiserver-7bbb656c7d-r5sv7\" (UID: \"28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r5sv7" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.341920 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43a0a350-8151-4bcd-8d1e-1c534e291152-console-oauth-config\") pod \"console-f9d7485db-jx2s9\" (UID: \"43a0a350-8151-4bcd-8d1e-1c534e291152\") " pod="openshift-console/console-f9d7485db-jx2s9" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.341977 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/412dac4c-e4f0-4678-a113-9a241c6a9723-serving-cert\") pod \"etcd-operator-b45778765-59crs\" (UID: \"412dac4c-e4f0-4678-a113-9a241c6a9723\") " pod="openshift-etcd-operator/etcd-operator-b45778765-59crs" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.342194 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2eaa69c4-271a-48de-a917-4ab79dcb2ae4-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-4wk6n\" (UID: \"2eaa69c4-271a-48de-a917-4ab79dcb2ae4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4wk6n" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.342588 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69f69514-00d4-42fd-b010-2b6e4bc7b2fe-serving-cert\") pod \"controller-manager-879f6c89f-m96vb\" (UID: \"69f69514-00d4-42fd-b010-2b6e4bc7b2fe\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m96vb" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.343300 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43a0a350-8151-4bcd-8d1e-1c534e291152-console-serving-cert\") pod \"console-f9d7485db-jx2s9\" (UID: \"43a0a350-8151-4bcd-8d1e-1c534e291152\") " pod="openshift-console/console-f9d7485db-jx2s9" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.344954 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e443a952-c1e3-42b2-8a58-f29416ff11dd-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lnrqz\" (UID: \"e443a952-c1e3-42b2-8a58-f29416ff11dd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lnrqz" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.345048 4712 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qdvf\" (UniqueName: \"kubernetes.io/projected/76eb6c29-c75b-4e3a-9c21-04b0a6080fe8-kube-api-access-5qdvf\") pod \"console-operator-58897d9998-t468b\" (UID: \"76eb6c29-c75b-4e3a-9c21-04b0a6080fe8\") " pod="openshift-console-operator/console-operator-58897d9998-t468b" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.345105 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-t6xlq\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.345558 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-t6xlq\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.345627 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-qncbs"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.346036 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-qncbs" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.348097 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-t6xlq\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.348977 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rwrnm" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.349999 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.350006 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43a0a350-8151-4bcd-8d1e-1c534e291152-console-serving-cert\") pod \"console-f9d7485db-jx2s9\" (UID: \"43a0a350-8151-4bcd-8d1e-1c534e291152\") " pod="openshift-console/console-f9d7485db-jx2s9" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.350181 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-6njcq"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.350782 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24-encryption-config\") pod \"apiserver-7bbb656c7d-r5sv7\" (UID: \"28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r5sv7" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.357817 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gfwsl"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.358242 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-xjs5m"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.358667 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-sbrnm"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.359120 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5dx7"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.359441 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bd8m7"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.359657 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gfwsl" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.359876 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6njcq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.359895 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-swvjp"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.360188 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-t468b"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.360261 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-swvjp" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.360479 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xjs5m" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.360609 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sbrnm" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.360722 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5dx7" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.360872 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bd8m7" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.361164 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xq27f"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.361748 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xq27f" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.365249 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-g4p8m"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.365843 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-m96vb"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.365911 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-g4p8m" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.366693 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gzvld"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.371363 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496525-j85bm"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.371914 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-j85bm" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.373663 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.373910 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-2b574"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.374543 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dg9bq"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.374951 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dg9bq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.375160 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-2b574" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.376043 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8m9br"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.377388 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8m9br" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.377754 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-v2t5z"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.378214 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-v2t5z" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.379221 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-bkkv5"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.380041 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bkkv5" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.381311 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5xwgj"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.382129 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-jx2s9"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.383537 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4wk6n"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.385180 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tffxt"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.386338 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-glzbp"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.388410 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-7h5tl"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.389082 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.389734 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-7h5tl" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.393582 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-xjs5m"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.397615 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gx5jb"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.398952 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5dx7"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.400346 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-r5sv7"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.402201 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-56p67"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.403555 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bd8m7"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.405841 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-6njcq"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.406646 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpncz"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.408637 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.410966 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ddc2j"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.420463 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-ptv9c"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.421407 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-8fqqs"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.423525 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-ptv9c" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.425895 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lnrqz"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.426055 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-8fqqs" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.427373 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xq27f"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.429013 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.429533 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-sbrnm"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.432522 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-59crs"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.436068 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rwrnm"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.437639 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dg9bq"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.439321 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.441128 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-2b574"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.443069 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-g4p8m"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.444396 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gfwsl"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.446063 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-27wq6"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.447839 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-njkcm"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.448988 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.449922 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e443a952-c1e3-42b2-8a58-f29416ff11dd-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lnrqz\" (UID: \"e443a952-c1e3-42b2-8a58-f29416ff11dd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lnrqz" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.450073 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/587ef6e0-541f-4139-be7d-d6d4a9e8244b-metrics-tls\") pod \"dns-operator-744455d44c-glzbp\" (UID: \"587ef6e0-541f-4139-be7d-d6d4a9e8244b\") " pod="openshift-dns-operator/dns-operator-744455d44c-glzbp" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.450122 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" 
(UniqueName: \"kubernetes.io/configmap/1d5152b6-8b35-4afc-ad62-9e3d063adf4e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-tpncz\" (UID: \"1d5152b6-8b35-4afc-ad62-9e3d063adf4e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpncz" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.450152 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxjjv\" (UniqueName: \"kubernetes.io/projected/587ef6e0-541f-4139-be7d-d6d4a9e8244b-kube-api-access-qxjjv\") pod \"dns-operator-744455d44c-glzbp\" (UID: \"587ef6e0-541f-4139-be7d-d6d4a9e8244b\") " pod="openshift-dns-operator/dns-operator-744455d44c-glzbp" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.450178 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e443a952-c1e3-42b2-8a58-f29416ff11dd-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lnrqz\" (UID: \"e443a952-c1e3-42b2-8a58-f29416ff11dd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lnrqz" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.450203 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1d5152b6-8b35-4afc-ad62-9e3d063adf4e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-tpncz\" (UID: \"1d5152b6-8b35-4afc-ad62-9e3d063adf4e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpncz" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.450229 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v4d6\" (UniqueName: \"kubernetes.io/projected/1d5152b6-8b35-4afc-ad62-9e3d063adf4e-kube-api-access-4v4d6\") pod \"cluster-image-registry-operator-dc59b4c8b-tpncz\" (UID: \"1d5152b6-8b35-4afc-ad62-9e3d063adf4e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpncz" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.450255 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a5836457-3db5-41ec-b036-057186d44de8-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6lnp9\" (UID: \"a5836457-3db5-41ec-b036-057186d44de8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.450293 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvsw8\" (UniqueName: \"kubernetes.io/projected/a5836457-3db5-41ec-b036-057186d44de8-kube-api-access-jvsw8\") pod \"openshift-config-operator-7777fb866f-6lnp9\" (UID: \"a5836457-3db5-41ec-b036-057186d44de8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.450320 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/1d5152b6-8b35-4afc-ad62-9e3d063adf4e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-tpncz\" (UID: \"1d5152b6-8b35-4afc-ad62-9e3d063adf4e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpncz" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.450377 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-75jcb\" (UniqueName: \"kubernetes.io/projected/48626025-5e2a-47c8-b317-bcbada105e87-kube-api-access-75jcb\") pod \"downloads-7954f5f757-27wq6\" (UID: \"48626025-5e2a-47c8-b317-bcbada105e87\") " pod="openshift-console/downloads-7954f5f757-27wq6" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.450401 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5836457-3db5-41ec-b036-057186d44de8-serving-cert\") pod \"openshift-config-operator-7777fb866f-6lnp9\" (UID: \"a5836457-3db5-41ec-b036-057186d44de8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.450455 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sr5sr\" (UniqueName: \"kubernetes.io/projected/e443a952-c1e3-42b2-8a58-f29416ff11dd-kube-api-access-sr5sr\") pod \"openshift-controller-manager-operator-756b6f6bc6-lnrqz\" (UID: \"e443a952-c1e3-42b2-8a58-f29416ff11dd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lnrqz" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.450871 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e443a952-c1e3-42b2-8a58-f29416ff11dd-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-lnrqz\" (UID: \"e443a952-c1e3-42b2-8a58-f29416ff11dd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lnrqz" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.451200 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/a5836457-3db5-41ec-b036-057186d44de8-available-featuregates\") pod \"openshift-config-operator-7777fb866f-6lnp9\" (UID: \"a5836457-3db5-41ec-b036-057186d44de8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.451770 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d5152b6-8b35-4afc-ad62-9e3d063adf4e-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-tpncz\" (UID: \"1d5152b6-8b35-4afc-ad62-9e3d063adf4e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpncz" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.451841 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-swvjp"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.452528 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8m9br"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.453609 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e443a952-c1e3-42b2-8a58-f29416ff11dd-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-lnrqz\" (UID: \"e443a952-c1e3-42b2-8a58-f29416ff11dd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lnrqz" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.454123 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/1d5152b6-8b35-4afc-ad62-9e3d063adf4e-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-tpncz\" (UID: \"1d5152b6-8b35-4afc-ad62-9e3d063adf4e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpncz" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.454479 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-99tzw"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.455202 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-99tzw" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.455452 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/587ef6e0-541f-4139-be7d-d6d4a9e8244b-metrics-tls\") pod \"dns-operator-744455d44c-glzbp\" (UID: \"587ef6e0-541f-4139-be7d-d6d4a9e8244b\") " pod="openshift-dns-operator/dns-operator-744455d44c-glzbp" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.456247 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a5836457-3db5-41ec-b036-057186d44de8-serving-cert\") pod \"openshift-config-operator-7777fb866f-6lnp9\" (UID: \"a5836457-3db5-41ec-b036-057186d44de8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.457147 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496525-j85bm"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.458452 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-bkkv5"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.459920 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-v2t5z"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.461291 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-7h5tl"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.462423 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-99tzw"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.463536 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-8fqqs"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.483769 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrr6t\" (UniqueName: \"kubernetes.io/projected/0b4d1852-9507-412e-842e-d9dbd886e79d-kube-api-access-xrr6t\") pod \"machine-api-operator-5694c8668f-5xwgj\" (UID: \"0b4d1852-9507-412e-842e-d9dbd886e79d\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-5xwgj" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.505183 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vbnp\" (UniqueName: \"kubernetes.io/projected/29e89539-b787-4a7e-a75a-9dd9216b3649-kube-api-access-7vbnp\") pod \"authentication-operator-69f744f599-gzvld\" (UID: \"29e89539-b787-4a7e-a75a-9dd9216b3649\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-gzvld" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.522662 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkknm\" (UniqueName: 
\"kubernetes.io/projected/43a0a350-8151-4bcd-8d1e-1c534e291152-kube-api-access-hkknm\") pod \"console-f9d7485db-jx2s9\" (UID: \"43a0a350-8151-4bcd-8d1e-1c534e291152\") " pod="openshift-console/console-f9d7485db-jx2s9" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.544191 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whmhl\" (UniqueName: \"kubernetes.io/projected/49e70831-c29b-4e74-bdda-aa83c22c6527-kube-api-access-whmhl\") pod \"machine-approver-56656f9798-dktxv\" (UID: \"49e70831-c29b-4e74-bdda-aa83c22c6527\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dktxv" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.547016 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-gzvld" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.562211 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r4b4\" (UniqueName: \"kubernetes.io/projected/2eaa69c4-271a-48de-a917-4ab79dcb2ae4-kube-api-access-6r4b4\") pod \"openshift-apiserver-operator-796bbdcf4f-4wk6n\" (UID: \"2eaa69c4-271a-48de-a917-4ab79dcb2ae4\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4wk6n" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.583499 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdjzd\" (UniqueName: \"kubernetes.io/projected/7a35d8b7-3b76-473d-b380-9db623f234f2-kube-api-access-zdjzd\") pod \"cluster-samples-operator-665b6dd947-8gsmk\" (UID: \"7a35d8b7-3b76-473d-b380-9db623f234f2\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8gsmk" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.627530 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wx8g8\" (UniqueName: \"kubernetes.io/projected/28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24-kube-api-access-wx8g8\") pod \"apiserver-7bbb656c7d-r5sv7\" (UID: \"28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r5sv7" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.666903 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8gsmk" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.666912 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dktxv" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.671138 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.671155 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.671699 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghsjc\" (UniqueName: \"kubernetes.io/projected/b34e60ff-e00e-485a-b7e0-1dded6c68091-kube-api-access-ghsjc\") pod \"route-controller-manager-6576b87f9c-tffxt\" (UID: \"b34e60ff-e00e-485a-b7e0-1dded6c68091\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tffxt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.678053 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-jx2s9" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.688924 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 30 16:56:43 crc kubenswrapper[4712]: W0130 16:56:43.693398 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49e70831_c29b_4e74_bdda_aa83c22c6527.slice/crio-4fba0e5df2ebae3ee1e590fa9248631966b25f624bc2c051b91d03c377c43d0f WatchSource:0}: Error finding container 4fba0e5df2ebae3ee1e590fa9248631966b25f624bc2c051b91d03c377c43d0f: Status 404 returned error can't find the container with id 4fba0e5df2ebae3ee1e590fa9248631966b25f624bc2c051b91d03c377c43d0f Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.724408 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.732541 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.741186 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-5xwgj" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.749476 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.757690 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tffxt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.766580 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-gzvld"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.769844 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.788949 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.809088 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.819353 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4wk6n" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.836667 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r5sv7" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.840290 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.852069 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.877551 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8gsmk"] Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.906196 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqw4n\" (UniqueName: \"kubernetes.io/projected/412dac4c-e4f0-4678-a113-9a241c6a9723-kube-api-access-qqw4n\") pod \"etcd-operator-b45778765-59crs\" (UID: \"412dac4c-e4f0-4678-a113-9a241c6a9723\") " pod="openshift-etcd-operator/etcd-operator-b45778765-59crs" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.907926 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8zfw\" (UniqueName: \"kubernetes.io/projected/69f69514-00d4-42fd-b010-2b6e4bc7b2fe-kube-api-access-r8zfw\") pod \"controller-manager-879f6c89f-m96vb\" (UID: \"69f69514-00d4-42fd-b010-2b6e4bc7b2fe\") " pod="openshift-controller-manager/controller-manager-879f6c89f-m96vb" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.950157 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqlzw\" (UniqueName: \"kubernetes.io/projected/b23672ef-c640-4ba4-9303-26955cec21d6-kube-api-access-nqlzw\") pod \"oauth-openshift-558db77b4-t6xlq\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.969741 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.970108 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-5qdvf\" (UniqueName: \"kubernetes.io/projected/76eb6c29-c75b-4e3a-9c21-04b0a6080fe8-kube-api-access-5qdvf\") pod \"console-operator-58897d9998-t468b\" (UID: \"76eb6c29-c75b-4e3a-9c21-04b0a6080fe8\") " pod="openshift-console-operator/console-operator-58897d9998-t468b" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.991124 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 30 16:56:43 crc kubenswrapper[4712]: I0130 16:56:43.991585 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-59crs" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.009490 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.018993 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.033168 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.040130 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tffxt"] Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.049133 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.069354 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.088579 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.094015 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-m96vb" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.109485 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.129253 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.149762 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.165602 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-jx2s9"] Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.168677 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.188638 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 30 16:56:44 crc kubenswrapper[4712]: W0130 16:56:44.194325 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43a0a350_8151_4bcd_8d1e_1c534e291152.slice/crio-b5e4e87cf26098e3d641adf816bd952f97471b61ba83775597821928050a9200 WatchSource:0}: Error finding container b5e4e87cf26098e3d641adf816bd952f97471b61ba83775597821928050a9200: Status 404 returned error can't find the container with id b5e4e87cf26098e3d641adf816bd952f97471b61ba83775597821928050a9200 Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.211017 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.228380 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.264973 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.268912 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.273446 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-t468b" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.294164 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.313638 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-5xwgj"] Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.317482 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.335069 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.352140 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.368567 4712 request.go:700] Waited for 1.007915193s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.370092 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.380161 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4wk6n"] Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.389951 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.403931 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-r5sv7"] Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.412284 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.429407 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 30 16:56:44 crc kubenswrapper[4712]: W0130 16:56:44.438964 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28bc8c3c_aa7e_4430_acf7_30ddf2ed9e24.slice/crio-bcc3112d884ad52c0e8735db70042e85db9589a71d2b5f087acd8acd9dc9f67e WatchSource:0}: Error finding container bcc3112d884ad52c0e8735db70042e85db9589a71d2b5f087acd8acd9dc9f67e: Status 404 returned error can't find the container with id bcc3112d884ad52c0e8735db70042e85db9589a71d2b5f087acd8acd9dc9f67e Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.458241 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.471463 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 
16:56:44.482456 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-t6xlq"] Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.489849 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.505586 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-m96vb"] Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.510134 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.529107 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 30 16:56:44 crc kubenswrapper[4712]: W0130 16:56:44.551121 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb23672ef_c640_4ba4_9303_26955cec21d6.slice/crio-fa43ed8af52910a961c7ebcfaee77aacda9b0520113f9d8d7e59c50aa6807b2c WatchSource:0}: Error finding container fa43ed8af52910a961c7ebcfaee77aacda9b0520113f9d8d7e59c50aa6807b2c: Status 404 returned error can't find the container with id fa43ed8af52910a961c7ebcfaee77aacda9b0520113f9d8d7e59c50aa6807b2c Jan 30 16:56:44 crc kubenswrapper[4712]: W0130 16:56:44.551617 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69f69514_00d4_42fd_b010_2b6e4bc7b2fe.slice/crio-5c31555924514a2320bb19fbe9f8ab227decea537868f7908832d7c4673cc5aa WatchSource:0}: Error finding container 5c31555924514a2320bb19fbe9f8ab227decea537868f7908832d7c4673cc5aa: Status 404 returned error can't find the container with id 5c31555924514a2320bb19fbe9f8ab227decea537868f7908832d7c4673cc5aa Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.552228 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.557710 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-59crs"] Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.559650 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8gsmk" event={"ID":"7a35d8b7-3b76-473d-b380-9db623f234f2","Type":"ContainerStarted","Data":"b89987ac7b52aed7919649087cad839f0894f6b556a8d9229d52892617e6de8b"} Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.559703 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8gsmk" event={"ID":"7a35d8b7-3b76-473d-b380-9db623f234f2","Type":"ContainerStarted","Data":"1adfe6af6b09a393d9b5700f961151d41d85fdfb5efd12407fb062f6ab51d0ea"} Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.559723 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8gsmk" event={"ID":"7a35d8b7-3b76-473d-b380-9db623f234f2","Type":"ContainerStarted","Data":"12a541880ad984166f2cbbef5f64c23729a176861e35dd8df0a9227a8d046203"} Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.568680 4712 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tffxt" event={"ID":"b34e60ff-e00e-485a-b7e0-1dded6c68091","Type":"ContainerStarted","Data":"53b909d5b100ed04291d275d4cce945258919fbe73b2bcfe76ab624cc5eb1972"} Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.568744 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tffxt" event={"ID":"b34e60ff-e00e-485a-b7e0-1dded6c68091","Type":"ContainerStarted","Data":"55c8570b072853d8293cba09a2623036333c1069a40ae777b1244be2c25922e3"} Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.570234 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tffxt" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.571812 4712 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-tffxt container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.571878 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tffxt" podUID="b34e60ff-e00e-485a-b7e0-1dded6c68091" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.573230 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.591345 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.593997 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dktxv" event={"ID":"49e70831-c29b-4e74-bdda-aa83c22c6527","Type":"ContainerStarted","Data":"2aa4c6c9367a4d04bea9d892f3e82508681a644db661c49c1153efc59538c675"} Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.594040 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dktxv" event={"ID":"49e70831-c29b-4e74-bdda-aa83c22c6527","Type":"ContainerStarted","Data":"4fba0e5df2ebae3ee1e590fa9248631966b25f624bc2c051b91d03c377c43d0f"} Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.602828 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-gzvld" event={"ID":"29e89539-b787-4a7e-a75a-9dd9216b3649","Type":"ContainerStarted","Data":"cb0a54384a7dade91aa9a35edf1526f2fb99ce2b44cc3669e2aee091215fb6f2"} Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.602894 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-gzvld" event={"ID":"29e89539-b787-4a7e-a75a-9dd9216b3649","Type":"ContainerStarted","Data":"5c7ca710a8cd9301d2fab91644f4b1951c0f2412919779ae6bfb30a78ff8566a"} Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.611742 4712 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.613152 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-jx2s9" event={"ID":"43a0a350-8151-4bcd-8d1e-1c534e291152","Type":"ContainerStarted","Data":"514051fc967f6510aab225a39620ee09075374976dab5efe5c13ecdd3cd0bef3"} Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.613189 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-jx2s9" event={"ID":"43a0a350-8151-4bcd-8d1e-1c534e291152","Type":"ContainerStarted","Data":"b5e4e87cf26098e3d641adf816bd952f97471b61ba83775597821928050a9200"} Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.615850 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r5sv7" event={"ID":"28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24","Type":"ContainerStarted","Data":"bcc3112d884ad52c0e8735db70042e85db9589a71d2b5f087acd8acd9dc9f67e"} Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.622116 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5xwgj" event={"ID":"0b4d1852-9507-412e-842e-d9dbd886e79d","Type":"ContainerStarted","Data":"7be1aa4d1aea5694211d71e9f6f97aa0317577407454ad5482823c4c325d69b3"} Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.623354 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" event={"ID":"b23672ef-c640-4ba4-9303-26955cec21d6","Type":"ContainerStarted","Data":"fa43ed8af52910a961c7ebcfaee77aacda9b0520113f9d8d7e59c50aa6807b2c"} Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.625283 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4wk6n" event={"ID":"2eaa69c4-271a-48de-a917-4ab79dcb2ae4","Type":"ContainerStarted","Data":"4fd8acd6d6e4edebb8e50d75a1f8f0e62ac76aca9e5513d945cb1f298ec9b548"} Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.632448 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.651017 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.669706 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.690064 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.709699 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.729285 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.790315 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.790441 4712 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.790883 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.809247 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.820627 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-t468b"] Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.829210 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.850447 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.870252 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.890261 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.912372 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.936854 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.949980 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.971869 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 30 16:56:44 crc kubenswrapper[4712]: I0130 16:56:44.989600 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.008564 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.031264 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.050637 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.069219 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.089597 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.109108 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.129224 4712 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.149071 4712 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.169495 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.189254 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.209350 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.229725 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.249289 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.270294 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.289851 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.329637 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr5sr\" (UniqueName: \"kubernetes.io/projected/e443a952-c1e3-42b2-8a58-f29416ff11dd-kube-api-access-sr5sr\") pod \"openshift-controller-manager-operator-756b6f6bc6-lnrqz\" (UID: \"e443a952-c1e3-42b2-8a58-f29416ff11dd\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lnrqz" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.356055 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1d5152b6-8b35-4afc-ad62-9e3d063adf4e-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-tpncz\" (UID: \"1d5152b6-8b35-4afc-ad62-9e3d063adf4e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpncz" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.363648 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v4d6\" (UniqueName: \"kubernetes.io/projected/1d5152b6-8b35-4afc-ad62-9e3d063adf4e-kube-api-access-4v4d6\") pod \"cluster-image-registry-operator-dc59b4c8b-tpncz\" (UID: \"1d5152b6-8b35-4afc-ad62-9e3d063adf4e\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpncz" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.384129 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxjjv\" (UniqueName: \"kubernetes.io/projected/587ef6e0-541f-4139-be7d-d6d4a9e8244b-kube-api-access-qxjjv\") pod \"dns-operator-744455d44c-glzbp\" (UID: \"587ef6e0-541f-4139-be7d-d6d4a9e8244b\") " pod="openshift-dns-operator/dns-operator-744455d44c-glzbp" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.387900 4712 request.go:700] Waited for 1.93654844s due to client-side throttling, not priority and fairness, request: 
POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/serviceaccounts/default/token Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.406847 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75jcb\" (UniqueName: \"kubernetes.io/projected/48626025-5e2a-47c8-b317-bcbada105e87-kube-api-access-75jcb\") pod \"downloads-7954f5f757-27wq6\" (UID: \"48626025-5e2a-47c8-b317-bcbada105e87\") " pod="openshift-console/downloads-7954f5f757-27wq6" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.423243 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvsw8\" (UniqueName: \"kubernetes.io/projected/a5836457-3db5-41ec-b036-057186d44de8-kube-api-access-jvsw8\") pod \"openshift-config-operator-7777fb866f-6lnp9\" (UID: \"a5836457-3db5-41ec-b036-057186d44de8\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.428203 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.448458 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.469867 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.489115 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.504247 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-glzbp" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.529093 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpncz" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.540208 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.547046 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lnrqz" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.553998 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-27wq6"
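==== annotation: client-side API throttling ==================================
The request.go:700 record above ("Waited for 1.93654844s due to client-side
throttling, not priority and fairness") means the delay was imposed locally
by the kubelet's own API client: its QPS/burst rate limiter spreads out
request bursts such as this startup flood of ServiceAccount token requests.
The server-side API Priority and Fairness machinery was not involved. A
minimal token-bucket sketch of that behavior, with illustrative QPS/burst
values (this is not the client-go implementation):

    package main

    import (
        "fmt"
        "time"
    )

    // tokenBucket is a sketch of QPS/burst throttling applied before each
    // API request: burst requests are free, then each extra request waits.
    type tokenBucket struct {
        tokens   float64 // currently available tokens
        max      float64 // burst capacity
        rate     float64 // refill rate in tokens per second (~QPS)
        lastFill time.Time
    }

    // wait spends one token and reports how long the caller must sleep.
    func (b *tokenBucket) wait() time.Duration {
        now := time.Now()
        b.tokens += now.Sub(b.lastFill).Seconds() * b.rate
        if b.tokens > b.max {
            b.tokens = b.max
        }
        b.lastFill = now
        b.tokens-- // spend one token for this request
        if b.tokens >= 0 {
            return 0 // under the limit: no delay
        }
        // Over the limit: wait until the deficit refills at `rate`.
        return time.Duration(-b.tokens / b.rate * float64(time.Second))
    }

    func main() {
        // Illustrative values, e.g. QPS=5, Burst=10.
        b := &tokenBucket{tokens: 10, max: 10, rate: 5, lastFill: time.Now()}
        for i := 0; i < 20; i++ {
            if d := b.wait(); d > 0 {
                fmt.Printf("request %d: waited %v (client-side throttling)\n", i, d)
                time.Sleep(d)
            }
        }
    }
==== end annotation ==========================================================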
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.599745 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01eaec98-2b0a-46a5-a9fe-d2a01d486723-config\") pod \"apiserver-76f77b778f-56p67\" (UID: \"01eaec98-2b0a-46a5-a9fe-d2a01d486723\") " pod="openshift-apiserver/apiserver-76f77b778f-56p67" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.599787 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/01eaec98-2b0a-46a5-a9fe-d2a01d486723-encryption-config\") pod \"apiserver-76f77b778f-56p67\" (UID: \"01eaec98-2b0a-46a5-a9fe-d2a01d486723\") " pod="openshift-apiserver/apiserver-76f77b778f-56p67" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.599876 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.599900 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8d25e57-f72a-43c8-a3ce-892bd95e3493-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-njkcm\" (UID: \"a8d25e57-f72a-43c8-a3ce-892bd95e3493\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-njkcm" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.599921 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8d25e57-f72a-43c8-a3ce-892bd95e3493-config\") pod \"kube-controller-manager-operator-78b949d7b-njkcm\" (UID: \"a8d25e57-f72a-43c8-a3ce-892bd95e3493\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-njkcm" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.599940 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01eaec98-2b0a-46a5-a9fe-d2a01d486723-audit-dir\") pod \"apiserver-76f77b778f-56p67\" (UID: \"01eaec98-2b0a-46a5-a9fe-d2a01d486723\") " pod="openshift-apiserver/apiserver-76f77b778f-56p67" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.599961 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01445471-4b9a-4180-aeff-e6eb332f974c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gx5jb\" (UID: \"01445471-4b9a-4180-aeff-e6eb332f974c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gx5jb" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.600006 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv8l5\" (UniqueName: \"kubernetes.io/projected/42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5-kube-api-access-gv8l5\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.600461 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/01eaec98-2b0a-46a5-a9fe-d2a01d486723-audit\") pod \"apiserver-76f77b778f-56p67\" (UID: \"01eaec98-2b0a-46a5-a9fe-d2a01d486723\") " pod="openshift-apiserver/apiserver-76f77b778f-56p67" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.600630 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/01eaec98-2b0a-46a5-a9fe-d2a01d486723-etcd-serving-ca\") pod \"apiserver-76f77b778f-56p67\" (UID: \"01eaec98-2b0a-46a5-a9fe-d2a01d486723\") " pod="openshift-apiserver/apiserver-76f77b778f-56p67" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.600657 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/01eaec98-2b0a-46a5-a9fe-d2a01d486723-node-pullsecrets\") pod \"apiserver-76f77b778f-56p67\" (UID: \"01eaec98-2b0a-46a5-a9fe-d2a01d486723\") " pod="openshift-apiserver/apiserver-76f77b778f-56p67" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.600680 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fllc\" (UniqueName: \"kubernetes.io/projected/01eaec98-2b0a-46a5-a9fe-d2a01d486723-kube-api-access-8fllc\") pod \"apiserver-76f77b778f-56p67\" (UID: \"01eaec98-2b0a-46a5-a9fe-d2a01d486723\") " pod="openshift-apiserver/apiserver-76f77b778f-56p67" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.600833 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5-registry-certificates\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.600861 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5-trusted-ca\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.600945 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5-bound-sa-token\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.601054 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5-registry-tls\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.601076 4712 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8d25e57-f72a-43c8-a3ce-892bd95e3493-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-njkcm\" (UID: \"a8d25e57-f72a-43c8-a3ce-892bd95e3493\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-njkcm" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.601135 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/01eaec98-2b0a-46a5-a9fe-d2a01d486723-etcd-client\") pod \"apiserver-76f77b778f-56p67\" (UID: \"01eaec98-2b0a-46a5-a9fe-d2a01d486723\") " pod="openshift-apiserver/apiserver-76f77b778f-56p67" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.601206 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/01eaec98-2b0a-46a5-a9fe-d2a01d486723-image-import-ca\") pod \"apiserver-76f77b778f-56p67\" (UID: \"01eaec98-2b0a-46a5-a9fe-d2a01d486723\") " pod="openshift-apiserver/apiserver-76f77b778f-56p67" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.601247 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01eaec98-2b0a-46a5-a9fe-d2a01d486723-serving-cert\") pod \"apiserver-76f77b778f-56p67\" (UID: \"01eaec98-2b0a-46a5-a9fe-d2a01d486723\") " pod="openshift-apiserver/apiserver-76f77b778f-56p67" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.601291 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/01445471-4b9a-4180-aeff-e6eb332f974c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gx5jb\" (UID: \"01445471-4b9a-4180-aeff-e6eb332f974c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gx5jb" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.601335 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.601381 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01eaec98-2b0a-46a5-a9fe-d2a01d486723-trusted-ca-bundle\") pod \"apiserver-76f77b778f-56p67\" (UID: \"01eaec98-2b0a-46a5-a9fe-d2a01d486723\") " pod="openshift-apiserver/apiserver-76f77b778f-56p67" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.601404 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01445471-4b9a-4180-aeff-e6eb332f974c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gx5jb\" (UID: \"01445471-4b9a-4180-aeff-e6eb332f974c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gx5jb" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.601553 4712 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:45 crc kubenswrapper[4712]: E0130 16:56:45.606723 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:46.106706946 +0000 UTC m=+143.013716505 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.631486 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4wk6n" event={"ID":"2eaa69c4-271a-48de-a917-4ab79dcb2ae4","Type":"ContainerStarted","Data":"63100a4457464a922687c9f3a510842f4ea7e58e0299a28c4a6ebb7bbc1d09ee"} Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.644193 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dktxv" event={"ID":"49e70831-c29b-4e74-bdda-aa83c22c6527","Type":"ContainerStarted","Data":"f2704e8770eeac7d3ad961a7c2e73d41a3804224c2357c447638c25aa187e7da"} Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.688748 4712 generic.go:334] "Generic (PLEG): container finished" podID="28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24" containerID="9a4d15b2c9f858c6c9eea73f9c837c63a3bdb7d012b283e79c83cec7d7855eeb" exitCode=0 Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.691937 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r5sv7" event={"ID":"28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24","Type":"ContainerDied","Data":"9a4d15b2c9f858c6c9eea73f9c837c63a3bdb7d012b283e79c83cec7d7855eeb"} Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.702285 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.702463 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/39f7dec4-e247-4ccb-8c0a-b03a4de346dd-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-sbrnm\" (UID: \"39f7dec4-e247-4ccb-8c0a-b03a4de346dd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sbrnm" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.702499 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8d25e57-f72a-43c8-a3ce-892bd95e3493-serving-cert\") pod 
\"kube-controller-manager-operator-78b949d7b-njkcm\" (UID: \"a8d25e57-f72a-43c8-a3ce-892bd95e3493\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-njkcm" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.702521 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8d25e57-f72a-43c8-a3ce-892bd95e3493-config\") pod \"kube-controller-manager-operator-78b949d7b-njkcm\" (UID: \"a8d25e57-f72a-43c8-a3ce-892bd95e3493\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-njkcm" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.702544 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01eaec98-2b0a-46a5-a9fe-d2a01d486723-audit-dir\") pod \"apiserver-76f77b778f-56p67\" (UID: \"01eaec98-2b0a-46a5-a9fe-d2a01d486723\") " pod="openshift-apiserver/apiserver-76f77b778f-56p67" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.702566 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5e966532-0698-481e-99b3-d1de70be4ecf-metrics-tls\") pod \"dns-default-8fqqs\" (UID: \"5e966532-0698-481e-99b3-d1de70be4ecf\") " pod="openshift-dns/dns-default-8fqqs" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.702585 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d631ea54-82a0-4985-bfe7-776d4764e85e-mountpoint-dir\") pod \"csi-hostpathplugin-7h5tl\" (UID: \"d631ea54-82a0-4985-bfe7-776d4764e85e\") " pod="hostpath-provisioner/csi-hostpathplugin-7h5tl" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.702616 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv8l5\" (UniqueName: \"kubernetes.io/projected/42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5-kube-api-access-gv8l5\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.702639 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/884f5245-fc6d-42b5-83c2-e3373788e91b-service-ca-bundle\") pod \"router-default-5444994796-qncbs\" (UID: \"884f5245-fc6d-42b5-83c2-e3373788e91b\") " pod="openshift-ingress/router-default-5444994796-qncbs" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.702666 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/01eaec98-2b0a-46a5-a9fe-d2a01d486723-audit\") pod \"apiserver-76f77b778f-56p67\" (UID: \"01eaec98-2b0a-46a5-a9fe-d2a01d486723\") " pod="openshift-apiserver/apiserver-76f77b778f-56p67" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.702704 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c9e01529-72ef-487b-ac85-e90905240355-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-v2t5z\" (UID: \"c9e01529-72ef-487b-ac85-e90905240355\") " pod="openshift-marketplace/marketplace-operator-79b997595-v2t5z" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 
16:56:45.702726 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e966532-0698-481e-99b3-d1de70be4ecf-config-volume\") pod \"dns-default-8fqqs\" (UID: \"5e966532-0698-481e-99b3-d1de70be4ecf\") " pod="openshift-dns/dns-default-8fqqs" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.702748 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/884f5245-fc6d-42b5-83c2-e3373788e91b-metrics-certs\") pod \"router-default-5444994796-qncbs\" (UID: \"884f5245-fc6d-42b5-83c2-e3373788e91b\") " pod="openshift-ingress/router-default-5444994796-qncbs" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.702770 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9c28e05d-9fe2-414f-bae8-2a8f577af72f-metrics-tls\") pod \"ingress-operator-5b745b69d9-6njcq\" (UID: \"9c28e05d-9fe2-414f-bae8-2a8f577af72f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6njcq" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.702789 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/68eec877-dde8-4b0b-8e78-53a70af78240-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-xq27f\" (UID: \"68eec877-dde8-4b0b-8e78-53a70af78240\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xq27f" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.702833 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9c28e05d-9fe2-414f-bae8-2a8f577af72f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-6njcq\" (UID: \"9c28e05d-9fe2-414f-bae8-2a8f577af72f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6njcq" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.702853 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fa965c86-cdca-49f6-9652-505d41e07f4e-images\") pod \"machine-config-operator-74547568cd-bd8m7\" (UID: \"fa965c86-cdca-49f6-9652-505d41e07f4e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bd8m7" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.702871 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tqpc\" (UniqueName: \"kubernetes.io/projected/31abddf9-a5ab-425d-b671-d40c00ced75b-kube-api-access-8tqpc\") pod \"ingress-canary-99tzw\" (UID: \"31abddf9-a5ab-425d-b671-d40c00ced75b\") " pod="openshift-ingress-canary/ingress-canary-99tzw" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.702892 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d631ea54-82a0-4985-bfe7-776d4764e85e-plugins-dir\") pod \"csi-hostpathplugin-7h5tl\" (UID: \"d631ea54-82a0-4985-bfe7-776d4764e85e\") " pod="hostpath-provisioner/csi-hostpathplugin-7h5tl" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.702934 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5-trusted-ca\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.702959 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/31abddf9-a5ab-425d-b671-d40c00ced75b-cert\") pod \"ingress-canary-99tzw\" (UID: \"31abddf9-a5ab-425d-b671-d40c00ced75b\") " pod="openshift-ingress-canary/ingress-canary-99tzw" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.702979 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9c28e05d-9fe2-414f-bae8-2a8f577af72f-trusted-ca\") pod \"ingress-operator-5b745b69d9-6njcq\" (UID: \"9c28e05d-9fe2-414f-bae8-2a8f577af72f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6njcq" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.703002 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5080a36c-8d1f-4244-921c-314ac983a7c9-config\") pod \"service-ca-operator-777779d784-bkkv5\" (UID: \"5080a36c-8d1f-4244-921c-314ac983a7c9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bkkv5" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.703033 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbnqt\" (UniqueName: \"kubernetes.io/projected/5e966532-0698-481e-99b3-d1de70be4ecf-kube-api-access-sbnqt\") pod \"dns-default-8fqqs\" (UID: \"5e966532-0698-481e-99b3-d1de70be4ecf\") " pod="openshift-dns/dns-default-8fqqs" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.703053 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npnck\" (UniqueName: \"kubernetes.io/projected/9c28e05d-9fe2-414f-bae8-2a8f577af72f-kube-api-access-npnck\") pod \"ingress-operator-5b745b69d9-6njcq\" (UID: \"9c28e05d-9fe2-414f-bae8-2a8f577af72f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6njcq" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.703074 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fa965c86-cdca-49f6-9652-505d41e07f4e-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bd8m7\" (UID: \"fa965c86-cdca-49f6-9652-505d41e07f4e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bd8m7" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.703093 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/419c8d04-c64a-4cba-b52f-2ff3c04641e0-node-bootstrap-token\") pod \"machine-config-server-ptv9c\" (UID: \"419c8d04-c64a-4cba-b52f-2ff3c04641e0\") " pod="openshift-machine-config-operator/machine-config-server-ptv9c" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.703111 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/419c8d04-c64a-4cba-b52f-2ff3c04641e0-certs\") pod \"machine-config-server-ptv9c\" (UID: 
\"419c8d04-c64a-4cba-b52f-2ff3c04641e0\") " pod="openshift-machine-config-operator/machine-config-server-ptv9c" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.703136 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c9201589-e2c1-41d7-9ab6-5f3b24dc30c6-signing-cabundle\") pod \"service-ca-9c57cc56f-2b574\" (UID: \"c9201589-e2c1-41d7-9ab6-5f3b24dc30c6\") " pod="openshift-service-ca/service-ca-9c57cc56f-2b574" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.703159 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ea3110d-1cbe-4436-bc6e-3c250fe3c4fb-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rwrnm\" (UID: \"2ea3110d-1cbe-4436-bc6e-3c250fe3c4fb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rwrnm" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.703181 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/465741f6-d748-4a3a-8584-3aa2a50bcd7c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-z5dx7\" (UID: \"465741f6-d748-4a3a-8584-3aa2a50bcd7c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5dx7" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.703199 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sjbq\" (UniqueName: \"kubernetes.io/projected/bc36657e-ab97-4bc2-90a9-34134794c30b-kube-api-access-7sjbq\") pod \"migrator-59844c95c7-xjs5m\" (UID: \"bc36657e-ab97-4bc2-90a9-34134794c30b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xjs5m" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.703227 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89qzr\" (UniqueName: \"kubernetes.io/projected/fa965c86-cdca-49f6-9652-505d41e07f4e-kube-api-access-89qzr\") pod \"machine-config-operator-74547568cd-bd8m7\" (UID: \"fa965c86-cdca-49f6-9652-505d41e07f4e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bd8m7" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.703261 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h69s\" (UniqueName: \"kubernetes.io/projected/4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34-kube-api-access-8h69s\") pod \"collect-profiles-29496525-j85bm\" (UID: \"4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-j85bm" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.703280 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34-secret-volume\") pod \"collect-profiles-29496525-j85bm\" (UID: \"4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-j85bm" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.703306 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01eaec98-2b0a-46a5-a9fe-d2a01d486723-serving-cert\") pod 
\"apiserver-76f77b778f-56p67\" (UID: \"01eaec98-2b0a-46a5-a9fe-d2a01d486723\") " pod="openshift-apiserver/apiserver-76f77b778f-56p67" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.703328 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/01445471-4b9a-4180-aeff-e6eb332f974c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gx5jb\" (UID: \"01445471-4b9a-4180-aeff-e6eb332f974c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gx5jb" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.703349 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01eaec98-2b0a-46a5-a9fe-d2a01d486723-trusted-ca-bundle\") pod \"apiserver-76f77b778f-56p67\" (UID: \"01eaec98-2b0a-46a5-a9fe-d2a01d486723\") " pod="openshift-apiserver/apiserver-76f77b778f-56p67" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.703369 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01445471-4b9a-4180-aeff-e6eb332f974c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gx5jb\" (UID: \"01445471-4b9a-4180-aeff-e6eb332f974c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gx5jb" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.703390 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e2418db4-0c95-43a9-973e-e2b6c6170198-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-g4p8m\" (UID: \"e2418db4-0c95-43a9-973e-e2b6c6170198\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-g4p8m" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.703411 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d9fce980-8342-4614-8cfe-c8757df49d74-tmpfs\") pod \"packageserver-d55dfcdfc-dg9bq\" (UID: \"d9fce980-8342-4614-8cfe-c8757df49d74\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dg9bq" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.703433 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d631ea54-82a0-4985-bfe7-776d4764e85e-csi-data-dir\") pod \"csi-hostpathplugin-7h5tl\" (UID: \"d631ea54-82a0-4985-bfe7-776d4764e85e\") " pod="hostpath-provisioner/csi-hostpathplugin-7h5tl" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.703453 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dh4b\" (UniqueName: \"kubernetes.io/projected/fd5b1abd-3085-42f2-94a1-a9f06129017c-kube-api-access-5dh4b\") pod \"olm-operator-6b444d44fb-8m9br\" (UID: \"fd5b1abd-3085-42f2-94a1-a9f06129017c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8m9br" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.703479 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hpns\" (UniqueName: \"kubernetes.io/projected/d631ea54-82a0-4985-bfe7-776d4764e85e-kube-api-access-9hpns\") pod \"csi-hostpathplugin-7h5tl\" (UID: \"d631ea54-82a0-4985-bfe7-776d4764e85e\") " 
pod="hostpath-provisioner/csi-hostpathplugin-7h5tl" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.703500 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdgmm\" (UniqueName: \"kubernetes.io/projected/e2418db4-0c95-43a9-973e-e2b6c6170198-kube-api-access-bdgmm\") pod \"multus-admission-controller-857f4d67dd-g4p8m\" (UID: \"e2418db4-0c95-43a9-973e-e2b6c6170198\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-g4p8m" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.703520 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d9fce980-8342-4614-8cfe-c8757df49d74-apiservice-cert\") pod \"packageserver-d55dfcdfc-dg9bq\" (UID: \"d9fce980-8342-4614-8cfe-c8757df49d74\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dg9bq" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.703551 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62kzl\" (UniqueName: \"kubernetes.io/projected/c9e01529-72ef-487b-ac85-e90905240355-kube-api-access-62kzl\") pod \"marketplace-operator-79b997595-v2t5z\" (UID: \"c9e01529-72ef-487b-ac85-e90905240355\") " pod="openshift-marketplace/marketplace-operator-79b997595-v2t5z" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.703581 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.703603 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01445471-4b9a-4180-aeff-e6eb332f974c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gx5jb\" (UID: \"01445471-4b9a-4180-aeff-e6eb332f974c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gx5jb" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.703628 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ea3110d-1cbe-4436-bc6e-3c250fe3c4fb-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rwrnm\" (UID: \"2ea3110d-1cbe-4436-bc6e-3c250fe3c4fb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rwrnm" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.703650 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj5jz\" (UniqueName: \"kubernetes.io/projected/d9fce980-8342-4614-8cfe-c8757df49d74-kube-api-access-fj5jz\") pod \"packageserver-d55dfcdfc-dg9bq\" (UID: \"d9fce980-8342-4614-8cfe-c8757df49d74\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dg9bq" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.703677 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/465741f6-d748-4a3a-8584-3aa2a50bcd7c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-z5dx7\" (UID: 
\"465741f6-d748-4a3a-8584-3aa2a50bcd7c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5dx7" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.703701 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/884f5245-fc6d-42b5-83c2-e3373788e91b-default-certificate\") pod \"router-default-5444994796-qncbs\" (UID: \"884f5245-fc6d-42b5-83c2-e3373788e91b\") " pod="openshift-ingress/router-default-5444994796-qncbs" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.703726 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gj64g\" (UniqueName: \"kubernetes.io/projected/2ea3110d-1cbe-4436-bc6e-3c250fe3c4fb-kube-api-access-gj64g\") pod \"kube-storage-version-migrator-operator-b67b599dd-rwrnm\" (UID: \"2ea3110d-1cbe-4436-bc6e-3c250fe3c4fb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rwrnm" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.703775 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34-config-volume\") pod \"collect-profiles-29496525-j85bm\" (UID: \"4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-j85bm" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.703817 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/01eaec98-2b0a-46a5-a9fe-d2a01d486723-etcd-serving-ca\") pod \"apiserver-76f77b778f-56p67\" (UID: \"01eaec98-2b0a-46a5-a9fe-d2a01d486723\") " pod="openshift-apiserver/apiserver-76f77b778f-56p67" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.703843 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/01eaec98-2b0a-46a5-a9fe-d2a01d486723-node-pullsecrets\") pod \"apiserver-76f77b778f-56p67\" (UID: \"01eaec98-2b0a-46a5-a9fe-d2a01d486723\") " pod="openshift-apiserver/apiserver-76f77b778f-56p67" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.703864 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fllc\" (UniqueName: \"kubernetes.io/projected/01eaec98-2b0a-46a5-a9fe-d2a01d486723-kube-api-access-8fllc\") pod \"apiserver-76f77b778f-56p67\" (UID: \"01eaec98-2b0a-46a5-a9fe-d2a01d486723\") " pod="openshift-apiserver/apiserver-76f77b778f-56p67" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.703886 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5080a36c-8d1f-4244-921c-314ac983a7c9-serving-cert\") pod \"service-ca-operator-777779d784-bkkv5\" (UID: \"5080a36c-8d1f-4244-921c-314ac983a7c9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bkkv5" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.703907 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/465741f6-d748-4a3a-8584-3aa2a50bcd7c-config\") pod \"kube-apiserver-operator-766d6c64bb-z5dx7\" (UID: \"465741f6-d748-4a3a-8584-3aa2a50bcd7c\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5dx7" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.703932 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d9fce980-8342-4614-8cfe-c8757df49d74-webhook-cert\") pod \"packageserver-d55dfcdfc-dg9bq\" (UID: \"d9fce980-8342-4614-8cfe-c8757df49d74\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dg9bq" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.703953 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftzc9\" (UniqueName: \"kubernetes.io/projected/16d2b99c-7fc4-4d10-8ebc-1e726485e354-kube-api-access-ftzc9\") pod \"catalog-operator-68c6474976-swvjp\" (UID: \"16d2b99c-7fc4-4d10-8ebc-1e726485e354\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-swvjp" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.703978 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m62mz\" (UniqueName: \"kubernetes.io/projected/5080a36c-8d1f-4244-921c-314ac983a7c9-kube-api-access-m62mz\") pod \"service-ca-operator-777779d784-bkkv5\" (UID: \"5080a36c-8d1f-4244-921c-314ac983a7c9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bkkv5" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.703996 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c9201589-e2c1-41d7-9ab6-5f3b24dc30c6-signing-key\") pod \"service-ca-9c57cc56f-2b574\" (UID: \"c9201589-e2c1-41d7-9ab6-5f3b24dc30c6\") " pod="openshift-service-ca/service-ca-9c57cc56f-2b574" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.704018 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c9e01529-72ef-487b-ac85-e90905240355-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-v2t5z\" (UID: \"c9e01529-72ef-487b-ac85-e90905240355\") " pod="openshift-marketplace/marketplace-operator-79b997595-v2t5z" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.704042 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5-registry-certificates\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.704063 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/884f5245-fc6d-42b5-83c2-e3373788e91b-stats-auth\") pod \"router-default-5444994796-qncbs\" (UID: \"884f5245-fc6d-42b5-83c2-e3373788e91b\") " pod="openshift-ingress/router-default-5444994796-qncbs" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.704087 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5-bound-sa-token\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 
16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.704108 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lr92r\" (UniqueName: \"kubernetes.io/projected/7d104d8e-f081-42a2-997e-4b27951d3e2c-kube-api-access-lr92r\") pod \"control-plane-machine-set-operator-78cbb6b69f-gfwsl\" (UID: \"7d104d8e-f081-42a2-997e-4b27951d3e2c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gfwsl" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.704152 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fd5b1abd-3085-42f2-94a1-a9f06129017c-profile-collector-cert\") pod \"olm-operator-6b444d44fb-8m9br\" (UID: \"fd5b1abd-3085-42f2-94a1-a9f06129017c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8m9br" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.704175 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5-registry-tls\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.704192 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8d25e57-f72a-43c8-a3ce-892bd95e3493-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-njkcm\" (UID: \"a8d25e57-f72a-43c8-a3ce-892bd95e3493\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-njkcm" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.704210 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/01eaec98-2b0a-46a5-a9fe-d2a01d486723-etcd-client\") pod \"apiserver-76f77b778f-56p67\" (UID: \"01eaec98-2b0a-46a5-a9fe-d2a01d486723\") " pod="openshift-apiserver/apiserver-76f77b778f-56p67" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.704231 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/01eaec98-2b0a-46a5-a9fe-d2a01d486723-image-import-ca\") pod \"apiserver-76f77b778f-56p67\" (UID: \"01eaec98-2b0a-46a5-a9fe-d2a01d486723\") " pod="openshift-apiserver/apiserver-76f77b778f-56p67" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.704251 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fa965c86-cdca-49f6-9652-505d41e07f4e-proxy-tls\") pod \"machine-config-operator-74547568cd-bd8m7\" (UID: \"fa965c86-cdca-49f6-9652-505d41e07f4e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bd8m7" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.704271 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w44jl\" (UniqueName: \"kubernetes.io/projected/c9201589-e2c1-41d7-9ab6-5f3b24dc30c6-kube-api-access-w44jl\") pod \"service-ca-9c57cc56f-2b574\" (UID: \"c9201589-e2c1-41d7-9ab6-5f3b24dc30c6\") " pod="openshift-service-ca/service-ca-9c57cc56f-2b574" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.704295 4712 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.704315 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d631ea54-82a0-4985-bfe7-776d4764e85e-socket-dir\") pod \"csi-hostpathplugin-7h5tl\" (UID: \"d631ea54-82a0-4985-bfe7-776d4764e85e\") " pod="hostpath-provisioner/csi-hostpathplugin-7h5tl" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.704347 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fd5b1abd-3085-42f2-94a1-a9f06129017c-srv-cert\") pod \"olm-operator-6b444d44fb-8m9br\" (UID: \"fd5b1abd-3085-42f2-94a1-a9f06129017c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8m9br" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.704367 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bhvt\" (UniqueName: \"kubernetes.io/projected/884f5245-fc6d-42b5-83c2-e3373788e91b-kube-api-access-4bhvt\") pod \"router-default-5444994796-qncbs\" (UID: \"884f5245-fc6d-42b5-83c2-e3373788e91b\") " pod="openshift-ingress/router-default-5444994796-qncbs" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.704389 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nc9b\" (UniqueName: \"kubernetes.io/projected/39f7dec4-e247-4ccb-8c0a-b03a4de346dd-kube-api-access-2nc9b\") pod \"machine-config-controller-84d6567774-sbrnm\" (UID: \"39f7dec4-e247-4ccb-8c0a-b03a4de346dd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sbrnm" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.704413 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/16d2b99c-7fc4-4d10-8ebc-1e726485e354-profile-collector-cert\") pod \"catalog-operator-68c6474976-swvjp\" (UID: \"16d2b99c-7fc4-4d10-8ebc-1e726485e354\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-swvjp" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.704441 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nsl9\" (UniqueName: \"kubernetes.io/projected/419c8d04-c64a-4cba-b52f-2ff3c04641e0-kube-api-access-4nsl9\") pod \"machine-config-server-ptv9c\" (UID: \"419c8d04-c64a-4cba-b52f-2ff3c04641e0\") " pod="openshift-machine-config-operator/machine-config-server-ptv9c" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.704460 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/16d2b99c-7fc4-4d10-8ebc-1e726485e354-srv-cert\") pod \"catalog-operator-68c6474976-swvjp\" (UID: \"16d2b99c-7fc4-4d10-8ebc-1e726485e354\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-swvjp" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.704482 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-z4fqg\" (UniqueName: \"kubernetes.io/projected/68eec877-dde8-4b0b-8e78-53a70af78240-kube-api-access-z4fqg\") pod \"package-server-manager-789f6589d5-xq27f\" (UID: \"68eec877-dde8-4b0b-8e78-53a70af78240\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xq27f" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.704501 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d631ea54-82a0-4985-bfe7-776d4764e85e-registration-dir\") pod \"csi-hostpathplugin-7h5tl\" (UID: \"d631ea54-82a0-4985-bfe7-776d4764e85e\") " pod="hostpath-provisioner/csi-hostpathplugin-7h5tl" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.704521 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/39f7dec4-e247-4ccb-8c0a-b03a4de346dd-proxy-tls\") pod \"machine-config-controller-84d6567774-sbrnm\" (UID: \"39f7dec4-e247-4ccb-8c0a-b03a4de346dd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sbrnm" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.704542 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01eaec98-2b0a-46a5-a9fe-d2a01d486723-config\") pod \"apiserver-76f77b778f-56p67\" (UID: \"01eaec98-2b0a-46a5-a9fe-d2a01d486723\") " pod="openshift-apiserver/apiserver-76f77b778f-56p67" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.704563 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/01eaec98-2b0a-46a5-a9fe-d2a01d486723-encryption-config\") pod \"apiserver-76f77b778f-56p67\" (UID: \"01eaec98-2b0a-46a5-a9fe-d2a01d486723\") " pod="openshift-apiserver/apiserver-76f77b778f-56p67" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.704589 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7d104d8e-f081-42a2-997e-4b27951d3e2c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-gfwsl\" (UID: \"7d104d8e-f081-42a2-997e-4b27951d3e2c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gfwsl" Jan 30 16:56:45 crc kubenswrapper[4712]: E0130 16:56:45.704737 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:46.204717904 +0000 UTC m=+143.111727383 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.705721 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8d25e57-f72a-43c8-a3ce-892bd95e3493-config\") pod \"kube-controller-manager-operator-78b949d7b-njkcm\" (UID: \"a8d25e57-f72a-43c8-a3ce-892bd95e3493\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-njkcm" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.705782 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/01eaec98-2b0a-46a5-a9fe-d2a01d486723-audit-dir\") pod \"apiserver-76f77b778f-56p67\" (UID: \"01eaec98-2b0a-46a5-a9fe-d2a01d486723\") " pod="openshift-apiserver/apiserver-76f77b778f-56p67" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.706756 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/01eaec98-2b0a-46a5-a9fe-d2a01d486723-audit\") pod \"apiserver-76f77b778f-56p67\" (UID: \"01eaec98-2b0a-46a5-a9fe-d2a01d486723\") " pod="openshift-apiserver/apiserver-76f77b778f-56p67" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.706973 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01445471-4b9a-4180-aeff-e6eb332f974c-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gx5jb\" (UID: \"01445471-4b9a-4180-aeff-e6eb332f974c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gx5jb" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.707571 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/01eaec98-2b0a-46a5-a9fe-d2a01d486723-etcd-serving-ca\") pod \"apiserver-76f77b778f-56p67\" (UID: \"01eaec98-2b0a-46a5-a9fe-d2a01d486723\") " pod="openshift-apiserver/apiserver-76f77b778f-56p67" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.707638 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/01eaec98-2b0a-46a5-a9fe-d2a01d486723-node-pullsecrets\") pod \"apiserver-76f77b778f-56p67\" (UID: \"01eaec98-2b0a-46a5-a9fe-d2a01d486723\") " pod="openshift-apiserver/apiserver-76f77b778f-56p67" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.707663 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01eaec98-2b0a-46a5-a9fe-d2a01d486723-config\") pod \"apiserver-76f77b778f-56p67\" (UID: \"01eaec98-2b0a-46a5-a9fe-d2a01d486723\") " pod="openshift-apiserver/apiserver-76f77b778f-56p67" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.708580 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5-trusted-ca\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.709523 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5xwgj" event={"ID":"0b4d1852-9507-412e-842e-d9dbd886e79d","Type":"ContainerStarted","Data":"f989e7498aa2c5b93589d6cc08b3ffddb9921577ec581cbb4be79a5dc32b4422"} Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.709562 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-5xwgj" event={"ID":"0b4d1852-9507-412e-842e-d9dbd886e79d","Type":"ContainerStarted","Data":"f9f731d9520f59f3291cf0d93e943b25e4c8fbd3c545eb226160ed422432e767"} Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.710536 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5-registry-tls\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.710842 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5-ca-trust-extracted\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.712071 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5-registry-certificates\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.713827 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01eaec98-2b0a-46a5-a9fe-d2a01d486723-trusted-ca-bundle\") pod \"apiserver-76f77b778f-56p67\" (UID: \"01eaec98-2b0a-46a5-a9fe-d2a01d486723\") " pod="openshift-apiserver/apiserver-76f77b778f-56p67" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.714187 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" event={"ID":"b23672ef-c640-4ba4-9303-26955cec21d6","Type":"ContainerStarted","Data":"6d39d6a3e969e7f20a78a48296a4b7f8efefe0ad698bea3d32802ba82925ea90"} Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.715072 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.717165 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a8d25e57-f72a-43c8-a3ce-892bd95e3493-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-njkcm\" (UID: \"a8d25e57-f72a-43c8-a3ce-892bd95e3493\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-njkcm" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.723555 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5-installation-pull-secrets\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.724166 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01445471-4b9a-4180-aeff-e6eb332f974c-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gx5jb\" (UID: \"01445471-4b9a-4180-aeff-e6eb332f974c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gx5jb" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.724403 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-m96vb" event={"ID":"69f69514-00d4-42fd-b010-2b6e4bc7b2fe","Type":"ContainerStarted","Data":"00eab9e34ecc007170db0d0e33bf5325bbfa75bdb29c4e7a6e09013caf180b29"} Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.724437 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-m96vb" event={"ID":"69f69514-00d4-42fd-b010-2b6e4bc7b2fe","Type":"ContainerStarted","Data":"5c31555924514a2320bb19fbe9f8ab227decea537868f7908832d7c4673cc5aa"} Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.725681 4712 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-t6xlq container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.6:6443/healthz\": dial tcp 10.217.0.6:6443: connect: connection refused" start-of-body= Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.725735 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" podUID="b23672ef-c640-4ba4-9303-26955cec21d6" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.6:6443/healthz\": dial tcp 10.217.0.6:6443: connect: connection refused" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.730259 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-m96vb" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.731078 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/01eaec98-2b0a-46a5-a9fe-d2a01d486723-image-import-ca\") pod \"apiserver-76f77b778f-56p67\" (UID: \"01eaec98-2b0a-46a5-a9fe-d2a01d486723\") " pod="openshift-apiserver/apiserver-76f77b778f-56p67" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.733488 4712 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-m96vb container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.733541 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-m96vb" podUID="69f69514-00d4-42fd-b010-2b6e4bc7b2fe" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.735131 4712 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/01eaec98-2b0a-46a5-a9fe-d2a01d486723-etcd-client\") pod \"apiserver-76f77b778f-56p67\" (UID: \"01eaec98-2b0a-46a5-a9fe-d2a01d486723\") " pod="openshift-apiserver/apiserver-76f77b778f-56p67" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.737401 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01eaec98-2b0a-46a5-a9fe-d2a01d486723-serving-cert\") pod \"apiserver-76f77b778f-56p67\" (UID: \"01eaec98-2b0a-46a5-a9fe-d2a01d486723\") " pod="openshift-apiserver/apiserver-76f77b778f-56p67" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.738083 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/01eaec98-2b0a-46a5-a9fe-d2a01d486723-encryption-config\") pod \"apiserver-76f77b778f-56p67\" (UID: \"01eaec98-2b0a-46a5-a9fe-d2a01d486723\") " pod="openshift-apiserver/apiserver-76f77b778f-56p67" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.743080 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5-bound-sa-token\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.778530 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-59crs" event={"ID":"412dac4c-e4f0-4678-a113-9a241c6a9723","Type":"ContainerStarted","Data":"b0683945803d55e2914a521244601bec946ce4ad394d68dec9b9e4f4e3a7f461"} Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.778893 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-59crs" event={"ID":"412dac4c-e4f0-4678-a113-9a241c6a9723","Type":"ContainerStarted","Data":"68fed2228af5a610a2296b747d8cbd778c0318b57a2814bccba072220b2b1330"} Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.784566 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv8l5\" (UniqueName: \"kubernetes.io/projected/42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5-kube-api-access-gv8l5\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.808316 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fllc\" (UniqueName: \"kubernetes.io/projected/01eaec98-2b0a-46a5-a9fe-d2a01d486723-kube-api-access-8fllc\") pod \"apiserver-76f77b778f-56p67\" (UID: \"01eaec98-2b0a-46a5-a9fe-d2a01d486723\") " pod="openshift-apiserver/apiserver-76f77b778f-56p67" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.814578 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ea3110d-1cbe-4436-bc6e-3c250fe3c4fb-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rwrnm\" (UID: \"2ea3110d-1cbe-4436-bc6e-3c250fe3c4fb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rwrnm" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.814611 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/465741f6-d748-4a3a-8584-3aa2a50bcd7c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-z5dx7\" (UID: \"465741f6-d748-4a3a-8584-3aa2a50bcd7c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5dx7" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.814629 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fj5jz\" (UniqueName: \"kubernetes.io/projected/d9fce980-8342-4614-8cfe-c8757df49d74-kube-api-access-fj5jz\") pod \"packageserver-d55dfcdfc-dg9bq\" (UID: \"d9fce980-8342-4614-8cfe-c8757df49d74\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dg9bq" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.814654 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/884f5245-fc6d-42b5-83c2-e3373788e91b-default-certificate\") pod \"router-default-5444994796-qncbs\" (UID: \"884f5245-fc6d-42b5-83c2-e3373788e91b\") " pod="openshift-ingress/router-default-5444994796-qncbs" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.814672 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gj64g\" (UniqueName: \"kubernetes.io/projected/2ea3110d-1cbe-4436-bc6e-3c250fe3c4fb-kube-api-access-gj64g\") pod \"kube-storage-version-migrator-operator-b67b599dd-rwrnm\" (UID: \"2ea3110d-1cbe-4436-bc6e-3c250fe3c4fb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rwrnm" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.814689 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34-config-volume\") pod \"collect-profiles-29496525-j85bm\" (UID: \"4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-j85bm" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.814712 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/465741f6-d748-4a3a-8584-3aa2a50bcd7c-config\") pod \"kube-apiserver-operator-766d6c64bb-z5dx7\" (UID: \"465741f6-d748-4a3a-8584-3aa2a50bcd7c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5dx7" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.814728 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5080a36c-8d1f-4244-921c-314ac983a7c9-serving-cert\") pod \"service-ca-operator-777779d784-bkkv5\" (UID: \"5080a36c-8d1f-4244-921c-314ac983a7c9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bkkv5" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.814744 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d9fce980-8342-4614-8cfe-c8757df49d74-webhook-cert\") pod \"packageserver-d55dfcdfc-dg9bq\" (UID: \"d9fce980-8342-4614-8cfe-c8757df49d74\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dg9bq" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.814779 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ftzc9\" (UniqueName: 
\"kubernetes.io/projected/16d2b99c-7fc4-4d10-8ebc-1e726485e354-kube-api-access-ftzc9\") pod \"catalog-operator-68c6474976-swvjp\" (UID: \"16d2b99c-7fc4-4d10-8ebc-1e726485e354\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-swvjp" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.814819 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c9201589-e2c1-41d7-9ab6-5f3b24dc30c6-signing-key\") pod \"service-ca-9c57cc56f-2b574\" (UID: \"c9201589-e2c1-41d7-9ab6-5f3b24dc30c6\") " pod="openshift-service-ca/service-ca-9c57cc56f-2b574" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.814845 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m62mz\" (UniqueName: \"kubernetes.io/projected/5080a36c-8d1f-4244-921c-314ac983a7c9-kube-api-access-m62mz\") pod \"service-ca-operator-777779d784-bkkv5\" (UID: \"5080a36c-8d1f-4244-921c-314ac983a7c9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bkkv5" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.814861 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/884f5245-fc6d-42b5-83c2-e3373788e91b-stats-auth\") pod \"router-default-5444994796-qncbs\" (UID: \"884f5245-fc6d-42b5-83c2-e3373788e91b\") " pod="openshift-ingress/router-default-5444994796-qncbs" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.814875 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c9e01529-72ef-487b-ac85-e90905240355-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-v2t5z\" (UID: \"c9e01529-72ef-487b-ac85-e90905240355\") " pod="openshift-marketplace/marketplace-operator-79b997595-v2t5z" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.814909 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lr92r\" (UniqueName: \"kubernetes.io/projected/7d104d8e-f081-42a2-997e-4b27951d3e2c-kube-api-access-lr92r\") pod \"control-plane-machine-set-operator-78cbb6b69f-gfwsl\" (UID: \"7d104d8e-f081-42a2-997e-4b27951d3e2c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gfwsl" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.814948 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fd5b1abd-3085-42f2-94a1-a9f06129017c-profile-collector-cert\") pod \"olm-operator-6b444d44fb-8m9br\" (UID: \"fd5b1abd-3085-42f2-94a1-a9f06129017c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8m9br" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.814982 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fa965c86-cdca-49f6-9652-505d41e07f4e-proxy-tls\") pod \"machine-config-operator-74547568cd-bd8m7\" (UID: \"fa965c86-cdca-49f6-9652-505d41e07f4e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bd8m7" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.814997 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w44jl\" (UniqueName: \"kubernetes.io/projected/c9201589-e2c1-41d7-9ab6-5f3b24dc30c6-kube-api-access-w44jl\") pod \"service-ca-9c57cc56f-2b574\" (UID: 
\"c9201589-e2c1-41d7-9ab6-5f3b24dc30c6\") " pod="openshift-service-ca/service-ca-9c57cc56f-2b574" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815020 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d631ea54-82a0-4985-bfe7-776d4764e85e-socket-dir\") pod \"csi-hostpathplugin-7h5tl\" (UID: \"d631ea54-82a0-4985-bfe7-776d4764e85e\") " pod="hostpath-provisioner/csi-hostpathplugin-7h5tl" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815039 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815063 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fd5b1abd-3085-42f2-94a1-a9f06129017c-srv-cert\") pod \"olm-operator-6b444d44fb-8m9br\" (UID: \"fd5b1abd-3085-42f2-94a1-a9f06129017c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8m9br" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815077 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bhvt\" (UniqueName: \"kubernetes.io/projected/884f5245-fc6d-42b5-83c2-e3373788e91b-kube-api-access-4bhvt\") pod \"router-default-5444994796-qncbs\" (UID: \"884f5245-fc6d-42b5-83c2-e3373788e91b\") " pod="openshift-ingress/router-default-5444994796-qncbs" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815092 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nc9b\" (UniqueName: \"kubernetes.io/projected/39f7dec4-e247-4ccb-8c0a-b03a4de346dd-kube-api-access-2nc9b\") pod \"machine-config-controller-84d6567774-sbrnm\" (UID: \"39f7dec4-e247-4ccb-8c0a-b03a4de346dd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sbrnm" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815117 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/16d2b99c-7fc4-4d10-8ebc-1e726485e354-profile-collector-cert\") pod \"catalog-operator-68c6474976-swvjp\" (UID: \"16d2b99c-7fc4-4d10-8ebc-1e726485e354\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-swvjp" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815136 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nsl9\" (UniqueName: \"kubernetes.io/projected/419c8d04-c64a-4cba-b52f-2ff3c04641e0-kube-api-access-4nsl9\") pod \"machine-config-server-ptv9c\" (UID: \"419c8d04-c64a-4cba-b52f-2ff3c04641e0\") " pod="openshift-machine-config-operator/machine-config-server-ptv9c" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815150 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/16d2b99c-7fc4-4d10-8ebc-1e726485e354-srv-cert\") pod \"catalog-operator-68c6474976-swvjp\" (UID: \"16d2b99c-7fc4-4d10-8ebc-1e726485e354\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-swvjp" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815187 4712 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/39f7dec4-e247-4ccb-8c0a-b03a4de346dd-proxy-tls\") pod \"machine-config-controller-84d6567774-sbrnm\" (UID: \"39f7dec4-e247-4ccb-8c0a-b03a4de346dd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sbrnm" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815203 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4fqg\" (UniqueName: \"kubernetes.io/projected/68eec877-dde8-4b0b-8e78-53a70af78240-kube-api-access-z4fqg\") pod \"package-server-manager-789f6589d5-xq27f\" (UID: \"68eec877-dde8-4b0b-8e78-53a70af78240\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xq27f" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815220 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d631ea54-82a0-4985-bfe7-776d4764e85e-registration-dir\") pod \"csi-hostpathplugin-7h5tl\" (UID: \"d631ea54-82a0-4985-bfe7-776d4764e85e\") " pod="hostpath-provisioner/csi-hostpathplugin-7h5tl" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815235 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7d104d8e-f081-42a2-997e-4b27951d3e2c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-gfwsl\" (UID: \"7d104d8e-f081-42a2-997e-4b27951d3e2c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gfwsl" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815252 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5e966532-0698-481e-99b3-d1de70be4ecf-metrics-tls\") pod \"dns-default-8fqqs\" (UID: \"5e966532-0698-481e-99b3-d1de70be4ecf\") " pod="openshift-dns/dns-default-8fqqs" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815266 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d631ea54-82a0-4985-bfe7-776d4764e85e-mountpoint-dir\") pod \"csi-hostpathplugin-7h5tl\" (UID: \"d631ea54-82a0-4985-bfe7-776d4764e85e\") " pod="hostpath-provisioner/csi-hostpathplugin-7h5tl" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815281 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/39f7dec4-e247-4ccb-8c0a-b03a4de346dd-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-sbrnm\" (UID: \"39f7dec4-e247-4ccb-8c0a-b03a4de346dd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sbrnm" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815314 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/884f5245-fc6d-42b5-83c2-e3373788e91b-service-ca-bundle\") pod \"router-default-5444994796-qncbs\" (UID: \"884f5245-fc6d-42b5-83c2-e3373788e91b\") " pod="openshift-ingress/router-default-5444994796-qncbs" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815368 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/c9e01529-72ef-487b-ac85-e90905240355-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-v2t5z\" (UID: \"c9e01529-72ef-487b-ac85-e90905240355\") " pod="openshift-marketplace/marketplace-operator-79b997595-v2t5z" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815410 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e966532-0698-481e-99b3-d1de70be4ecf-config-volume\") pod \"dns-default-8fqqs\" (UID: \"5e966532-0698-481e-99b3-d1de70be4ecf\") " pod="openshift-dns/dns-default-8fqqs" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815425 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/884f5245-fc6d-42b5-83c2-e3373788e91b-metrics-certs\") pod \"router-default-5444994796-qncbs\" (UID: \"884f5245-fc6d-42b5-83c2-e3373788e91b\") " pod="openshift-ingress/router-default-5444994796-qncbs" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815451 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9c28e05d-9fe2-414f-bae8-2a8f577af72f-metrics-tls\") pod \"ingress-operator-5b745b69d9-6njcq\" (UID: \"9c28e05d-9fe2-414f-bae8-2a8f577af72f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6njcq" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815466 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/68eec877-dde8-4b0b-8e78-53a70af78240-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-xq27f\" (UID: \"68eec877-dde8-4b0b-8e78-53a70af78240\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xq27f" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815490 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9c28e05d-9fe2-414f-bae8-2a8f577af72f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-6njcq\" (UID: \"9c28e05d-9fe2-414f-bae8-2a8f577af72f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6njcq" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815507 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tqpc\" (UniqueName: \"kubernetes.io/projected/31abddf9-a5ab-425d-b671-d40c00ced75b-kube-api-access-8tqpc\") pod \"ingress-canary-99tzw\" (UID: \"31abddf9-a5ab-425d-b671-d40c00ced75b\") " pod="openshift-ingress-canary/ingress-canary-99tzw" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815530 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fa965c86-cdca-49f6-9652-505d41e07f4e-images\") pod \"machine-config-operator-74547568cd-bd8m7\" (UID: \"fa965c86-cdca-49f6-9652-505d41e07f4e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bd8m7" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815544 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d631ea54-82a0-4985-bfe7-776d4764e85e-plugins-dir\") pod \"csi-hostpathplugin-7h5tl\" (UID: \"d631ea54-82a0-4985-bfe7-776d4764e85e\") " pod="hostpath-provisioner/csi-hostpathplugin-7h5tl" Jan 30 16:56:45 crc 
kubenswrapper[4712]: I0130 16:56:45.815567 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/31abddf9-a5ab-425d-b671-d40c00ced75b-cert\") pod \"ingress-canary-99tzw\" (UID: \"31abddf9-a5ab-425d-b671-d40c00ced75b\") " pod="openshift-ingress-canary/ingress-canary-99tzw" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815591 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9c28e05d-9fe2-414f-bae8-2a8f577af72f-trusted-ca\") pod \"ingress-operator-5b745b69d9-6njcq\" (UID: \"9c28e05d-9fe2-414f-bae8-2a8f577af72f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6njcq" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815605 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5080a36c-8d1f-4244-921c-314ac983a7c9-config\") pod \"service-ca-operator-777779d784-bkkv5\" (UID: \"5080a36c-8d1f-4244-921c-314ac983a7c9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bkkv5" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815629 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/419c8d04-c64a-4cba-b52f-2ff3c04641e0-certs\") pod \"machine-config-server-ptv9c\" (UID: \"419c8d04-c64a-4cba-b52f-2ff3c04641e0\") " pod="openshift-machine-config-operator/machine-config-server-ptv9c" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815646 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbnqt\" (UniqueName: \"kubernetes.io/projected/5e966532-0698-481e-99b3-d1de70be4ecf-kube-api-access-sbnqt\") pod \"dns-default-8fqqs\" (UID: \"5e966532-0698-481e-99b3-d1de70be4ecf\") " pod="openshift-dns/dns-default-8fqqs" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815662 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npnck\" (UniqueName: \"kubernetes.io/projected/9c28e05d-9fe2-414f-bae8-2a8f577af72f-kube-api-access-npnck\") pod \"ingress-operator-5b745b69d9-6njcq\" (UID: \"9c28e05d-9fe2-414f-bae8-2a8f577af72f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6njcq" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815676 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fa965c86-cdca-49f6-9652-505d41e07f4e-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bd8m7\" (UID: \"fa965c86-cdca-49f6-9652-505d41e07f4e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bd8m7" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815690 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/419c8d04-c64a-4cba-b52f-2ff3c04641e0-node-bootstrap-token\") pod \"machine-config-server-ptv9c\" (UID: \"419c8d04-c64a-4cba-b52f-2ff3c04641e0\") " pod="openshift-machine-config-operator/machine-config-server-ptv9c" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815713 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sjbq\" (UniqueName: \"kubernetes.io/projected/bc36657e-ab97-4bc2-90a9-34134794c30b-kube-api-access-7sjbq\") pod \"migrator-59844c95c7-xjs5m\" (UID: 
\"bc36657e-ab97-4bc2-90a9-34134794c30b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xjs5m" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815738 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c9201589-e2c1-41d7-9ab6-5f3b24dc30c6-signing-cabundle\") pod \"service-ca-9c57cc56f-2b574\" (UID: \"c9201589-e2c1-41d7-9ab6-5f3b24dc30c6\") " pod="openshift-service-ca/service-ca-9c57cc56f-2b574" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815753 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ea3110d-1cbe-4436-bc6e-3c250fe3c4fb-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rwrnm\" (UID: \"2ea3110d-1cbe-4436-bc6e-3c250fe3c4fb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rwrnm" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815770 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/465741f6-d748-4a3a-8584-3aa2a50bcd7c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-z5dx7\" (UID: \"465741f6-d748-4a3a-8584-3aa2a50bcd7c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5dx7" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815827 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89qzr\" (UniqueName: \"kubernetes.io/projected/fa965c86-cdca-49f6-9652-505d41e07f4e-kube-api-access-89qzr\") pod \"machine-config-operator-74547568cd-bd8m7\" (UID: \"fa965c86-cdca-49f6-9652-505d41e07f4e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bd8m7" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815853 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h69s\" (UniqueName: \"kubernetes.io/projected/4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34-kube-api-access-8h69s\") pod \"collect-profiles-29496525-j85bm\" (UID: \"4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-j85bm" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815878 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34-secret-volume\") pod \"collect-profiles-29496525-j85bm\" (UID: \"4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-j85bm" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815895 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e2418db4-0c95-43a9-973e-e2b6c6170198-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-g4p8m\" (UID: \"e2418db4-0c95-43a9-973e-e2b6c6170198\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-g4p8m" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815918 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d9fce980-8342-4614-8cfe-c8757df49d74-tmpfs\") pod \"packageserver-d55dfcdfc-dg9bq\" (UID: \"d9fce980-8342-4614-8cfe-c8757df49d74\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dg9bq" 
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815952 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d631ea54-82a0-4985-bfe7-776d4764e85e-csi-data-dir\") pod \"csi-hostpathplugin-7h5tl\" (UID: \"d631ea54-82a0-4985-bfe7-776d4764e85e\") " pod="hostpath-provisioner/csi-hostpathplugin-7h5tl"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815968 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dh4b\" (UniqueName: \"kubernetes.io/projected/fd5b1abd-3085-42f2-94a1-a9f06129017c-kube-api-access-5dh4b\") pod \"olm-operator-6b444d44fb-8m9br\" (UID: \"fd5b1abd-3085-42f2-94a1-a9f06129017c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8m9br"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.815993 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hpns\" (UniqueName: \"kubernetes.io/projected/d631ea54-82a0-4985-bfe7-776d4764e85e-kube-api-access-9hpns\") pod \"csi-hostpathplugin-7h5tl\" (UID: \"d631ea54-82a0-4985-bfe7-776d4764e85e\") " pod="hostpath-provisioner/csi-hostpathplugin-7h5tl"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.816011 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdgmm\" (UniqueName: \"kubernetes.io/projected/e2418db4-0c95-43a9-973e-e2b6c6170198-kube-api-access-bdgmm\") pod \"multus-admission-controller-857f4d67dd-g4p8m\" (UID: \"e2418db4-0c95-43a9-973e-e2b6c6170198\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-g4p8m"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.816035 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d9fce980-8342-4614-8cfe-c8757df49d74-apiservice-cert\") pod \"packageserver-d55dfcdfc-dg9bq\" (UID: \"d9fce980-8342-4614-8cfe-c8757df49d74\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dg9bq"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.816059 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62kzl\" (UniqueName: \"kubernetes.io/projected/c9e01529-72ef-487b-ac85-e90905240355-kube-api-access-62kzl\") pod \"marketplace-operator-79b997595-v2t5z\" (UID: \"c9e01529-72ef-487b-ac85-e90905240355\") " pod="openshift-marketplace/marketplace-operator-79b997595-v2t5z"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.818079 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34-config-volume\") pod \"collect-profiles-29496525-j85bm\" (UID: \"4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-j85bm"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.818648 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-t468b" event={"ID":"76eb6c29-c75b-4e3a-9c21-04b0a6080fe8","Type":"ContainerStarted","Data":"27345c39fd68f1d8c856928128331ae1e92460e8b0b0732ecbbedf3158b5edc7"}
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.818675 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-t468b"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.818685 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-t468b" event={"ID":"76eb6c29-c75b-4e3a-9c21-04b0a6080fe8","Type":"ContainerStarted","Data":"5158aa1c5f433c86685d5d3dd4d663c0591a08c73e29ad3eb0313041ff09f9ca"}
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.823625 4712 patch_prober.go:28] interesting pod/console-operator-58897d9998-t468b container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body=
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.823673 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-t468b" podUID="76eb6c29-c75b-4e3a-9c21-04b0a6080fe8" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.825609 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ea3110d-1cbe-4436-bc6e-3c250fe3c4fb-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-rwrnm\" (UID: \"2ea3110d-1cbe-4436-bc6e-3c250fe3c4fb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rwrnm"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.830187 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/465741f6-d748-4a3a-8584-3aa2a50bcd7c-config\") pod \"kube-apiserver-operator-766d6c64bb-z5dx7\" (UID: \"465741f6-d748-4a3a-8584-3aa2a50bcd7c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5dx7"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.835764 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5080a36c-8d1f-4244-921c-314ac983a7c9-serving-cert\") pod \"service-ca-operator-777779d784-bkkv5\" (UID: \"5080a36c-8d1f-4244-921c-314ac983a7c9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bkkv5"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.851062 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/d631ea54-82a0-4985-bfe7-776d4764e85e-plugins-dir\") pod \"csi-hostpathplugin-7h5tl\" (UID: \"d631ea54-82a0-4985-bfe7-776d4764e85e\") " pod="hostpath-provisioner/csi-hostpathplugin-7h5tl"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.852640 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d631ea54-82a0-4985-bfe7-776d4764e85e-registration-dir\") pod \"csi-hostpathplugin-7h5tl\" (UID: \"d631ea54-82a0-4985-bfe7-776d4764e85e\") " pod="hostpath-provisioner/csi-hostpathplugin-7h5tl"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.854712 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/465741f6-d748-4a3a-8584-3aa2a50bcd7c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-z5dx7\" (UID: \"465741f6-d748-4a3a-8584-3aa2a50bcd7c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5dx7"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.855786 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e966532-0698-481e-99b3-d1de70be4ecf-config-volume\") pod \"dns-default-8fqqs\" (UID: \"5e966532-0698-481e-99b3-d1de70be4ecf\") " pod="openshift-dns/dns-default-8fqqs"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.856355 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d631ea54-82a0-4985-bfe7-776d4764e85e-socket-dir\") pod \"csi-hostpathplugin-7h5tl\" (UID: \"d631ea54-82a0-4985-bfe7-776d4764e85e\") " pod="hostpath-provisioner/csi-hostpathplugin-7h5tl"
Jan 30 16:56:45 crc kubenswrapper[4712]: E0130 16:56:45.856572 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:46.356556033 +0000 UTC m=+143.263565512 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.863565 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a8d25e57-f72a-43c8-a3ce-892bd95e3493-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-njkcm\" (UID: \"a8d25e57-f72a-43c8-a3ce-892bd95e3493\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-njkcm"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.864042 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c9e01529-72ef-487b-ac85-e90905240355-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-v2t5z\" (UID: \"c9e01529-72ef-487b-ac85-e90905240355\") " pod="openshift-marketplace/marketplace-operator-79b997595-v2t5z"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.864399 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/884f5245-fc6d-42b5-83c2-e3373788e91b-default-certificate\") pod \"router-default-5444994796-qncbs\" (UID: \"884f5245-fc6d-42b5-83c2-e3373788e91b\") " pod="openshift-ingress/router-default-5444994796-qncbs"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.865690 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c9e01529-72ef-487b-ac85-e90905240355-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-v2t5z\" (UID: \"c9e01529-72ef-487b-ac85-e90905240355\") " pod="openshift-marketplace/marketplace-operator-79b997595-v2t5z"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.865936 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tffxt"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.870792 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d9fce980-8342-4614-8cfe-c8757df49d74-webhook-cert\") pod \"packageserver-d55dfcdfc-dg9bq\" (UID: \"d9fce980-8342-4614-8cfe-c8757df49d74\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dg9bq"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.872537 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fd5b1abd-3085-42f2-94a1-a9f06129017c-profile-collector-cert\") pod \"olm-operator-6b444d44fb-8m9br\" (UID: \"fd5b1abd-3085-42f2-94a1-a9f06129017c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8m9br"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.873511 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/39f7dec4-e247-4ccb-8c0a-b03a4de346dd-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-sbrnm\" (UID: \"39f7dec4-e247-4ccb-8c0a-b03a4de346dd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sbrnm"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.873557 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/d631ea54-82a0-4985-bfe7-776d4764e85e-mountpoint-dir\") pod \"csi-hostpathplugin-7h5tl\" (UID: \"d631ea54-82a0-4985-bfe7-776d4764e85e\") " pod="hostpath-provisioner/csi-hostpathplugin-7h5tl"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.875505 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/884f5245-fc6d-42b5-83c2-e3373788e91b-service-ca-bundle\") pod \"router-default-5444994796-qncbs\" (UID: \"884f5245-fc6d-42b5-83c2-e3373788e91b\") " pod="openshift-ingress/router-default-5444994796-qncbs"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.879543 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5080a36c-8d1f-4244-921c-314ac983a7c9-config\") pod \"service-ca-operator-777779d784-bkkv5\" (UID: \"5080a36c-8d1f-4244-921c-314ac983a7c9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bkkv5"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.880297 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/fa965c86-cdca-49f6-9652-505d41e07f4e-images\") pod \"machine-config-operator-74547568cd-bd8m7\" (UID: \"fa965c86-cdca-49f6-9652-505d41e07f4e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bd8m7"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.883481 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/d631ea54-82a0-4985-bfe7-776d4764e85e-csi-data-dir\") pod \"csi-hostpathplugin-7h5tl\" (UID: \"d631ea54-82a0-4985-bfe7-776d4764e85e\") " pod="hostpath-provisioner/csi-hostpathplugin-7h5tl"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.884203 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/884f5245-fc6d-42b5-83c2-e3373788e91b-stats-auth\") pod \"router-default-5444994796-qncbs\" (UID: \"884f5245-fc6d-42b5-83c2-e3373788e91b\") " pod="openshift-ingress/router-default-5444994796-qncbs"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.884511 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/01445471-4b9a-4180-aeff-e6eb332f974c-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-gx5jb\" (UID: \"01445471-4b9a-4180-aeff-e6eb332f974c\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gx5jb"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.884924 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7d104d8e-f081-42a2-997e-4b27951d3e2c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-gfwsl\" (UID: \"7d104d8e-f081-42a2-997e-4b27951d3e2c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gfwsl"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.887401 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9c28e05d-9fe2-414f-bae8-2a8f577af72f-metrics-tls\") pod \"ingress-operator-5b745b69d9-6njcq\" (UID: \"9c28e05d-9fe2-414f-bae8-2a8f577af72f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6njcq"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.891578 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/d9fce980-8342-4614-8cfe-c8757df49d74-tmpfs\") pod \"packageserver-d55dfcdfc-dg9bq\" (UID: \"d9fce980-8342-4614-8cfe-c8757df49d74\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dg9bq"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.892440 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/c9201589-e2c1-41d7-9ab6-5f3b24dc30c6-signing-cabundle\") pod \"service-ca-9c57cc56f-2b574\" (UID: \"c9201589-e2c1-41d7-9ab6-5f3b24dc30c6\") " pod="openshift-service-ca/service-ca-9c57cc56f-2b574"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.892778 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34-secret-volume\") pod \"collect-profiles-29496525-j85bm\" (UID: \"4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-j85bm"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.892846 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fa965c86-cdca-49f6-9652-505d41e07f4e-auth-proxy-config\") pod \"machine-config-operator-74547568cd-bd8m7\" (UID: \"fa965c86-cdca-49f6-9652-505d41e07f4e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bd8m7"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.893154 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-56p67"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.893435 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/e2418db4-0c95-43a9-973e-e2b6c6170198-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-g4p8m\" (UID: \"e2418db4-0c95-43a9-973e-e2b6c6170198\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-g4p8m"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.895212 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5e966532-0698-481e-99b3-d1de70be4ecf-metrics-tls\") pod \"dns-default-8fqqs\" (UID: \"5e966532-0698-481e-99b3-d1de70be4ecf\") " pod="openshift-dns/dns-default-8fqqs"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.895623 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/31abddf9-a5ab-425d-b671-d40c00ced75b-cert\") pod \"ingress-canary-99tzw\" (UID: \"31abddf9-a5ab-425d-b671-d40c00ced75b\") " pod="openshift-ingress-canary/ingress-canary-99tzw"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.896036 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/c9201589-e2c1-41d7-9ab6-5f3b24dc30c6-signing-key\") pod \"service-ca-9c57cc56f-2b574\" (UID: \"c9201589-e2c1-41d7-9ab6-5f3b24dc30c6\") " pod="openshift-service-ca/service-ca-9c57cc56f-2b574"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.903414 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/16d2b99c-7fc4-4d10-8ebc-1e726485e354-profile-collector-cert\") pod \"catalog-operator-68c6474976-swvjp\" (UID: \"16d2b99c-7fc4-4d10-8ebc-1e726485e354\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-swvjp"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.905065 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/419c8d04-c64a-4cba-b52f-2ff3c04641e0-node-bootstrap-token\") pod \"machine-config-server-ptv9c\" (UID: \"419c8d04-c64a-4cba-b52f-2ff3c04641e0\") " pod="openshift-machine-config-operator/machine-config-server-ptv9c"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.916586 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.921637 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ea3110d-1cbe-4436-bc6e-3c250fe3c4fb-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-rwrnm\" (UID: \"2ea3110d-1cbe-4436-bc6e-3c250fe3c4fb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rwrnm"
Jan 30 16:56:45 crc kubenswrapper[4712]: E0130 16:56:45.922982 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:46.422965837 +0000 UTC m=+143.329975306 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.942122 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.928768 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/419c8d04-c64a-4cba-b52f-2ff3c04641e0-certs\") pod \"machine-config-server-ptv9c\" (UID: \"419c8d04-c64a-4cba-b52f-2ff3c04641e0\") " pod="openshift-machine-config-operator/machine-config-server-ptv9c"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.929189 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/16d2b99c-7fc4-4d10-8ebc-1e726485e354-srv-cert\") pod \"catalog-operator-68c6474976-swvjp\" (UID: \"16d2b99c-7fc4-4d10-8ebc-1e726485e354\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-swvjp"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.929807 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fd5b1abd-3085-42f2-94a1-a9f06129017c-srv-cert\") pod \"olm-operator-6b444d44fb-8m9br\" (UID: \"fd5b1abd-3085-42f2-94a1-a9f06129017c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8m9br"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.930547 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/884f5245-fc6d-42b5-83c2-e3373788e91b-metrics-certs\") pod \"router-default-5444994796-qncbs\" (UID: \"884f5245-fc6d-42b5-83c2-e3373788e91b\") " pod="openshift-ingress/router-default-5444994796-qncbs"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.931612 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62kzl\" (UniqueName: \"kubernetes.io/projected/c9e01529-72ef-487b-ac85-e90905240355-kube-api-access-62kzl\") pod \"marketplace-operator-79b997595-v2t5z\" (UID: \"c9e01529-72ef-487b-ac85-e90905240355\") " pod="openshift-marketplace/marketplace-operator-79b997595-v2t5z"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.935352 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/39f7dec4-e247-4ccb-8c0a-b03a4de346dd-proxy-tls\") pod \"machine-config-controller-84d6567774-sbrnm\" (UID: \"39f7dec4-e247-4ccb-8c0a-b03a4de346dd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sbrnm"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.936395 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lr92r\" (UniqueName: \"kubernetes.io/projected/7d104d8e-f081-42a2-997e-4b27951d3e2c-kube-api-access-lr92r\") pod \"control-plane-machine-set-operator-78cbb6b69f-gfwsl\" (UID: \"7d104d8e-f081-42a2-997e-4b27951d3e2c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gfwsl"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.941347 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d9fce980-8342-4614-8cfe-c8757df49d74-apiservice-cert\") pod \"packageserver-d55dfcdfc-dg9bq\" (UID: \"d9fce980-8342-4614-8cfe-c8757df49d74\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dg9bq"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.942361 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9c28e05d-9fe2-414f-bae8-2a8f577af72f-trusted-ca\") pod \"ingress-operator-5b745b69d9-6njcq\" (UID: \"9c28e05d-9fe2-414f-bae8-2a8f577af72f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6njcq"
Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.928402 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fa965c86-cdca-49f6-9652-505d41e07f4e-proxy-tls\") pod \"machine-config-operator-74547568cd-bd8m7\" (UID: \"fa965c86-cdca-49f6-9652-505d41e07f4e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bd8m7"
Jan 30 16:56:45 crc kubenswrapper[4712]: E0130 16:56:45.943179 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:46.443167015 +0000 UTC m=+143.350176484 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.947048 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/68eec877-dde8-4b0b-8e78-53a70af78240-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-xq27f\" (UID: \"68eec877-dde8-4b0b-8e78-53a70af78240\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xq27f" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.957262 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fj5jz\" (UniqueName: \"kubernetes.io/projected/d9fce980-8342-4614-8cfe-c8757df49d74-kube-api-access-fj5jz\") pod \"packageserver-d55dfcdfc-dg9bq\" (UID: \"d9fce980-8342-4614-8cfe-c8757df49d74\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dg9bq" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.977242 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gj64g\" (UniqueName: \"kubernetes.io/projected/2ea3110d-1cbe-4436-bc6e-3c250fe3c4fb-kube-api-access-gj64g\") pod \"kube-storage-version-migrator-operator-b67b599dd-rwrnm\" (UID: \"2ea3110d-1cbe-4436-bc6e-3c250fe3c4fb\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rwrnm" Jan 30 16:56:45 crc kubenswrapper[4712]: I0130 16:56:45.982486 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4fqg\" (UniqueName: \"kubernetes.io/projected/68eec877-dde8-4b0b-8e78-53a70af78240-kube-api-access-z4fqg\") pod \"package-server-manager-789f6589d5-xq27f\" (UID: \"68eec877-dde8-4b0b-8e78-53a70af78240\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xq27f" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.002072 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dg9bq" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.005858 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ftzc9\" (UniqueName: \"kubernetes.io/projected/16d2b99c-7fc4-4d10-8ebc-1e726485e354-kube-api-access-ftzc9\") pod \"catalog-operator-68c6474976-swvjp\" (UID: \"16d2b99c-7fc4-4d10-8ebc-1e726485e354\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-swvjp" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.013844 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m62mz\" (UniqueName: \"kubernetes.io/projected/5080a36c-8d1f-4244-921c-314ac983a7c9-kube-api-access-m62mz\") pod \"service-ca-operator-777779d784-bkkv5\" (UID: \"5080a36c-8d1f-4244-921c-314ac983a7c9\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-bkkv5" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.014573 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bhvt\" (UniqueName: \"kubernetes.io/projected/884f5245-fc6d-42b5-83c2-e3373788e91b-kube-api-access-4bhvt\") pod \"router-default-5444994796-qncbs\" (UID: \"884f5245-fc6d-42b5-83c2-e3373788e91b\") " pod="openshift-ingress/router-default-5444994796-qncbs" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.039294 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-v2t5z" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.046630 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-glzbp"] Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.047980 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:46 crc kubenswrapper[4712]: E0130 16:56:46.048384 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:46.548361237 +0000 UTC m=+143.455370706 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.048445 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:46 crc kubenswrapper[4712]: E0130 16:56:46.048843 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:46.548834469 +0000 UTC m=+143.455843938 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.071524 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bkkv5" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.079549 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w44jl\" (UniqueName: \"kubernetes.io/projected/c9201589-e2c1-41d7-9ab6-5f3b24dc30c6-kube-api-access-w44jl\") pod \"service-ca-9c57cc56f-2b574\" (UID: \"c9201589-e2c1-41d7-9ab6-5f3b24dc30c6\") " pod="openshift-service-ca/service-ca-9c57cc56f-2b574" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.086809 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tqpc\" (UniqueName: \"kubernetes.io/projected/31abddf9-a5ab-425d-b671-d40c00ced75b-kube-api-access-8tqpc\") pod \"ingress-canary-99tzw\" (UID: \"31abddf9-a5ab-425d-b671-d40c00ced75b\") " pod="openshift-ingress-canary/ingress-canary-99tzw" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.103702 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpncz"] Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.104974 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9c28e05d-9fe2-414f-bae8-2a8f577af72f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-6njcq\" (UID: \"9c28e05d-9fe2-414f-bae8-2a8f577af72f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6njcq" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.109752 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sjbq\" (UniqueName: \"kubernetes.io/projected/bc36657e-ab97-4bc2-90a9-34134794c30b-kube-api-access-7sjbq\") pod \"migrator-59844c95c7-xjs5m\" (UID: \"bc36657e-ab97-4bc2-90a9-34134794c30b\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xjs5m" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.114507 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dh4b\" (UniqueName: \"kubernetes.io/projected/fd5b1abd-3085-42f2-94a1-a9f06129017c-kube-api-access-5dh4b\") pod \"olm-operator-6b444d44fb-8m9br\" (UID: \"fd5b1abd-3085-42f2-94a1-a9f06129017c\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8m9br" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.151520 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:46 crc kubenswrapper[4712]: E0130 16:56:46.152030 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:46.652007082 +0000 UTC m=+143.559016561 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.153207 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-99tzw" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.158369 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdgmm\" (UniqueName: \"kubernetes.io/projected/e2418db4-0c95-43a9-973e-e2b6c6170198-kube-api-access-bdgmm\") pod \"multus-admission-controller-857f4d67dd-g4p8m\" (UID: \"e2418db4-0c95-43a9-973e-e2b6c6170198\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-g4p8m" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.184681 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hpns\" (UniqueName: \"kubernetes.io/projected/d631ea54-82a0-4985-bfe7-776d4764e85e-kube-api-access-9hpns\") pod \"csi-hostpathplugin-7h5tl\" (UID: \"d631ea54-82a0-4985-bfe7-776d4764e85e\") " pod="hostpath-provisioner/csi-hostpathplugin-7h5tl" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.189364 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-njkcm" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.189689 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gx5jb" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.194143 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-qncbs" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.204227 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rwrnm" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.208482 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gfwsl" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.246625 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-swvjp" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.257758 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npnck\" (UniqueName: \"kubernetes.io/projected/9c28e05d-9fe2-414f-bae8-2a8f577af72f-kube-api-access-npnck\") pod \"ingress-operator-5b745b69d9-6njcq\" (UID: \"9c28e05d-9fe2-414f-bae8-2a8f577af72f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6njcq" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.258281 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xjs5m" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.258651 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbnqt\" (UniqueName: \"kubernetes.io/projected/5e966532-0698-481e-99b3-d1de70be4ecf-kube-api-access-sbnqt\") pod \"dns-default-8fqqs\" (UID: \"5e966532-0698-481e-99b3-d1de70be4ecf\") " pod="openshift-dns/dns-default-8fqqs" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.268902 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:46 crc kubenswrapper[4712]: E0130 16:56:46.269367 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:46.769356256 +0000 UTC m=+143.676365715 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.269519 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xq27f" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.275257 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-g4p8m" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.286232 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h69s\" (UniqueName: \"kubernetes.io/projected/4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34-kube-api-access-8h69s\") pod \"collect-profiles-29496525-j85bm\" (UID: \"4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-j85bm" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.289728 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/465741f6-d748-4a3a-8584-3aa2a50bcd7c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-z5dx7\" (UID: \"465741f6-d748-4a3a-8584-3aa2a50bcd7c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5dx7" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.293134 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nsl9\" (UniqueName: \"kubernetes.io/projected/419c8d04-c64a-4cba-b52f-2ff3c04641e0-kube-api-access-4nsl9\") pod \"machine-config-server-ptv9c\" (UID: \"419c8d04-c64a-4cba-b52f-2ff3c04641e0\") " pod="openshift-machine-config-operator/machine-config-server-ptv9c" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.300904 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89qzr\" (UniqueName: \"kubernetes.io/projected/fa965c86-cdca-49f6-9652-505d41e07f4e-kube-api-access-89qzr\") pod \"machine-config-operator-74547568cd-bd8m7\" (UID: \"fa965c86-cdca-49f6-9652-505d41e07f4e\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bd8m7" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.304735 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-2b574" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.310752 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9"] Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.320515 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8m9br" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.330322 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nc9b\" (UniqueName: \"kubernetes.io/projected/39f7dec4-e247-4ccb-8c0a-b03a4de346dd-kube-api-access-2nc9b\") pod \"machine-config-controller-84d6567774-sbrnm\" (UID: \"39f7dec4-e247-4ccb-8c0a-b03a4de346dd\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sbrnm" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.347695 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-27wq6"] Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.369654 4712 csr.go:261] certificate signing request csr-gxdww is approved, waiting to be issued Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.379943 4712 csr.go:257] certificate signing request csr-gxdww is issued Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.380228 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:46 crc kubenswrapper[4712]: E0130 16:56:46.380659 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:46.880644016 +0000 UTC m=+143.787653485 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.397111 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-7h5tl" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.421571 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.422017 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-ptv9c" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.438030 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-8fqqs" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.479947 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lnrqz"] Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.484222 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:46 crc kubenswrapper[4712]: E0130 16:56:46.484568 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:46.984555256 +0000 UTC m=+143.891564725 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.517089 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6njcq" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.543150 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sbrnm" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.545847 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5dx7" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.557904 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bd8m7" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.586016 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-j85bm" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.587099 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:46 crc kubenswrapper[4712]: E0130 16:56:46.587599 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:47.087583176 +0000 UTC m=+143.994592645 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.688449 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:46 crc kubenswrapper[4712]: E0130 16:56:46.688835 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:47.188784191 +0000 UTC m=+144.095793660 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.697476 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-gzvld" podStartSLOduration=123.69746025 podStartE2EDuration="2m3.69746025s" podCreationTimestamp="2026-01-30 16:54:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:46.696029806 +0000 UTC m=+143.603039275" watchObservedRunningTime="2026-01-30 16:56:46.69746025 +0000 UTC m=+143.604469719" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.712856 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-bkkv5"] Jan 30 16:56:46 crc kubenswrapper[4712]: W0130 16:56:46.761253 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode443a952_c1e3_42b2_8a58_f29416ff11dd.slice/crio-4a1928cc00748171773434bbca2f138903c5d0dd9ebc05ec2caa5239a26cb851 WatchSource:0}: Error finding container 4a1928cc00748171773434bbca2f138903c5d0dd9ebc05ec2caa5239a26cb851: Status 404 returned error can't find the container with id 4a1928cc00748171773434bbca2f138903c5d0dd9ebc05ec2caa5239a26cb851 Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.789246 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:46 crc kubenswrapper[4712]: E0130 16:56:46.789606 4712 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:47.289593157 +0000 UTC m=+144.196602626 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.842344 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" event={"ID":"a5836457-3db5-41ec-b036-057186d44de8","Type":"ContainerStarted","Data":"bef83ad8073315f4f58a1c3235609c28f421f51a3df0a44589419f1aeb59859f"} Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.844064 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-glzbp" event={"ID":"587ef6e0-541f-4139-be7d-d6d4a9e8244b","Type":"ContainerStarted","Data":"fa0fb541faf4c7ac0d78dbd9672706002854b025643c81b9c6bff25620a1445c"} Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.863546 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bkkv5" event={"ID":"5080a36c-8d1f-4244-921c-314ac983a7c9","Type":"ContainerStarted","Data":"e9af4acea855b3ad83dd33c023c97b71a63f6149ab5640330a1489d32c5f484a"} Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.883322 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lnrqz" event={"ID":"e443a952-c1e3-42b2-8a58-f29416ff11dd","Type":"ContainerStarted","Data":"4a1928cc00748171773434bbca2f138903c5d0dd9ebc05ec2caa5239a26cb851"} Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.887888 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qncbs" event={"ID":"884f5245-fc6d-42b5-83c2-e3373788e91b","Type":"ContainerStarted","Data":"9783c46e66ab75068f22be05d3562282d5462fbecd6f30561bc6cb10575c4682"} Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.890700 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:46 crc kubenswrapper[4712]: E0130 16:56:46.891031 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:47.391020127 +0000 UTC m=+144.298029596 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.904841 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-27wq6" event={"ID":"48626025-5e2a-47c8-b317-bcbada105e87","Type":"ContainerStarted","Data":"f8171de4b029d7d2540307fe7d87636f310eae60f4c8b74670de66256762d2ba"} Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.907979 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpncz" event={"ID":"1d5152b6-8b35-4afc-ad62-9e3d063adf4e","Type":"ContainerStarted","Data":"b968cf4be3347fb2a074af6155c12973257eaca4759039e8e6764232bb9f4fbd"} Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.908024 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpncz" event={"ID":"1d5152b6-8b35-4afc-ad62-9e3d063adf4e","Type":"ContainerStarted","Data":"b6eb079dc5f1c23b1c0c6534cc0c52dd31393770363beabc2843920d00a823b7"} Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.926942 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-99tzw"] Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.933776 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r5sv7" event={"ID":"28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24","Type":"ContainerStarted","Data":"8e40e7002296ea1d607f89d60f894d3b8090b2b402e2a81e7520882f731a5ec9"} Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.947356 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.974011 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-m96vb" Jan 30 16:56:46 crc kubenswrapper[4712]: I0130 16:56:46.993371 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:46 crc kubenswrapper[4712]: E0130 16:56:46.994650 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:47.49462213 +0000 UTC m=+144.401631679 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:47 crc kubenswrapper[4712]: W0130 16:56:47.047699 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31abddf9_a5ab_425d_b671_d40c00ced75b.slice/crio-81c875d5caf3266f40b8ac9fdaf601156f2b072c2ad08a44cb9ce82280041144 WatchSource:0}: Error finding container 81c875d5caf3266f40b8ac9fdaf601156f2b072c2ad08a44cb9ce82280041144: Status 404 returned error can't find the container with id 81c875d5caf3266f40b8ac9fdaf601156f2b072c2ad08a44cb9ce82280041144 Jan 30 16:56:47 crc kubenswrapper[4712]: I0130 16:56:47.049642 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dg9bq"] Jan 30 16:56:47 crc kubenswrapper[4712]: I0130 16:56:47.098626 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:47 crc kubenswrapper[4712]: E0130 16:56:47.098938 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:47.59892593 +0000 UTC m=+144.505935399 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:47 crc kubenswrapper[4712]: I0130 16:56:47.154136 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-56p67"] Jan 30 16:56:47 crc kubenswrapper[4712]: I0130 16:56:47.200255 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:47 crc kubenswrapper[4712]: E0130 16:56:47.200372 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:47.700351801 +0000 UTC m=+144.607361270 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:47 crc kubenswrapper[4712]: I0130 16:56:47.200865 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:47 crc kubenswrapper[4712]: E0130 16:56:47.201220 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:47.701204612 +0000 UTC m=+144.608214081 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:47 crc kubenswrapper[4712]: I0130 16:56:47.209540 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-t468b" Jan 30 16:56:47 crc kubenswrapper[4712]: I0130 16:56:47.303989 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:47 crc kubenswrapper[4712]: E0130 16:56:47.304118 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:47.804101947 +0000 UTC m=+144.711111416 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:47 crc kubenswrapper[4712]: I0130 16:56:47.305284 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:47 crc kubenswrapper[4712]: E0130 16:56:47.305659 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:47.805647606 +0000 UTC m=+144.712657065 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:47 crc kubenswrapper[4712]: I0130 16:56:47.377456 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-jx2s9" podStartSLOduration=123.37743829 podStartE2EDuration="2m3.37743829s" podCreationTimestamp="2026-01-30 16:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:47.33812564 +0000 UTC m=+144.245135109" watchObservedRunningTime="2026-01-30 16:56:47.37743829 +0000 UTC m=+144.284447759" Jan 30 16:56:47 crc kubenswrapper[4712]: I0130 16:56:47.380855 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-30 16:51:46 +0000 UTC, rotation deadline is 2026-12-18 23:49:07.462149311 +0000 UTC Jan 30 16:56:47 crc kubenswrapper[4712]: I0130 16:56:47.380893 4712 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7734h52m20.081260108s for next certificate rotation Jan 30 16:56:47 crc kubenswrapper[4712]: I0130 16:56:47.406026 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:47 crc kubenswrapper[4712]: E0130 16:56:47.406459 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:47.906444731 +0000 UTC m=+144.813454190 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:47 crc kubenswrapper[4712]: I0130 16:56:47.512353 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:47 crc kubenswrapper[4712]: E0130 16:56:47.512808 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:48.012776929 +0000 UTC m=+144.919786398 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:47 crc kubenswrapper[4712]: I0130 16:56:47.544645 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-v2t5z"] Jan 30 16:56:47 crc kubenswrapper[4712]: I0130 16:56:47.613482 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:47 crc kubenswrapper[4712]: E0130 16:56:47.614001 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:48.113986515 +0000 UTC m=+145.020995984 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:47 crc kubenswrapper[4712]: I0130 16:56:47.641334 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gx5jb"]
Jan 30 16:56:47 crc kubenswrapper[4712]: I0130 16:56:47.718189 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j"
Jan 30 16:56:47 crc kubenswrapper[4712]: E0130 16:56:47.718712 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:48.218696766 +0000 UTC m=+145.125706235 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:47 crc kubenswrapper[4712]: I0130 16:56:47.794619 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tffxt" podStartSLOduration=123.794600199 podStartE2EDuration="2m3.794600199s" podCreationTimestamp="2026-01-30 16:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:47.659091765 +0000 UTC m=+144.566101234" watchObservedRunningTime="2026-01-30 16:56:47.794600199 +0000 UTC m=+144.701609668"
Jan 30 16:56:47 crc kubenswrapper[4712]: I0130 16:56:47.821301 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:56:47 crc kubenswrapper[4712]: E0130 16:56:47.821750 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:48.321724374 +0000 UTC m=+145.228733843 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:47 crc kubenswrapper[4712]: I0130 16:56:47.924518 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j"
Jan 30 16:56:47 crc kubenswrapper[4712]: E0130 16:56:47.924812 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:48.424783415 +0000 UTC m=+145.331792884 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.007424 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" podStartSLOduration=125.007405891 podStartE2EDuration="2m5.007405891s" podCreationTimestamp="2026-01-30 16:54:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:47.920244325 +0000 UTC m=+144.827253794" watchObservedRunningTime="2026-01-30 16:56:48.007405891 +0000 UTC m=+144.914415360"
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.025077 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:56:48 crc kubenswrapper[4712]: E0130 16:56:48.025629 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:48.525615441 +0000 UTC m=+145.432624910 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.069036 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gx5jb" event={"ID":"01445471-4b9a-4180-aeff-e6eb332f974c","Type":"ContainerStarted","Data":"3648e7e6f34d871f6dff4ba0d634bda55f644c3b2ba39aa24bdc8e5ca85fae7d"}
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.069083 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-njkcm"]
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.070071 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dg9bq" event={"ID":"d9fce980-8342-4614-8cfe-c8757df49d74","Type":"ContainerStarted","Data":"480b3b4925fcd16a800234c3a5c41abde54bd2a1d5feaf120f78deb8d4ceb84a"}
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.070121 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dg9bq" event={"ID":"d9fce980-8342-4614-8cfe-c8757df49d74","Type":"ContainerStarted","Data":"dd3dc8b60cad713ce94638e18c948fa9179d18d0f240d685a3cff90bbacd74e8"}
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.071256 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dg9bq"
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.072272 4712 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-dg9bq container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused" start-of-body=
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.072298 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dg9bq" podUID="d9fce980-8342-4614-8cfe-c8757df49d74" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused"
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.074627 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-t468b" podStartSLOduration=124.074617185 podStartE2EDuration="2m4.074617185s" podCreationTimestamp="2026-01-30 16:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:48.010608378 +0000 UTC m=+144.917617847" watchObservedRunningTime="2026-01-30 16:56:48.074617185 +0000 UTC m=+144.981626654"
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.088508 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-56p67" event={"ID":"01eaec98-2b0a-46a5-a9fe-d2a01d486723","Type":"ContainerStarted","Data":"1090096a36ee91c9a921fe5c0f89afe2a24f2e23af84db5ecd86e8861f476aba"}
Jan 30 16:56:48 crc kubenswrapper[4712]: W0130 16:56:48.094568 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8d25e57_f72a_43c8_a3ce_892bd95e3493.slice/crio-6b490c56d09a1d6d185ce5a41b9d58d754279f1e616ff1bc6a82ef16383d0230 WatchSource:0}: Error finding container 6b490c56d09a1d6d185ce5a41b9d58d754279f1e616ff1bc6a82ef16383d0230: Status 404 returned error can't find the container with id 6b490c56d09a1d6d185ce5a41b9d58d754279f1e616ff1bc6a82ef16383d0230
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.126527 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j"
Jan 30 16:56:48 crc kubenswrapper[4712]: E0130 16:56:48.126960 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:48.626946319 +0000 UTC m=+145.533955798 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.135460 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-glzbp" event={"ID":"587ef6e0-541f-4139-be7d-d6d4a9e8244b","Type":"ContainerStarted","Data":"48f6a5c76663631b47405fa602405e8efff0f4f0d77d075539a2c75a4ab50d61"}
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.135997 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-5xwgj" podStartSLOduration=124.135983818 podStartE2EDuration="2m4.135983818s" podCreationTimestamp="2026-01-30 16:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:48.103632355 +0000 UTC m=+145.010641834" watchObservedRunningTime="2026-01-30 16:56:48.135983818 +0000 UTC m=+145.042993287"
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.137378 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-4wk6n" podStartSLOduration=125.137370201 podStartE2EDuration="2m5.137370201s" podCreationTimestamp="2026-01-30 16:54:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:48.082575697 +0000 UTC m=+144.989585156" watchObservedRunningTime="2026-01-30 16:56:48.137370201 +0000 UTC m=+145.044379670"
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.162201 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-tpncz" podStartSLOduration=124.16218436 podStartE2EDuration="2m4.16218436s" podCreationTimestamp="2026-01-30 16:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:48.159750512 +0000 UTC m=+145.066759981" watchObservedRunningTime="2026-01-30 16:56:48.16218436 +0000 UTC m=+145.069193829"
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.180137 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-ptv9c" event={"ID":"419c8d04-c64a-4cba-b52f-2ff3c04641e0","Type":"ContainerStarted","Data":"6f3e0c8357d0c262fe090d4eaa74e9b141330a5444059cfcc9fe4dec927bfa50"}
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.181986 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qncbs" event={"ID":"884f5245-fc6d-42b5-83c2-e3373788e91b","Type":"ContainerStarted","Data":"a5b7c3b62998a91649a4ae0c03d3b15baf9f58d81c2d2c8b873de9cf81369dfb"}
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.195300 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-qncbs"
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.197809 4712 patch_prober.go:28] interesting pod/router-default-5444994796-qncbs container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.197851 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qncbs" podUID="884f5245-fc6d-42b5-83c2-e3373788e91b" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.207395 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-99tzw" event={"ID":"31abddf9-a5ab-425d-b671-d40c00ced75b","Type":"ContainerStarted","Data":"d938bddcf4df550f9c5e0fe4a8eff185788efc236e547a5e56389600ce82b157"}
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.207658 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-99tzw" event={"ID":"31abddf9-a5ab-425d-b671-d40c00ced75b","Type":"ContainerStarted","Data":"81c875d5caf3266f40b8ac9fdaf601156f2b072c2ad08a44cb9ce82280041144"}
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.229144 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:56:48 crc kubenswrapper[4712]: E0130 16:56:48.229472 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:48.729452175 +0000 UTC m=+145.636461644 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.229588 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j"
Jan 30 16:56:48 crc kubenswrapper[4712]: E0130 16:56:48.230327 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:48.730317177 +0000 UTC m=+145.637326746 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.249750 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lnrqz" event={"ID":"e443a952-c1e3-42b2-8a58-f29416ff11dd","Type":"ContainerStarted","Data":"52ba2380a72c957e5b7f098f89e4f2dab4ab2a8f845aa03051904092ac401919"}
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.269466 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8gsmk" podStartSLOduration=124.269453112 podStartE2EDuration="2m4.269453112s" podCreationTimestamp="2026-01-30 16:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:48.267364621 +0000 UTC m=+145.174374090" watchObservedRunningTime="2026-01-30 16:56:48.269453112 +0000 UTC m=+145.176462581"
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.280277 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-v2t5z" event={"ID":"c9e01529-72ef-487b-ac85-e90905240355","Type":"ContainerStarted","Data":"7d83fbc8ed27c1615dd107e5c67678f7e0f68d852ec17bc17af351848200b3ed"}
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.334385 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:56:48 crc kubenswrapper[4712]: E0130 16:56:48.336443 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:48.836422021 +0000 UTC m=+145.743431490 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.436675 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j"
Jan 30 16:56:48 crc kubenswrapper[4712]: E0130 16:56:48.450051 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:48.950036185 +0000 UTC m=+145.857045654 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.456879 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r5sv7" podStartSLOduration=124.456865141 podStartE2EDuration="2m4.456865141s" podCreationTimestamp="2026-01-30 16:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:48.455027815 +0000 UTC m=+145.362037284" watchObservedRunningTime="2026-01-30 16:56:48.456865141 +0000 UTC m=+145.363874610"
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.515316 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-dktxv" podStartSLOduration=125.51529635200001 podStartE2EDuration="2m5.515296352s" podCreationTimestamp="2026-01-30 16:54:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:48.513765825 +0000 UTC m=+145.420775294" watchObservedRunningTime="2026-01-30 16:56:48.515296352 +0000 UTC m=+145.422305821"
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.538615 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:56:48 crc kubenswrapper[4712]: E0130 16:56:48.539404 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:49.039384575 +0000 UTC m=+145.946394044 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.569348 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-2b574"]
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.639903 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j"
Jan 30 16:56:48 crc kubenswrapper[4712]: E0130 16:56:48.640923 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:49.140905617 +0000 UTC m=+146.047915086 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.668578 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-8fqqs"]
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.690237 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-m96vb" podStartSLOduration=124.690220758 podStartE2EDuration="2m4.690220758s" podCreationTimestamp="2026-01-30 16:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:48.66832023 +0000 UTC m=+145.575329699" watchObservedRunningTime="2026-01-30 16:56:48.690220758 +0000 UTC m=+145.597230227"
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.696611 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8m9br"]
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.744868 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:56:48 crc kubenswrapper[4712]: E0130 16:56:48.745232 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:49.245217358 +0000 UTC m=+146.152226827 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.800624 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-swvjp"]
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.812046 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rwrnm"]
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.827960 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xq27f"]
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.838069 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r5sv7"
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.838365 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r5sv7"
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.848420 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j"
Jan 30 16:56:48 crc kubenswrapper[4712]: E0130 16:56:48.848686 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:49.348676048 +0000 UTC m=+146.255685517 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:48 crc kubenswrapper[4712]: W0130 16:56:48.858203 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2ea3110d_1cbe_4436_bc6e_3c250fe3c4fb.slice/crio-72621a4448b91a2c1ab161694e6a7d10b08c6e5f502ffdc5d641fd34dd742b7a WatchSource:0}: Error finding container 72621a4448b91a2c1ab161694e6a7d10b08c6e5f502ffdc5d641fd34dd742b7a: Status 404 returned error can't find the container with id 72621a4448b91a2c1ab161694e6a7d10b08c6e5f502ffdc5d641fd34dd742b7a
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.862063 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-g4p8m"]
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.884024 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r5sv7"
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.937965 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-59crs" podStartSLOduration=124.937951304 podStartE2EDuration="2m4.937951304s" podCreationTimestamp="2026-01-30 16:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:48.935527666 +0000 UTC m=+145.842537135" watchObservedRunningTime="2026-01-30 16:56:48.937951304 +0000 UTC m=+145.844960773"
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.950543 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:56:48 crc kubenswrapper[4712]: E0130 16:56:48.950938 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:49.450925698 +0000 UTC m=+146.357935167 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:48 crc kubenswrapper[4712]: I0130 16:56:48.957025 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-7h5tl"]
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.008929 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gfwsl"]
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.034667 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-qncbs" podStartSLOduration=125.03464247 podStartE2EDuration="2m5.03464247s" podCreationTimestamp="2026-01-30 16:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:49.022308023 +0000 UTC m=+145.929317502" watchObservedRunningTime="2026-01-30 16:56:49.03464247 +0000 UTC m=+145.941651939"
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.058869 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j"
Jan 30 16:56:49 crc kubenswrapper[4712]: E0130 16:56:49.059259 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:49.559232455 +0000 UTC m=+146.466241924 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.068733 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-lnrqz" podStartSLOduration=125.068716984 podStartE2EDuration="2m5.068716984s" podCreationTimestamp="2026-01-30 16:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:49.056237852 +0000 UTC m=+145.963247321" watchObservedRunningTime="2026-01-30 16:56:49.068716984 +0000 UTC m=+145.975726453"
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.069145 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-sbrnm"]
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.114532 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bkkv5" podStartSLOduration=125.11451371 podStartE2EDuration="2m5.11451371s" podCreationTimestamp="2026-01-30 16:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:49.108178517 +0000 UTC m=+146.015187986" watchObservedRunningTime="2026-01-30 16:56:49.11451371 +0000 UTC m=+146.021523179"
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.133908 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-6njcq"]
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.160455 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:56:49 crc kubenswrapper[4712]: E0130 16:56:49.160947 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:49.660928992 +0000 UTC m=+146.567938461 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.204438 4712 patch_prober.go:28] interesting pod/router-default-5444994796-qncbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 16:56:49 crc kubenswrapper[4712]: [-]has-synced failed: reason withheld
Jan 30 16:56:49 crc kubenswrapper[4712]: [+]process-running ok
Jan 30 16:56:49 crc kubenswrapper[4712]: healthz check failed
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.204504 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qncbs" podUID="884f5245-fc6d-42b5-83c2-e3373788e91b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.222584 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496525-j85bm"]
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.238455 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-ptv9c" podStartSLOduration=6.238442844 podStartE2EDuration="6.238442844s" podCreationTimestamp="2026-01-30 16:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:49.237264616 +0000 UTC m=+146.144274085" watchObservedRunningTime="2026-01-30 16:56:49.238442844 +0000 UTC m=+146.145452313"
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.282112 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j"
Jan 30 16:56:49 crc kubenswrapper[4712]: E0130 16:56:49.282426 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:49.782411026 +0000 UTC m=+146.689420495 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.286103 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-xjs5m"]
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.331123 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5dx7"]
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.365210 4712 generic.go:334] "Generic (PLEG): container finished" podID="01eaec98-2b0a-46a5-a9fe-d2a01d486723" containerID="0adefdbc75c0434f3410c8312b21e31828c0f75777084bda8c9d5cd24b12cc34" exitCode=0
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.365586 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-56p67" event={"ID":"01eaec98-2b0a-46a5-a9fe-d2a01d486723","Type":"ContainerDied","Data":"0adefdbc75c0434f3410c8312b21e31828c0f75777084bda8c9d5cd24b12cc34"}
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.397910 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-99tzw" podStartSLOduration=6.397891017 podStartE2EDuration="6.397891017s" podCreationTimestamp="2026-01-30 16:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:49.397271142 +0000 UTC m=+146.304280621" watchObservedRunningTime="2026-01-30 16:56:49.397891017 +0000 UTC m=+146.304900506"
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.398641 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:56:49 crc kubenswrapper[4712]: E0130 16:56:49.399013 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:49.898998844 +0000 UTC m=+146.806008313 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.399655 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7h5tl" event={"ID":"d631ea54-82a0-4985-bfe7-776d4764e85e","Type":"ContainerStarted","Data":"4989770b1418b5b58ea34371cc3e07145b9b16dc3ea073da31a8cf3515081e7f"}
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.415363 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-bkkv5" event={"ID":"5080a36c-8d1f-4244-921c-314ac983a7c9","Type":"ContainerStarted","Data":"976e465e6facd99e4c132b4b753cc654b41db45e02cdd24059ee1f53d72e58d1"}
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.426776 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-bd8m7"]
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.427090 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6njcq" event={"ID":"9c28e05d-9fe2-414f-bae8-2a8f577af72f","Type":"ContainerStarted","Data":"7eda46a251bcc72b3910e6e3aecd0bafdd20544f39c748f53b141aa4786019fc"}
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.459105 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-ptv9c" event={"ID":"419c8d04-c64a-4cba-b52f-2ff3c04641e0","Type":"ContainerStarted","Data":"bb62233804d33b4ac983e4a5497f0df6bc526562a097e796a383de3243af64cf"}
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.487043 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-v2t5z" event={"ID":"c9e01529-72ef-487b-ac85-e90905240355","Type":"ContainerStarted","Data":"2a2bd34f12cd978dc1ac6c6ed2d453d30a8e9b069efc0b279bf1d2e70cc0247d"}
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.487663 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-v2t5z"
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.501062 4712 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-v2t5z container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body=
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.501138 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-v2t5z" podUID="c9e01529-72ef-487b-ac85-e90905240355" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused"
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.501420 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j"
Jan 30 16:56:49 crc kubenswrapper[4712]: E0130 16:56:49.502087 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:50.002070864 +0000 UTC m=+146.909080333 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.536551 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-glzbp" event={"ID":"587ef6e0-541f-4139-be7d-d6d4a9e8244b","Type":"ContainerStarted","Data":"a427a9dedcfab80b5d4092cd759cce6cb03d6586fe16e6f6f98b6611e31cdead"}
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.558158 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dg9bq" podStartSLOduration=125.558142879 podStartE2EDuration="2m5.558142879s" podCreationTimestamp="2026-01-30 16:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:49.474672082 +0000 UTC m=+146.381681551" watchObservedRunningTime="2026-01-30 16:56:49.558142879 +0000 UTC m=+146.465152348"
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.584646 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8m9br" event={"ID":"fd5b1abd-3085-42f2-94a1-a9f06129017c","Type":"ContainerStarted","Data":"faec40c724c3a7c23c63e8dbe05174f9c5993d635a55664f8a413b878e622bde"}
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.584698 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8m9br" event={"ID":"fd5b1abd-3085-42f2-94a1-a9f06129017c","Type":"ContainerStarted","Data":"dfc9e8da0b55321553225d2f8825c674b5f9fca792c99e0a6eac0697c552ae63"}
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.585232 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8m9br"
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.595057 4712 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-8m9br container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body=
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.595092 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8m9br" podUID="fd5b1abd-3085-42f2-94a1-a9f06129017c" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused"
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.601267 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-27wq6" event={"ID":"48626025-5e2a-47c8-b317-bcbada105e87","Type":"ContainerStarted","Data":"4b1b72476b9e51b2129fe5dc1b953fd12a6e7bd7ae8b55ec86e9151d98b57eaf"}
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.602081 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-27wq6"
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.602246 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:56:49 crc kubenswrapper[4712]: E0130 16:56:49.603377 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:50.103357522 +0000 UTC m=+147.010367021 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.611036 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-j85bm" event={"ID":"4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34","Type":"ContainerStarted","Data":"9775540a8c538d98bf635a433c1532cab51b9708e3a5c7c3828b6195788c1b4c"}
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.620997 4712 patch_prober.go:28] interesting pod/downloads-7954f5f757-27wq6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body=
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.621061 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-27wq6" podUID="48626025-5e2a-47c8-b317-bcbada105e87" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused"
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.621777 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-njkcm" event={"ID":"a8d25e57-f72a-43c8-a3ce-892bd95e3493","Type":"ContainerStarted","Data":"6d444ea639774ce3028b2d219c0a447aad2c579159b4477733f479ff3b464654"}
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.621854 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-njkcm" event={"ID":"a8d25e57-f72a-43c8-a3ce-892bd95e3493","Type":"ContainerStarted","Data":"6b490c56d09a1d6d185ce5a41b9d58d754279f1e616ff1bc6a82ef16383d0230"}
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.660197 4712 generic.go:334] "Generic (PLEG): container finished" podID="a5836457-3db5-41ec-b036-057186d44de8" containerID="96ff5b13fff6cf8186f288e0f6c4a9bf42c7c7ef3c4a61351513d993e505021b" exitCode=0
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.660530 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" event={"ID":"a5836457-3db5-41ec-b036-057186d44de8","Type":"ContainerDied","Data":"96ff5b13fff6cf8186f288e0f6c4a9bf42c7c7ef3c4a61351513d993e505021b"}
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.671724 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-glzbp" podStartSLOduration=125.671706452 podStartE2EDuration="2m5.671706452s" podCreationTimestamp="2026-01-30 16:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:49.671445136 +0000 UTC m=+146.578454605" watchObservedRunningTime="2026-01-30 16:56:49.671706452 +0000 UTC m=+146.578715921"
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.672609 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-v2t5z" podStartSLOduration=125.672602805 podStartE2EDuration="2m5.672602805s" podCreationTimestamp="2026-01-30 16:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:49.642364664 +0000 UTC m=+146.549374143" watchObservedRunningTime="2026-01-30 16:56:49.672602805 +0000 UTC m=+146.579612274"
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.686035 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-swvjp" event={"ID":"16d2b99c-7fc4-4d10-8ebc-1e726485e354","Type":"ContainerStarted","Data":"73f2a5f85ffad88cf3bac408f1bb5fe7bcda9d2045c1fae22fc10857c30c44d4"}
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.704308 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j"
Jan 30 16:56:49 crc kubenswrapper[4712]: E0130 16:56:49.705697 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:50.205686234 +0000 UTC m=+147.112695703 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.747140 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gx5jb" event={"ID":"01445471-4b9a-4180-aeff-e6eb332f974c","Type":"ContainerStarted","Data":"0c06c19026002c475dd10c9b764e74c9f8d2f3a1f522c91eba0d5ac61cc74749"}
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.771346 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-27wq6" podStartSLOduration=125.77133128 podStartE2EDuration="2m5.77133128s" podCreationTimestamp="2026-01-30 16:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:49.770311896 +0000 UTC m=+146.677321365" watchObservedRunningTime="2026-01-30 16:56:49.77133128 +0000 UTC m=+146.678340749"
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.788585 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-2b574" event={"ID":"c9201589-e2c1-41d7-9ab6-5f3b24dc30c6","Type":"ContainerStarted","Data":"abdb287a67c80c3c3a566f171d64ab4c0248218057f5947ee7ea76e17c777e2e"}
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.788627 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-2b574" event={"ID":"c9201589-e2c1-41d7-9ab6-5f3b24dc30c6","Type":"ContainerStarted","Data":"9bbba427755cb5eda64b1b087a2a8f794aed4e2c642e38ea6771fe6d6cfff0ec"}
Jan 30 16:56:49 crc kubenswrapper[4712]: E0130 16:56:49.806048 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:50.306033298 +0000 UTC m=+147.213042767 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.806095 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.807628 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j"
Jan 30 16:56:49 crc kubenswrapper[4712]: E0130 16:56:49.807873 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:50.307865413 +0000 UTC m=+147.214874872 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.837566 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-njkcm" podStartSLOduration=125.837547 podStartE2EDuration="2m5.837547s" podCreationTimestamp="2026-01-30 16:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:49.832224431 +0000 UTC m=+146.739233900" watchObservedRunningTime="2026-01-30 16:56:49.837547 +0000 UTC m=+146.744556469"
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.856290 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rwrnm" event={"ID":"2ea3110d-1cbe-4436-bc6e-3c250fe3c4fb","Type":"ContainerStarted","Data":"72621a4448b91a2c1ab161694e6a7d10b08c6e5f502ffdc5d641fd34dd742b7a"}
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.856329 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8fqqs" event={"ID":"5e966532-0698-481e-99b3-d1de70be4ecf","Type":"ContainerStarted","Data":"2acc6343ecc2ff6a3fecf2be259da84140f8e7cb5211d01dadc5adbf3c104179"}
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.886600 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gfwsl" event={"ID":"7d104d8e-f081-42a2-997e-4b27951d3e2c","Type":"ContainerStarted","Data":"2dee635aaca063e8ea7a1db7ac4099034aeeb376b41fd6b111fcb477a1820a80"}
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.908632 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:56:49 crc kubenswrapper[4712]: E0130 16:56:49.910054 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:50.410038902 +0000 UTC m=+147.317048361 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.913833 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xq27f" event={"ID":"68eec877-dde8-4b0b-8e78-53a70af78240","Type":"ContainerStarted","Data":"837b6c4f36539d4f546899b613c7504a1df02977f7e5d3b14d7b2648a187da44"}
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.932453 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-g4p8m" event={"ID":"e2418db4-0c95-43a9-973e-e2b6c6170198","Type":"ContainerStarted","Data":"9deaec51b4775740aed1f506bd7c499ba5845ef31e13e7892c531fd5ab2a2c2c"}
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.935023 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sbrnm" event={"ID":"39f7dec4-e247-4ccb-8c0a-b03a4de346dd","Type":"ContainerStarted","Data":"39bc502b64394e87a58af057b135d59e75c1f51f3ed77c30efcb8b4482c7b95c"}
Jan 30 16:56:49 crc kubenswrapper[4712]: I0130 16:56:49.978541 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r5sv7"
Jan 30 16:56:50 crc kubenswrapper[4712]: I0130 16:56:50.010661 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j"
Jan 30 16:56:50 crc kubenswrapper[4712]: E0130 16:56:50.011056 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:50.511041122 +0000 UTC m=+147.418050591 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:50 crc kubenswrapper[4712]: I0130 16:56:50.080278 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8m9br" podStartSLOduration=126.080256014 podStartE2EDuration="2m6.080256014s" podCreationTimestamp="2026-01-30 16:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:49.924151243 +0000 UTC m=+146.831160712" watchObservedRunningTime="2026-01-30 16:56:50.080256014 +0000 UTC m=+146.987265483"
Jan 30 16:56:50 crc kubenswrapper[4712]: I0130 16:56:50.114880 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:56:50 crc kubenswrapper[4712]: E0130 16:56:50.115215 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:50.615190798 +0000 UTC m=+147.522200267 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:50 crc kubenswrapper[4712]: I0130 16:56:50.115951 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j"
Jan 30 16:56:50 crc kubenswrapper[4712]: E0130 16:56:50.121250 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:50.621230364 +0000 UTC m=+147.528239833 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:50 crc kubenswrapper[4712]: I0130 16:56:50.179177 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-2b574" podStartSLOduration=126.179157574 podStartE2EDuration="2m6.179157574s" podCreationTimestamp="2026-01-30 16:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:50.114440781 +0000 UTC m=+147.021450250" watchObservedRunningTime="2026-01-30 16:56:50.179157574 +0000 UTC m=+147.086167033" Jan 30 16:56:50 crc kubenswrapper[4712]: I0130 16:56:50.221460 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:50 crc kubenswrapper[4712]: E0130 16:56:50.221856 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:50.721838155 +0000 UTC m=+147.628847624 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:50 crc kubenswrapper[4712]: I0130 16:56:50.222070 4712 patch_prober.go:28] interesting pod/router-default-5444994796-qncbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:56:50 crc kubenswrapper[4712]: [-]has-synced failed: reason withheld Jan 30 16:56:50 crc kubenswrapper[4712]: [+]process-running ok Jan 30 16:56:50 crc kubenswrapper[4712]: healthz check failed Jan 30 16:56:50 crc kubenswrapper[4712]: I0130 16:56:50.222124 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qncbs" podUID="884f5245-fc6d-42b5-83c2-e3373788e91b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:56:50 crc kubenswrapper[4712]: I0130 16:56:50.239175 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-gx5jb" podStartSLOduration=126.239157064 podStartE2EDuration="2m6.239157064s" podCreationTimestamp="2026-01-30 16:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:50.230611617 +0000 UTC m=+147.137621086" watchObservedRunningTime="2026-01-30 16:56:50.239157064 +0000 UTC m=+147.146166533" Jan 30 16:56:50 crc kubenswrapper[4712]: I0130 16:56:50.322646 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:50 crc kubenswrapper[4712]: E0130 16:56:50.322980 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:50.822965979 +0000 UTC m=+147.729975438 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:50 crc kubenswrapper[4712]: I0130 16:56:50.423738 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:50 crc kubenswrapper[4712]: E0130 16:56:50.424107 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:50.924093172 +0000 UTC m=+147.831102641 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:50 crc kubenswrapper[4712]: I0130 16:56:50.528354 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:50 crc kubenswrapper[4712]: E0130 16:56:50.528682 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:51.028671058 +0000 UTC m=+147.935680527 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:50 crc kubenswrapper[4712]: I0130 16:56:50.630169 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:50 crc kubenswrapper[4712]: E0130 16:56:50.633630 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:51.133588593 +0000 UTC m=+148.040598062 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:50 crc kubenswrapper[4712]: I0130 16:56:50.734449 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:50 crc kubenswrapper[4712]: E0130 16:56:50.734913 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:51.234896672 +0000 UTC m=+148.141906141 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:50 crc kubenswrapper[4712]: I0130 16:56:50.835865 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:50 crc kubenswrapper[4712]: E0130 16:56:50.836177 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:51.336162728 +0000 UTC m=+148.243172197 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:50 crc kubenswrapper[4712]: I0130 16:56:50.937712 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:50 crc kubenswrapper[4712]: E0130 16:56:50.938127 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:51.438111782 +0000 UTC m=+148.345121251 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:50 crc kubenswrapper[4712]: I0130 16:56:50.941871 4712 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-dg9bq container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 16:56:50 crc kubenswrapper[4712]: I0130 16:56:50.941927 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dg9bq" podUID="d9fce980-8342-4614-8cfe-c8757df49d74" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.30:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 16:56:50 crc kubenswrapper[4712]: I0130 16:56:50.957588 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rwrnm" event={"ID":"2ea3110d-1cbe-4436-bc6e-3c250fe3c4fb","Type":"ContainerStarted","Data":"dd3ed19a19be7b743f2b6fc4a67ff408ff38c8de297642e01868fb8f82a67334"} Jan 30 16:56:50 crc kubenswrapper[4712]: I0130 16:56:50.976448 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8fqqs" event={"ID":"5e966532-0698-481e-99b3-d1de70be4ecf","Type":"ContainerStarted","Data":"be2e2307153e42dac32b15c6a21e0c2f3cc7e74111cc6c6bcad7e80169795eac"} Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:50.999625 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gfwsl" event={"ID":"7d104d8e-f081-42a2-997e-4b27951d3e2c","Type":"ContainerStarted","Data":"c1e0d04d13fbb78789bd94f6d57c277086051ae5d067ca7734d1f4f8804fe40a"} Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.006442 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-swvjp" event={"ID":"16d2b99c-7fc4-4d10-8ebc-1e726485e354","Type":"ContainerStarted","Data":"eed8fbb470f2bafaa86c95e930596c5285808de3cb65807bebf006a35990fc4b"} Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.007458 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-swvjp" Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.009889 4712 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-swvjp container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.009940 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-swvjp" podUID="16d2b99c-7fc4-4d10-8ebc-1e726485e354" containerName="catalog-operator" 
probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.017688 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xjs5m" event={"ID":"bc36657e-ab97-4bc2-90a9-34134794c30b","Type":"ContainerStarted","Data":"b52921187faf509d5f7004027ad489e4a5528c676bd5465a6a47a69d30048520"} Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.017733 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xjs5m" event={"ID":"bc36657e-ab97-4bc2-90a9-34134794c30b","Type":"ContainerStarted","Data":"738a0c465c7681d3117e081e246d5fb1c67929928afeddc69cb57e3cefd35590"} Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.021276 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xq27f" event={"ID":"68eec877-dde8-4b0b-8e78-53a70af78240","Type":"ContainerStarted","Data":"a904ed2fe1ae04cadf7fa1f248e3323e8e2565c17ed61206b42bc88a45278034"} Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.021302 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xq27f" event={"ID":"68eec877-dde8-4b0b-8e78-53a70af78240","Type":"ContainerStarted","Data":"c82c8c7de5f15c77e6520fefd0d2dd25d19a83b49e20967d5cfea894b741aff5"} Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.021819 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xq27f" Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.026862 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-g4p8m" event={"ID":"e2418db4-0c95-43a9-973e-e2b6c6170198","Type":"ContainerStarted","Data":"ed2c06f694af82342ad61ba63c0e300095fe490554237a8a75c588a7cba6a2ab"} Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.032407 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-rwrnm" podStartSLOduration=127.032391169 podStartE2EDuration="2m7.032391169s" podCreationTimestamp="2026-01-30 16:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:51.006826572 +0000 UTC m=+147.913836041" watchObservedRunningTime="2026-01-30 16:56:51.032391169 +0000 UTC m=+147.939400638" Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.033672 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-gfwsl" podStartSLOduration=127.03366562 podStartE2EDuration="2m7.03366562s" podCreationTimestamp="2026-01-30 16:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:51.032028051 +0000 UTC m=+147.939037520" watchObservedRunningTime="2026-01-30 16:56:51.03366562 +0000 UTC m=+147.940675089" Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.038441 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:51 crc kubenswrapper[4712]: E0130 16:56:51.039392 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:51.539377978 +0000 UTC m=+148.446387447 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.058478 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" event={"ID":"a5836457-3db5-41ec-b036-057186d44de8","Type":"ContainerStarted","Data":"2e80d1cd02950c7d480bad14a1a609a4d2ac4caf1c989f6682a73e80934209f5"} Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.059045 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.071381 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bd8m7" event={"ID":"fa965c86-cdca-49f6-9652-505d41e07f4e","Type":"ContainerStarted","Data":"dff24556400ba68e118e330eb35ed5555bf7c5085e61569134acfe729d55f69c"} Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.071426 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bd8m7" event={"ID":"fa965c86-cdca-49f6-9652-505d41e07f4e","Type":"ContainerStarted","Data":"69acade24cde0adf1cf80e2a04600ca93e6cb3b4540722bbe6e6f805a49fd325"} Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.099760 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-j85bm" event={"ID":"4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34","Type":"ContainerStarted","Data":"e08e7e3b6c7048825af07cabe437defaa33ec9144b1d809c042d234f5077e3d3"} Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.113322 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" podStartSLOduration=127.113305895 podStartE2EDuration="2m7.113305895s" podCreationTimestamp="2026-01-30 16:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:51.112568337 +0000 UTC m=+148.019577806" watchObservedRunningTime="2026-01-30 16:56:51.113305895 +0000 UTC m=+148.020315364" Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.113486 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-swvjp" podStartSLOduration=127.113481829 podStartE2EDuration="2m7.113481829s" 
podCreationTimestamp="2026-01-30 16:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:51.08951688 +0000 UTC m=+147.996526349" watchObservedRunningTime="2026-01-30 16:56:51.113481829 +0000 UTC m=+148.020491298" Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.128293 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6njcq" event={"ID":"9c28e05d-9fe2-414f-bae8-2a8f577af72f","Type":"ContainerStarted","Data":"1b7b1921f436f6733214c53d270efea40a31d3735df876d901fdaa1f878a14c1"} Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.133725 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sbrnm" event={"ID":"39f7dec4-e247-4ccb-8c0a-b03a4de346dd","Type":"ContainerStarted","Data":"c41a8cfd79775c6e10e5a15f01ed66f231103138e2c6c880692323313ac755a2"} Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.144614 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.147343 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xq27f" podStartSLOduration=127.147328886 podStartE2EDuration="2m7.147328886s" podCreationTimestamp="2026-01-30 16:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:51.146401085 +0000 UTC m=+148.053410554" watchObservedRunningTime="2026-01-30 16:56:51.147328886 +0000 UTC m=+148.054338355" Jan 30 16:56:51 crc kubenswrapper[4712]: E0130 16:56:51.148174 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:51.648159077 +0000 UTC m=+148.555168546 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.166556 4712 patch_prober.go:28] interesting pod/downloads-7954f5f757-27wq6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.166640 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-27wq6" podUID="48626025-5e2a-47c8-b317-bcbada105e87" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.166818 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5dx7" event={"ID":"465741f6-d748-4a3a-8584-3aa2a50bcd7c","Type":"ContainerStarted","Data":"0880c8283d785c8e77f89d706f18260b937aedf71c5ef6ddf0169831bbea7ed6"} Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.167351 4712 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-v2t5z container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body= Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.167380 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-v2t5z" podUID="c9e01529-72ef-487b-ac85-e90905240355" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.167742 4712 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-8m9br container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.167762 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8m9br" podUID="fd5b1abd-3085-42f2-94a1-a9f06129017c" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.218757 4712 patch_prober.go:28] interesting pod/router-default-5444994796-qncbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:56:51 crc kubenswrapper[4712]: [-]has-synced failed: reason withheld Jan 30 16:56:51 crc kubenswrapper[4712]: [+]process-running ok Jan 30 16:56:51 crc kubenswrapper[4712]: healthz check failed Jan 30 16:56:51 crc 
kubenswrapper[4712]: I0130 16:56:51.218836 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qncbs" podUID="884f5245-fc6d-42b5-83c2-e3373788e91b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.226350 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sbrnm" podStartSLOduration=127.226333016 podStartE2EDuration="2m7.226333016s" podCreationTimestamp="2026-01-30 16:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:51.225220009 +0000 UTC m=+148.132229498" watchObservedRunningTime="2026-01-30 16:56:51.226333016 +0000 UTC m=+148.133342485" Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.250846 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:51 crc kubenswrapper[4712]: E0130 16:56:51.252610 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:51.75259274 +0000 UTC m=+148.659602209 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.355003 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6njcq" podStartSLOduration=127.354988054 podStartE2EDuration="2m7.354988054s" podCreationTimestamp="2026-01-30 16:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:51.28775365 +0000 UTC m=+148.194763119" watchObservedRunningTime="2026-01-30 16:56:51.354988054 +0000 UTC m=+148.261997523" Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.356943 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.357007 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: 
\"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.357035 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:56:51 crc kubenswrapper[4712]: E0130 16:56:51.357269 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:51.857259489 +0000 UTC m=+148.764268958 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.357414 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.358032 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.363385 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.364896 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.366504 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:56:51 crc kubenswrapper[4712]: 
I0130 16:56:51.391055 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.408147 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5dx7" podStartSLOduration=127.408123098 podStartE2EDuration="2m7.408123098s" podCreationTimestamp="2026-01-30 16:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:51.396546658 +0000 UTC m=+148.303556127" watchObservedRunningTime="2026-01-30 16:56:51.408123098 +0000 UTC m=+148.315132567" Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.409277 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-j85bm" podStartSLOduration=128.409269555 podStartE2EDuration="2m8.409269555s" podCreationTimestamp="2026-01-30 16:54:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:51.356244155 +0000 UTC m=+148.263253624" watchObservedRunningTime="2026-01-30 16:56:51.409269555 +0000 UTC m=+148.316279024" Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.459187 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:51 crc kubenswrapper[4712]: E0130 16:56:51.459472 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:51.959457228 +0000 UTC m=+148.866466697 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.562717 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:51 crc kubenswrapper[4712]: E0130 16:56:51.563163 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 16:56:52.063146443 +0000 UTC m=+148.970155912 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.620733 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.634524 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.648944 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.664600 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:51 crc kubenswrapper[4712]: E0130 16:56:51.664965 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:52.164950343 +0000 UTC m=+149.071959812 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.766463 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:51 crc kubenswrapper[4712]: E0130 16:56:51.766818 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:52.266786894 +0000 UTC m=+149.173796363 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.867646 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:51 crc kubenswrapper[4712]: E0130 16:56:51.868029 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:52.368014449 +0000 UTC m=+149.275023918 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:51 crc kubenswrapper[4712]: I0130 16:56:51.968843 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:51 crc kubenswrapper[4712]: E0130 16:56:51.969273 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:52.469256125 +0000 UTC m=+149.376265594 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:52 crc kubenswrapper[4712]: I0130 16:56:52.072503 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:52 crc kubenswrapper[4712]: E0130 16:56:52.072631 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:52.572609333 +0000 UTC m=+149.479618802 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:52 crc kubenswrapper[4712]: I0130 16:56:52.072841 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:52 crc kubenswrapper[4712]: E0130 16:56:52.073141 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:52.573130976 +0000 UTC m=+149.480140445 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:52 crc kubenswrapper[4712]: I0130 16:56:52.173359 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:52 crc kubenswrapper[4712]: E0130 16:56:52.173591 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:52.673564532 +0000 UTC m=+149.580574001 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:52 crc kubenswrapper[4712]: I0130 16:56:52.187753 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bd8m7" event={"ID":"fa965c86-cdca-49f6-9652-505d41e07f4e","Type":"ContainerStarted","Data":"f87674a809972319dbaff33dd0df7afc10e657bd2d0c1aadf13a3917087691d6"} Jan 30 16:56:52 crc kubenswrapper[4712]: I0130 16:56:52.221008 4712 patch_prober.go:28] interesting pod/router-default-5444994796-qncbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:56:52 crc kubenswrapper[4712]: [-]has-synced failed: reason withheld Jan 30 16:56:52 crc kubenswrapper[4712]: [+]process-running ok Jan 30 16:56:52 crc kubenswrapper[4712]: healthz check failed Jan 30 16:56:52 crc kubenswrapper[4712]: I0130 16:56:52.221059 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qncbs" podUID="884f5245-fc6d-42b5-83c2-e3373788e91b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:56:52 crc kubenswrapper[4712]: I0130 16:56:52.223186 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-56p67" event={"ID":"01eaec98-2b0a-46a5-a9fe-d2a01d486723","Type":"ContainerStarted","Data":"00ebed63fcd8ab2c4cc76302d1e95c6312f33748b59221075b7e19d6983bbb94"} Jan 30 16:56:52 crc kubenswrapper[4712]: I0130 16:56:52.223218 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-56p67" event={"ID":"01eaec98-2b0a-46a5-a9fe-d2a01d486723","Type":"ContainerStarted","Data":"577a2b568812b9efe5d9f656dfec852fd9847da92e64e32febeaea6d22536180"} Jan 30 16:56:52 crc 
Jan 30 16:56:52 crc kubenswrapper[4712]: I0130 16:56:52.248360 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-z5dx7" event={"ID":"465741f6-d748-4a3a-8584-3aa2a50bcd7c","Type":"ContainerStarted","Data":"5e2e444a091c9ea7b4ac3051b19fde22589a9d29d021fc6e7cfa1e0552b98bd9"}
Jan 30 16:56:52 crc kubenswrapper[4712]: I0130 16:56:52.267554 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7h5tl" event={"ID":"d631ea54-82a0-4985-bfe7-776d4764e85e","Type":"ContainerStarted","Data":"38be111327957853ff03bbf7262c46fdcf0c0202d96f0f6c865ef2af1fb9d1f1"}
Jan 30 16:56:52 crc kubenswrapper[4712]: I0130 16:56:52.270471 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8fqqs" event={"ID":"5e966532-0698-481e-99b3-d1de70be4ecf","Type":"ContainerStarted","Data":"76815b0fd86d67d713c8b399a2ca7f6024ae2dec46dfb5c95cf45554c36aad2b"}
Jan 30 16:56:52 crc kubenswrapper[4712]: I0130 16:56:52.271082 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-8fqqs"
Jan 30 16:56:52 crc kubenswrapper[4712]: I0130 16:56:52.278115 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xjs5m" event={"ID":"bc36657e-ab97-4bc2-90a9-34134794c30b","Type":"ContainerStarted","Data":"8d551a56953e47b408acf1f9ea24c7bd45972b3936d41de57e1658191a16e664"}
Jan 30 16:56:52 crc kubenswrapper[4712]: I0130 16:56:52.278605 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j"
Jan 30 16:56:52 crc kubenswrapper[4712]: E0130 16:56:52.278882 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:52.778870776 +0000 UTC m=+149.685880245 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
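
Each failed volume operation is stamped with a "No retries permitted until" deadline; the 500ms seen throughout is the initial step of the backoff applied by nestedpendingoperations. A sketch of that retry gate, assuming (not confirmed by this log alone) that the step doubles on repeated failures of the same operation key:

package main

import (
	"fmt"
	"time"
)

// pendingOp gates retries the way the "(durationBeforeRetry 500ms)" lines
// do: after a failure the operation may not run again until now+backoff,
// and the backoff grows on repeated failures. Kubelet's exact policy lives
// in nestedpendingoperations; this is only the shape of it.
type pendingOp struct {
	backoff   time.Duration
	retryTime time.Time
}

func (op *pendingOp) fail(now time.Time) {
	if op.backoff == 0 {
		op.backoff = 500 * time.Millisecond // initial step, as in the log
	} else {
		op.backoff *= 2
	}
	op.retryTime = now.Add(op.backoff)
}

func (op *pendingOp) mayRetry(now time.Time) bool {
	return !now.Before(op.retryTime)
}

func main() {
	op := &pendingOp{}
	now := time.Now()
	op.fail(now)
	fmt.Printf("no retries permitted until %s (durationBeforeRetry %s)\n",
		op.retryTime.Format(time.RFC3339Nano), op.backoff)
	fmt.Println("retry allowed immediately?", op.mayRetry(now))                        // false
	fmt.Println("retry allowed after 500ms?", op.mayRetry(now.Add(501*time.Millisecond))) // true
}
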
Jan 30 16:56:52 crc kubenswrapper[4712]: I0130 16:56:52.288973 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-6njcq" event={"ID":"9c28e05d-9fe2-414f-bae8-2a8f577af72f","Type":"ContainerStarted","Data":"78f87faaa8ff5254ef098763dcdea0c976007fa094eb4082ca2b9cbcd4e63b76"}
Jan 30 16:56:52 crc kubenswrapper[4712]: I0130 16:56:52.299021 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-bd8m7" podStartSLOduration=128.299004143 podStartE2EDuration="2m8.299004143s" podCreationTimestamp="2026-01-30 16:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:52.234167967 +0000 UTC m=+149.141177436" watchObservedRunningTime="2026-01-30 16:56:52.299004143 +0000 UTC m=+149.206013612"
Jan 30 16:56:52 crc kubenswrapper[4712]: I0130 16:56:52.324066 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-g4p8m" event={"ID":"e2418db4-0c95-43a9-973e-e2b6c6170198","Type":"ContainerStarted","Data":"d5c01d90004992c6c7c4df162147944b7f11bfb5037ff2b09933803e6279dd0c"}
Jan 30 16:56:52 crc kubenswrapper[4712]: I0130 16:56:52.327250 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-56p67" podStartSLOduration=129.327238135 podStartE2EDuration="2m9.327238135s" podCreationTimestamp="2026-01-30 16:54:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:52.30133652 +0000 UTC m=+149.208345989" watchObservedRunningTime="2026-01-30 16:56:52.327238135 +0000 UTC m=+149.234247604"
Jan 30 16:56:52 crc kubenswrapper[4712]: I0130 16:56:52.331012 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-8fqqs" podStartSLOduration=9.331001486 podStartE2EDuration="9.331001486s" podCreationTimestamp="2026-01-30 16:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:52.326387454 +0000 UTC m=+149.233396923" watchObservedRunningTime="2026-01-30 16:56:52.331001486 +0000 UTC m=+149.238010955"
Jan 30 16:56:52 crc kubenswrapper[4712]: I0130 16:56:52.339433 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-sbrnm" event={"ID":"39f7dec4-e247-4ccb-8c0a-b03a4de346dd","Type":"ContainerStarted","Data":"c8c5ec888231ebbbc56a9b6ee8959feba92ac5b304eefe4601b63129abd3c030"}
Jan 30 16:56:52 crc kubenswrapper[4712]: I0130 16:56:52.341969 4712 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-v2t5z container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" start-of-body=
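
The pod_startup_latency_tracker lines carry their own arithmetic: podStartSLOduration is the gap between podCreationTimestamp and the watch-observed running time (the zero-valued pull timestamps mean no image pull was counted). Reproducing the first entry's numbers, with the values copied from the log:

package main

import (
	"fmt"
	"time"
)

// Reproduces the podStartSLOduration arithmetic from the tracker lines:
// duration = watchObservedRunningTime - podCreationTimestamp.
func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2026-01-30 16:54:44 +0000 UTC")
	running, _ := time.Parse(layout, "2026-01-30 16:56:52.299004143 +0000 UTC")
	d := running.Sub(created)
	fmt.Println(d.Seconds()) // 128.299004143 == podStartSLOduration
	fmt.Println(d)           // 2m8.299004143s == podStartE2EDuration
}
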
connection refused" start-of-body= Jan 30 16:56:52 crc kubenswrapper[4712]: I0130 16:56:52.342026 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-v2t5z" podUID="c9e01529-72ef-487b-ac85-e90905240355" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.42:8080/healthz\": dial tcp 10.217.0.42:8080: connect: connection refused" Jan 30 16:56:52 crc kubenswrapper[4712]: I0130 16:56:52.343135 4712 patch_prober.go:28] interesting pod/downloads-7954f5f757-27wq6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Jan 30 16:56:52 crc kubenswrapper[4712]: I0130 16:56:52.343182 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-27wq6" podUID="48626025-5e2a-47c8-b317-bcbada105e87" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" Jan 30 16:56:52 crc kubenswrapper[4712]: I0130 16:56:52.351515 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-xjs5m" podStartSLOduration=128.351500401 podStartE2EDuration="2m8.351500401s" podCreationTimestamp="2026-01-30 16:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:52.351351657 +0000 UTC m=+149.258361126" watchObservedRunningTime="2026-01-30 16:56:52.351500401 +0000 UTC m=+149.258509870" Jan 30 16:56:52 crc kubenswrapper[4712]: I0130 16:56:52.358230 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-swvjp" Jan 30 16:56:52 crc kubenswrapper[4712]: I0130 16:56:52.381083 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:52 crc kubenswrapper[4712]: E0130 16:56:52.382104 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:52.88208928 +0000 UTC m=+149.789098749 (durationBeforeRetry 500ms). 
Jan 30 16:56:52 crc kubenswrapper[4712]: I0130 16:56:52.399604 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-g4p8m" podStartSLOduration=128.399588003 podStartE2EDuration="2m8.399588003s" podCreationTimestamp="2026-01-30 16:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:52.398336613 +0000 UTC m=+149.305346082" watchObservedRunningTime="2026-01-30 16:56:52.399588003 +0000 UTC m=+149.306597472"
Jan 30 16:56:52 crc kubenswrapper[4712]: I0130 16:56:52.485012 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j"
Jan 30 16:56:52 crc kubenswrapper[4712]: E0130 16:56:52.486478 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:52.986466552 +0000 UTC m=+149.893476031 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:52 crc kubenswrapper[4712]: I0130 16:56:52.587365 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:56:52 crc kubenswrapper[4712]: E0130 16:56:52.587652 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:53.087637257 +0000 UTC m=+149.994646726 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:52 crc kubenswrapper[4712]: I0130 16:56:52.688279 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j"
Jan 30 16:56:52 crc kubenswrapper[4712]: E0130 16:56:52.688542 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:53.188530684 +0000 UTC m=+150.095540153 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:52 crc kubenswrapper[4712]: W0130 16:56:52.739472 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-48b8a907740f66fded9f4cec92a8187643880931bbf3356231200820aacfd7fd WatchSource:0}: Error finding container 48b8a907740f66fded9f4cec92a8187643880931bbf3356231200820aacfd7fd: Status 404 returned error can't find the container with id 48b8a907740f66fded9f4cec92a8187643880931bbf3356231200820aacfd7fd
Jan 30 16:56:52 crc kubenswrapper[4712]: I0130 16:56:52.789480 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:56:52 crc kubenswrapper[4712]: E0130 16:56:52.789698 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:53.289633237 +0000 UTC m=+150.196642716 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:52 crc kubenswrapper[4712]: I0130 16:56:52.789744 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j"
Jan 30 16:56:52 crc kubenswrapper[4712]: E0130 16:56:52.790141 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:53.29013176 +0000 UTC m=+150.197141229 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:52 crc kubenswrapper[4712]: I0130 16:56:52.890910 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:56:52 crc kubenswrapper[4712]: E0130 16:56:52.891058 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:53.391030308 +0000 UTC m=+150.298039767 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:52 crc kubenswrapper[4712]: I0130 16:56:52.891327 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j"
Jan 30 16:56:52 crc kubenswrapper[4712]: E0130 16:56:52.891676 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:53.391667412 +0000 UTC m=+150.298676881 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:52 crc kubenswrapper[4712]: W0130 16:56:52.910535 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-67cf5a2bb72fe481730c5b7e2eeecf9371477438d7faa21d4a3da36360393da6 WatchSource:0}: Error finding container 67cf5a2bb72fe481730c5b7e2eeecf9371477438d7faa21d4a3da36360393da6: Status 404 returned error can't find the container with id 67cf5a2bb72fe481730c5b7e2eeecf9371477438d7faa21d4a3da36360393da6
Jan 30 16:56:52 crc kubenswrapper[4712]: I0130 16:56:52.992190 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:56:52 crc kubenswrapper[4712]: E0130 16:56:52.992561 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:53.49254753 +0000 UTC m=+150.399556989 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.093821 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j"
Jan 30 16:56:53 crc kubenswrapper[4712]: E0130 16:56:53.094151 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:53.594135975 +0000 UTC m=+150.501145444 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.195281 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:56:53 crc kubenswrapper[4712]: E0130 16:56:53.195752 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:53.695419322 +0000 UTC m=+150.602428791 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
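
The strict alternation of UnmountVolume (for the terminated pod UID 8f668bae-...) and MountVolume (for the new image-registry pod UID 42e31bd2-...) on the same PVC is the volume manager reconciling desired state against actual state on a short periodic loop; both directions keep failing while the CSI driver is unregistered. A sketch of that loop's shape, with illustrative types rather than kubelet's:

package main

import (
	"fmt"
	"time"
)

// reconcile sketches the pattern behind the paired UnmountVolume /
// MountVolume lines: volumes held by pods no longer in the desired state
// are unmounted, and desired volumes not yet mounted are mounted. Failed
// attempts are simply retried on the next pass.
type state struct {
	mounted map[string]string // volume -> pod UID actually holding it
	desired map[string]string // volume -> pod UID that should hold it
}

func reconcile(s *state, attempt func(op, vol, pod string) error) {
	for vol, pod := range s.mounted {
		if s.desired[vol] != pod {
			_ = attempt("UnmountVolume", vol, pod) // old pod released the PVC
		}
	}
	for vol, pod := range s.desired {
		if s.mounted[vol] != pod {
			_ = attempt("MountVolume", vol, pod) // new pod is waiting on it
		}
	}
}

func main() {
	s := &state{
		mounted: map[string]string{"pvc-657094db": "8f668bae"},
		desired: map[string]string{"pvc-657094db": "42e31bd2"},
	}
	attempt := func(op, vol, pod string) error {
		err := fmt.Errorf("driver not registered")
		fmt.Printf("%s started for %s pod %s: %v\n", op, vol, pod, err)
		return err
	}
	for i := 0; i < 2; i++ { // stands in for the kubelet's periodic loop
		reconcile(s, attempt)
		time.Sleep(100 * time.Millisecond)
	}
}
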
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.196082 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j"
Jan 30 16:56:53 crc kubenswrapper[4712]: E0130 16:56:53.196464 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:53.696456347 +0000 UTC m=+150.603465816 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.218742 4712 patch_prober.go:28] interesting pod/router-default-5444994796-qncbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 16:56:53 crc kubenswrapper[4712]: [-]has-synced failed: reason withheld
Jan 30 16:56:53 crc kubenswrapper[4712]: [+]process-running ok
Jan 30 16:56:53 crc kubenswrapper[4712]: healthz check failed
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.218870 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qncbs" podUID="884f5245-fc6d-42b5-83c2-e3373788e91b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.296940 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:56:53 crc kubenswrapper[4712]: E0130 16:56:53.297365 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:53.797345955 +0000 UTC m=+150.704355424 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.308910 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qcfwq"]
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.309820 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qcfwq"
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.314351 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.332546 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qcfwq"]
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.396699 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"eb50acc4801356b7bcc616b1fcf24be69814298a3287cc204b9f4f610edfa5cb"}
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.396744 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"67cf5a2bb72fe481730c5b7e2eeecf9371477438d7faa21d4a3da36360393da6"}
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.398855 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41581f8f-2b7b-4a20-9f3b-a28c0914b093-utilities\") pod \"certified-operators-qcfwq\" (UID: \"41581f8f-2b7b-4a20-9f3b-a28c0914b093\") " pod="openshift-marketplace/certified-operators-qcfwq"
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.398891 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41581f8f-2b7b-4a20-9f3b-a28c0914b093-catalog-content\") pod \"certified-operators-qcfwq\" (UID: \"41581f8f-2b7b-4a20-9f3b-a28c0914b093\") " pod="openshift-marketplace/certified-operators-qcfwq"
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.398912 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ms5bj\" (UniqueName: \"kubernetes.io/projected/41581f8f-2b7b-4a20-9f3b-a28c0914b093-kube-api-access-ms5bj\") pod \"certified-operators-qcfwq\" (UID: \"41581f8f-2b7b-4a20-9f3b-a28c0914b093\") " pod="openshift-marketplace/certified-operators-qcfwq"
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.398943 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j"
pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:53 crc kubenswrapper[4712]: E0130 16:56:53.399169 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:53.899158975 +0000 UTC m=+150.806168444 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.413656 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"9d862703dc3998a70da5417a85cf3b9fbda86af0c6656c6dfb8e153c439d1f7c"} Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.413701 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"48b8a907740f66fded9f4cec92a8187643880931bbf3356231200820aacfd7fd"} Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.413878 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.429176 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7h5tl" event={"ID":"d631ea54-82a0-4985-bfe7-776d4764e85e","Type":"ContainerStarted","Data":"6ea48a0aff80d656554fb694ff1d87338c8f524316bd6818ee43e664aeb01ac4"} Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.442159 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"b6394527af42f1b154695426774dde54b677de438ba1d02487db0338872c4549"} Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.442195 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"7b6b8f884a219894a36762b968455e02ec1d0a7b60cfbb5bc142166850ff323b"} Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.457047 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dlkwf"] Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.459686 4712 util.go:30] "No sandbox for pod can be found. 
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.467119 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.478117 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dlkwf"]
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.500281 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.500483 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41581f8f-2b7b-4a20-9f3b-a28c0914b093-utilities\") pod \"certified-operators-qcfwq\" (UID: \"41581f8f-2b7b-4a20-9f3b-a28c0914b093\") " pod="openshift-marketplace/certified-operators-qcfwq"
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.500511 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41581f8f-2b7b-4a20-9f3b-a28c0914b093-catalog-content\") pod \"certified-operators-qcfwq\" (UID: \"41581f8f-2b7b-4a20-9f3b-a28c0914b093\") " pod="openshift-marketplace/certified-operators-qcfwq"
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.500532 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ms5bj\" (UniqueName: \"kubernetes.io/projected/41581f8f-2b7b-4a20-9f3b-a28c0914b093-kube-api-access-ms5bj\") pod \"certified-operators-qcfwq\" (UID: \"41581f8f-2b7b-4a20-9f3b-a28c0914b093\") " pod="openshift-marketplace/certified-operators-qcfwq"
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.501177 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41581f8f-2b7b-4a20-9f3b-a28c0914b093-utilities\") pod \"certified-operators-qcfwq\" (UID: \"41581f8f-2b7b-4a20-9f3b-a28c0914b093\") " pod="openshift-marketplace/certified-operators-qcfwq"
Jan 30 16:56:53 crc kubenswrapper[4712]: E0130 16:56:53.501294 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:54.001277582 +0000 UTC m=+150.908287041 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.503019 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41581f8f-2b7b-4a20-9f3b-a28c0914b093-catalog-content\") pod \"certified-operators-qcfwq\" (UID: \"41581f8f-2b7b-4a20-9f3b-a28c0914b093\") " pod="openshift-marketplace/certified-operators-qcfwq"
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.564162 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ms5bj\" (UniqueName: \"kubernetes.io/projected/41581f8f-2b7b-4a20-9f3b-a28c0914b093-kube-api-access-ms5bj\") pod \"certified-operators-qcfwq\" (UID: \"41581f8f-2b7b-4a20-9f3b-a28c0914b093\") " pod="openshift-marketplace/certified-operators-qcfwq"
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.601500 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be58da2a-7470-403f-a094-ca2bac2dbccd-catalog-content\") pod \"community-operators-dlkwf\" (UID: \"be58da2a-7470-403f-a094-ca2bac2dbccd\") " pod="openshift-marketplace/community-operators-dlkwf"
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.601545 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5x8s\" (UniqueName: \"kubernetes.io/projected/be58da2a-7470-403f-a094-ca2bac2dbccd-kube-api-access-x5x8s\") pod \"community-operators-dlkwf\" (UID: \"be58da2a-7470-403f-a094-ca2bac2dbccd\") " pod="openshift-marketplace/community-operators-dlkwf"
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.601718 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be58da2a-7470-403f-a094-ca2bac2dbccd-utilities\") pod \"community-operators-dlkwf\" (UID: \"be58da2a-7470-403f-a094-ca2bac2dbccd\") " pod="openshift-marketplace/community-operators-dlkwf"
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.601999 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j"
Jan 30 16:56:53 crc kubenswrapper[4712]: E0130 16:56:53.604231 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:54.104215789 +0000 UTC m=+151.011225258 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.628300 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qcfwq"
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.664265 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4f5w5"]
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.665205 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4f5w5"
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.679128 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-jx2s9"
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.679502 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-jx2s9"
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.680279 4712 patch_prober.go:28] interesting pod/console-f9d7485db-jx2s9 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body=
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.680318 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-jx2s9" podUID="43a0a350-8151-4bcd-8d1e-1c534e291152" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused"
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.688966 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4f5w5"]
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.704996 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:56:53 crc kubenswrapper[4712]: E0130 16:56:53.705234 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:54.205211359 +0000 UTC m=+151.112220838 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.705342 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j"
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.705395 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be58da2a-7470-403f-a094-ca2bac2dbccd-catalog-content\") pod \"community-operators-dlkwf\" (UID: \"be58da2a-7470-403f-a094-ca2bac2dbccd\") " pod="openshift-marketplace/community-operators-dlkwf"
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.705417 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5x8s\" (UniqueName: \"kubernetes.io/projected/be58da2a-7470-403f-a094-ca2bac2dbccd-kube-api-access-x5x8s\") pod \"community-operators-dlkwf\" (UID: \"be58da2a-7470-403f-a094-ca2bac2dbccd\") " pod="openshift-marketplace/community-operators-dlkwf"
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.705448 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be58da2a-7470-403f-a094-ca2bac2dbccd-utilities\") pod \"community-operators-dlkwf\" (UID: \"be58da2a-7470-403f-a094-ca2bac2dbccd\") " pod="openshift-marketplace/community-operators-dlkwf"
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.706202 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be58da2a-7470-403f-a094-ca2bac2dbccd-utilities\") pod \"community-operators-dlkwf\" (UID: \"be58da2a-7470-403f-a094-ca2bac2dbccd\") " pod="openshift-marketplace/community-operators-dlkwf"
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.706218 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be58da2a-7470-403f-a094-ca2bac2dbccd-catalog-content\") pod \"community-operators-dlkwf\" (UID: \"be58da2a-7470-403f-a094-ca2bac2dbccd\") " pod="openshift-marketplace/community-operators-dlkwf"
Jan 30 16:56:53 crc kubenswrapper[4712]: E0130 16:56:53.706478 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:54.206465259 +0000 UTC m=+151.113474728 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
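
Note the contrast with the stuck CSI PVC: the catalog pods' utilities and catalog-content volumes are emptyDirs, so their MountVolume.SetUp succeeds within microseconds: the kubelet only has to create a per-pod directory, with no external driver involved. A sketch of why that path is trivial (directory layout mirrors the kubernetes.io~empty-dir convention but runs in a temp dir here):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// setUpEmptyDir sketches why the "MountVolume.SetUp succeeded" lines for
// utilities/catalog-content appear instantly: an emptyDir volume is just a
// directory created under the pod's volumes dir.
func setUpEmptyDir(root, podUID, volume string) (string, error) {
	dir := filepath.Join(root, "pods", podUID, "volumes", "kubernetes.io~empty-dir", volume)
	if err := os.MkdirAll(dir, 0o750); err != nil {
		return "", err
	}
	return dir, nil
}

func main() {
	root, _ := os.MkdirTemp("", "kubelet") // stand-in for /var/lib/kubelet
	defer os.RemoveAll(root)
	for _, vol := range []string{"utilities", "catalog-content"} {
		dir, err := setUpEmptyDir(root, "be58da2a-7470-403f-a094-ca2bac2dbccd", vol)
		if err != nil {
			fmt.Println("SetUp failed:", err)
			continue
		}
		fmt.Printf("MountVolume.SetUp succeeded for volume %q at %s\n", vol, dir)
	}
}
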
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.755910 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5x8s\" (UniqueName: \"kubernetes.io/projected/be58da2a-7470-403f-a094-ca2bac2dbccd-kube-api-access-x5x8s\") pod \"community-operators-dlkwf\" (UID: \"be58da2a-7470-403f-a094-ca2bac2dbccd\") " pod="openshift-marketplace/community-operators-dlkwf"
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.790021 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dlkwf"
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.808275 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.808472 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fda2fdd1-0c89-4398-8e0a-545311fe5ae9-catalog-content\") pod \"certified-operators-4f5w5\" (UID: \"fda2fdd1-0c89-4398-8e0a-545311fe5ae9\") " pod="openshift-marketplace/certified-operators-4f5w5"
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.808535 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rk9t6\" (UniqueName: \"kubernetes.io/projected/fda2fdd1-0c89-4398-8e0a-545311fe5ae9-kube-api-access-rk9t6\") pod \"certified-operators-4f5w5\" (UID: \"fda2fdd1-0c89-4398-8e0a-545311fe5ae9\") " pod="openshift-marketplace/certified-operators-4f5w5"
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.808587 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fda2fdd1-0c89-4398-8e0a-545311fe5ae9-utilities\") pod \"certified-operators-4f5w5\" (UID: \"fda2fdd1-0c89-4398-8e0a-545311fe5ae9\") " pod="openshift-marketplace/certified-operators-4f5w5"
Jan 30 16:56:53 crc kubenswrapper[4712]: E0130 16:56:53.808672 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:54.308658179 +0000 UTC m=+151.215667648 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.862416 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-r9tqz"]
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.875090 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r9tqz"
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.896746 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r9tqz"]
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.911007 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fda2fdd1-0c89-4398-8e0a-545311fe5ae9-utilities\") pod \"certified-operators-4f5w5\" (UID: \"fda2fdd1-0c89-4398-8e0a-545311fe5ae9\") " pod="openshift-marketplace/certified-operators-4f5w5"
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.911520 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fda2fdd1-0c89-4398-8e0a-545311fe5ae9-catalog-content\") pod \"certified-operators-4f5w5\" (UID: \"fda2fdd1-0c89-4398-8e0a-545311fe5ae9\") " pod="openshift-marketplace/certified-operators-4f5w5"
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.911664 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j"
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.912634 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fda2fdd1-0c89-4398-8e0a-545311fe5ae9-utilities\") pod \"certified-operators-4f5w5\" (UID: \"fda2fdd1-0c89-4398-8e0a-545311fe5ae9\") " pod="openshift-marketplace/certified-operators-4f5w5"
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.915225 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rk9t6\" (UniqueName: \"kubernetes.io/projected/fda2fdd1-0c89-4398-8e0a-545311fe5ae9-kube-api-access-rk9t6\") pod \"certified-operators-4f5w5\" (UID: \"fda2fdd1-0c89-4398-8e0a-545311fe5ae9\") " pod="openshift-marketplace/certified-operators-4f5w5"
Jan 30 16:56:53 crc kubenswrapper[4712]: E0130 16:56:53.915905 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:54.41588135 +0000 UTC m=+151.322890819 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:53 crc kubenswrapper[4712]: I0130 16:56:53.932421 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fda2fdd1-0c89-4398-8e0a-545311fe5ae9-catalog-content\") pod \"certified-operators-4f5w5\" (UID: \"fda2fdd1-0c89-4398-8e0a-545311fe5ae9\") " pod="openshift-marketplace/certified-operators-4f5w5"
Jan 30 16:56:54 crc kubenswrapper[4712]: I0130 16:56:54.003779 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rk9t6\" (UniqueName: \"kubernetes.io/projected/fda2fdd1-0c89-4398-8e0a-545311fe5ae9-kube-api-access-rk9t6\") pod \"certified-operators-4f5w5\" (UID: \"fda2fdd1-0c89-4398-8e0a-545311fe5ae9\") " pod="openshift-marketplace/certified-operators-4f5w5"
Jan 30 16:56:54 crc kubenswrapper[4712]: I0130 16:56:54.016295 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:56:54 crc kubenswrapper[4712]: I0130 16:56:54.016549 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f329fa29-ce56-44e1-9384-0347dbc67c55-catalog-content\") pod \"community-operators-r9tqz\" (UID: \"f329fa29-ce56-44e1-9384-0347dbc67c55\") " pod="openshift-marketplace/community-operators-r9tqz"
Jan 30 16:56:54 crc kubenswrapper[4712]: E0130 16:56:54.016604 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:54.516565032 +0000 UTC m=+151.423574511 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:54 crc kubenswrapper[4712]: I0130 16:56:54.016715 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f329fa29-ce56-44e1-9384-0347dbc67c55-utilities\") pod \"community-operators-r9tqz\" (UID: \"f329fa29-ce56-44e1-9384-0347dbc67c55\") " pod="openshift-marketplace/community-operators-r9tqz"
Jan 30 16:56:54 crc kubenswrapper[4712]: I0130 16:56:54.016870 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j"
Jan 30 16:56:54 crc kubenswrapper[4712]: I0130 16:56:54.016980 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89mk9\" (UniqueName: \"kubernetes.io/projected/f329fa29-ce56-44e1-9384-0347dbc67c55-kube-api-access-89mk9\") pod \"community-operators-r9tqz\" (UID: \"f329fa29-ce56-44e1-9384-0347dbc67c55\") " pod="openshift-marketplace/community-operators-r9tqz"
Jan 30 16:56:54 crc kubenswrapper[4712]: E0130 16:56:54.017303 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:54.51729388 +0000 UTC m=+151.424303349 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:54 crc kubenswrapper[4712]: I0130 16:56:54.118327 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:56:54 crc kubenswrapper[4712]: I0130 16:56:54.118769 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89mk9\" (UniqueName: \"kubernetes.io/projected/f329fa29-ce56-44e1-9384-0347dbc67c55-kube-api-access-89mk9\") pod \"community-operators-r9tqz\" (UID: \"f329fa29-ce56-44e1-9384-0347dbc67c55\") " pod="openshift-marketplace/community-operators-r9tqz"
Jan 30 16:56:54 crc kubenswrapper[4712]: I0130 16:56:54.118817 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f329fa29-ce56-44e1-9384-0347dbc67c55-catalog-content\") pod \"community-operators-r9tqz\" (UID: \"f329fa29-ce56-44e1-9384-0347dbc67c55\") " pod="openshift-marketplace/community-operators-r9tqz"
Jan 30 16:56:54 crc kubenswrapper[4712]: I0130 16:56:54.118845 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f329fa29-ce56-44e1-9384-0347dbc67c55-utilities\") pod \"community-operators-r9tqz\" (UID: \"f329fa29-ce56-44e1-9384-0347dbc67c55\") " pod="openshift-marketplace/community-operators-r9tqz"
Jan 30 16:56:54 crc kubenswrapper[4712]: E0130 16:56:54.119262 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:54.619246533 +0000 UTC m=+151.526256002 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:54 crc kubenswrapper[4712]: I0130 16:56:54.119266 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f329fa29-ce56-44e1-9384-0347dbc67c55-utilities\") pod \"community-operators-r9tqz\" (UID: \"f329fa29-ce56-44e1-9384-0347dbc67c55\") " pod="openshift-marketplace/community-operators-r9tqz" Jan 30 16:56:54 crc kubenswrapper[4712]: I0130 16:56:54.119443 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f329fa29-ce56-44e1-9384-0347dbc67c55-catalog-content\") pod \"community-operators-r9tqz\" (UID: \"f329fa29-ce56-44e1-9384-0347dbc67c55\") " pod="openshift-marketplace/community-operators-r9tqz" Jan 30 16:56:54 crc kubenswrapper[4712]: I0130 16:56:54.155070 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89mk9\" (UniqueName: \"kubernetes.io/projected/f329fa29-ce56-44e1-9384-0347dbc67c55-kube-api-access-89mk9\") pod \"community-operators-r9tqz\" (UID: \"f329fa29-ce56-44e1-9384-0347dbc67c55\") " pod="openshift-marketplace/community-operators-r9tqz" Jan 30 16:56:54 crc kubenswrapper[4712]: I0130 16:56:54.209583 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qcfwq"] Jan 30 16:56:54 crc kubenswrapper[4712]: I0130 16:56:54.212315 4712 patch_prober.go:28] interesting pod/router-default-5444994796-qncbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:56:54 crc kubenswrapper[4712]: [-]has-synced failed: reason withheld Jan 30 16:56:54 crc kubenswrapper[4712]: [+]process-running ok Jan 30 16:56:54 crc kubenswrapper[4712]: healthz check failed Jan 30 16:56:54 crc kubenswrapper[4712]: I0130 16:56:54.212358 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qncbs" podUID="884f5245-fc6d-42b5-83c2-e3373788e91b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:56:54 crc kubenswrapper[4712]: I0130 16:56:54.222045 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:54 crc kubenswrapper[4712]: E0130 16:56:54.222368 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:54.722356925 +0000 UTC m=+151.629366394 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:54 crc kubenswrapper[4712]: I0130 16:56:54.244411 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r9tqz" Jan 30 16:56:54 crc kubenswrapper[4712]: I0130 16:56:54.299534 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4f5w5" Jan 30 16:56:54 crc kubenswrapper[4712]: I0130 16:56:54.325146 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:54 crc kubenswrapper[4712]: E0130 16:56:54.325340 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:54.825309432 +0000 UTC m=+151.732318901 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:54 crc kubenswrapper[4712]: I0130 16:56:54.325420 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:54 crc kubenswrapper[4712]: E0130 16:56:54.325849 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:54.825833654 +0000 UTC m=+151.732843143 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:54 crc kubenswrapper[4712]: I0130 16:56:54.399973 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dlkwf"] Jan 30 16:56:54 crc kubenswrapper[4712]: I0130 16:56:54.426377 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:54 crc kubenswrapper[4712]: E0130 16:56:54.426764 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:54.926745073 +0000 UTC m=+151.833754552 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:54 crc kubenswrapper[4712]: I0130 16:56:54.472424 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlkwf" event={"ID":"be58da2a-7470-403f-a094-ca2bac2dbccd","Type":"ContainerStarted","Data":"0515a6a8677c10d8232565e8b28a7293a456246298199d83ac9da1863e872115"} Jan 30 16:56:54 crc kubenswrapper[4712]: I0130 16:56:54.474067 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7h5tl" event={"ID":"d631ea54-82a0-4985-bfe7-776d4764e85e","Type":"ContainerStarted","Data":"12df50443990ac2ebadb0a679e170a8198107ebbc1472e1c82468bff8f5aff2a"} Jan 30 16:56:54 crc kubenswrapper[4712]: I0130 16:56:54.475460 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qcfwq" event={"ID":"41581f8f-2b7b-4a20-9f3b-a28c0914b093","Type":"ContainerStarted","Data":"d1c080093a9151abd0f023c056e6ccd843867a5d01a84b910afad2ac0302aa9b"} Jan 30 16:56:54 crc kubenswrapper[4712]: I0130 16:56:54.527521 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:54 crc kubenswrapper[4712]: E0130 16:56:54.527909 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-30 16:56:55.027895017 +0000 UTC m=+151.934904486 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:54 crc kubenswrapper[4712]: I0130 16:56:54.629260 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:54 crc kubenswrapper[4712]: E0130 16:56:54.629482 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:55.12946087 +0000 UTC m=+152.036470339 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:54 crc kubenswrapper[4712]: I0130 16:56:54.629740 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:54 crc kubenswrapper[4712]: E0130 16:56:54.631007 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:55.130999957 +0000 UTC m=+152.038009426 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:54 crc kubenswrapper[4712]: I0130 16:56:54.730570 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:54 crc kubenswrapper[4712]: E0130 16:56:54.735223 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:55.235167325 +0000 UTC m=+152.142176864 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:54 crc kubenswrapper[4712]: W0130 16:56:54.735335 4712 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe58da2a_7470_403f_a094_ca2bac2dbccd.slice/crio-f1f67eaa6ad0a986c90acc1001b02ae9fcab577b6dda6b8fca615bcffef1859a.scope/pids.max": read /sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe58da2a_7470_403f_a094_ca2bac2dbccd.slice/crio-f1f67eaa6ad0a986c90acc1001b02ae9fcab577b6dda6b8fca615bcffef1859a.scope/pids.max: no such device Jan 30 16:56:54 crc kubenswrapper[4712]: E0130 16:56:54.788457 4712 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe58da2a_7470_403f_a094_ca2bac2dbccd.slice/crio-f1f67eaa6ad0a986c90acc1001b02ae9fcab577b6dda6b8fca615bcffef1859a.scope\": RecentStats: unable to find data in memory cache]" Jan 30 16:56:54 crc kubenswrapper[4712]: I0130 16:56:54.832915 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:54 crc kubenswrapper[4712]: E0130 16:56:54.833227 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:55.333215484 +0000 UTC m=+152.240224953 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:54 crc kubenswrapper[4712]: I0130 16:56:54.862003 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" Jan 30 16:56:54 crc kubenswrapper[4712]: I0130 16:56:54.935288 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:54 crc kubenswrapper[4712]: E0130 16:56:54.935946 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:55.435917465 +0000 UTC m=+152.342926934 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:54 crc kubenswrapper[4712]: I0130 16:56:54.963050 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-r9tqz"] Jan 30 16:56:54 crc kubenswrapper[4712]: I0130 16:56:54.988093 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 30 16:56:54 crc kubenswrapper[4712]: I0130 16:56:54.988693 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 16:56:54 crc kubenswrapper[4712]: I0130 16:56:54.992014 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 30 16:56:54 crc kubenswrapper[4712]: I0130 16:56:54.992152 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.006046 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.041454 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:55 crc kubenswrapper[4712]: E0130 16:56:55.042031 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:55.542019659 +0000 UTC m=+152.449029128 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.047375 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4f5w5"] Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.143156 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:55 crc kubenswrapper[4712]: E0130 16:56:55.143308 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:55.643286715 +0000 UTC m=+152.550296194 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.143348 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3b291d3-564e-4820-ab91-508170776782-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a3b291d3-564e-4820-ab91-508170776782\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.143510 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.143535 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3b291d3-564e-4820-ab91-508170776782-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a3b291d3-564e-4820-ab91-508170776782\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 16:56:55 crc kubenswrapper[4712]: E0130 16:56:55.143873 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:55.6438596 +0000 UTC m=+152.550869069 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.203306 4712 patch_prober.go:28] interesting pod/router-default-5444994796-qncbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 30 16:56:55 crc kubenswrapper[4712]: [-]has-synced failed: reason withheld Jan 30 16:56:55 crc kubenswrapper[4712]: [+]process-running ok Jan 30 16:56:55 crc kubenswrapper[4712]: healthz check failed Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.203365 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qncbs" podUID="884f5245-fc6d-42b5-83c2-e3373788e91b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.244891 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:55 crc kubenswrapper[4712]: E0130 16:56:55.245104 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:55.745080645 +0000 UTC m=+152.652090114 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.245221 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3b291d3-564e-4820-ab91-508170776782-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a3b291d3-564e-4820-ab91-508170776782\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.245344 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3b291d3-564e-4820-ab91-508170776782-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a3b291d3-564e-4820-ab91-508170776782\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.245404 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.245422 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3b291d3-564e-4820-ab91-508170776782-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a3b291d3-564e-4820-ab91-508170776782\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 16:56:55 crc kubenswrapper[4712]: E0130 16:56:55.245688 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:55.745679809 +0000 UTC m=+152.652689278 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.258470 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jmc9f"] Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.261286 4712 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.265203 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jmc9f" Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.266468 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3b291d3-564e-4820-ab91-508170776782-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a3b291d3-564e-4820-ab91-508170776782\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.270714 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.284176 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jmc9f"] Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.318034 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.346859 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.347041 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swznj\" (UniqueName: \"kubernetes.io/projected/0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3-kube-api-access-swznj\") pod \"redhat-marketplace-jmc9f\" (UID: \"0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3\") " pod="openshift-marketplace/redhat-marketplace-jmc9f" Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.347093 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3-utilities\") pod \"redhat-marketplace-jmc9f\" (UID: \"0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3\") " pod="openshift-marketplace/redhat-marketplace-jmc9f" Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.347166 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3-catalog-content\") pod \"redhat-marketplace-jmc9f\" (UID: \"0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3\") " pod="openshift-marketplace/redhat-marketplace-jmc9f" Jan 30 16:56:55 crc kubenswrapper[4712]: E0130 16:56:55.347297 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:55.847280524 +0000 UTC m=+152.754289993 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.448150 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.448385 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3-catalog-content\") pod \"redhat-marketplace-jmc9f\" (UID: \"0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3\") " pod="openshift-marketplace/redhat-marketplace-jmc9f" Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.448433 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-swznj\" (UniqueName: \"kubernetes.io/projected/0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3-kube-api-access-swznj\") pod \"redhat-marketplace-jmc9f\" (UID: \"0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3\") " pod="openshift-marketplace/redhat-marketplace-jmc9f" Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.448463 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3-utilities\") pod \"redhat-marketplace-jmc9f\" (UID: \"0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3\") " pod="openshift-marketplace/redhat-marketplace-jmc9f" Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.449072 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3-utilities\") pod \"redhat-marketplace-jmc9f\" (UID: \"0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3\") " pod="openshift-marketplace/redhat-marketplace-jmc9f" Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.449283 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3-catalog-content\") pod \"redhat-marketplace-jmc9f\" (UID: \"0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3\") " pod="openshift-marketplace/redhat-marketplace-jmc9f" Jan 30 16:56:55 crc kubenswrapper[4712]: E0130 16:56:55.449711 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:55.949700589 +0000 UTC m=+152.856710058 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.481836 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-swznj\" (UniqueName: \"kubernetes.io/projected/0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3-kube-api-access-swznj\") pod \"redhat-marketplace-jmc9f\" (UID: \"0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3\") " pod="openshift-marketplace/redhat-marketplace-jmc9f" Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.530662 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4f5w5" event={"ID":"fda2fdd1-0c89-4398-8e0a-545311fe5ae9","Type":"ContainerStarted","Data":"b4f96ff36261969d5e1744037152cb5bda934c47d381c8e575261b8ae9c7a832"} Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.549207 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:55 crc kubenswrapper[4712]: E0130 16:56:55.549329 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:56.049309015 +0000 UTC m=+152.956318484 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.549462 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:55 crc kubenswrapper[4712]: E0130 16:56:55.549776 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:56.049766127 +0000 UTC m=+152.956775596 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.552955 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r9tqz" event={"ID":"f329fa29-ce56-44e1-9384-0347dbc67c55","Type":"ContainerStarted","Data":"1ec90d22456bb0c69513a53bd3db2d010c0bf5e4bd65ad667822ade915d33127"} Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.557779 4712 patch_prober.go:28] interesting pod/downloads-7954f5f757-27wq6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.559155 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-27wq6" podUID="48626025-5e2a-47c8-b317-bcbada105e87" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.559946 4712 patch_prober.go:28] interesting pod/downloads-7954f5f757-27wq6 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.559975 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-27wq6" podUID="48626025-5e2a-47c8-b317-bcbada105e87" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.563708 4712 generic.go:334] "Generic (PLEG): container finished" podID="be58da2a-7470-403f-a094-ca2bac2dbccd" containerID="f1f67eaa6ad0a986c90acc1001b02ae9fcab577b6dda6b8fca615bcffef1859a" exitCode=0 Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.563841 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlkwf" event={"ID":"be58da2a-7470-403f-a094-ca2bac2dbccd","Type":"ContainerDied","Data":"f1f67eaa6ad0a986c90acc1001b02ae9fcab577b6dda6b8fca615bcffef1859a"} Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.571780 4712 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.578865 4712 generic.go:334] "Generic (PLEG): container finished" podID="41581f8f-2b7b-4a20-9f3b-a28c0914b093" containerID="7404f661b8c8eaa3259e5b573d346fe189bea86469581c4b58546c78459934e6" exitCode=0 Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.578899 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qcfwq" event={"ID":"41581f8f-2b7b-4a20-9f3b-a28c0914b093","Type":"ContainerDied","Data":"7404f661b8c8eaa3259e5b573d346fe189bea86469581c4b58546c78459934e6"} Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 
16:56:55.589699 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jmc9f" Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.651404 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:55 crc kubenswrapper[4712]: E0130 16:56:55.652312 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:56.152294734 +0000 UTC m=+153.059304203 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.654686 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l4hp7"] Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.655657 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l4hp7" Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.672532 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l4hp7"] Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.754714 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1efcd5ba-0391-4427-aaa0-9cef2b10a48c-catalog-content\") pod \"redhat-marketplace-l4hp7\" (UID: \"1efcd5ba-0391-4427-aaa0-9cef2b10a48c\") " pod="openshift-marketplace/redhat-marketplace-l4hp7" Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.754762 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gt64c\" (UniqueName: \"kubernetes.io/projected/1efcd5ba-0391-4427-aaa0-9cef2b10a48c-kube-api-access-gt64c\") pod \"redhat-marketplace-l4hp7\" (UID: \"1efcd5ba-0391-4427-aaa0-9cef2b10a48c\") " pod="openshift-marketplace/redhat-marketplace-l4hp7" Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.754781 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1efcd5ba-0391-4427-aaa0-9cef2b10a48c-utilities\") pod \"redhat-marketplace-l4hp7\" (UID: \"1efcd5ba-0391-4427-aaa0-9cef2b10a48c\") " pod="openshift-marketplace/redhat-marketplace-l4hp7" Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.754830 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 16:56:55 crc kubenswrapper[4712]: E0130 16:56:55.755127 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:56.255116068 +0000 UTC m=+153.162125537 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.758273 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 30 16:56:55 crc kubenswrapper[4712]: W0130 16:56:55.792944 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda3b291d3_564e_4820_ab91_508170776782.slice/crio-920ff90b6e7e6566935fa0a95f1b32720c777d85f8a2e139228690b170ecc4a5 WatchSource:0}: Error finding container 920ff90b6e7e6566935fa0a95f1b32720c777d85f8a2e139228690b170ecc4a5: Status 404 returned error can't find the container with id 920ff90b6e7e6566935fa0a95f1b32720c777d85f8a2e139228690b170ecc4a5 Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.860366 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.860557 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1efcd5ba-0391-4427-aaa0-9cef2b10a48c-catalog-content\") pod \"redhat-marketplace-l4hp7\" (UID: \"1efcd5ba-0391-4427-aaa0-9cef2b10a48c\") " pod="openshift-marketplace/redhat-marketplace-l4hp7" Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.860604 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gt64c\" (UniqueName: \"kubernetes.io/projected/1efcd5ba-0391-4427-aaa0-9cef2b10a48c-kube-api-access-gt64c\") pod \"redhat-marketplace-l4hp7\" (UID: \"1efcd5ba-0391-4427-aaa0-9cef2b10a48c\") " pod="openshift-marketplace/redhat-marketplace-l4hp7" Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.860630 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1efcd5ba-0391-4427-aaa0-9cef2b10a48c-utilities\") pod \"redhat-marketplace-l4hp7\" (UID: \"1efcd5ba-0391-4427-aaa0-9cef2b10a48c\") " pod="openshift-marketplace/redhat-marketplace-l4hp7" Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.861638 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1efcd5ba-0391-4427-aaa0-9cef2b10a48c-catalog-content\") pod \"redhat-marketplace-l4hp7\" (UID: \"1efcd5ba-0391-4427-aaa0-9cef2b10a48c\") " pod="openshift-marketplace/redhat-marketplace-l4hp7" Jan 30 16:56:55 crc 
kubenswrapper[4712]: E0130 16:56:55.862164 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:56.362146805 +0000 UTC m=+153.269156274 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.865263 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1efcd5ba-0391-4427-aaa0-9cef2b10a48c-utilities\") pod \"redhat-marketplace-l4hp7\" (UID: \"1efcd5ba-0391-4427-aaa0-9cef2b10a48c\") " pod="openshift-marketplace/redhat-marketplace-l4hp7" Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.895987 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-56p67" Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.896028 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-56p67" Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.910392 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gt64c\" (UniqueName: \"kubernetes.io/projected/1efcd5ba-0391-4427-aaa0-9cef2b10a48c-kube-api-access-gt64c\") pod \"redhat-marketplace-l4hp7\" (UID: \"1efcd5ba-0391-4427-aaa0-9cef2b10a48c\") " pod="openshift-marketplace/redhat-marketplace-l4hp7" Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.919763 4712 patch_prober.go:28] interesting pod/apiserver-76f77b778f-56p67 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 30 16:56:55 crc kubenswrapper[4712]: [+]log ok Jan 30 16:56:55 crc kubenswrapper[4712]: [+]etcd ok Jan 30 16:56:55 crc kubenswrapper[4712]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 30 16:56:55 crc kubenswrapper[4712]: [+]poststarthook/generic-apiserver-start-informers ok Jan 30 16:56:55 crc kubenswrapper[4712]: [+]poststarthook/max-in-flight-filter ok Jan 30 16:56:55 crc kubenswrapper[4712]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 30 16:56:55 crc kubenswrapper[4712]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 30 16:56:55 crc kubenswrapper[4712]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 30 16:56:55 crc kubenswrapper[4712]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 30 16:56:55 crc kubenswrapper[4712]: [+]poststarthook/project.openshift.io-projectcache ok Jan 30 16:56:55 crc kubenswrapper[4712]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 30 16:56:55 crc kubenswrapper[4712]: [+]poststarthook/openshift.io-startinformers ok Jan 30 16:56:55 crc kubenswrapper[4712]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 30 16:56:55 crc kubenswrapper[4712]: 
[+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 30 16:56:55 crc kubenswrapper[4712]: livez check failed
Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.919833 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-56p67" podUID="01eaec98-2b0a-46a5-a9fe-d2a01d486723" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 16:56:55 crc kubenswrapper[4712]: I0130 16:56:55.964624 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j"
Jan 30 16:56:55 crc kubenswrapper[4712]: E0130 16:56:55.965305 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:56.465290006 +0000 UTC m=+153.372299475 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.005706 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l4hp7"
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.032258 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dg9bq"
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.052039 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-v2t5z"
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.071533 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:56:56 crc kubenswrapper[4712]: E0130 16:56:56.075919 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:56.575896028 +0000 UTC m=+153.482905497 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.082348 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j"
Jan 30 16:56:56 crc kubenswrapper[4712]: E0130 16:56:56.082765 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:56.582752894 +0000 UTC m=+153.489762363 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.159004 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jmc9f"]
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.184440 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:56:56 crc kubenswrapper[4712]: E0130 16:56:56.184587 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-30 16:56:56.684562494 +0000 UTC m=+153.591571963 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.184750 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j"
Jan 30 16:56:56 crc kubenswrapper[4712]: E0130 16:56:56.186135 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-30 16:56:56.686123712 +0000 UTC m=+153.593133181 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-ddc2j" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.194877 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-qncbs"
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.201757 4712 patch_prober.go:28] interesting pod/router-default-5444994796-qncbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 16:56:56 crc kubenswrapper[4712]: [-]has-synced failed: reason withheld
Jan 30 16:56:56 crc kubenswrapper[4712]: [+]process-running ok
Jan 30 16:56:56 crc kubenswrapper[4712]: healthz check failed
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.201835 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qncbs" podUID="884f5245-fc6d-42b5-83c2-e3373788e91b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.246371 4712 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-30T16:56:55.261394129Z","Handler":null,"Name":""}
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.287500 4712 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.287532 4712 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.288458 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.334639 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.356705 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8m9br"
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.392464 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j"
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.407746 4712 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.407788 4712 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j"
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.445454 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pz9vb"]
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.465243 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pz9vb"
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.474855 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.485770 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pz9vb"]
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.552609 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l4hp7"]
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.597032 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1773095-5051-4668-ae41-1d6c41c43a43-utilities\") pod \"redhat-operators-pz9vb\" (UID: \"b1773095-5051-4668-ae41-1d6c41c43a43\") " pod="openshift-marketplace/redhat-operators-pz9vb"
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.597074 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nsts\" (UniqueName: \"kubernetes.io/projected/b1773095-5051-4668-ae41-1d6c41c43a43-kube-api-access-5nsts\") pod \"redhat-operators-pz9vb\" (UID: \"b1773095-5051-4668-ae41-1d6c41c43a43\") " pod="openshift-marketplace/redhat-operators-pz9vb"
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.601313 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a3b291d3-564e-4820-ab91-508170776782","Type":"ContainerStarted","Data":"f45480578d40b51078205d16559a33f2fd4c3c44eb13e8af50324710704d3b58"}
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.601369 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a3b291d3-564e-4820-ab91-508170776782","Type":"ContainerStarted","Data":"920ff90b6e7e6566935fa0a95f1b32720c777d85f8a2e139228690b170ecc4a5"}
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.604048 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1773095-5051-4668-ae41-1d6c41c43a43-catalog-content\") pod \"redhat-operators-pz9vb\" (UID: \"b1773095-5051-4668-ae41-1d6c41c43a43\") " pod="openshift-marketplace/redhat-operators-pz9vb"
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.605510 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l4hp7" event={"ID":"1efcd5ba-0391-4427-aaa0-9cef2b10a48c","Type":"ContainerStarted","Data":"2bf10f102e2e4d318ff9ce6a799f3bd507f16aa4b8078672b8ebb15e4152a8d5"}
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.614932 4712 generic.go:334] "Generic (PLEG): container finished" podID="fda2fdd1-0c89-4398-8e0a-545311fe5ae9" containerID="7b3fa34cdb2d09333e616c13e38233606086220b5db4e12aa76b3f9d77a3c16b" exitCode=0
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.615012 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4f5w5" event={"ID":"fda2fdd1-0c89-4398-8e0a-545311fe5ae9","Type":"ContainerDied","Data":"7b3fa34cdb2d09333e616c13e38233606086220b5db4e12aa76b3f9d77a3c16b"}
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.626548 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.626531773 podStartE2EDuration="2.626531773s" podCreationTimestamp="2026-01-30 16:56:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:56.620309662 +0000 UTC m=+153.527319131" watchObservedRunningTime="2026-01-30 16:56:56.626531773 +0000 UTC m=+153.533541242"
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.626611 4712 generic.go:334] "Generic (PLEG): container finished" podID="f329fa29-ce56-44e1-9384-0347dbc67c55" containerID="c9475c5b67888fefddc49204422cbd041efbf5a06213e6b0687c4fb1442569f7" exitCode=0
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.626710 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r9tqz" event={"ID":"f329fa29-ce56-44e1-9384-0347dbc67c55","Type":"ContainerDied","Data":"c9475c5b67888fefddc49204422cbd041efbf5a06213e6b0687c4fb1442569f7"}
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.628663 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-ddc2j\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j"
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.654763 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jmc9f" event={"ID":"0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3","Type":"ContainerStarted","Data":"c9fdba01edebcb279eb1ea8c7f3733a958a8b9e66f8f606e4dfa836e0695f6b2"}
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.654825 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jmc9f" event={"ID":"0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3","Type":"ContainerStarted","Data":"d9f4391c7c62d83081571bba9de5f2dcd5bbd6a5f62f5738232b16cb833f7983"}
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.664395 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-7h5tl" event={"ID":"d631ea54-82a0-4985-bfe7-776d4764e85e","Type":"ContainerStarted","Data":"ed99c856836d64de38440131a12ed37f0c691a2edbbec2532e6b0a4be9b99048"}
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.675839 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j"
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.706506 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1773095-5051-4668-ae41-1d6c41c43a43-catalog-content\") pod \"redhat-operators-pz9vb\" (UID: \"b1773095-5051-4668-ae41-1d6c41c43a43\") " pod="openshift-marketplace/redhat-operators-pz9vb"
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.706658 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1773095-5051-4668-ae41-1d6c41c43a43-utilities\") pod \"redhat-operators-pz9vb\" (UID: \"b1773095-5051-4668-ae41-1d6c41c43a43\") " pod="openshift-marketplace/redhat-operators-pz9vb"
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.706716 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nsts\" (UniqueName: \"kubernetes.io/projected/b1773095-5051-4668-ae41-1d6c41c43a43-kube-api-access-5nsts\") pod \"redhat-operators-pz9vb\" (UID: \"b1773095-5051-4668-ae41-1d6c41c43a43\") " pod="openshift-marketplace/redhat-operators-pz9vb"
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.707663 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1773095-5051-4668-ae41-1d6c41c43a43-utilities\") pod \"redhat-operators-pz9vb\" (UID: \"b1773095-5051-4668-ae41-1d6c41c43a43\") " pod="openshift-marketplace/redhat-operators-pz9vb"
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.707954 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1773095-5051-4668-ae41-1d6c41c43a43-catalog-content\") pod \"redhat-operators-pz9vb\" (UID: \"b1773095-5051-4668-ae41-1d6c41c43a43\") " pod="openshift-marketplace/redhat-operators-pz9vb"
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.732668 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-7h5tl" podStartSLOduration=13.732647076 podStartE2EDuration="13.732647076s" podCreationTimestamp="2026-01-30 16:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:56.727937883 +0000 UTC m=+153.634947352" watchObservedRunningTime="2026-01-30 16:56:56.732647076 +0000 UTC m=+153.639656545"
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.736575 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nsts\" (UniqueName: \"kubernetes.io/projected/b1773095-5051-4668-ae41-1d6c41c43a43-kube-api-access-5nsts\") pod \"redhat-operators-pz9vb\" (UID: \"b1773095-5051-4668-ae41-1d6c41c43a43\") " pod="openshift-marketplace/redhat-operators-pz9vb"
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.811419 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pz9vb"
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.863926 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hzqrq"]
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.864953 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hzqrq"
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.871232 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hzqrq"]
Jan 30 16:56:56 crc kubenswrapper[4712]: I0130 16:56:56.996323 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ddc2j"]
Jan 30 16:56:57 crc kubenswrapper[4712]: I0130 16:56:57.009926 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc1192c4-3b0c-4421-8e71-17e8731ffe34-utilities\") pod \"redhat-operators-hzqrq\" (UID: \"fc1192c4-3b0c-4421-8e71-17e8731ffe34\") " pod="openshift-marketplace/redhat-operators-hzqrq"
Jan 30 16:56:57 crc kubenswrapper[4712]: I0130 16:56:57.009963 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmg4t\" (UniqueName: \"kubernetes.io/projected/fc1192c4-3b0c-4421-8e71-17e8731ffe34-kube-api-access-tmg4t\") pod \"redhat-operators-hzqrq\" (UID: \"fc1192c4-3b0c-4421-8e71-17e8731ffe34\") " pod="openshift-marketplace/redhat-operators-hzqrq"
Jan 30 16:56:57 crc kubenswrapper[4712]: I0130 16:56:57.010020 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc1192c4-3b0c-4421-8e71-17e8731ffe34-catalog-content\") pod \"redhat-operators-hzqrq\" (UID: \"fc1192c4-3b0c-4421-8e71-17e8731ffe34\") " pod="openshift-marketplace/redhat-operators-hzqrq"
Jan 30 16:56:57 crc kubenswrapper[4712]: W0130 16:56:57.012694 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42e31bd2_5a3c_4c3b_83bf_8e85b9a0f3b5.slice/crio-7e1a135128dcf0fed21bf5a5482d5b3bc720860f1e68eab1a0ac119e14adbf7e WatchSource:0}: Error finding container 7e1a135128dcf0fed21bf5a5482d5b3bc720860f1e68eab1a0ac119e14adbf7e: Status 404 returned error can't find the container with id 7e1a135128dcf0fed21bf5a5482d5b3bc720860f1e68eab1a0ac119e14adbf7e
Jan 30 16:56:57 crc kubenswrapper[4712]: I0130 16:56:57.111137 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc1192c4-3b0c-4421-8e71-17e8731ffe34-catalog-content\") pod \"redhat-operators-hzqrq\" (UID: \"fc1192c4-3b0c-4421-8e71-17e8731ffe34\") " pod="openshift-marketplace/redhat-operators-hzqrq"
Jan 30 16:56:57 crc kubenswrapper[4712]: I0130 16:56:57.111521 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc1192c4-3b0c-4421-8e71-17e8731ffe34-utilities\") pod \"redhat-operators-hzqrq\" (UID: \"fc1192c4-3b0c-4421-8e71-17e8731ffe34\") " pod="openshift-marketplace/redhat-operators-hzqrq"
Jan 30 16:56:57 crc kubenswrapper[4712]: I0130 16:56:57.111542 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmg4t\" (UniqueName: \"kubernetes.io/projected/fc1192c4-3b0c-4421-8e71-17e8731ffe34-kube-api-access-tmg4t\") pod \"redhat-operators-hzqrq\" (UID: \"fc1192c4-3b0c-4421-8e71-17e8731ffe34\") " pod="openshift-marketplace/redhat-operators-hzqrq"
Jan 30 16:56:57 crc kubenswrapper[4712]: I0130 16:56:57.111882 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc1192c4-3b0c-4421-8e71-17e8731ffe34-catalog-content\") pod \"redhat-operators-hzqrq\" (UID: \"fc1192c4-3b0c-4421-8e71-17e8731ffe34\") " pod="openshift-marketplace/redhat-operators-hzqrq"
Jan 30 16:56:57 crc kubenswrapper[4712]: I0130 16:56:57.112075 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc1192c4-3b0c-4421-8e71-17e8731ffe34-utilities\") pod \"redhat-operators-hzqrq\" (UID: \"fc1192c4-3b0c-4421-8e71-17e8731ffe34\") " pod="openshift-marketplace/redhat-operators-hzqrq"
Jan 30 16:56:57 crc kubenswrapper[4712]: I0130 16:56:57.139401 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmg4t\" (UniqueName: \"kubernetes.io/projected/fc1192c4-3b0c-4421-8e71-17e8731ffe34-kube-api-access-tmg4t\") pod \"redhat-operators-hzqrq\" (UID: \"fc1192c4-3b0c-4421-8e71-17e8731ffe34\") " pod="openshift-marketplace/redhat-operators-hzqrq"
Jan 30 16:56:57 crc kubenswrapper[4712]: I0130 16:56:57.143742 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pz9vb"]
Jan 30 16:56:57 crc kubenswrapper[4712]: I0130 16:56:57.198981 4712 patch_prober.go:28] interesting pod/router-default-5444994796-qncbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 16:56:57 crc kubenswrapper[4712]: [-]has-synced failed: reason withheld
Jan 30 16:56:57 crc kubenswrapper[4712]: [+]process-running ok
Jan 30 16:56:57 crc kubenswrapper[4712]: healthz check failed
Jan 30 16:56:57 crc kubenswrapper[4712]: I0130 16:56:57.199024 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qncbs" podUID="884f5245-fc6d-42b5-83c2-e3373788e91b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 16:56:57 crc kubenswrapper[4712]: I0130 16:56:57.201533 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hzqrq"
Jan 30 16:56:57 crc kubenswrapper[4712]: I0130 16:56:57.465646 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hzqrq"]
Jan 30 16:56:57 crc kubenswrapper[4712]: I0130 16:56:57.699474 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" event={"ID":"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5","Type":"ContainerStarted","Data":"57dd66fd83a95962c131854f70a33f2e87c4c82d7d16377478aca51f6a2a0878"}
Jan 30 16:56:57 crc kubenswrapper[4712]: I0130 16:56:57.699528 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" event={"ID":"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5","Type":"ContainerStarted","Data":"7e1a135128dcf0fed21bf5a5482d5b3bc720860f1e68eab1a0ac119e14adbf7e"}
Jan 30 16:56:57 crc kubenswrapper[4712]: I0130 16:56:57.700437 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j"
Jan 30 16:56:57 crc kubenswrapper[4712]: I0130 16:56:57.712859 4712 generic.go:334] "Generic (PLEG): container finished" podID="a3b291d3-564e-4820-ab91-508170776782" containerID="f45480578d40b51078205d16559a33f2fd4c3c44eb13e8af50324710704d3b58" exitCode=0
Jan 30 16:56:57 crc kubenswrapper[4712]: I0130 16:56:57.712973 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a3b291d3-564e-4820-ab91-508170776782","Type":"ContainerDied","Data":"f45480578d40b51078205d16559a33f2fd4c3c44eb13e8af50324710704d3b58"}
Jan 30 16:56:57 crc kubenswrapper[4712]: I0130 16:56:57.720283 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" podStartSLOduration=133.72026636 podStartE2EDuration="2m13.72026636s" podCreationTimestamp="2026-01-30 16:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:56:57.719142502 +0000 UTC m=+154.626151981" watchObservedRunningTime="2026-01-30 16:56:57.72026636 +0000 UTC m=+154.627275839"
Jan 30 16:56:57 crc kubenswrapper[4712]: I0130 16:56:57.724503 4712 generic.go:334] "Generic (PLEG): container finished" podID="b1773095-5051-4668-ae41-1d6c41c43a43" containerID="8850ad1276b1be2e08c572e79a49d6209b3a99c9567c3557661bb4418a7ce8c0" exitCode=0
Jan 30 16:56:57 crc kubenswrapper[4712]: I0130 16:56:57.724590 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pz9vb" event={"ID":"b1773095-5051-4668-ae41-1d6c41c43a43","Type":"ContainerDied","Data":"8850ad1276b1be2e08c572e79a49d6209b3a99c9567c3557661bb4418a7ce8c0"}
Jan 30 16:56:57 crc kubenswrapper[4712]: I0130 16:56:57.724609 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pz9vb" event={"ID":"b1773095-5051-4668-ae41-1d6c41c43a43","Type":"ContainerStarted","Data":"918b2382987759542552b98e65cbb9d1a69f240f857ed6e1e5539ef2d1cd4d60"}
Jan 30 16:56:57 crc kubenswrapper[4712]: I0130 16:56:57.753277 4712 generic.go:334] "Generic (PLEG): container finished" podID="1efcd5ba-0391-4427-aaa0-9cef2b10a48c" containerID="8435c564567c06246f852bfee4bcd70e209ea6ecf17c32facdbba0db41263c25" exitCode=0
Jan 30 16:56:57 crc kubenswrapper[4712]: I0130 16:56:57.753445 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l4hp7" event={"ID":"1efcd5ba-0391-4427-aaa0-9cef2b10a48c","Type":"ContainerDied","Data":"8435c564567c06246f852bfee4bcd70e209ea6ecf17c32facdbba0db41263c25"}
Jan 30 16:56:57 crc kubenswrapper[4712]: I0130 16:56:57.779509 4712 generic.go:334] "Generic (PLEG): container finished" podID="0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3" containerID="c9fdba01edebcb279eb1ea8c7f3733a958a8b9e66f8f606e4dfa836e0695f6b2" exitCode=0
Jan 30 16:56:57 crc kubenswrapper[4712]: I0130 16:56:57.779592 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jmc9f" event={"ID":"0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3","Type":"ContainerDied","Data":"c9fdba01edebcb279eb1ea8c7f3733a958a8b9e66f8f606e4dfa836e0695f6b2"}
Jan 30 16:56:57 crc kubenswrapper[4712]: I0130 16:56:57.788309 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hzqrq" event={"ID":"fc1192c4-3b0c-4421-8e71-17e8731ffe34","Type":"ContainerStarted","Data":"9ce6f8da3abf3812c2cba3fa53e19c1e54283ad5df15a1a57eb6b66d70bb109e"}
Jan 30 16:56:57 crc kubenswrapper[4712]: I0130 16:56:57.815116 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes"
Jan 30 16:56:58 crc kubenswrapper[4712]: I0130 16:56:58.198655 4712 patch_prober.go:28] interesting pod/router-default-5444994796-qncbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 16:56:58 crc kubenswrapper[4712]: [-]has-synced failed: reason withheld
Jan 30 16:56:58 crc kubenswrapper[4712]: [+]process-running ok
Jan 30 16:56:58 crc kubenswrapper[4712]: healthz check failed
Jan 30 16:56:58 crc kubenswrapper[4712]: I0130 16:56:58.198920 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qncbs" podUID="884f5245-fc6d-42b5-83c2-e3373788e91b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 16:56:58 crc kubenswrapper[4712]: I0130 16:56:58.821071 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hzqrq" event={"ID":"fc1192c4-3b0c-4421-8e71-17e8731ffe34","Type":"ContainerDied","Data":"2564f2d7d05bac340ab5c24c818a46144519731e6616580b790b441295620b44"}
Jan 30 16:56:58 crc kubenswrapper[4712]: I0130 16:56:58.821738 4712 generic.go:334] "Generic (PLEG): container finished" podID="fc1192c4-3b0c-4421-8e71-17e8731ffe34" containerID="2564f2d7d05bac340ab5c24c818a46144519731e6616580b790b441295620b44" exitCode=0
Jan 30 16:56:59 crc kubenswrapper[4712]: I0130 16:56:59.148575 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 16:56:59 crc kubenswrapper[4712]: I0130 16:56:59.209324 4712 patch_prober.go:28] interesting pod/router-default-5444994796-qncbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 16:56:59 crc kubenswrapper[4712]: [-]has-synced failed: reason withheld
Jan 30 16:56:59 crc kubenswrapper[4712]: [+]process-running ok
Jan 30 16:56:59 crc kubenswrapper[4712]: healthz check failed
Jan 30 16:56:59 crc kubenswrapper[4712]: I0130 16:56:59.209394 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qncbs" podUID="884f5245-fc6d-42b5-83c2-e3373788e91b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 16:56:59 crc kubenswrapper[4712]: I0130 16:56:59.247254 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3b291d3-564e-4820-ab91-508170776782-kubelet-dir\") pod \"a3b291d3-564e-4820-ab91-508170776782\" (UID: \"a3b291d3-564e-4820-ab91-508170776782\") "
Jan 30 16:56:59 crc kubenswrapper[4712]: I0130 16:56:59.247343 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3b291d3-564e-4820-ab91-508170776782-kube-api-access\") pod \"a3b291d3-564e-4820-ab91-508170776782\" (UID: \"a3b291d3-564e-4820-ab91-508170776782\") "
Jan 30 16:56:59 crc kubenswrapper[4712]: I0130 16:56:59.247790 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3b291d3-564e-4820-ab91-508170776782-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a3b291d3-564e-4820-ab91-508170776782" (UID: "a3b291d3-564e-4820-ab91-508170776782"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 16:56:59 crc kubenswrapper[4712]: I0130 16:56:59.266819 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3b291d3-564e-4820-ab91-508170776782-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a3b291d3-564e-4820-ab91-508170776782" (UID: "a3b291d3-564e-4820-ab91-508170776782"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:56:59 crc kubenswrapper[4712]: I0130 16:56:59.348977 4712 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3b291d3-564e-4820-ab91-508170776782-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 30 16:56:59 crc kubenswrapper[4712]: I0130 16:56:59.349014 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3b291d3-564e-4820-ab91-508170776782-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 30 16:56:59 crc kubenswrapper[4712]: I0130 16:56:59.827304 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a3b291d3-564e-4820-ab91-508170776782","Type":"ContainerDied","Data":"920ff90b6e7e6566935fa0a95f1b32720c777d85f8a2e139228690b170ecc4a5"}
Jan 30 16:56:59 crc kubenswrapper[4712]: I0130 16:56:59.827621 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="920ff90b6e7e6566935fa0a95f1b32720c777d85f8a2e139228690b170ecc4a5"
Jan 30 16:56:59 crc kubenswrapper[4712]: I0130 16:56:59.827391 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Jan 30 16:56:59 crc kubenswrapper[4712]: I0130 16:56:59.833065 4712 generic.go:334] "Generic (PLEG): container finished" podID="4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34" containerID="e08e7e3b6c7048825af07cabe437defaa33ec9144b1d809c042d234f5077e3d3" exitCode=0
Jan 30 16:56:59 crc kubenswrapper[4712]: I0130 16:56:59.834283 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-j85bm" event={"ID":"4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34","Type":"ContainerDied","Data":"e08e7e3b6c7048825af07cabe437defaa33ec9144b1d809c042d234f5077e3d3"}
Jan 30 16:57:00 crc kubenswrapper[4712]: I0130 16:57:00.198971 4712 patch_prober.go:28] interesting pod/router-default-5444994796-qncbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 16:57:00 crc kubenswrapper[4712]: [-]has-synced failed: reason withheld
Jan 30 16:57:00 crc kubenswrapper[4712]: [+]process-running ok
Jan 30 16:57:00 crc kubenswrapper[4712]: healthz check failed
Jan 30 16:57:00 crc kubenswrapper[4712]: I0130 16:57:00.199042 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qncbs" podUID="884f5245-fc6d-42b5-83c2-e3373788e91b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 16:57:00 crc kubenswrapper[4712]: I0130 16:57:00.898553 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-56p67"
Jan 30 16:57:00 crc kubenswrapper[4712]: I0130 16:57:00.903185 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-56p67"
Jan 30 16:57:01 crc kubenswrapper[4712]: I0130 16:57:01.021145 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 30 16:57:01 crc kubenswrapper[4712]: E0130 16:57:01.021351 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3b291d3-564e-4820-ab91-508170776782" containerName="pruner"
Jan 30 16:57:01 crc kubenswrapper[4712]: I0130 16:57:01.021362 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3b291d3-564e-4820-ab91-508170776782" containerName="pruner"
Jan 30 16:57:01 crc kubenswrapper[4712]: I0130 16:57:01.021456 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3b291d3-564e-4820-ab91-508170776782" containerName="pruner"
Jan 30 16:57:01 crc kubenswrapper[4712]: I0130 16:57:01.021809 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 16:57:01 crc kubenswrapper[4712]: I0130 16:57:01.026636 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n"
Jan 30 16:57:01 crc kubenswrapper[4712]: I0130 16:57:01.026898 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Jan 30 16:57:01 crc kubenswrapper[4712]: I0130 16:57:01.032320 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 30 16:57:01 crc kubenswrapper[4712]: I0130 16:57:01.085274 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0437cc32-fa1c-4c43-bce9-9ebe22d278f5-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0437cc32-fa1c-4c43-bce9-9ebe22d278f5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 16:57:01 crc kubenswrapper[4712]: I0130 16:57:01.085394 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0437cc32-fa1c-4c43-bce9-9ebe22d278f5-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0437cc32-fa1c-4c43-bce9-9ebe22d278f5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 16:57:01 crc kubenswrapper[4712]: I0130 16:57:01.188520 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0437cc32-fa1c-4c43-bce9-9ebe22d278f5-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0437cc32-fa1c-4c43-bce9-9ebe22d278f5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 16:57:01 crc kubenswrapper[4712]: I0130 16:57:01.188608 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0437cc32-fa1c-4c43-bce9-9ebe22d278f5-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0437cc32-fa1c-4c43-bce9-9ebe22d278f5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 16:57:01 crc kubenswrapper[4712]: I0130 16:57:01.188972 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0437cc32-fa1c-4c43-bce9-9ebe22d278f5-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"0437cc32-fa1c-4c43-bce9-9ebe22d278f5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 16:57:01 crc kubenswrapper[4712]: I0130 16:57:01.197848 4712 patch_prober.go:28] interesting pod/router-default-5444994796-qncbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 16:57:01 crc kubenswrapper[4712]: [-]has-synced failed: reason withheld
Jan 30 16:57:01 crc kubenswrapper[4712]: [+]process-running ok
Jan 30 16:57:01 crc kubenswrapper[4712]: healthz check failed
Jan 30 16:57:01 crc kubenswrapper[4712]: I0130 16:57:01.197904 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qncbs" podUID="884f5245-fc6d-42b5-83c2-e3373788e91b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 16:57:01 crc kubenswrapper[4712]: I0130 16:57:01.207442 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0437cc32-fa1c-4c43-bce9-9ebe22d278f5-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"0437cc32-fa1c-4c43-bce9-9ebe22d278f5\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 16:57:01 crc kubenswrapper[4712]: I0130 16:57:01.327766 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-j85bm"
Jan 30 16:57:01 crc kubenswrapper[4712]: I0130 16:57:01.350914 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 16:57:01 crc kubenswrapper[4712]: I0130 16:57:01.389940 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34-secret-volume\") pod \"4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34\" (UID: \"4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34\") "
Jan 30 16:57:01 crc kubenswrapper[4712]: I0130 16:57:01.390037 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8h69s\" (UniqueName: \"kubernetes.io/projected/4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34-kube-api-access-8h69s\") pod \"4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34\" (UID: \"4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34\") "
Jan 30 16:57:01 crc kubenswrapper[4712]: I0130 16:57:01.390484 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34-config-volume\") pod \"4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34\" (UID: \"4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34\") "
Jan 30 16:57:01 crc kubenswrapper[4712]: I0130 16:57:01.394968 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34-config-volume" (OuterVolumeSpecName: "config-volume") pod "4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34" (UID: "4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 16:57:01 crc kubenswrapper[4712]: I0130 16:57:01.397644 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34" (UID: "4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 16:57:01 crc kubenswrapper[4712]: I0130 16:57:01.398219 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34-kube-api-access-8h69s" (OuterVolumeSpecName: "kube-api-access-8h69s") pod "4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34" (UID: "4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34"). InnerVolumeSpecName "kube-api-access-8h69s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:57:01 crc kubenswrapper[4712]: I0130 16:57:01.443174 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-8fqqs"
Jan 30 16:57:01 crc kubenswrapper[4712]: I0130 16:57:01.496626 4712 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 30 16:57:01 crc kubenswrapper[4712]: I0130 16:57:01.496665 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8h69s\" (UniqueName: \"kubernetes.io/projected/4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34-kube-api-access-8h69s\") on node \"crc\" DevicePath \"\""
Jan 30 16:57:01 crc kubenswrapper[4712]: I0130 16:57:01.496676 4712 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34-config-volume\") on node \"crc\" DevicePath \"\""
Jan 30 16:57:01 crc kubenswrapper[4712]: I0130 16:57:01.634741 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Jan 30 16:57:01 crc kubenswrapper[4712]: W0130 16:57:01.659151 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod0437cc32_fa1c_4c43_bce9_9ebe22d278f5.slice/crio-65501d42ef554941975737fe13723897c91f59746bcab6e80c5f8efafbd91d74 WatchSource:0}: Error finding container 65501d42ef554941975737fe13723897c91f59746bcab6e80c5f8efafbd91d74: Status 404 returned error can't find the container with id 65501d42ef554941975737fe13723897c91f59746bcab6e80c5f8efafbd91d74
Jan 30 16:57:01 crc kubenswrapper[4712]: I0130 16:57:01.860052 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0437cc32-fa1c-4c43-bce9-9ebe22d278f5","Type":"ContainerStarted","Data":"65501d42ef554941975737fe13723897c91f59746bcab6e80c5f8efafbd91d74"}
Jan 30 16:57:01 crc kubenswrapper[4712]: I0130 16:57:01.861911 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-j85bm" event={"ID":"4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34","Type":"ContainerDied","Data":"9775540a8c538d98bf635a433c1532cab51b9708e3a5c7c3828b6195788c1b4c"}
Jan 30 16:57:01 crc kubenswrapper[4712]: I0130 16:57:01.861941 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9775540a8c538d98bf635a433c1532cab51b9708e3a5c7c3828b6195788c1b4c"
Jan 30 16:57:01 crc kubenswrapper[4712]: I0130 16:57:01.862334 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496525-j85bm"
Jan 30 16:57:02 crc kubenswrapper[4712]: I0130 16:57:02.199718 4712 patch_prober.go:28] interesting pod/router-default-5444994796-qncbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 16:57:02 crc kubenswrapper[4712]: [-]has-synced failed: reason withheld
Jan 30 16:57:02 crc kubenswrapper[4712]: [+]process-running ok
Jan 30 16:57:02 crc kubenswrapper[4712]: healthz check failed
Jan 30 16:57:02 crc kubenswrapper[4712]: I0130 16:57:02.199833 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qncbs" podUID="884f5245-fc6d-42b5-83c2-e3373788e91b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 16:57:02 crc kubenswrapper[4712]: I0130 16:57:02.884083 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0437cc32-fa1c-4c43-bce9-9ebe22d278f5","Type":"ContainerStarted","Data":"b925cde294ed8353206c6479402f21df8bf0d538fe021a8635a7b62bb068ff85"}
Jan 30 16:57:02 crc kubenswrapper[4712]: I0130 16:57:02.901266 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.90123058 podStartE2EDuration="2.90123058s" podCreationTimestamp="2026-01-30 16:57:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:57:02.900044251 +0000 UTC m=+159.807053720" watchObservedRunningTime="2026-01-30 16:57:02.90123058 +0000 UTC m=+159.808240049"
Jan 30 16:57:03 crc kubenswrapper[4712]: I0130 16:57:03.206217 4712 patch_prober.go:28] interesting pod/router-default-5444994796-qncbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 16:57:03 crc kubenswrapper[4712]: [-]has-synced failed: reason withheld
Jan 30 16:57:03 crc kubenswrapper[4712]: [+]process-running ok
Jan 30 16:57:03 crc kubenswrapper[4712]: healthz check failed
Jan 30 16:57:03 crc kubenswrapper[4712]: I0130 16:57:03.206275 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qncbs" podUID="884f5245-fc6d-42b5-83c2-e3373788e91b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 16:57:03 crc kubenswrapper[4712]: I0130 16:57:03.679023 4712 patch_prober.go:28] interesting pod/console-f9d7485db-jx2s9 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused" start-of-body=
Jan 30 16:57:03 crc kubenswrapper[4712]: I0130 16:57:03.679381 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-jx2s9" podUID="43a0a350-8151-4bcd-8d1e-1c534e291152" containerName="console" probeResult="failure" output="Get \"https://10.217.0.13:8443/health\": dial tcp 10.217.0.13:8443: connect: connection refused"
Jan 30 16:57:03 crc kubenswrapper[4712]: I0130 16:57:03.894441 4712 generic.go:334] "Generic (PLEG): container finished" podID="0437cc32-fa1c-4c43-bce9-9ebe22d278f5" containerID="b925cde294ed8353206c6479402f21df8bf0d538fe021a8635a7b62bb068ff85" exitCode=0
Jan 30 16:57:03 crc kubenswrapper[4712]: I0130 16:57:03.894489 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0437cc32-fa1c-4c43-bce9-9ebe22d278f5","Type":"ContainerDied","Data":"b925cde294ed8353206c6479402f21df8bf0d538fe021a8635a7b62bb068ff85"}
Jan 30 16:57:04 crc kubenswrapper[4712]: I0130 16:57:04.200356 4712 patch_prober.go:28] interesting pod/router-default-5444994796-qncbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 16:57:04 crc kubenswrapper[4712]: [-]has-synced failed: reason withheld
Jan 30 16:57:04 crc kubenswrapper[4712]: [+]process-running ok
Jan 30 16:57:04 crc kubenswrapper[4712]: healthz check failed
Jan 30 16:57:04 crc kubenswrapper[4712]: I0130 16:57:04.200421 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qncbs" podUID="884f5245-fc6d-42b5-83c2-e3373788e91b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 16:57:05 crc kubenswrapper[4712]: I0130 16:57:05.204300 4712 patch_prober.go:28] interesting pod/router-default-5444994796-qncbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 16:57:05 crc kubenswrapper[4712]: [-]has-synced failed: reason withheld
Jan 30 16:57:05 crc kubenswrapper[4712]: [+]process-running ok
Jan 30 16:57:05 crc kubenswrapper[4712]: healthz check failed
Jan 30 16:57:05 crc kubenswrapper[4712]: I0130 16:57:05.204737 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qncbs" podUID="884f5245-fc6d-42b5-83c2-e3373788e91b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 16:57:05 crc kubenswrapper[4712]: I0130 16:57:05.555906 4712 patch_prober.go:28] interesting pod/downloads-7954f5f757-27wq6 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body=
Jan 30 16:57:05 crc kubenswrapper[4712]: I0130 16:57:05.555965 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-27wq6" podUID="48626025-5e2a-47c8-b317-bcbada105e87" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused"
Jan 30 16:57:05 crc kubenswrapper[4712]: I0130 16:57:05.556888 4712 patch_prober.go:28] interesting pod/downloads-7954f5f757-27wq6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body=
Jan 30 16:57:05 crc kubenswrapper[4712]: I0130 16:57:05.556983 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-27wq6" podUID="48626025-5e2a-47c8-b317-bcbada105e87" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused"
Jan 30 16:57:06 crc kubenswrapper[4712]: I0130 16:57:06.202177 4712 patch_prober.go:28] interesting pod/router-default-5444994796-qncbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 16:57:06 crc kubenswrapper[4712]: [-]has-synced failed: reason withheld
Jan 30 16:57:06 crc kubenswrapper[4712]: [+]process-running ok
Jan 30 16:57:06 crc kubenswrapper[4712]: healthz check failed
Jan 30 16:57:06 crc kubenswrapper[4712]: I0130 16:57:06.202228 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qncbs" podUID="884f5245-fc6d-42b5-83c2-e3373788e91b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 16:57:06 crc kubenswrapper[4712]: I0130 16:57:06.271608 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 16:57:06 crc kubenswrapper[4712]: I0130 16:57:06.271665 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 16:57:06 crc kubenswrapper[4712]: I0130 16:57:06.525579 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/abacbc6e-6514-4db6-80b5-23570952c86f-metrics-certs\") pod \"network-metrics-daemon-lpb6h\" (UID: \"abacbc6e-6514-4db6-80b5-23570952c86f\") " pod="openshift-multus/network-metrics-daemon-lpb6h"
Jan 30 16:57:06 crc kubenswrapper[4712]: I0130 16:57:06.531331 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/abacbc6e-6514-4db6-80b5-23570952c86f-metrics-certs\") pod \"network-metrics-daemon-lpb6h\" (UID: \"abacbc6e-6514-4db6-80b5-23570952c86f\") " pod="openshift-multus/network-metrics-daemon-lpb6h"
Jan 30 16:57:06 crc kubenswrapper[4712]: I0130 16:57:06.721930 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-lpb6h"
Jan 30 16:57:07 crc kubenswrapper[4712]: I0130 16:57:07.198655 4712 patch_prober.go:28] interesting pod/router-default-5444994796-qncbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 16:57:07 crc kubenswrapper[4712]: [-]has-synced failed: reason withheld
Jan 30 16:57:07 crc kubenswrapper[4712]: [+]process-running ok
Jan 30 16:57:07 crc kubenswrapper[4712]: healthz check failed
Jan 30 16:57:07 crc kubenswrapper[4712]: I0130 16:57:07.198708 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qncbs" podUID="884f5245-fc6d-42b5-83c2-e3373788e91b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 16:57:08 crc kubenswrapper[4712]: I0130 16:57:08.198872 4712 patch_prober.go:28] interesting pod/router-default-5444994796-qncbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 16:57:08 crc kubenswrapper[4712]: [-]has-synced failed: reason withheld
Jan 30 16:57:08 crc kubenswrapper[4712]: [+]process-running ok
Jan 30 16:57:08 crc kubenswrapper[4712]: healthz check failed
Jan 30 16:57:08 crc kubenswrapper[4712]: I0130 16:57:08.198924 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qncbs" podUID="884f5245-fc6d-42b5-83c2-e3373788e91b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 16:57:09 crc kubenswrapper[4712]: I0130 16:57:09.198183 4712 patch_prober.go:28] interesting pod/router-default-5444994796-qncbs container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 30 16:57:09 crc kubenswrapper[4712]: [-]has-synced failed: reason withheld
Jan 30 16:57:09 crc kubenswrapper[4712]: [+]process-running ok
Jan 30 16:57:09 crc kubenswrapper[4712]: healthz check failed
Jan 30 16:57:09 crc kubenswrapper[4712]: I0130 16:57:09.198244 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qncbs" podUID="884f5245-fc6d-42b5-83c2-e3373788e91b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 16:57:10 crc kubenswrapper[4712]: I0130 16:57:10.197709 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-qncbs"
Jan 30 16:57:10 crc kubenswrapper[4712]: I0130 16:57:10.202281 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-qncbs"
Jan 30 16:57:12 crc kubenswrapper[4712]: I0130 16:57:12.532512 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 16:57:12 crc kubenswrapper[4712]: I0130 16:57:12.615735 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0437cc32-fa1c-4c43-bce9-9ebe22d278f5-kubelet-dir\") pod \"0437cc32-fa1c-4c43-bce9-9ebe22d278f5\" (UID: \"0437cc32-fa1c-4c43-bce9-9ebe22d278f5\") "
Jan 30 16:57:12 crc kubenswrapper[4712]: I0130 16:57:12.615870 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0437cc32-fa1c-4c43-bce9-9ebe22d278f5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0437cc32-fa1c-4c43-bce9-9ebe22d278f5" (UID: "0437cc32-fa1c-4c43-bce9-9ebe22d278f5"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 16:57:12 crc kubenswrapper[4712]: I0130 16:57:12.615952 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0437cc32-fa1c-4c43-bce9-9ebe22d278f5-kube-api-access\") pod \"0437cc32-fa1c-4c43-bce9-9ebe22d278f5\" (UID: \"0437cc32-fa1c-4c43-bce9-9ebe22d278f5\") "
Jan 30 16:57:12 crc kubenswrapper[4712]: I0130 16:57:12.616203 4712 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0437cc32-fa1c-4c43-bce9-9ebe22d278f5-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 30 16:57:12 crc kubenswrapper[4712]: I0130 16:57:12.622174 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0437cc32-fa1c-4c43-bce9-9ebe22d278f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0437cc32-fa1c-4c43-bce9-9ebe22d278f5" (UID: "0437cc32-fa1c-4c43-bce9-9ebe22d278f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:57:12 crc kubenswrapper[4712]: I0130 16:57:12.717706 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0437cc32-fa1c-4c43-bce9-9ebe22d278f5-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 30 16:57:13 crc kubenswrapper[4712]: I0130 16:57:13.004199 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"0437cc32-fa1c-4c43-bce9-9ebe22d278f5","Type":"ContainerDied","Data":"65501d42ef554941975737fe13723897c91f59746bcab6e80c5f8efafbd91d74"}
Jan 30 16:57:13 crc kubenswrapper[4712]: I0130 16:57:13.004490 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65501d42ef554941975737fe13723897c91f59746bcab6e80c5f8efafbd91d74"
Jan 30 16:57:13 crc kubenswrapper[4712]: I0130 16:57:13.004230 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Jan 30 16:57:13 crc kubenswrapper[4712]: I0130 16:57:13.688782 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-jx2s9"
Jan 30 16:57:13 crc kubenswrapper[4712]: I0130 16:57:13.698432 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-jx2s9"
Jan 30 16:57:15 crc kubenswrapper[4712]: I0130 16:57:15.555430 4712 patch_prober.go:28] interesting pod/downloads-7954f5f757-27wq6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body=
Jan 30 16:57:15 crc kubenswrapper[4712]: I0130 16:57:15.555634 4712 patch_prober.go:28] interesting pod/downloads-7954f5f757-27wq6 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body=
Jan 30 16:57:15 crc kubenswrapper[4712]: I0130 16:57:15.555936 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-27wq6" podUID="48626025-5e2a-47c8-b317-bcbada105e87" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused"
Jan 30 16:57:15 crc kubenswrapper[4712]: I0130 16:57:15.555993 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-27wq6" podUID="48626025-5e2a-47c8-b317-bcbada105e87" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused"
Jan 30 16:57:15 crc kubenswrapper[4712]: I0130 16:57:15.556102 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-27wq6"
Jan 30 16:57:15 crc kubenswrapper[4712]: I0130 16:57:15.556880 4712 patch_prober.go:28] interesting pod/downloads-7954f5f757-27wq6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body=
Jan 30 16:57:15 crc kubenswrapper[4712]: I0130 16:57:15.556922 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-27wq6" podUID="48626025-5e2a-47c8-b317-bcbada105e87" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused"
Jan 30 16:57:15 crc kubenswrapper[4712]: I0130 16:57:15.556998 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"4b1b72476b9e51b2129fe5dc1b953fd12a6e7bd7ae8b55ec86e9151d98b57eaf"} pod="openshift-console/downloads-7954f5f757-27wq6" containerMessage="Container download-server failed liveness probe, will be restarted"
Jan 30 16:57:15 crc kubenswrapper[4712]: I0130 16:57:15.557153 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-27wq6" podUID="48626025-5e2a-47c8-b317-bcbada105e87" containerName="download-server" containerID="cri-o://4b1b72476b9e51b2129fe5dc1b953fd12a6e7bd7ae8b55ec86e9151d98b57eaf" gracePeriod=2
Jan 30 16:57:16 crc kubenswrapper[4712]: I0130 16:57:16.023353 4712 generic.go:334] "Generic (PLEG): container finished" podID="48626025-5e2a-47c8-b317-bcbada105e87" containerID="4b1b72476b9e51b2129fe5dc1b953fd12a6e7bd7ae8b55ec86e9151d98b57eaf" exitCode=0
Jan 30 16:57:16 crc kubenswrapper[4712]: I0130 16:57:16.023403 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-27wq6" event={"ID":"48626025-5e2a-47c8-b317-bcbada105e87","Type":"ContainerDied","Data":"4b1b72476b9e51b2129fe5dc1b953fd12a6e7bd7ae8b55ec86e9151d98b57eaf"}
Jan 30 16:57:16 crc kubenswrapper[4712]: I0130 16:57:16.684294 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j"
Jan 30 16:57:25 crc kubenswrapper[4712]: I0130 16:57:25.556531 4712 patch_prober.go:28] interesting pod/downloads-7954f5f757-27wq6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body=
Jan 30 16:57:25 crc kubenswrapper[4712]: I0130 16:57:25.557970 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-27wq6" podUID="48626025-5e2a-47c8-b317-bcbada105e87" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused"
Jan 30 16:57:26 crc kubenswrapper[4712]: I0130 16:57:26.276599 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xq27f"
Jan 30 16:57:27 crc kubenswrapper[4712]: I0130 16:57:27.341880 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-lpb6h"]
Jan 30 16:57:32 crc kubenswrapper[4712]: I0130 16:57:32.055689 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 30 16:57:33 crc kubenswrapper[4712]: I0130 16:57:33.780033 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Jan 30 16:57:33 crc kubenswrapper[4712]: E0130 16:57:33.780286 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0437cc32-fa1c-4c43-bce9-9ebe22d278f5" containerName="pruner"
Jan 30 16:57:33 crc kubenswrapper[4712]: I0130 16:57:33.780297 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="0437cc32-fa1c-4c43-bce9-9ebe22d278f5" containerName="pruner"
Jan 30 16:57:33 crc kubenswrapper[4712]: E0130 16:57:33.780310 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34" containerName="collect-profiles"
Jan 30 16:57:33 crc kubenswrapper[4712]: I0130 16:57:33.780316 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34" containerName="collect-profiles"
Jan 30 16:57:33 crc kubenswrapper[4712]: I0130 16:57:33.780412 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34" containerName="collect-profiles"
Jan 30 16:57:33 crc kubenswrapper[4712]: I0130 16:57:33.780429 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="0437cc32-fa1c-4c43-bce9-9ebe22d278f5" containerName="pruner"
Jan 30 16:57:33 crc kubenswrapper[4712]: I0130 16:57:33.780878 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 16:57:33 crc kubenswrapper[4712]: I0130 16:57:33.783044 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 30 16:57:33 crc kubenswrapper[4712]: I0130 16:57:33.783053 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 30 16:57:33 crc kubenswrapper[4712]: I0130 16:57:33.785062 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 30 16:57:33 crc kubenswrapper[4712]: I0130 16:57:33.910653 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4435b55b-9a94-4971-adc5-51773a0cf108-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4435b55b-9a94-4971-adc5-51773a0cf108\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 16:57:33 crc kubenswrapper[4712]: I0130 16:57:33.910739 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4435b55b-9a94-4971-adc5-51773a0cf108-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4435b55b-9a94-4971-adc5-51773a0cf108\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 16:57:34 crc kubenswrapper[4712]: I0130 16:57:34.012312 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4435b55b-9a94-4971-adc5-51773a0cf108-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4435b55b-9a94-4971-adc5-51773a0cf108\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 16:57:34 crc kubenswrapper[4712]: I0130 16:57:34.012391 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4435b55b-9a94-4971-adc5-51773a0cf108-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4435b55b-9a94-4971-adc5-51773a0cf108\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 16:57:34 crc kubenswrapper[4712]: I0130 16:57:34.012832 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4435b55b-9a94-4971-adc5-51773a0cf108-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4435b55b-9a94-4971-adc5-51773a0cf108\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 16:57:34 crc kubenswrapper[4712]: I0130 16:57:34.044141 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4435b55b-9a94-4971-adc5-51773a0cf108-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4435b55b-9a94-4971-adc5-51773a0cf108\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 16:57:34 crc kubenswrapper[4712]: I0130 16:57:34.137473 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 30 16:57:35 crc kubenswrapper[4712]: I0130 16:57:35.555424 4712 patch_prober.go:28] interesting pod/downloads-7954f5f757-27wq6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Jan 30 16:57:35 crc kubenswrapper[4712]: I0130 16:57:35.555536 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-27wq6" podUID="48626025-5e2a-47c8-b317-bcbada105e87" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" Jan 30 16:57:35 crc kubenswrapper[4712]: E0130 16:57:35.716964 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 30 16:57:35 crc kubenswrapper[4712]: E0130 16:57:35.717233 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-89mk9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-r9tqz_openshift-marketplace(f329fa29-ce56-44e1-9384-0347dbc67c55): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 16:57:35 crc kubenswrapper[4712]: E0130 16:57:35.718774 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-r9tqz" podUID="f329fa29-ce56-44e1-9384-0347dbc67c55" Jan 30 16:57:36 crc kubenswrapper[4712]: I0130 16:57:36.270824 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:57:36 crc kubenswrapper[4712]: I0130 16:57:36.270887 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:57:37 crc kubenswrapper[4712]: E0130 16:57:37.851268 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 30 16:57:37 crc kubenswrapper[4712]: E0130 16:57:37.851417 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x5x8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-dlkwf_openshift-marketplace(be58da2a-7470-403f-a094-ca2bac2dbccd): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 16:57:37 crc kubenswrapper[4712]: E0130 16:57:37.852578 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-dlkwf" podUID="be58da2a-7470-403f-a094-ca2bac2dbccd" Jan 30 16:57:38 crc kubenswrapper[4712]: I0130 16:57:38.969666 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 30 16:57:38 crc kubenswrapper[4712]: I0130 16:57:38.970862 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 16:57:38 crc kubenswrapper[4712]: I0130 16:57:38.992047 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 30 16:57:39 crc kubenswrapper[4712]: I0130 16:57:39.091734 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7cd49b09-e90f-4bfd-b4a0-357240cac04d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7cd49b09-e90f-4bfd-b4a0-357240cac04d\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 16:57:39 crc kubenswrapper[4712]: I0130 16:57:39.092011 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7cd49b09-e90f-4bfd-b4a0-357240cac04d-kube-api-access\") pod \"installer-9-crc\" (UID: \"7cd49b09-e90f-4bfd-b4a0-357240cac04d\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 16:57:39 crc kubenswrapper[4712]: I0130 16:57:39.092070 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7cd49b09-e90f-4bfd-b4a0-357240cac04d-var-lock\") pod \"installer-9-crc\" (UID: \"7cd49b09-e90f-4bfd-b4a0-357240cac04d\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 16:57:39 crc kubenswrapper[4712]: I0130 16:57:39.193172 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7cd49b09-e90f-4bfd-b4a0-357240cac04d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7cd49b09-e90f-4bfd-b4a0-357240cac04d\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 16:57:39 crc kubenswrapper[4712]: I0130 16:57:39.193414 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7cd49b09-e90f-4bfd-b4a0-357240cac04d-kube-api-access\") pod \"installer-9-crc\" (UID: \"7cd49b09-e90f-4bfd-b4a0-357240cac04d\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 16:57:39 crc kubenswrapper[4712]: I0130 16:57:39.193493 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7cd49b09-e90f-4bfd-b4a0-357240cac04d-var-lock\") pod \"installer-9-crc\" (UID: \"7cd49b09-e90f-4bfd-b4a0-357240cac04d\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 16:57:39 crc kubenswrapper[4712]: I0130 16:57:39.193459 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7cd49b09-e90f-4bfd-b4a0-357240cac04d-kubelet-dir\") pod \"installer-9-crc\" (UID: \"7cd49b09-e90f-4bfd-b4a0-357240cac04d\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 16:57:39 crc kubenswrapper[4712]: I0130 16:57:39.193660 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7cd49b09-e90f-4bfd-b4a0-357240cac04d-var-lock\") pod \"installer-9-crc\" (UID: \"7cd49b09-e90f-4bfd-b4a0-357240cac04d\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 16:57:39 crc kubenswrapper[4712]: I0130 16:57:39.226212 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7cd49b09-e90f-4bfd-b4a0-357240cac04d-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"7cd49b09-e90f-4bfd-b4a0-357240cac04d\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 30 16:57:39 crc kubenswrapper[4712]: I0130 16:57:39.308549 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 30 16:57:44 crc kubenswrapper[4712]: E0130 16:57:44.073042 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 30 16:57:44 crc kubenswrapper[4712]: E0130 16:57:44.073528 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rk9t6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-4f5w5_openshift-marketplace(fda2fdd1-0c89-4398-8e0a-545311fe5ae9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 16:57:44 crc kubenswrapper[4712]: E0130 16:57:44.074729 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-4f5w5" podUID="fda2fdd1-0c89-4398-8e0a-545311fe5ae9" Jan 30 16:57:44 crc kubenswrapper[4712]: E0130 16:57:44.752272 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 30 16:57:44 crc kubenswrapper[4712]: E0130 16:57:44.752468 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog 
--cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ms5bj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-qcfwq_openshift-marketplace(41581f8f-2b7b-4a20-9f3b-a28c0914b093): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 16:57:44 crc kubenswrapper[4712]: E0130 16:57:44.754019 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-qcfwq" podUID="41581f8f-2b7b-4a20-9f3b-a28c0914b093" Jan 30 16:57:45 crc kubenswrapper[4712]: I0130 16:57:45.555887 4712 patch_prober.go:28] interesting pod/downloads-7954f5f757-27wq6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Jan 30 16:57:45 crc kubenswrapper[4712]: I0130 16:57:45.555960 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-27wq6" podUID="48626025-5e2a-47c8-b317-bcbada105e87" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" Jan 30 16:57:45 crc kubenswrapper[4712]: E0130 16:57:45.711505 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-r9tqz" podUID="f329fa29-ce56-44e1-9384-0347dbc67c55" Jan 30 16:57:45 crc kubenswrapper[4712]: E0130 16:57:45.711854 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-4f5w5" podUID="fda2fdd1-0c89-4398-8e0a-545311fe5ae9" Jan 30 16:57:45 crc kubenswrapper[4712]: E0130 16:57:45.712344 4712 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-dlkwf" podUID="be58da2a-7470-403f-a094-ca2bac2dbccd" Jan 30 16:57:45 crc kubenswrapper[4712]: W0130 16:57:45.714309 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podabacbc6e_6514_4db6_80b5_23570952c86f.slice/crio-633549bb82caeb46218444a462b39c5fc25e0ef7f7f1292efa693b422e691b82 WatchSource:0}: Error finding container 633549bb82caeb46218444a462b39c5fc25e0ef7f7f1292efa693b422e691b82: Status 404 returned error can't find the container with id 633549bb82caeb46218444a462b39c5fc25e0ef7f7f1292efa693b422e691b82 Jan 30 16:57:46 crc kubenswrapper[4712]: I0130 16:57:46.231777 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lpb6h" event={"ID":"abacbc6e-6514-4db6-80b5-23570952c86f","Type":"ContainerStarted","Data":"633549bb82caeb46218444a462b39c5fc25e0ef7f7f1292efa693b422e691b82"} Jan 30 16:57:47 crc kubenswrapper[4712]: E0130 16:57:47.051868 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 30 16:57:47 crc kubenswrapper[4712]: E0130 16:57:47.052070 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gt64c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-l4hp7_openshift-marketplace(1efcd5ba-0391-4427-aaa0-9cef2b10a48c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 16:57:47 crc kubenswrapper[4712]: E0130 16:57:47.053407 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-l4hp7" podUID="1efcd5ba-0391-4427-aaa0-9cef2b10a48c" Jan 30 16:57:52 crc kubenswrapper[4712]: E0130 16:57:52.849522 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-qcfwq" podUID="41581f8f-2b7b-4a20-9f3b-a28c0914b093" Jan 30 16:57:52 crc kubenswrapper[4712]: E0130 16:57:52.849946 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-l4hp7" podUID="1efcd5ba-0391-4427-aaa0-9cef2b10a48c" Jan 30 16:57:53 crc kubenswrapper[4712]: I0130 16:57:53.280130 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 30 16:57:53 crc kubenswrapper[4712]: W0130 16:57:53.280906 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod7cd49b09_e90f_4bfd_b4a0_357240cac04d.slice/crio-dd013e32e31ab822041b1cfbc1ec5e7aa670233bc64e611a112f068bd94ee3c4 WatchSource:0}: Error finding container dd013e32e31ab822041b1cfbc1ec5e7aa670233bc64e611a112f068bd94ee3c4: Status 404 returned error can't find the container with id dd013e32e31ab822041b1cfbc1ec5e7aa670233bc64e611a112f068bd94ee3c4 Jan 30 16:57:53 crc kubenswrapper[4712]: I0130 16:57:53.377213 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 30 16:57:53 crc kubenswrapper[4712]: W0130 16:57:53.386596 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4435b55b_9a94_4971_adc5_51773a0cf108.slice/crio-fb53a1cc987509b0c3886959a554298183144b66c5f4b17a99c1cd04e58bcf75 WatchSource:0}: Error finding container fb53a1cc987509b0c3886959a554298183144b66c5f4b17a99c1cd04e58bcf75: Status 404 returned error can't find the container with id fb53a1cc987509b0c3886959a554298183144b66c5f4b17a99c1cd04e58bcf75 Jan 30 16:57:54 crc kubenswrapper[4712]: I0130 16:57:54.274011 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4435b55b-9a94-4971-adc5-51773a0cf108","Type":"ContainerStarted","Data":"fb53a1cc987509b0c3886959a554298183144b66c5f4b17a99c1cd04e58bcf75"} Jan 30 16:57:54 crc kubenswrapper[4712]: I0130 16:57:54.279922 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-27wq6" event={"ID":"48626025-5e2a-47c8-b317-bcbada105e87","Type":"ContainerStarted","Data":"87d242fddd7b5f8588cd5a66a546b6fec970e1947a68c0b33b6ef85684cebd06"} Jan 30 16:57:54 crc kubenswrapper[4712]: I0130 16:57:54.282777 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7cd49b09-e90f-4bfd-b4a0-357240cac04d","Type":"ContainerStarted","Data":"dd013e32e31ab822041b1cfbc1ec5e7aa670233bc64e611a112f068bd94ee3c4"} Jan 30 16:57:55 crc kubenswrapper[4712]: E0130 16:57:55.168602 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from 
manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 30 16:57:55 crc kubenswrapper[4712]: E0130 16:57:55.169046 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tmg4t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-hzqrq_openshift-marketplace(fc1192c4-3b0c-4421-8e71-17e8731ffe34): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 16:57:55 crc kubenswrapper[4712]: E0130 16:57:55.170180 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-hzqrq" podUID="fc1192c4-3b0c-4421-8e71-17e8731ffe34" Jan 30 16:57:55 crc kubenswrapper[4712]: I0130 16:57:55.290723 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lpb6h" event={"ID":"abacbc6e-6514-4db6-80b5-23570952c86f","Type":"ContainerStarted","Data":"efe1aaf5820d35903ad4ec85023dd63e7093262e4a003987c5ed5eae4b83ec92"} Jan 30 16:57:55 crc kubenswrapper[4712]: I0130 16:57:55.291673 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-27wq6" Jan 30 16:57:55 crc kubenswrapper[4712]: I0130 16:57:55.291790 4712 patch_prober.go:28] interesting pod/downloads-7954f5f757-27wq6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Jan 30 16:57:55 crc kubenswrapper[4712]: I0130 16:57:55.291840 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-27wq6" podUID="48626025-5e2a-47c8-b317-bcbada105e87" containerName="download-server" probeResult="failure" output="Get 
\"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" Jan 30 16:57:55 crc kubenswrapper[4712]: I0130 16:57:55.555365 4712 patch_prober.go:28] interesting pod/downloads-7954f5f757-27wq6 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Jan 30 16:57:55 crc kubenswrapper[4712]: I0130 16:57:55.555873 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-27wq6" podUID="48626025-5e2a-47c8-b317-bcbada105e87" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" Jan 30 16:57:55 crc kubenswrapper[4712]: I0130 16:57:55.556080 4712 patch_prober.go:28] interesting pod/downloads-7954f5f757-27wq6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Jan 30 16:57:55 crc kubenswrapper[4712]: I0130 16:57:55.556166 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-27wq6" podUID="48626025-5e2a-47c8-b317-bcbada105e87" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" Jan 30 16:57:56 crc kubenswrapper[4712]: I0130 16:57:56.300306 4712 patch_prober.go:28] interesting pod/downloads-7954f5f757-27wq6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Jan 30 16:57:56 crc kubenswrapper[4712]: I0130 16:57:56.300418 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-27wq6" podUID="48626025-5e2a-47c8-b317-bcbada105e87" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": dial tcp 10.217.0.32:8080: connect: connection refused" Jan 30 16:57:56 crc kubenswrapper[4712]: E0130 16:57:56.340759 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 30 16:57:56 crc kubenswrapper[4712]: E0130 16:57:56.340975 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5nsts,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-pz9vb_openshift-marketplace(b1773095-5051-4668-ae41-1d6c41c43a43): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 16:57:56 crc kubenswrapper[4712]: E0130 16:57:56.342116 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-pz9vb" podUID="b1773095-5051-4668-ae41-1d6c41c43a43" Jan 30 16:58:01 crc kubenswrapper[4712]: E0130 16:58:01.215711 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 30 16:58:01 crc kubenswrapper[4712]: E0130 16:58:01.216156 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-swznj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-jmc9f_openshift-marketplace(0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 16:58:01 crc kubenswrapper[4712]: E0130 16:58:01.217482 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-jmc9f" podUID="0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3" Jan 30 16:58:05 crc kubenswrapper[4712]: I0130 16:58:05.561884 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-27wq6" Jan 30 16:58:06 crc kubenswrapper[4712]: I0130 16:58:06.270659 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 16:58:06 crc kubenswrapper[4712]: I0130 16:58:06.271207 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 16:58:06 crc kubenswrapper[4712]: I0130 16:58:06.271783 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" Jan 30 16:58:06 crc kubenswrapper[4712]: I0130 16:58:06.272537 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5"} pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 
16:58:06 crc kubenswrapper[4712]: I0130 16:58:06.272612 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" containerID="cri-o://08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5" gracePeriod=600 Jan 30 16:58:09 crc kubenswrapper[4712]: E0130 16:58:09.657353 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-jmc9f" podUID="0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3" Jan 30 16:58:10 crc kubenswrapper[4712]: I0130 16:58:10.374118 4712 generic.go:334] "Generic (PLEG): container finished" podID="75ff6334-72a0-4748-bba6-0efb493c8033" containerID="08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5" exitCode=0 Jan 30 16:58:10 crc kubenswrapper[4712]: I0130 16:58:10.374642 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerDied","Data":"08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5"} Jan 30 16:58:11 crc kubenswrapper[4712]: I0130 16:58:11.384950 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7cd49b09-e90f-4bfd-b4a0-357240cac04d","Type":"ContainerStarted","Data":"4287635c33311fdd1fbae79a0dec75197c80fc426139d566b3b6f28fe546a276"} Jan 30 16:58:11 crc kubenswrapper[4712]: I0130 16:58:11.386860 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-lpb6h" event={"ID":"abacbc6e-6514-4db6-80b5-23570952c86f","Type":"ContainerStarted","Data":"0bf7e0b990f37bb403275aefa72e16d8108db2b737536d3a8658d28e87bb56e3"} Jan 30 16:58:11 crc kubenswrapper[4712]: I0130 16:58:11.402839 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=33.402817863 podStartE2EDuration="33.402817863s" podCreationTimestamp="2026-01-30 16:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:11.400642099 +0000 UTC m=+228.307651568" watchObservedRunningTime="2026-01-30 16:58:11.402817863 +0000 UTC m=+228.309827332" Jan 30 16:58:12 crc kubenswrapper[4712]: I0130 16:58:12.395595 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4435b55b-9a94-4971-adc5-51773a0cf108","Type":"ContainerStarted","Data":"29f462277f77fe33a6de04e1266c2565cee7986343e4d5833dbda1acec003ef0"} Jan 30 16:58:13 crc kubenswrapper[4712]: I0130 16:58:13.428069 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-lpb6h" podStartSLOduration=209.428048406 podStartE2EDuration="3m29.428048406s" podCreationTimestamp="2026-01-30 16:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:13.425451841 +0000 UTC m=+230.332461330" watchObservedRunningTime="2026-01-30 16:58:13.428048406 +0000 UTC m=+230.335057895" Jan 30 16:58:15 crc kubenswrapper[4712]: I0130 16:58:15.421422 4712 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerStarted","Data":"d7b3cacd3abb88020219dba30c60b6f2729cab9aeaf86d8f857517015ac6486b"}
Jan 30 16:58:15 crc kubenswrapper[4712]: I0130 16:58:15.441962 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=42.441930762 podStartE2EDuration="42.441930762s" podCreationTimestamp="2026-01-30 16:57:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:58:15.43648914 +0000 UTC m=+232.343498649" watchObservedRunningTime="2026-01-30 16:58:15.441930762 +0000 UTC m=+232.348940251"
Jan 30 16:58:16 crc kubenswrapper[4712]: I0130 16:58:16.435614 4712 generic.go:334] "Generic (PLEG): container finished" podID="4435b55b-9a94-4971-adc5-51773a0cf108" containerID="29f462277f77fe33a6de04e1266c2565cee7986343e4d5833dbda1acec003ef0" exitCode=0
Jan 30 16:58:16 crc kubenswrapper[4712]: I0130 16:58:16.435888 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4435b55b-9a94-4971-adc5-51773a0cf108","Type":"ContainerDied","Data":"29f462277f77fe33a6de04e1266c2565cee7986343e4d5833dbda1acec003ef0"}
Jan 30 16:58:22 crc kubenswrapper[4712]: I0130 16:58:22.929122 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 16:58:22 crc kubenswrapper[4712]: I0130 16:58:22.981908 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4435b55b-9a94-4971-adc5-51773a0cf108-kube-api-access\") pod \"4435b55b-9a94-4971-adc5-51773a0cf108\" (UID: \"4435b55b-9a94-4971-adc5-51773a0cf108\") "
Jan 30 16:58:22 crc kubenswrapper[4712]: I0130 16:58:22.981966 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4435b55b-9a94-4971-adc5-51773a0cf108-kubelet-dir\") pod \"4435b55b-9a94-4971-adc5-51773a0cf108\" (UID: \"4435b55b-9a94-4971-adc5-51773a0cf108\") "
Jan 30 16:58:22 crc kubenswrapper[4712]: I0130 16:58:22.982088 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4435b55b-9a94-4971-adc5-51773a0cf108-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4435b55b-9a94-4971-adc5-51773a0cf108" (UID: "4435b55b-9a94-4971-adc5-51773a0cf108"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 16:58:22 crc kubenswrapper[4712]: I0130 16:58:22.982254 4712 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4435b55b-9a94-4971-adc5-51773a0cf108-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 30 16:58:22 crc kubenswrapper[4712]: I0130 16:58:22.990672 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4435b55b-9a94-4971-adc5-51773a0cf108-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4435b55b-9a94-4971-adc5-51773a0cf108" (UID: "4435b55b-9a94-4971-adc5-51773a0cf108"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:58:23 crc kubenswrapper[4712]: I0130 16:58:23.084001 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4435b55b-9a94-4971-adc5-51773a0cf108-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 30 16:58:23 crc kubenswrapper[4712]: I0130 16:58:23.477076 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4435b55b-9a94-4971-adc5-51773a0cf108","Type":"ContainerDied","Data":"fb53a1cc987509b0c3886959a554298183144b66c5f4b17a99c1cd04e58bcf75"}
Jan 30 16:58:23 crc kubenswrapper[4712]: I0130 16:58:23.477109 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb53a1cc987509b0c3886959a554298183144b66c5f4b17a99c1cd04e58bcf75"
Jan 30 16:58:23 crc kubenswrapper[4712]: I0130 16:58:23.477157 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Jan 30 16:58:28 crc kubenswrapper[4712]: I0130 16:58:28.507954 4712 generic.go:334] "Generic (PLEG): container finished" podID="be58da2a-7470-403f-a094-ca2bac2dbccd" containerID="cb3a7e2f867d3c6f7457ae22fdf12af88f72b9cfd0db8b65dbc2d94c811f9b5b" exitCode=0
Jan 30 16:58:28 crc kubenswrapper[4712]: I0130 16:58:28.508043 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlkwf" event={"ID":"be58da2a-7470-403f-a094-ca2bac2dbccd","Type":"ContainerDied","Data":"cb3a7e2f867d3c6f7457ae22fdf12af88f72b9cfd0db8b65dbc2d94c811f9b5b"}
Jan 30 16:58:28 crc kubenswrapper[4712]: I0130 16:58:28.511556 4712 generic.go:334] "Generic (PLEG): container finished" podID="41581f8f-2b7b-4a20-9f3b-a28c0914b093" containerID="065348e4159f1b0c991ac4fc57e593586da10f0f4a2d6fcef9ca3776c4d0f853" exitCode=0
Jan 30 16:58:28 crc kubenswrapper[4712]: I0130 16:58:28.511606 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qcfwq" event={"ID":"41581f8f-2b7b-4a20-9f3b-a28c0914b093","Type":"ContainerDied","Data":"065348e4159f1b0c991ac4fc57e593586da10f0f4a2d6fcef9ca3776c4d0f853"}
Jan 30 16:58:28 crc kubenswrapper[4712]: I0130 16:58:28.514430 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pz9vb" event={"ID":"b1773095-5051-4668-ae41-1d6c41c43a43","Type":"ContainerStarted","Data":"e085799df15886fb0653f05d22b19f1b410633ee63b8dcb426be7310a94c59e7"}
Jan 30 16:58:28 crc kubenswrapper[4712]: I0130 16:58:28.516358 4712 generic.go:334] "Generic (PLEG): container finished" podID="1efcd5ba-0391-4427-aaa0-9cef2b10a48c" containerID="b3e32a9e83ccdacb3b89467221da3a64b6fac01526af98a0fdbc9eaf5e8a7c3e" exitCode=0
Jan 30 16:58:28 crc kubenswrapper[4712]: I0130 16:58:28.516413 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l4hp7" event={"ID":"1efcd5ba-0391-4427-aaa0-9cef2b10a48c","Type":"ContainerDied","Data":"b3e32a9e83ccdacb3b89467221da3a64b6fac01526af98a0fdbc9eaf5e8a7c3e"}
Jan 30 16:58:28 crc kubenswrapper[4712]: I0130 16:58:28.520500 4712 generic.go:334] "Generic (PLEG): container finished" podID="fda2fdd1-0c89-4398-8e0a-545311fe5ae9" containerID="65fb9d57bcd444f16dbba66a7afdefa1c7a37cb175c6902c8e974d2ecabb7ea7" exitCode=0
Jan 30 16:58:28 crc kubenswrapper[4712]: I0130 16:58:28.520541 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4f5w5" event={"ID":"fda2fdd1-0c89-4398-8e0a-545311fe5ae9","Type":"ContainerDied","Data":"65fb9d57bcd444f16dbba66a7afdefa1c7a37cb175c6902c8e974d2ecabb7ea7"}
Jan 30 16:58:28 crc kubenswrapper[4712]: I0130 16:58:28.525289 4712 generic.go:334] "Generic (PLEG): container finished" podID="f329fa29-ce56-44e1-9384-0347dbc67c55" containerID="81ee0cefd682cf0b67045bcf905a2fb28823c6c6af3646054a993093c2838593" exitCode=0
Jan 30 16:58:28 crc kubenswrapper[4712]: I0130 16:58:28.525360 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r9tqz" event={"ID":"f329fa29-ce56-44e1-9384-0347dbc67c55","Type":"ContainerDied","Data":"81ee0cefd682cf0b67045bcf905a2fb28823c6c6af3646054a993093c2838593"}
Jan 30 16:58:28 crc kubenswrapper[4712]: I0130 16:58:28.535982 4712 generic.go:334] "Generic (PLEG): container finished" podID="0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3" containerID="18cf1d50fac095bdbb05ffe8e671602be9456c39a8a24a86eb38829986319e87" exitCode=0
Jan 30 16:58:28 crc kubenswrapper[4712]: I0130 16:58:28.536068 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jmc9f" event={"ID":"0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3","Type":"ContainerDied","Data":"18cf1d50fac095bdbb05ffe8e671602be9456c39a8a24a86eb38829986319e87"}
Jan 30 16:58:28 crc kubenswrapper[4712]: I0130 16:58:28.539469 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hzqrq" event={"ID":"fc1192c4-3b0c-4421-8e71-17e8731ffe34","Type":"ContainerStarted","Data":"acbb8cd0158d7e4391a035ba8299e61657f10e17fc98619349834f87e7c01dc4"}
Jan 30 16:58:29 crc kubenswrapper[4712]: I0130 16:58:29.548510 4712 generic.go:334] "Generic (PLEG): container finished" podID="fc1192c4-3b0c-4421-8e71-17e8731ffe34" containerID="acbb8cd0158d7e4391a035ba8299e61657f10e17fc98619349834f87e7c01dc4" exitCode=0
Jan 30 16:58:29 crc kubenswrapper[4712]: I0130 16:58:29.548598 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hzqrq" event={"ID":"fc1192c4-3b0c-4421-8e71-17e8731ffe34","Type":"ContainerDied","Data":"acbb8cd0158d7e4391a035ba8299e61657f10e17fc98619349834f87e7c01dc4"}
Jan 30 16:58:29 crc kubenswrapper[4712]: I0130 16:58:29.552692 4712 generic.go:334] "Generic (PLEG): container finished" podID="b1773095-5051-4668-ae41-1d6c41c43a43" containerID="e085799df15886fb0653f05d22b19f1b410633ee63b8dcb426be7310a94c59e7" exitCode=0
Jan 30 16:58:29 crc kubenswrapper[4712]: I0130 16:58:29.552753 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pz9vb" event={"ID":"b1773095-5051-4668-ae41-1d6c41c43a43","Type":"ContainerDied","Data":"e085799df15886fb0653f05d22b19f1b410633ee63b8dcb426be7310a94c59e7"}
Jan 30 16:58:33 crc kubenswrapper[4712]: I0130 16:58:33.584053 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r9tqz" event={"ID":"f329fa29-ce56-44e1-9384-0347dbc67c55","Type":"ContainerStarted","Data":"fe3c739862d62f094dac1b21d8d957f900e5a2c047e70e43c4d3d86c320cb60c"}
Jan 30 16:58:34 crc kubenswrapper[4712]: I0130 16:58:34.617638 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-r9tqz" podStartSLOduration=6.132390961 podStartE2EDuration="1m41.617616005s" podCreationTimestamp="2026-01-30 16:56:53 +0000 UTC" firstStartedPulling="2026-01-30 16:56:56.646980407 +0000 UTC m=+153.553989876" lastFinishedPulling="2026-01-30 16:58:32.132205451 +0000 UTC m=+249.039214920" observedRunningTime="2026-01-30 16:58:34.613482636 +0000 UTC m=+251.520492175" watchObservedRunningTime="2026-01-30 16:58:34.617616005 +0000 UTC m=+251.524625484"
Jan 30 16:58:44 crc kubenswrapper[4712]: I0130 16:58:44.245084 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-r9tqz"
Jan 30 16:58:44 crc kubenswrapper[4712]: I0130 16:58:44.245573 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-r9tqz"
Jan 30 16:58:45 crc kubenswrapper[4712]: I0130 16:58:45.142219 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-r9tqz"
Jan 30 16:58:45 crc kubenswrapper[4712]: I0130 16:58:45.202533 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-r9tqz"
Jan 30 16:58:45 crc kubenswrapper[4712]: I0130 16:58:45.385949 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r9tqz"]
Jan 30 16:58:46 crc kubenswrapper[4712]: I0130 16:58:46.758614 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4f5w5" event={"ID":"fda2fdd1-0c89-4398-8e0a-545311fe5ae9","Type":"ContainerStarted","Data":"c516827b18fc293b250a7445e45356829e42c68cec9d9c06f7b819553b51ac2d"}
Jan 30 16:58:46 crc kubenswrapper[4712]: I0130 16:58:46.761376 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jmc9f" event={"ID":"0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3","Type":"ContainerStarted","Data":"f5081164073ba573f2cd9e2593232cf760a0699b3ed2bcf27e1f4f8d59b22d3d"}
Jan 30 16:58:46 crc kubenswrapper[4712]: I0130 16:58:46.763617 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hzqrq" event={"ID":"fc1192c4-3b0c-4421-8e71-17e8731ffe34","Type":"ContainerStarted","Data":"c3b6de8405a52677f3708c4498b18088d86bb736dc15e718da0385a6e087fe6d"}
Jan 30 16:58:46 crc kubenswrapper[4712]: I0130 16:58:46.765948 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlkwf" event={"ID":"be58da2a-7470-403f-a094-ca2bac2dbccd","Type":"ContainerStarted","Data":"a4c5509f14aabecabcfd6aa93012f8d0d83e2a14c0b5bb64ee439354cec44f7b"}
Jan 30 16:58:46 crc kubenswrapper[4712]: I0130 16:58:46.768125 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qcfwq" event={"ID":"41581f8f-2b7b-4a20-9f3b-a28c0914b093","Type":"ContainerStarted","Data":"407828f09cdf9f94d0974d2b1f4377deab2028294a4b23dad9b0370c1832cd80"}
Jan 30 16:58:46 crc kubenswrapper[4712]: I0130 16:58:46.770087 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pz9vb" event={"ID":"b1773095-5051-4668-ae41-1d6c41c43a43","Type":"ContainerStarted","Data":"9a282f05117053f8c5ef035c19561f2966ec168d1b959f686fd46ddd1c945183"}
Jan 30 16:58:46 crc kubenswrapper[4712]: I0130 16:58:46.771968 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-r9tqz" podUID="f329fa29-ce56-44e1-9384-0347dbc67c55" containerName="registry-server" containerID="cri-o://fe3c739862d62f094dac1b21d8d957f900e5a2c047e70e43c4d3d86c320cb60c" gracePeriod=2
Jan 30 16:58:46 crc kubenswrapper[4712]: I0130 16:58:46.772031 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l4hp7" event={"ID":"1efcd5ba-0391-4427-aaa0-9cef2b10a48c","Type":"ContainerStarted","Data":"f547e13b56cf10155f9b1e29c215e581be33f98f830856e15ad224b81b461f02"}
Jan 30 16:58:46 crc kubenswrapper[4712]: I0130 16:58:46.780471 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4f5w5" podStartSLOduration=13.162998194 podStartE2EDuration="1m53.780456301s" podCreationTimestamp="2026-01-30 16:56:53 +0000 UTC" firstStartedPulling="2026-01-30 16:56:56.616306356 +0000 UTC m=+153.523315835" lastFinishedPulling="2026-01-30 16:58:37.233764473 +0000 UTC m=+254.140773942" observedRunningTime="2026-01-30 16:58:46.778886589 +0000 UTC m=+263.685896058" watchObservedRunningTime="2026-01-30 16:58:46.780456301 +0000 UTC m=+263.687465770"
Jan 30 16:58:46 crc kubenswrapper[4712]: I0130 16:58:46.803671 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pz9vb" podStartSLOduration=3.414060814 podStartE2EDuration="1m50.803655857s" podCreationTimestamp="2026-01-30 16:56:56 +0000 UTC" firstStartedPulling="2026-01-30 16:56:57.734018452 +0000 UTC m=+154.641027921" lastFinishedPulling="2026-01-30 16:58:45.123613495 +0000 UTC m=+262.030622964" observedRunningTime="2026-01-30 16:58:46.801630336 +0000 UTC m=+263.708639795" watchObservedRunningTime="2026-01-30 16:58:46.803655857 +0000 UTC m=+263.710665326"
Jan 30 16:58:46 crc kubenswrapper[4712]: I0130 16:58:46.812787 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pz9vb"
Jan 30 16:58:46 crc kubenswrapper[4712]: I0130 16:58:46.812855 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pz9vb"
Jan 30 16:58:46 crc kubenswrapper[4712]: I0130 16:58:46.866967 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hzqrq" podStartSLOduration=4.567422057 podStartE2EDuration="1m50.86694951s" podCreationTimestamp="2026-01-30 16:56:56 +0000 UTC" firstStartedPulling="2026-01-30 16:56:58.824688834 +0000 UTC m=+155.731698303" lastFinishedPulling="2026-01-30 16:58:45.124216287 +0000 UTC m=+262.031225756" observedRunningTime="2026-01-30 16:58:46.834711672 +0000 UTC m=+263.741721141" watchObservedRunningTime="2026-01-30 16:58:46.86694951 +0000 UTC m=+263.773958979"
Jan 30 16:58:46 crc kubenswrapper[4712]: I0130 16:58:46.891492 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dlkwf" podStartSLOduration=3.628159711 podStartE2EDuration="1m53.891475592s" podCreationTimestamp="2026-01-30 16:56:53 +0000 UTC" firstStartedPulling="2026-01-30 16:56:55.571407469 +0000 UTC m=+152.478416938" lastFinishedPulling="2026-01-30 16:58:45.83472335 +0000 UTC m=+262.741732819" observedRunningTime="2026-01-30 16:58:46.870225275 +0000 UTC m=+263.777234744" watchObservedRunningTime="2026-01-30 16:58:46.891475592 +0000 UTC m=+263.798485061"
Jan 30 16:58:46 crc kubenswrapper[4712]: I0130 16:58:46.892342 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jmc9f" podStartSLOduration=5.056051076 podStartE2EDuration="1m51.89233438s" podCreationTimestamp="2026-01-30 16:56:55 +0000 UTC" firstStartedPulling="2026-01-30 16:56:56.656485477 +0000 UTC m=+153.563494946" lastFinishedPulling="2026-01-30 16:58:43.492768781 +0000 UTC m=+260.399778250" observedRunningTime="2026-01-30 16:58:46.888740517 +0000 UTC m=+263.795749996" watchObservedRunningTime="2026-01-30 16:58:46.89233438 +0000 UTC m=+263.799343849"
Jan 30 16:58:46 crc kubenswrapper[4712]: I0130 16:58:46.942736 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l4hp7" podStartSLOduration=9.45375442 podStartE2EDuration="1m51.942722083s" podCreationTimestamp="2026-01-30 16:56:55 +0000 UTC" firstStartedPulling="2026-01-30 16:56:57.775821131 +0000 UTC m=+154.682830600" lastFinishedPulling="2026-01-30 16:58:40.264788784 +0000 UTC m=+257.171798263" observedRunningTime="2026-01-30 16:58:46.919986226 +0000 UTC m=+263.826995685" watchObservedRunningTime="2026-01-30 16:58:46.942722083 +0000 UTC m=+263.849731552"
Jan 30 16:58:47 crc kubenswrapper[4712]: I0130 16:58:47.202537 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hzqrq"
Jan 30 16:58:47 crc kubenswrapper[4712]: I0130 16:58:47.202896 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hzqrq"
Jan 30 16:58:47 crc kubenswrapper[4712]: I0130 16:58:47.272183 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r9tqz"
Jan 30 16:58:47 crc kubenswrapper[4712]: I0130 16:58:47.291651 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qcfwq" podStartSLOduration=4.097257376 podStartE2EDuration="1m54.291629616s" podCreationTimestamp="2026-01-30 16:56:53 +0000 UTC" firstStartedPulling="2026-01-30 16:56:55.580060298 +0000 UTC m=+152.487069757" lastFinishedPulling="2026-01-30 16:58:45.774432518 +0000 UTC m=+262.681441997" observedRunningTime="2026-01-30 16:58:46.944013379 +0000 UTC m=+263.851022858" watchObservedRunningTime="2026-01-30 16:58:47.291629616 +0000 UTC m=+264.198639085"
Jan 30 16:58:47 crc kubenswrapper[4712]: I0130 16:58:47.443082 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f329fa29-ce56-44e1-9384-0347dbc67c55-catalog-content\") pod \"f329fa29-ce56-44e1-9384-0347dbc67c55\" (UID: \"f329fa29-ce56-44e1-9384-0347dbc67c55\") "
Jan 30 16:58:47 crc kubenswrapper[4712]: I0130 16:58:47.443164 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89mk9\" (UniqueName: \"kubernetes.io/projected/f329fa29-ce56-44e1-9384-0347dbc67c55-kube-api-access-89mk9\") pod \"f329fa29-ce56-44e1-9384-0347dbc67c55\" (UID: \"f329fa29-ce56-44e1-9384-0347dbc67c55\") "
Jan 30 16:58:47 crc kubenswrapper[4712]: I0130 16:58:47.443220 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f329fa29-ce56-44e1-9384-0347dbc67c55-utilities\") pod \"f329fa29-ce56-44e1-9384-0347dbc67c55\" (UID: \"f329fa29-ce56-44e1-9384-0347dbc67c55\") "
Jan 30 16:58:47 crc kubenswrapper[4712]: I0130 16:58:47.443892 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f329fa29-ce56-44e1-9384-0347dbc67c55-utilities" (OuterVolumeSpecName: "utilities") pod "f329fa29-ce56-44e1-9384-0347dbc67c55" (UID: "f329fa29-ce56-44e1-9384-0347dbc67c55"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:58:47 crc kubenswrapper[4712]: I0130 16:58:47.444192 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f329fa29-ce56-44e1-9384-0347dbc67c55-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 16:58:47 crc kubenswrapper[4712]: I0130 16:58:47.450915 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f329fa29-ce56-44e1-9384-0347dbc67c55-kube-api-access-89mk9" (OuterVolumeSpecName: "kube-api-access-89mk9") pod "f329fa29-ce56-44e1-9384-0347dbc67c55" (UID: "f329fa29-ce56-44e1-9384-0347dbc67c55"). InnerVolumeSpecName "kube-api-access-89mk9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:58:47 crc kubenswrapper[4712]: I0130 16:58:47.504478 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f329fa29-ce56-44e1-9384-0347dbc67c55-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f329fa29-ce56-44e1-9384-0347dbc67c55" (UID: "f329fa29-ce56-44e1-9384-0347dbc67c55"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 16:58:47 crc kubenswrapper[4712]: I0130 16:58:47.545698 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89mk9\" (UniqueName: \"kubernetes.io/projected/f329fa29-ce56-44e1-9384-0347dbc67c55-kube-api-access-89mk9\") on node \"crc\" DevicePath \"\""
Jan 30 16:58:47 crc kubenswrapper[4712]: I0130 16:58:47.545732 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f329fa29-ce56-44e1-9384-0347dbc67c55-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 16:58:47 crc kubenswrapper[4712]: I0130 16:58:47.778771 4712 generic.go:334] "Generic (PLEG): container finished" podID="f329fa29-ce56-44e1-9384-0347dbc67c55" containerID="fe3c739862d62f094dac1b21d8d957f900e5a2c047e70e43c4d3d86c320cb60c" exitCode=0
Jan 30 16:58:47 crc kubenswrapper[4712]: I0130 16:58:47.779777 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-r9tqz"
Jan 30 16:58:47 crc kubenswrapper[4712]: I0130 16:58:47.780862 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r9tqz" event={"ID":"f329fa29-ce56-44e1-9384-0347dbc67c55","Type":"ContainerDied","Data":"fe3c739862d62f094dac1b21d8d957f900e5a2c047e70e43c4d3d86c320cb60c"}
Jan 30 16:58:47 crc kubenswrapper[4712]: I0130 16:58:47.780913 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-r9tqz" event={"ID":"f329fa29-ce56-44e1-9384-0347dbc67c55","Type":"ContainerDied","Data":"1ec90d22456bb0c69513a53bd3db2d010c0bf5e4bd65ad667822ade915d33127"}
Jan 30 16:58:47 crc kubenswrapper[4712]: I0130 16:58:47.780930 4712 scope.go:117] "RemoveContainer" containerID="fe3c739862d62f094dac1b21d8d957f900e5a2c047e70e43c4d3d86c320cb60c"
Jan 30 16:58:47 crc kubenswrapper[4712]: I0130 16:58:47.799891 4712 scope.go:117] "RemoveContainer" containerID="81ee0cefd682cf0b67045bcf905a2fb28823c6c6af3646054a993093c2838593"
Jan 30 16:58:47 crc kubenswrapper[4712]: I0130 16:58:47.808012 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-r9tqz"]
Jan 30 16:58:47 crc kubenswrapper[4712]: I0130 16:58:47.810414 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-r9tqz"]
Jan 30 16:58:47 crc kubenswrapper[4712]: I0130 16:58:47.831679 4712 scope.go:117] "RemoveContainer" containerID="c9475c5b67888fefddc49204422cbd041efbf5a06213e6b0687c4fb1442569f7"
Jan 30 16:58:47 crc kubenswrapper[4712]: I0130 16:58:47.845310 4712 scope.go:117] "RemoveContainer" containerID="fe3c739862d62f094dac1b21d8d957f900e5a2c047e70e43c4d3d86c320cb60c"
Jan 30 16:58:47 crc kubenswrapper[4712]: E0130 16:58:47.845711 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe3c739862d62f094dac1b21d8d957f900e5a2c047e70e43c4d3d86c320cb60c\": container with ID starting with fe3c739862d62f094dac1b21d8d957f900e5a2c047e70e43c4d3d86c320cb60c not found: ID does not exist" containerID="fe3c739862d62f094dac1b21d8d957f900e5a2c047e70e43c4d3d86c320cb60c"
Jan 30 16:58:47 crc kubenswrapper[4712]: I0130 16:58:47.845741 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe3c739862d62f094dac1b21d8d957f900e5a2c047e70e43c4d3d86c320cb60c"} err="failed to get container status \"fe3c739862d62f094dac1b21d8d957f900e5a2c047e70e43c4d3d86c320cb60c\": rpc error: code = NotFound desc = could not find container \"fe3c739862d62f094dac1b21d8d957f900e5a2c047e70e43c4d3d86c320cb60c\": container with ID starting with fe3c739862d62f094dac1b21d8d957f900e5a2c047e70e43c4d3d86c320cb60c not found: ID does not exist"
Jan 30 16:58:47 crc kubenswrapper[4712]: I0130 16:58:47.845760 4712 scope.go:117] "RemoveContainer" containerID="81ee0cefd682cf0b67045bcf905a2fb28823c6c6af3646054a993093c2838593"
Jan 30 16:58:47 crc kubenswrapper[4712]: E0130 16:58:47.846039 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81ee0cefd682cf0b67045bcf905a2fb28823c6c6af3646054a993093c2838593\": container with ID starting with 81ee0cefd682cf0b67045bcf905a2fb28823c6c6af3646054a993093c2838593 not found: ID does not exist" containerID="81ee0cefd682cf0b67045bcf905a2fb28823c6c6af3646054a993093c2838593"
Jan 30 16:58:47 crc kubenswrapper[4712]: I0130 16:58:47.846063 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81ee0cefd682cf0b67045bcf905a2fb28823c6c6af3646054a993093c2838593"} err="failed to get container status \"81ee0cefd682cf0b67045bcf905a2fb28823c6c6af3646054a993093c2838593\": rpc error: code = NotFound desc = could not find container \"81ee0cefd682cf0b67045bcf905a2fb28823c6c6af3646054a993093c2838593\": container with ID starting with 81ee0cefd682cf0b67045bcf905a2fb28823c6c6af3646054a993093c2838593 not found: ID does not exist"
Jan 30 16:58:47 crc kubenswrapper[4712]: I0130 16:58:47.846076 4712 scope.go:117] "RemoveContainer" containerID="c9475c5b67888fefddc49204422cbd041efbf5a06213e6b0687c4fb1442569f7"
Jan 30 16:58:47 crc kubenswrapper[4712]: E0130 16:58:47.846285 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9475c5b67888fefddc49204422cbd041efbf5a06213e6b0687c4fb1442569f7\": container with ID starting with c9475c5b67888fefddc49204422cbd041efbf5a06213e6b0687c4fb1442569f7 not found: ID does not exist" containerID="c9475c5b67888fefddc49204422cbd041efbf5a06213e6b0687c4fb1442569f7"
Jan 30 16:58:47 crc kubenswrapper[4712]: I0130 16:58:47.846310 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9475c5b67888fefddc49204422cbd041efbf5a06213e6b0687c4fb1442569f7"} err="failed to get container status \"c9475c5b67888fefddc49204422cbd041efbf5a06213e6b0687c4fb1442569f7\": rpc error: code = NotFound desc = could not find container \"c9475c5b67888fefddc49204422cbd041efbf5a06213e6b0687c4fb1442569f7\": container with ID starting with c9475c5b67888fefddc49204422cbd041efbf5a06213e6b0687c4fb1442569f7 not found: ID does not exist"
Jan 30 16:58:47 crc kubenswrapper[4712]: I0130 16:58:47.852912 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-pz9vb" podUID="b1773095-5051-4668-ae41-1d6c41c43a43" containerName="registry-server" probeResult="failure" output=<
Jan 30 16:58:47 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 16:58:47 crc kubenswrapper[4712]: >
Jan 30 16:58:48 crc kubenswrapper[4712]: I0130 16:58:48.254738 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hzqrq" podUID="fc1192c4-3b0c-4421-8e71-17e8731ffe34" containerName="registry-server" probeResult="failure" output=<
Jan 30 16:58:48 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 16:58:48 crc kubenswrapper[4712]: >
Jan 30 16:58:48 crc kubenswrapper[4712]: I0130 16:58:48.995112 4712 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 30 16:58:48 crc kubenswrapper[4712]: I0130 16:58:48.995472 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958" gracePeriod=15
Jan 30 16:58:48 crc kubenswrapper[4712]: I0130 16:58:48.995522 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://65d979ececdc18afdb1f364feaa3244db42fef8e523e58fc43b94b1775f9530e" gracePeriod=15
Jan 30 16:58:48 crc kubenswrapper[4712]: I0130 16:58:48.995627 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54" gracePeriod=15
Jan 30 16:58:48 crc kubenswrapper[4712]: I0130 16:58:48.995693 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5" gracePeriod=15
Jan 30 16:58:48 crc kubenswrapper[4712]: I0130 16:58:48.995709 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291" gracePeriod=15
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.000428 4712 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 30 16:58:49 crc kubenswrapper[4712]: E0130 16:58:49.000859 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.000879 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 30 16:58:49 crc kubenswrapper[4712]: E0130 16:58:49.000891 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.000898 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 30 16:58:49 crc kubenswrapper[4712]: E0130 16:58:49.000909 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.000938 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 30 16:58:49 crc kubenswrapper[4712]: E0130 16:58:49.000952 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f329fa29-ce56-44e1-9384-0347dbc67c55" containerName="registry-server"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.000959 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="f329fa29-ce56-44e1-9384-0347dbc67c55" containerName="registry-server"
Jan 30 16:58:49 crc kubenswrapper[4712]: E0130 16:58:49.000973 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.000980 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Jan 30 16:58:49 crc kubenswrapper[4712]: E0130 16:58:49.000991 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f329fa29-ce56-44e1-9384-0347dbc67c55" containerName="extract-content"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.000998 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="f329fa29-ce56-44e1-9384-0347dbc67c55" containerName="extract-content"
Jan 30 16:58:49 crc kubenswrapper[4712]: E0130 16:58:49.001007 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.001016 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 30 16:58:49 crc kubenswrapper[4712]: E0130 16:58:49.001032 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4435b55b-9a94-4971-adc5-51773a0cf108" containerName="pruner"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.001039 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="4435b55b-9a94-4971-adc5-51773a0cf108" containerName="pruner"
Jan 30 16:58:49 crc kubenswrapper[4712]: E0130 16:58:49.001050 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.001057 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 30 16:58:49 crc kubenswrapper[4712]: E0130 16:58:49.001066 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.001073 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 30 16:58:49 crc kubenswrapper[4712]: E0130 16:58:49.001084 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f329fa29-ce56-44e1-9384-0347dbc67c55" containerName="extract-utilities"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.001090 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="f329fa29-ce56-44e1-9384-0347dbc67c55" containerName="extract-utilities"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.001208 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.001234 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="f329fa29-ce56-44e1-9384-0347dbc67c55" containerName="registry-server"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.001244 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.001254 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.001263 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.001272 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="4435b55b-9a94-4971-adc5-51773a0cf108" containerName="pruner"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.001281 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.001293 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller"
Jan 30 16:58:49 crc kubenswrapper[4712]: E0130 16:58:49.001422 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.001582 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.001693 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.002888 4712 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.003450 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.006346 4712 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.043890 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.166005 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.166051 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.166446 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.166533 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.166567 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.166587 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.166638 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.166658 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.268052 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.268327 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.268363 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.268402 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.268429 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.268460 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.268484 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.268508 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.268513 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.268550 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.268691 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.268715 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.268756 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.268778 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.268847 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.268890 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.341574 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 30 16:58:49 crc kubenswrapper[4712]: W0130 16:58:49.370189 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-6513baa8902f10eef1c9e6f60cbc969327ced95d5ccd3fbf32901eb73d58c413 WatchSource:0}: Error finding container 6513baa8902f10eef1c9e6f60cbc969327ced95d5ccd3fbf32901eb73d58c413: Status 404 returned error can't find the container with id 6513baa8902f10eef1c9e6f60cbc969327ced95d5ccd3fbf32901eb73d58c413
Jan 30 16:58:49 crc kubenswrapper[4712]: E0130 16:58:49.378993 4712 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.246:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f90bf2ebba243 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 16:58:49.377096259 +0000 UTC m=+266.284105728,LastTimestamp:2026-01-30 16:58:49.377096259 +0000 UTC m=+266.284105728,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.791687 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"31a87cbc364daa9be2641b2aeb2682665571bce727fd470fb55b88b92b119014"}
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.791742 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"6513baa8902f10eef1c9e6f60cbc969327ced95d5ccd3fbf32901eb73d58c413"}
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.792417 4712 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.793673 4712 generic.go:334] "Generic (PLEG): container finished" podID="7cd49b09-e90f-4bfd-b4a0-357240cac04d" containerID="4287635c33311fdd1fbae79a0dec75197c80fc426139d566b3b6f28fe546a276" exitCode=0
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.793741 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7cd49b09-e90f-4bfd-b4a0-357240cac04d","Type":"ContainerDied","Data":"4287635c33311fdd1fbae79a0dec75197c80fc426139d566b3b6f28fe546a276"}
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.794401 4712 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.794745 4712 status_manager.go:851] "Failed to get status for pod" podUID="7cd49b09-e90f-4bfd-b4a0-357240cac04d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.795742 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.797586 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.798557 4712 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="65d979ececdc18afdb1f364feaa3244db42fef8e523e58fc43b94b1775f9530e" exitCode=0
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.798583 4712 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54" exitCode=0
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.798592 4712 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5" exitCode=0
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.798600 4712 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291" exitCode=2
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.806945 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f329fa29-ce56-44e1-9384-0347dbc67c55" path="/var/lib/kubelet/pods/f329fa29-ce56-44e1-9384-0347dbc67c55/volumes"
Jan 30 16:58:49 crc kubenswrapper[4712]: I0130 16:58:49.807873 4712 scope.go:117] "RemoveContainer" containerID="e637fe3626cc4a10c5706cce0e8db606a6831d898ac090769d1b8316da46980e"
Jan 30 16:58:50 crc kubenswrapper[4712]: I0130 16:58:50.806787 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.048234 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.048940 4712 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.049299 4712 status_manager.go:851] "Failed to get status for pod" podUID="7cd49b09-e90f-4bfd-b4a0-357240cac04d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.190938 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7cd49b09-e90f-4bfd-b4a0-357240cac04d-kube-api-access\") pod \"7cd49b09-e90f-4bfd-b4a0-357240cac04d\" (UID: \"7cd49b09-e90f-4bfd-b4a0-357240cac04d\") "
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.191442 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7cd49b09-e90f-4bfd-b4a0-357240cac04d-var-lock\") pod \"7cd49b09-e90f-4bfd-b4a0-357240cac04d\" (UID: \"7cd49b09-e90f-4bfd-b4a0-357240cac04d\") "
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.191535 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cd49b09-e90f-4bfd-b4a0-357240cac04d-var-lock" (OuterVolumeSpecName: "var-lock") pod "7cd49b09-e90f-4bfd-b4a0-357240cac04d" (UID: "7cd49b09-e90f-4bfd-b4a0-357240cac04d"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.191604 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7cd49b09-e90f-4bfd-b4a0-357240cac04d-kubelet-dir\") pod \"7cd49b09-e90f-4bfd-b4a0-357240cac04d\" (UID: \"7cd49b09-e90f-4bfd-b4a0-357240cac04d\") "
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.191671 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cd49b09-e90f-4bfd-b4a0-357240cac04d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7cd49b09-e90f-4bfd-b4a0-357240cac04d" (UID: "7cd49b09-e90f-4bfd-b4a0-357240cac04d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.191897 4712 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7cd49b09-e90f-4bfd-b4a0-357240cac04d-var-lock\") on node \"crc\" DevicePath \"\""
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.191925 4712 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7cd49b09-e90f-4bfd-b4a0-357240cac04d-kubelet-dir\") on node \"crc\" DevicePath \"\""
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.196377 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cd49b09-e90f-4bfd-b4a0-357240cac04d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7cd49b09-e90f-4bfd-b4a0-357240cac04d" (UID: "7cd49b09-e90f-4bfd-b4a0-357240cac04d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.293262 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7cd49b09-e90f-4bfd-b4a0-357240cac04d-kube-api-access\") on node \"crc\" DevicePath \"\""
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.468443 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.470330 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.471469 4712 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.471666 4712 status_manager.go:851] "Failed to get status for pod" podUID="7cd49b09-e90f-4bfd-b4a0-357240cac04d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.471969 4712 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.596974 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.597050 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.597092 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.597104 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") "
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.597125 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.597147 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.597409 4712 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.597425 4712 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\""
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.597437 4712 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\""
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.807381 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes"
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.815579 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log"
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.816291 4712 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958" exitCode=0
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.816389 4712 scope.go:117] "RemoveContainer" containerID="65d979ececdc18afdb1f364feaa3244db42fef8e523e58fc43b94b1775f9530e"
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.816427 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.817167 4712 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.817618 4712 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.818058 4712 status_manager.go:851] "Failed to get status for pod" podUID="7cd49b09-e90f-4bfd-b4a0-357240cac04d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.821639 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"7cd49b09-e90f-4bfd-b4a0-357240cac04d","Type":"ContainerDied","Data":"dd013e32e31ab822041b1cfbc1ec5e7aa670233bc64e611a112f068bd94ee3c4"}
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.821669 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd013e32e31ab822041b1cfbc1ec5e7aa670233bc64e611a112f068bd94ee3c4"
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.821690 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc"
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.842310 4712 scope.go:117] "RemoveContainer" containerID="c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54"
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.848237 4712 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.848874 4712 status_manager.go:851] "Failed to get status for pod" podUID="7cd49b09-e90f-4bfd-b4a0-357240cac04d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.849645 4712 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.851395 4712 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.851705 4712 status_manager.go:851] "Failed to get status for pod" podUID="7cd49b09-e90f-4bfd-b4a0-357240cac04d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.852444 4712 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.868431 4712 scope.go:117] "RemoveContainer" containerID="9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5"
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.888592 4712 scope.go:117] "RemoveContainer" containerID="b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291"
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.910082 4712 scope.go:117] "RemoveContainer" containerID="4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958"
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.926862 4712 scope.go:117] "RemoveContainer" containerID="9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77"
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.943024 4712 scope.go:117] "RemoveContainer" containerID="65d979ececdc18afdb1f364feaa3244db42fef8e523e58fc43b94b1775f9530e"
Jan 30 16:58:51 crc
kubenswrapper[4712]: E0130 16:58:51.943464 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65d979ececdc18afdb1f364feaa3244db42fef8e523e58fc43b94b1775f9530e\": container with ID starting with 65d979ececdc18afdb1f364feaa3244db42fef8e523e58fc43b94b1775f9530e not found: ID does not exist" containerID="65d979ececdc18afdb1f364feaa3244db42fef8e523e58fc43b94b1775f9530e" Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.943494 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65d979ececdc18afdb1f364feaa3244db42fef8e523e58fc43b94b1775f9530e"} err="failed to get container status \"65d979ececdc18afdb1f364feaa3244db42fef8e523e58fc43b94b1775f9530e\": rpc error: code = NotFound desc = could not find container \"65d979ececdc18afdb1f364feaa3244db42fef8e523e58fc43b94b1775f9530e\": container with ID starting with 65d979ececdc18afdb1f364feaa3244db42fef8e523e58fc43b94b1775f9530e not found: ID does not exist" Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.943514 4712 scope.go:117] "RemoveContainer" containerID="c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54" Jan 30 16:58:51 crc kubenswrapper[4712]: E0130 16:58:51.943767 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\": container with ID starting with c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54 not found: ID does not exist" containerID="c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54" Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.943811 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54"} err="failed to get container status \"c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\": rpc error: code = NotFound desc = could not find container \"c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54\": container with ID starting with c43497322318a98a6163818d677895cbf9749a3991c8ceefd44ebf0217509f54 not found: ID does not exist" Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.943831 4712 scope.go:117] "RemoveContainer" containerID="9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5" Jan 30 16:58:51 crc kubenswrapper[4712]: E0130 16:58:51.946095 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5\": container with ID starting with 9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5 not found: ID does not exist" containerID="9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5" Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.946119 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5"} err="failed to get container status \"9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5\": rpc error: code = NotFound desc = could not find container \"9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5\": container with ID starting with 9169c66b6528039de4f78396a5d12b0f78fbec424e642bcb626e86f2ab19aac5 not found: ID does not exist" Jan 30 16:58:51 crc kubenswrapper[4712]: 
I0130 16:58:51.946132 4712 scope.go:117] "RemoveContainer" containerID="b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291"
Jan 30 16:58:51 crc kubenswrapper[4712]: E0130 16:58:51.946699 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\": container with ID starting with b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291 not found: ID does not exist" containerID="b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291"
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.946725 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291"} err="failed to get container status \"b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\": rpc error: code = NotFound desc = could not find container \"b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291\": container with ID starting with b3b6646b713187bf8efa4688b672b306e130bc189b07ae3787c493becbf15291 not found: ID does not exist"
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.946745 4712 scope.go:117] "RemoveContainer" containerID="4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958"
Jan 30 16:58:51 crc kubenswrapper[4712]: E0130 16:58:51.946966 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\": container with ID starting with 4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958 not found: ID does not exist" containerID="4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958"
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.946989 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958"} err="failed to get container status \"4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\": rpc error: code = NotFound desc = could not find container \"4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958\": container with ID starting with 4d8ce11ead94e77a5804da9c855e24eecdda661a29dce4ab9daf7f8792f4c958 not found: ID does not exist"
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.947002 4712 scope.go:117] "RemoveContainer" containerID="9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77"
Jan 30 16:58:51 crc kubenswrapper[4712]: E0130 16:58:51.947393 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\": container with ID starting with 9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77 not found: ID does not exist" containerID="9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77"
Jan 30 16:58:51 crc kubenswrapper[4712]: I0130 16:58:51.947442 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77"} err="failed to get container status \"9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\": rpc error: code = NotFound desc = could not find container \"9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77\": container with ID starting with 9efb7257bb8e9e1a7783a00a1712309ca5ea6d23dbd38638a79f7f2681df8c77 not found: ID does not exist"
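The three-step pattern above repeats once per dead container: scope.go logs "RemoveContainer", CRI-O answers the ContainerStatus lookup with rpc code = NotFound, and pod_container_deletor.go records the NotFound and moves on, since a container that no longer exists is exactly the state deletion was after. Below is a minimal Go sketch of that idempotent-cleanup pattern; the runtimeClient interface, fakeRuntime, and removeIfPresent are illustrative names for this sketch, not kubelet source.

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// runtimeClient is an illustrative stand-in for the CRI connection the
// kubelet talks to; only the call used by this sketch is modeled.
type runtimeClient interface {
	RemoveContainer(id string) error
}

// fakeRuntime answers like the log above: a missing ID yields the same
// gRPC NotFound that CRI-O returns ("could not find container ...").
type fakeRuntime struct{ containers map[string]bool }

func (r *fakeRuntime) RemoveContainer(id string) error {
	if !r.containers[id] {
		return status.Errorf(codes.NotFound, "could not find container %q", id)
	}
	delete(r.containers, id)
	return nil
}

// removeIfPresent treats NotFound as success: the container being gone
// is the end state deletion wanted, so repeated cleanup passes are safe.
func removeIfPresent(rt runtimeClient, id string) error {
	err := rt.RemoveContainer(id)
	if err == nil || status.Code(err) == codes.NotFound {
		return nil
	}
	return fmt.Errorf("remove container %s: %w", id, err)
}

func main() {
	rt := &fakeRuntime{containers: map[string]bool{"65d979ec": true}}
	fmt.Println(removeIfPresent(rt, "65d979ec")) // <nil>: actually removed
	fmt.Println(removeIfPresent(rt, "65d979ec")) // <nil>: NotFound tolerated
}

Treating NotFound as success is what keeps the repeated RemoveContainer passes above harmless rather than fatal.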
Jan 30 16:58:53 crc kubenswrapper[4712]: I0130 16:58:53.628735 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qcfwq"
Jan 30 16:58:53 crc kubenswrapper[4712]: I0130 16:58:53.628785 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qcfwq"
Jan 30 16:58:53 crc kubenswrapper[4712]: I0130 16:58:53.672499 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qcfwq"
Jan 30 16:58:53 crc kubenswrapper[4712]: I0130 16:58:53.673487 4712 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 30 16:58:53 crc kubenswrapper[4712]: I0130 16:58:53.673775 4712 status_manager.go:851] "Failed to get status for pod" podUID="7cd49b09-e90f-4bfd-b4a0-357240cac04d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 30 16:58:53 crc kubenswrapper[4712]: I0130 16:58:53.674217 4712 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 30 16:58:53 crc kubenswrapper[4712]: I0130 16:58:53.674600 4712 status_manager.go:851] "Failed to get status for pod" podUID="41581f8f-2b7b-4a20-9f3b-a28c0914b093" pod="openshift-marketplace/certified-operators-qcfwq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qcfwq\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 30 16:58:53 crc kubenswrapper[4712]: I0130 16:58:53.790759 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dlkwf"
Jan 30 16:58:53 crc kubenswrapper[4712]: I0130 16:58:53.791050 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dlkwf"
Jan 30 16:58:53 crc kubenswrapper[4712]: I0130 16:58:53.802066 4712 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 30 16:58:53 crc kubenswrapper[4712]: I0130 16:58:53.802381 4712 status_manager.go:851] "Failed to get status for pod" podUID="7cd49b09-e90f-4bfd-b4a0-357240cac04d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused"
Jan 30 16:58:53 crc kubenswrapper[4712]: I0130 16:58:53.802593 4712 status_manager.go:851] "Failed 
to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:53 crc kubenswrapper[4712]: I0130 16:58:53.802747 4712 status_manager.go:851] "Failed to get status for pod" podUID="41581f8f-2b7b-4a20-9f3b-a28c0914b093" pod="openshift-marketplace/certified-operators-qcfwq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qcfwq\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:53 crc kubenswrapper[4712]: I0130 16:58:53.831770 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dlkwf" Jan 30 16:58:53 crc kubenswrapper[4712]: I0130 16:58:53.832078 4712 status_manager.go:851] "Failed to get status for pod" podUID="7cd49b09-e90f-4bfd-b4a0-357240cac04d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:53 crc kubenswrapper[4712]: I0130 16:58:53.832383 4712 status_manager.go:851] "Failed to get status for pod" podUID="41581f8f-2b7b-4a20-9f3b-a28c0914b093" pod="openshift-marketplace/certified-operators-qcfwq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qcfwq\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:53 crc kubenswrapper[4712]: I0130 16:58:53.832778 4712 status_manager.go:851] "Failed to get status for pod" podUID="be58da2a-7470-403f-a094-ca2bac2dbccd" pod="openshift-marketplace/community-operators-dlkwf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dlkwf\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:53 crc kubenswrapper[4712]: I0130 16:58:53.833018 4712 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:53 crc kubenswrapper[4712]: I0130 16:58:53.875723 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qcfwq" Jan 30 16:58:53 crc kubenswrapper[4712]: I0130 16:58:53.876437 4712 status_manager.go:851] "Failed to get status for pod" podUID="41581f8f-2b7b-4a20-9f3b-a28c0914b093" pod="openshift-marketplace/certified-operators-qcfwq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qcfwq\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:53 crc kubenswrapper[4712]: I0130 16:58:53.876738 4712 status_manager.go:851] "Failed to get status for pod" podUID="be58da2a-7470-403f-a094-ca2bac2dbccd" pod="openshift-marketplace/community-operators-dlkwf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dlkwf\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:53 crc kubenswrapper[4712]: I0130 16:58:53.876967 4712 status_manager.go:851] 
"Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:53 crc kubenswrapper[4712]: I0130 16:58:53.877263 4712 status_manager.go:851] "Failed to get status for pod" podUID="7cd49b09-e90f-4bfd-b4a0-357240cac04d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:53 crc kubenswrapper[4712]: I0130 16:58:53.881635 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dlkwf" Jan 30 16:58:53 crc kubenswrapper[4712]: I0130 16:58:53.882008 4712 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:53 crc kubenswrapper[4712]: I0130 16:58:53.882339 4712 status_manager.go:851] "Failed to get status for pod" podUID="7cd49b09-e90f-4bfd-b4a0-357240cac04d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:53 crc kubenswrapper[4712]: I0130 16:58:53.883173 4712 status_manager.go:851] "Failed to get status for pod" podUID="41581f8f-2b7b-4a20-9f3b-a28c0914b093" pod="openshift-marketplace/certified-operators-qcfwq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qcfwq\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:53 crc kubenswrapper[4712]: I0130 16:58:53.883908 4712 status_manager.go:851] "Failed to get status for pod" podUID="be58da2a-7470-403f-a094-ca2bac2dbccd" pod="openshift-marketplace/community-operators-dlkwf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dlkwf\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:53 crc kubenswrapper[4712]: E0130 16:58:53.955134 4712 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.246:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188f90bf2ebba243 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-30 16:58:49.377096259 +0000 UTC m=+266.284105728,LastTimestamp:2026-01-30 16:58:49.377096259 +0000 UTC 
m=+266.284105728,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 30 16:58:54 crc kubenswrapper[4712]: I0130 16:58:54.300949 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4f5w5" Jan 30 16:58:54 crc kubenswrapper[4712]: I0130 16:58:54.301034 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4f5w5" Jan 30 16:58:54 crc kubenswrapper[4712]: I0130 16:58:54.344139 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4f5w5" Jan 30 16:58:54 crc kubenswrapper[4712]: I0130 16:58:54.344655 4712 status_manager.go:851] "Failed to get status for pod" podUID="41581f8f-2b7b-4a20-9f3b-a28c0914b093" pod="openshift-marketplace/certified-operators-qcfwq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qcfwq\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:54 crc kubenswrapper[4712]: I0130 16:58:54.345174 4712 status_manager.go:851] "Failed to get status for pod" podUID="be58da2a-7470-403f-a094-ca2bac2dbccd" pod="openshift-marketplace/community-operators-dlkwf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dlkwf\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:54 crc kubenswrapper[4712]: I0130 16:58:54.345674 4712 status_manager.go:851] "Failed to get status for pod" podUID="fda2fdd1-0c89-4398-8e0a-545311fe5ae9" pod="openshift-marketplace/certified-operators-4f5w5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4f5w5\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:54 crc kubenswrapper[4712]: I0130 16:58:54.346286 4712 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:54 crc kubenswrapper[4712]: I0130 16:58:54.346716 4712 status_manager.go:851] "Failed to get status for pod" podUID="7cd49b09-e90f-4bfd-b4a0-357240cac04d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:54 crc kubenswrapper[4712]: I0130 16:58:54.885034 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4f5w5" Jan 30 16:58:54 crc kubenswrapper[4712]: I0130 16:58:54.885612 4712 status_manager.go:851] "Failed to get status for pod" podUID="fda2fdd1-0c89-4398-8e0a-545311fe5ae9" pod="openshift-marketplace/certified-operators-4f5w5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4f5w5\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:54 crc kubenswrapper[4712]: I0130 16:58:54.885972 4712 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:54 crc kubenswrapper[4712]: I0130 16:58:54.886234 4712 status_manager.go:851] "Failed to get status for pod" podUID="7cd49b09-e90f-4bfd-b4a0-357240cac04d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:54 crc kubenswrapper[4712]: I0130 16:58:54.886506 4712 status_manager.go:851] "Failed to get status for pod" podUID="41581f8f-2b7b-4a20-9f3b-a28c0914b093" pod="openshift-marketplace/certified-operators-qcfwq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qcfwq\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:54 crc kubenswrapper[4712]: I0130 16:58:54.886752 4712 status_manager.go:851] "Failed to get status for pod" podUID="be58da2a-7470-403f-a094-ca2bac2dbccd" pod="openshift-marketplace/community-operators-dlkwf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dlkwf\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:55 crc kubenswrapper[4712]: E0130 16:58:55.144020 4712 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:55 crc kubenswrapper[4712]: E0130 16:58:55.144487 4712 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:55 crc kubenswrapper[4712]: E0130 16:58:55.144894 4712 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:55 crc kubenswrapper[4712]: E0130 16:58:55.145180 4712 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:55 crc kubenswrapper[4712]: E0130 16:58:55.145426 4712 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:55 crc kubenswrapper[4712]: I0130 16:58:55.145463 4712 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 30 16:58:55 crc kubenswrapper[4712]: E0130 16:58:55.146528 4712 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="200ms" Jan 30 16:58:55 crc kubenswrapper[4712]: E0130 16:58:55.347879 4712 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="400ms" Jan 30 16:58:55 crc kubenswrapper[4712]: I0130 16:58:55.590663 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jmc9f" Jan 30 16:58:55 crc kubenswrapper[4712]: I0130 16:58:55.590708 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jmc9f" Jan 30 16:58:55 crc kubenswrapper[4712]: I0130 16:58:55.632614 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jmc9f" Jan 30 16:58:55 crc kubenswrapper[4712]: I0130 16:58:55.633505 4712 status_manager.go:851] "Failed to get status for pod" podUID="7cd49b09-e90f-4bfd-b4a0-357240cac04d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:55 crc kubenswrapper[4712]: I0130 16:58:55.634040 4712 status_manager.go:851] "Failed to get status for pod" podUID="41581f8f-2b7b-4a20-9f3b-a28c0914b093" pod="openshift-marketplace/certified-operators-qcfwq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qcfwq\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:55 crc kubenswrapper[4712]: I0130 16:58:55.634585 4712 status_manager.go:851] "Failed to get status for pod" podUID="be58da2a-7470-403f-a094-ca2bac2dbccd" pod="openshift-marketplace/community-operators-dlkwf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dlkwf\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:55 crc kubenswrapper[4712]: I0130 16:58:55.635003 4712 status_manager.go:851] "Failed to get status for pod" podUID="0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3" pod="openshift-marketplace/redhat-marketplace-jmc9f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jmc9f\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:55 crc kubenswrapper[4712]: I0130 16:58:55.635340 4712 status_manager.go:851] "Failed to get status for pod" podUID="fda2fdd1-0c89-4398-8e0a-545311fe5ae9" pod="openshift-marketplace/certified-operators-4f5w5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4f5w5\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:55 crc kubenswrapper[4712]: I0130 16:58:55.635872 4712 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:55 crc kubenswrapper[4712]: E0130 16:58:55.736651 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:58:55Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:58:55Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:58:55Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-30T16:58:55Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:318e7c877b3cf6c5b263eeb634c46a3f24a2c88cd95c89829287f19b1a6f8bab\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:36ccdfb4dced86283da1b94956e2e4a71df6b016812849741c7a3c8867892f8f\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1679208681},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:1be9df9846a1afdcabb94b502538e28b99b6748cc22415f1be58ab4cb7a391b8\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:9f846e202c62c9de285e0af13de8057685dff0d285709f110f88725e10d32d82\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1202160358},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:5ed6e8538941f0bc4738c257950408a478a70216d4c176f024bc1b86dee2d26c\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7532a853b9d5b5501f112e5d14e6a62aa0d462504045320f28b9a10808130d73\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1187205872},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:420326d8488ceff2cde22ad8b85d739b0c254d47e703f7ddb1f08f77a48816a6\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:54817da328fa589491a3acbe80acdd88c0830dcc63aaafc08c3539925a1a3b03\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1180692192},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\
\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:55 crc kubenswrapper[4712]: E0130 16:58:55.737115 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get 
\"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:55 crc kubenswrapper[4712]: E0130 16:58:55.737639 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:55 crc kubenswrapper[4712]: E0130 16:58:55.738084 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:55 crc kubenswrapper[4712]: E0130 16:58:55.738469 4712 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:55 crc kubenswrapper[4712]: E0130 16:58:55.738494 4712 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 30 16:58:55 crc kubenswrapper[4712]: E0130 16:58:55.748949 4712 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="800ms" Jan 30 16:58:55 crc kubenswrapper[4712]: I0130 16:58:55.889331 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jmc9f" Jan 30 16:58:55 crc kubenswrapper[4712]: I0130 16:58:55.890350 4712 status_manager.go:851] "Failed to get status for pod" podUID="41581f8f-2b7b-4a20-9f3b-a28c0914b093" pod="openshift-marketplace/certified-operators-qcfwq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qcfwq\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:55 crc kubenswrapper[4712]: I0130 16:58:55.890858 4712 status_manager.go:851] "Failed to get status for pod" podUID="be58da2a-7470-403f-a094-ca2bac2dbccd" pod="openshift-marketplace/community-operators-dlkwf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dlkwf\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:55 crc kubenswrapper[4712]: I0130 16:58:55.891346 4712 status_manager.go:851] "Failed to get status for pod" podUID="0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3" pod="openshift-marketplace/redhat-marketplace-jmc9f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jmc9f\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:55 crc kubenswrapper[4712]: I0130 16:58:55.891699 4712 status_manager.go:851] "Failed to get status for pod" podUID="fda2fdd1-0c89-4398-8e0a-545311fe5ae9" pod="openshift-marketplace/certified-operators-4f5w5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4f5w5\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:55 crc kubenswrapper[4712]: I0130 16:58:55.892210 4712 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:55 crc kubenswrapper[4712]: I0130 16:58:55.892619 4712 status_manager.go:851] "Failed to get status for pod" podUID="7cd49b09-e90f-4bfd-b4a0-357240cac04d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:56 crc kubenswrapper[4712]: I0130 16:58:56.007179 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l4hp7" Jan 30 16:58:56 crc kubenswrapper[4712]: I0130 16:58:56.007214 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-l4hp7" Jan 30 16:58:56 crc kubenswrapper[4712]: I0130 16:58:56.055611 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l4hp7" Jan 30 16:58:56 crc kubenswrapper[4712]: I0130 16:58:56.056661 4712 status_manager.go:851] "Failed to get status for pod" podUID="1efcd5ba-0391-4427-aaa0-9cef2b10a48c" pod="openshift-marketplace/redhat-marketplace-l4hp7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l4hp7\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:56 crc kubenswrapper[4712]: I0130 16:58:56.057265 4712 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:56 crc kubenswrapper[4712]: I0130 16:58:56.057539 4712 status_manager.go:851] "Failed to get status for pod" podUID="7cd49b09-e90f-4bfd-b4a0-357240cac04d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:56 crc kubenswrapper[4712]: I0130 16:58:56.057768 4712 status_manager.go:851] "Failed to get status for pod" podUID="41581f8f-2b7b-4a20-9f3b-a28c0914b093" pod="openshift-marketplace/certified-operators-qcfwq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qcfwq\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:56 crc kubenswrapper[4712]: I0130 16:58:56.058005 4712 status_manager.go:851] "Failed to get status for pod" podUID="be58da2a-7470-403f-a094-ca2bac2dbccd" pod="openshift-marketplace/community-operators-dlkwf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dlkwf\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:56 crc kubenswrapper[4712]: I0130 16:58:56.058229 4712 status_manager.go:851] "Failed to get status for pod" podUID="0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3" pod="openshift-marketplace/redhat-marketplace-jmc9f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jmc9f\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:56 crc 
kubenswrapper[4712]: I0130 16:58:56.058448 4712 status_manager.go:851] "Failed to get status for pod" podUID="fda2fdd1-0c89-4398-8e0a-545311fe5ae9" pod="openshift-marketplace/certified-operators-4f5w5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4f5w5\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:56 crc kubenswrapper[4712]: E0130 16:58:56.550009 4712 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="1.6s" Jan 30 16:58:56 crc kubenswrapper[4712]: I0130 16:58:56.856002 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pz9vb" Jan 30 16:58:56 crc kubenswrapper[4712]: I0130 16:58:56.858013 4712 status_manager.go:851] "Failed to get status for pod" podUID="be58da2a-7470-403f-a094-ca2bac2dbccd" pod="openshift-marketplace/community-operators-dlkwf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dlkwf\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:56 crc kubenswrapper[4712]: I0130 16:58:56.858301 4712 status_manager.go:851] "Failed to get status for pod" podUID="0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3" pod="openshift-marketplace/redhat-marketplace-jmc9f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jmc9f\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:56 crc kubenswrapper[4712]: I0130 16:58:56.858721 4712 status_manager.go:851] "Failed to get status for pod" podUID="fda2fdd1-0c89-4398-8e0a-545311fe5ae9" pod="openshift-marketplace/certified-operators-4f5w5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4f5w5\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:56 crc kubenswrapper[4712]: I0130 16:58:56.858987 4712 status_manager.go:851] "Failed to get status for pod" podUID="1efcd5ba-0391-4427-aaa0-9cef2b10a48c" pod="openshift-marketplace/redhat-marketplace-l4hp7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l4hp7\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:56 crc kubenswrapper[4712]: I0130 16:58:56.859190 4712 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:56 crc kubenswrapper[4712]: I0130 16:58:56.859474 4712 status_manager.go:851] "Failed to get status for pod" podUID="7cd49b09-e90f-4bfd-b4a0-357240cac04d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:56 crc kubenswrapper[4712]: I0130 16:58:56.860461 4712 status_manager.go:851] "Failed to get status for pod" podUID="41581f8f-2b7b-4a20-9f3b-a28c0914b093" pod="openshift-marketplace/certified-operators-qcfwq" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qcfwq\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:56 crc kubenswrapper[4712]: I0130 16:58:56.860896 4712 status_manager.go:851] "Failed to get status for pod" podUID="b1773095-5051-4668-ae41-1d6c41c43a43" pod="openshift-marketplace/redhat-operators-pz9vb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pz9vb\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:56 crc kubenswrapper[4712]: I0130 16:58:56.898085 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l4hp7" Jan 30 16:58:56 crc kubenswrapper[4712]: I0130 16:58:56.898597 4712 status_manager.go:851] "Failed to get status for pod" podUID="41581f8f-2b7b-4a20-9f3b-a28c0914b093" pod="openshift-marketplace/certified-operators-qcfwq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qcfwq\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:56 crc kubenswrapper[4712]: I0130 16:58:56.898949 4712 status_manager.go:851] "Failed to get status for pod" podUID="b1773095-5051-4668-ae41-1d6c41c43a43" pod="openshift-marketplace/redhat-operators-pz9vb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pz9vb\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:56 crc kubenswrapper[4712]: I0130 16:58:56.899191 4712 status_manager.go:851] "Failed to get status for pod" podUID="be58da2a-7470-403f-a094-ca2bac2dbccd" pod="openshift-marketplace/community-operators-dlkwf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dlkwf\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:56 crc kubenswrapper[4712]: I0130 16:58:56.899426 4712 status_manager.go:851] "Failed to get status for pod" podUID="0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3" pod="openshift-marketplace/redhat-marketplace-jmc9f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jmc9f\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:56 crc kubenswrapper[4712]: I0130 16:58:56.899591 4712 status_manager.go:851] "Failed to get status for pod" podUID="fda2fdd1-0c89-4398-8e0a-545311fe5ae9" pod="openshift-marketplace/certified-operators-4f5w5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4f5w5\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:56 crc kubenswrapper[4712]: I0130 16:58:56.899994 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pz9vb" Jan 30 16:58:56 crc kubenswrapper[4712]: I0130 16:58:56.900045 4712 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:56 crc kubenswrapper[4712]: I0130 16:58:56.900251 4712 status_manager.go:851] "Failed to get status for pod" podUID="1efcd5ba-0391-4427-aaa0-9cef2b10a48c" 
pod="openshift-marketplace/redhat-marketplace-l4hp7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l4hp7\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:56 crc kubenswrapper[4712]: I0130 16:58:56.900439 4712 status_manager.go:851] "Failed to get status for pod" podUID="7cd49b09-e90f-4bfd-b4a0-357240cac04d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:56 crc kubenswrapper[4712]: I0130 16:58:56.900634 4712 status_manager.go:851] "Failed to get status for pod" podUID="41581f8f-2b7b-4a20-9f3b-a28c0914b093" pod="openshift-marketplace/certified-operators-qcfwq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qcfwq\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:56 crc kubenswrapper[4712]: I0130 16:58:56.900862 4712 status_manager.go:851] "Failed to get status for pod" podUID="b1773095-5051-4668-ae41-1d6c41c43a43" pod="openshift-marketplace/redhat-operators-pz9vb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pz9vb\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:56 crc kubenswrapper[4712]: I0130 16:58:56.901017 4712 status_manager.go:851] "Failed to get status for pod" podUID="be58da2a-7470-403f-a094-ca2bac2dbccd" pod="openshift-marketplace/community-operators-dlkwf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dlkwf\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:56 crc kubenswrapper[4712]: I0130 16:58:56.901193 4712 status_manager.go:851] "Failed to get status for pod" podUID="0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3" pod="openshift-marketplace/redhat-marketplace-jmc9f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jmc9f\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:56 crc kubenswrapper[4712]: I0130 16:58:56.901380 4712 status_manager.go:851] "Failed to get status for pod" podUID="fda2fdd1-0c89-4398-8e0a-545311fe5ae9" pod="openshift-marketplace/certified-operators-4f5w5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4f5w5\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:56 crc kubenswrapper[4712]: I0130 16:58:56.901531 4712 status_manager.go:851] "Failed to get status for pod" podUID="1efcd5ba-0391-4427-aaa0-9cef2b10a48c" pod="openshift-marketplace/redhat-marketplace-l4hp7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l4hp7\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:56 crc kubenswrapper[4712]: I0130 16:58:56.901681 4712 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:56 crc kubenswrapper[4712]: I0130 16:58:56.902166 4712 status_manager.go:851] "Failed to get status for pod" 
podUID="7cd49b09-e90f-4bfd-b4a0-357240cac04d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:57 crc kubenswrapper[4712]: I0130 16:58:57.240129 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hzqrq" Jan 30 16:58:57 crc kubenswrapper[4712]: I0130 16:58:57.240716 4712 status_manager.go:851] "Failed to get status for pod" podUID="b1773095-5051-4668-ae41-1d6c41c43a43" pod="openshift-marketplace/redhat-operators-pz9vb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pz9vb\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:57 crc kubenswrapper[4712]: I0130 16:58:57.241160 4712 status_manager.go:851] "Failed to get status for pod" podUID="fc1192c4-3b0c-4421-8e71-17e8731ffe34" pod="openshift-marketplace/redhat-operators-hzqrq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-hzqrq\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:57 crc kubenswrapper[4712]: I0130 16:58:57.241592 4712 status_manager.go:851] "Failed to get status for pod" podUID="be58da2a-7470-403f-a094-ca2bac2dbccd" pod="openshift-marketplace/community-operators-dlkwf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dlkwf\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:57 crc kubenswrapper[4712]: I0130 16:58:57.242347 4712 status_manager.go:851] "Failed to get status for pod" podUID="0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3" pod="openshift-marketplace/redhat-marketplace-jmc9f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jmc9f\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:57 crc kubenswrapper[4712]: I0130 16:58:57.242562 4712 status_manager.go:851] "Failed to get status for pod" podUID="fda2fdd1-0c89-4398-8e0a-545311fe5ae9" pod="openshift-marketplace/certified-operators-4f5w5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4f5w5\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:57 crc kubenswrapper[4712]: I0130 16:58:57.242917 4712 status_manager.go:851] "Failed to get status for pod" podUID="1efcd5ba-0391-4427-aaa0-9cef2b10a48c" pod="openshift-marketplace/redhat-marketplace-l4hp7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l4hp7\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:57 crc kubenswrapper[4712]: I0130 16:58:57.243279 4712 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:57 crc kubenswrapper[4712]: I0130 16:58:57.243521 4712 status_manager.go:851] "Failed to get status for pod" podUID="7cd49b09-e90f-4bfd-b4a0-357240cac04d" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:57 crc kubenswrapper[4712]: I0130 16:58:57.243749 4712 status_manager.go:851] "Failed to get status for pod" podUID="41581f8f-2b7b-4a20-9f3b-a28c0914b093" pod="openshift-marketplace/certified-operators-qcfwq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qcfwq\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:57 crc kubenswrapper[4712]: I0130 16:58:57.276836 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hzqrq" Jan 30 16:58:57 crc kubenswrapper[4712]: I0130 16:58:57.277493 4712 status_manager.go:851] "Failed to get status for pod" podUID="1efcd5ba-0391-4427-aaa0-9cef2b10a48c" pod="openshift-marketplace/redhat-marketplace-l4hp7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l4hp7\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:57 crc kubenswrapper[4712]: I0130 16:58:57.277917 4712 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:57 crc kubenswrapper[4712]: I0130 16:58:57.278161 4712 status_manager.go:851] "Failed to get status for pod" podUID="7cd49b09-e90f-4bfd-b4a0-357240cac04d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:57 crc kubenswrapper[4712]: I0130 16:58:57.278504 4712 status_manager.go:851] "Failed to get status for pod" podUID="41581f8f-2b7b-4a20-9f3b-a28c0914b093" pod="openshift-marketplace/certified-operators-qcfwq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qcfwq\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:57 crc kubenswrapper[4712]: I0130 16:58:57.278765 4712 status_manager.go:851] "Failed to get status for pod" podUID="b1773095-5051-4668-ae41-1d6c41c43a43" pod="openshift-marketplace/redhat-operators-pz9vb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pz9vb\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:57 crc kubenswrapper[4712]: I0130 16:58:57.279060 4712 status_manager.go:851] "Failed to get status for pod" podUID="fc1192c4-3b0c-4421-8e71-17e8731ffe34" pod="openshift-marketplace/redhat-operators-hzqrq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-hzqrq\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:57 crc kubenswrapper[4712]: I0130 16:58:57.279290 4712 status_manager.go:851] "Failed to get status for pod" podUID="be58da2a-7470-403f-a094-ca2bac2dbccd" pod="openshift-marketplace/community-operators-dlkwf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dlkwf\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:57 crc 
kubenswrapper[4712]: I0130 16:58:57.279624 4712 status_manager.go:851] "Failed to get status for pod" podUID="0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3" pod="openshift-marketplace/redhat-marketplace-jmc9f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jmc9f\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:57 crc kubenswrapper[4712]: I0130 16:58:57.279952 4712 status_manager.go:851] "Failed to get status for pod" podUID="fda2fdd1-0c89-4398-8e0a-545311fe5ae9" pod="openshift-marketplace/certified-operators-4f5w5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4f5w5\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:58:58 crc kubenswrapper[4712]: E0130 16:58:58.151698 4712 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="3.2s" Jan 30 16:59:01 crc kubenswrapper[4712]: E0130 16:59:01.354021 4712 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.246:6443: connect: connection refused" interval="6.4s" Jan 30 16:59:01 crc kubenswrapper[4712]: I0130 16:59:01.799024 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:59:01 crc kubenswrapper[4712]: I0130 16:59:01.799646 4712 status_manager.go:851] "Failed to get status for pod" podUID="b1773095-5051-4668-ae41-1d6c41c43a43" pod="openshift-marketplace/redhat-operators-pz9vb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pz9vb\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:59:01 crc kubenswrapper[4712]: I0130 16:59:01.800156 4712 status_manager.go:851] "Failed to get status for pod" podUID="fc1192c4-3b0c-4421-8e71-17e8731ffe34" pod="openshift-marketplace/redhat-operators-hzqrq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-hzqrq\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:59:01 crc kubenswrapper[4712]: I0130 16:59:01.800409 4712 status_manager.go:851] "Failed to get status for pod" podUID="be58da2a-7470-403f-a094-ca2bac2dbccd" pod="openshift-marketplace/community-operators-dlkwf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dlkwf\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:59:01 crc kubenswrapper[4712]: I0130 16:59:01.800669 4712 status_manager.go:851] "Failed to get status for pod" podUID="0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3" pod="openshift-marketplace/redhat-marketplace-jmc9f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jmc9f\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:59:01 crc kubenswrapper[4712]: I0130 16:59:01.800981 4712 status_manager.go:851] "Failed to get status for pod" podUID="fda2fdd1-0c89-4398-8e0a-545311fe5ae9" pod="openshift-marketplace/certified-operators-4f5w5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4f5w5\": dial tcp 
38.102.83.246:6443: connect: connection refused" Jan 30 16:59:01 crc kubenswrapper[4712]: I0130 16:59:01.801264 4712 status_manager.go:851] "Failed to get status for pod" podUID="1efcd5ba-0391-4427-aaa0-9cef2b10a48c" pod="openshift-marketplace/redhat-marketplace-l4hp7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l4hp7\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:59:01 crc kubenswrapper[4712]: I0130 16:59:01.801458 4712 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:59:01 crc kubenswrapper[4712]: I0130 16:59:01.801614 4712 status_manager.go:851] "Failed to get status for pod" podUID="7cd49b09-e90f-4bfd-b4a0-357240cac04d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:59:01 crc kubenswrapper[4712]: I0130 16:59:01.801811 4712 status_manager.go:851] "Failed to get status for pod" podUID="41581f8f-2b7b-4a20-9f3b-a28c0914b093" pod="openshift-marketplace/certified-operators-qcfwq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qcfwq\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:59:01 crc kubenswrapper[4712]: I0130 16:59:01.815214 4712 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8ab27748-3507-429f-888b-b45b4d17b014" Jan 30 16:59:01 crc kubenswrapper[4712]: I0130 16:59:01.815238 4712 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8ab27748-3507-429f-888b-b45b4d17b014" Jan 30 16:59:01 crc kubenswrapper[4712]: E0130 16:59:01.815670 4712 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:59:01 crc kubenswrapper[4712]: I0130 16:59:01.816215 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:59:01 crc kubenswrapper[4712]: I0130 16:59:01.890293 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"8b2391b768a66c9cd25e48ab0777cf6aaa1254e18b6d45b0f1da34dfa91a2356"} Jan 30 16:59:01 crc kubenswrapper[4712]: I0130 16:59:01.894022 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 30 16:59:01 crc kubenswrapper[4712]: I0130 16:59:01.894076 4712 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771" exitCode=1 Jan 30 16:59:01 crc kubenswrapper[4712]: I0130 16:59:01.894102 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771"} Jan 30 16:59:01 crc kubenswrapper[4712]: I0130 16:59:01.894574 4712 scope.go:117] "RemoveContainer" containerID="a69c429601afedf05aba7f92c944157a326f0ad130b39fe90317aeb530cdd771" Jan 30 16:59:01 crc kubenswrapper[4712]: I0130 16:59:01.894844 4712 status_manager.go:851] "Failed to get status for pod" podUID="0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3" pod="openshift-marketplace/redhat-marketplace-jmc9f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jmc9f\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:59:01 crc kubenswrapper[4712]: I0130 16:59:01.895225 4712 status_manager.go:851] "Failed to get status for pod" podUID="fda2fdd1-0c89-4398-8e0a-545311fe5ae9" pod="openshift-marketplace/certified-operators-4f5w5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4f5w5\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:59:01 crc kubenswrapper[4712]: I0130 16:59:01.895585 4712 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:59:01 crc kubenswrapper[4712]: I0130 16:59:01.895884 4712 status_manager.go:851] "Failed to get status for pod" podUID="1efcd5ba-0391-4427-aaa0-9cef2b10a48c" pod="openshift-marketplace/redhat-marketplace-l4hp7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l4hp7\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:59:01 crc kubenswrapper[4712]: I0130 16:59:01.896157 4712 status_manager.go:851] "Failed to get status for pod" podUID="7cd49b09-e90f-4bfd-b4a0-357240cac04d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:59:01 crc kubenswrapper[4712]: I0130 16:59:01.896482 4712 status_manager.go:851] "Failed to get status for pod" 
podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:59:01 crc kubenswrapper[4712]: I0130 16:59:01.896814 4712 status_manager.go:851] "Failed to get status for pod" podUID="41581f8f-2b7b-4a20-9f3b-a28c0914b093" pod="openshift-marketplace/certified-operators-qcfwq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qcfwq\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:59:01 crc kubenswrapper[4712]: I0130 16:59:01.897040 4712 status_manager.go:851] "Failed to get status for pod" podUID="b1773095-5051-4668-ae41-1d6c41c43a43" pod="openshift-marketplace/redhat-operators-pz9vb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pz9vb\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:59:01 crc kubenswrapper[4712]: I0130 16:59:01.897267 4712 status_manager.go:851] "Failed to get status for pod" podUID="fc1192c4-3b0c-4421-8e71-17e8731ffe34" pod="openshift-marketplace/redhat-operators-hzqrq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-hzqrq\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:59:01 crc kubenswrapper[4712]: I0130 16:59:01.897551 4712 status_manager.go:851] "Failed to get status for pod" podUID="be58da2a-7470-403f-a094-ca2bac2dbccd" pod="openshift-marketplace/community-operators-dlkwf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dlkwf\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:59:02 crc kubenswrapper[4712]: I0130 16:59:02.902927 4712 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="048bc63420fb5db8eec678c89b54dc0e1525287f2948b886014cac0f94c42bfd" exitCode=0 Jan 30 16:59:02 crc kubenswrapper[4712]: I0130 16:59:02.902980 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"048bc63420fb5db8eec678c89b54dc0e1525287f2948b886014cac0f94c42bfd"} Jan 30 16:59:02 crc kubenswrapper[4712]: I0130 16:59:02.903293 4712 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8ab27748-3507-429f-888b-b45b4d17b014" Jan 30 16:59:02 crc kubenswrapper[4712]: I0130 16:59:02.903517 4712 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8ab27748-3507-429f-888b-b45b4d17b014" Jan 30 16:59:02 crc kubenswrapper[4712]: E0130 16:59:02.904258 4712 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:59:02 crc kubenswrapper[4712]: I0130 16:59:02.904345 4712 status_manager.go:851] "Failed to get status for pod" podUID="fda2fdd1-0c89-4398-8e0a-545311fe5ae9" pod="openshift-marketplace/certified-operators-4f5w5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4f5w5\": dial tcp 
38.102.83.246:6443: connect: connection refused" Jan 30 16:59:02 crc kubenswrapper[4712]: I0130 16:59:02.904871 4712 status_manager.go:851] "Failed to get status for pod" podUID="1efcd5ba-0391-4427-aaa0-9cef2b10a48c" pod="openshift-marketplace/redhat-marketplace-l4hp7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l4hp7\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:59:02 crc kubenswrapper[4712]: I0130 16:59:02.905204 4712 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:59:02 crc kubenswrapper[4712]: I0130 16:59:02.905472 4712 status_manager.go:851] "Failed to get status for pod" podUID="7cd49b09-e90f-4bfd-b4a0-357240cac04d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:59:02 crc kubenswrapper[4712]: I0130 16:59:02.905931 4712 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:59:02 crc kubenswrapper[4712]: I0130 16:59:02.906697 4712 status_manager.go:851] "Failed to get status for pod" podUID="41581f8f-2b7b-4a20-9f3b-a28c0914b093" pod="openshift-marketplace/certified-operators-qcfwq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qcfwq\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:59:02 crc kubenswrapper[4712]: I0130 16:59:02.907196 4712 status_manager.go:851] "Failed to get status for pod" podUID="b1773095-5051-4668-ae41-1d6c41c43a43" pod="openshift-marketplace/redhat-operators-pz9vb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pz9vb\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:59:02 crc kubenswrapper[4712]: I0130 16:59:02.907661 4712 status_manager.go:851] "Failed to get status for pod" podUID="fc1192c4-3b0c-4421-8e71-17e8731ffe34" pod="openshift-marketplace/redhat-operators-hzqrq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-hzqrq\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:59:02 crc kubenswrapper[4712]: I0130 16:59:02.908088 4712 status_manager.go:851] "Failed to get status for pod" podUID="be58da2a-7470-403f-a094-ca2bac2dbccd" pod="openshift-marketplace/community-operators-dlkwf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dlkwf\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:59:02 crc kubenswrapper[4712]: I0130 16:59:02.908415 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 30 16:59:02 crc kubenswrapper[4712]: I0130 
16:59:02.908458 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0bafe9db23a1c0c9cd7564ed04d11ec2807c851f5a2d2c928305b81e4c8e1709"} Jan 30 16:59:02 crc kubenswrapper[4712]: I0130 16:59:02.908570 4712 status_manager.go:851] "Failed to get status for pod" podUID="0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3" pod="openshift-marketplace/redhat-marketplace-jmc9f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jmc9f\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:59:02 crc kubenswrapper[4712]: I0130 16:59:02.909197 4712 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:59:02 crc kubenswrapper[4712]: I0130 16:59:02.909541 4712 status_manager.go:851] "Failed to get status for pod" podUID="41581f8f-2b7b-4a20-9f3b-a28c0914b093" pod="openshift-marketplace/certified-operators-qcfwq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-qcfwq\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:59:02 crc kubenswrapper[4712]: I0130 16:59:02.910401 4712 status_manager.go:851] "Failed to get status for pod" podUID="b1773095-5051-4668-ae41-1d6c41c43a43" pod="openshift-marketplace/redhat-operators-pz9vb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-pz9vb\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:59:02 crc kubenswrapper[4712]: I0130 16:59:02.911135 4712 status_manager.go:851] "Failed to get status for pod" podUID="fc1192c4-3b0c-4421-8e71-17e8731ffe34" pod="openshift-marketplace/redhat-operators-hzqrq" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-hzqrq\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:59:02 crc kubenswrapper[4712]: I0130 16:59:02.912666 4712 status_manager.go:851] "Failed to get status for pod" podUID="be58da2a-7470-403f-a094-ca2bac2dbccd" pod="openshift-marketplace/community-operators-dlkwf" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-dlkwf\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:59:02 crc kubenswrapper[4712]: I0130 16:59:02.913113 4712 status_manager.go:851] "Failed to get status for pod" podUID="0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3" pod="openshift-marketplace/redhat-marketplace-jmc9f" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-jmc9f\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:59:02 crc kubenswrapper[4712]: I0130 16:59:02.913473 4712 status_manager.go:851] "Failed to get status for pod" podUID="fda2fdd1-0c89-4398-8e0a-545311fe5ae9" pod="openshift-marketplace/certified-operators-4f5w5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-4f5w5\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:59:02 crc kubenswrapper[4712]: I0130 16:59:02.914049 4712 
status_manager.go:851] "Failed to get status for pod" podUID="1efcd5ba-0391-4427-aaa0-9cef2b10a48c" pod="openshift-marketplace/redhat-marketplace-l4hp7" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-l4hp7\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:59:02 crc kubenswrapper[4712]: I0130 16:59:02.914577 4712 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:59:02 crc kubenswrapper[4712]: I0130 16:59:02.915121 4712 status_manager.go:851] "Failed to get status for pod" podUID="7cd49b09-e90f-4bfd-b4a0-357240cac04d" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.246:6443: connect: connection refused" Jan 30 16:59:03 crc kubenswrapper[4712]: I0130 16:59:03.917169 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ccabffd54159fd08c862034580333c7789ec622562fe7bd38892ad4c0b92d363"} Jan 30 16:59:03 crc kubenswrapper[4712]: I0130 16:59:03.917428 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9939c384514caac04c7fd74d940d063e9dd641df80685c6542cd183f71f5f53d"} Jan 30 16:59:03 crc kubenswrapper[4712]: I0130 16:59:03.917438 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6cca1a0efed45de55d99ae772eb094665e25ce4cde1716c4987abb39dab905f3"} Jan 30 16:59:03 crc kubenswrapper[4712]: I0130 16:59:03.917446 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9c6a22be94881b2bb769d2a8e5d7f706ddad5577ac584805feede8765301578c"} Jan 30 16:59:04 crc kubenswrapper[4712]: I0130 16:59:04.926188 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"69246a7e72d037203c6bde5cc389db8d2a55392bc6af84ae50942d1f3b3df4a2"} Jan 30 16:59:04 crc kubenswrapper[4712]: I0130 16:59:04.926529 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:59:04 crc kubenswrapper[4712]: I0130 16:59:04.926534 4712 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8ab27748-3507-429f-888b-b45b4d17b014" Jan 30 16:59:04 crc kubenswrapper[4712]: I0130 16:59:04.926565 4712 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8ab27748-3507-429f-888b-b45b4d17b014" Jan 30 16:59:06 crc kubenswrapper[4712]: I0130 16:59:06.817978 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:59:06 crc kubenswrapper[4712]: I0130 16:59:06.818439 
4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:59:06 crc kubenswrapper[4712]: I0130 16:59:06.826324 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:59:08 crc kubenswrapper[4712]: I0130 16:59:08.221443 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:59:08 crc kubenswrapper[4712]: I0130 16:59:08.226522 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:59:08 crc kubenswrapper[4712]: I0130 16:59:08.945280 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:59:09 crc kubenswrapper[4712]: I0130 16:59:09.935564 4712 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:59:10 crc kubenswrapper[4712]: I0130 16:59:10.953502 4712 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8ab27748-3507-429f-888b-b45b4d17b014" Jan 30 16:59:10 crc kubenswrapper[4712]: I0130 16:59:10.953539 4712 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8ab27748-3507-429f-888b-b45b4d17b014" Jan 30 16:59:10 crc kubenswrapper[4712]: I0130 16:59:10.958210 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:59:10 crc kubenswrapper[4712]: I0130 16:59:10.960720 4712 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="fb7874d0-339a-4bb0-a14b-b3b605fb0a87" Jan 30 16:59:11 crc kubenswrapper[4712]: I0130 16:59:11.958068 4712 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8ab27748-3507-429f-888b-b45b4d17b014" Jan 30 16:59:11 crc kubenswrapper[4712]: I0130 16:59:11.958399 4712 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8ab27748-3507-429f-888b-b45b4d17b014" Jan 30 16:59:13 crc kubenswrapper[4712]: I0130 16:59:13.818570 4712 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="fb7874d0-339a-4bb0-a14b-b3b605fb0a87" Jan 30 16:59:18 crc kubenswrapper[4712]: I0130 16:59:18.895548 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 30 16:59:20 crc kubenswrapper[4712]: I0130 16:59:20.253155 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 30 16:59:20 crc kubenswrapper[4712]: I0130 16:59:20.279631 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 30 16:59:20 crc kubenswrapper[4712]: I0130 16:59:20.338393 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 30 16:59:21 crc kubenswrapper[4712]: I0130 16:59:21.003107 4712 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 30 16:59:21 crc kubenswrapper[4712]: I0130 16:59:21.330357 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 30 16:59:21 crc kubenswrapper[4712]: I0130 16:59:21.364960 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 30 16:59:21 crc kubenswrapper[4712]: I0130 16:59:21.464713 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 30 16:59:21 crc kubenswrapper[4712]: I0130 16:59:21.465765 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 30 16:59:21 crc kubenswrapper[4712]: I0130 16:59:21.589829 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 30 16:59:21 crc kubenswrapper[4712]: I0130 16:59:21.652232 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 30 16:59:21 crc kubenswrapper[4712]: I0130 16:59:21.661053 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 30 16:59:21 crc kubenswrapper[4712]: I0130 16:59:21.731681 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 30 16:59:21 crc kubenswrapper[4712]: I0130 16:59:21.986592 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 30 16:59:22 crc kubenswrapper[4712]: I0130 16:59:22.095944 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 30 16:59:22 crc kubenswrapper[4712]: I0130 16:59:22.135248 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 30 16:59:22 crc kubenswrapper[4712]: I0130 16:59:22.179585 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 30 16:59:22 crc kubenswrapper[4712]: I0130 16:59:22.231433 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 30 16:59:22 crc kubenswrapper[4712]: I0130 16:59:22.473008 4712 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 30 16:59:22 crc kubenswrapper[4712]: I0130 16:59:22.495160 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 30 16:59:22 crc kubenswrapper[4712]: I0130 16:59:22.712041 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 30 16:59:22 crc kubenswrapper[4712]: I0130 16:59:22.712770 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 30 16:59:22 crc kubenswrapper[4712]: I0130 16:59:22.761178 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 30 16:59:22 crc kubenswrapper[4712]: I0130 16:59:22.926475 4712 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 30 16:59:23 crc kubenswrapper[4712]: I0130 16:59:23.025393 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 30 16:59:23 crc kubenswrapper[4712]: I0130 16:59:23.125659 4712 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 30 16:59:23 crc kubenswrapper[4712]: I0130 16:59:23.128374 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 30 16:59:23 crc kubenswrapper[4712]: I0130 16:59:23.130299 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 30 16:59:23 crc kubenswrapper[4712]: I0130 16:59:23.186107 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 30 16:59:23 crc kubenswrapper[4712]: I0130 16:59:23.219955 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 30 16:59:23 crc kubenswrapper[4712]: I0130 16:59:23.367828 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 30 16:59:23 crc kubenswrapper[4712]: I0130 16:59:23.483932 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 30 16:59:23 crc kubenswrapper[4712]: I0130 16:59:23.602908 4712 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 30 16:59:23 crc kubenswrapper[4712]: I0130 16:59:23.744987 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 30 16:59:23 crc kubenswrapper[4712]: I0130 16:59:23.824124 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 30 16:59:23 crc kubenswrapper[4712]: I0130 16:59:23.889051 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 30 16:59:23 crc kubenswrapper[4712]: I0130 16:59:23.985738 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 30 16:59:24 crc kubenswrapper[4712]: I0130 16:59:24.203708 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 30 16:59:24 crc kubenswrapper[4712]: I0130 16:59:24.203972 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 30 16:59:24 crc kubenswrapper[4712]: I0130 16:59:24.353180 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 30 16:59:24 crc kubenswrapper[4712]: I0130 16:59:24.401707 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 30 16:59:24 crc kubenswrapper[4712]: I0130 16:59:24.542018 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 30 16:59:24 crc 
kubenswrapper[4712]: I0130 16:59:24.553901 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 16:59:24 crc kubenswrapper[4712]: I0130 16:59:24.638058 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 30 16:59:24 crc kubenswrapper[4712]: I0130 16:59:24.649938 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 30 16:59:24 crc kubenswrapper[4712]: I0130 16:59:24.749735 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 30 16:59:24 crc kubenswrapper[4712]: I0130 16:59:24.753921 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 30 16:59:24 crc kubenswrapper[4712]: I0130 16:59:24.771734 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 30 16:59:24 crc kubenswrapper[4712]: I0130 16:59:24.865382 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 30 16:59:24 crc kubenswrapper[4712]: I0130 16:59:24.891634 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 30 16:59:24 crc kubenswrapper[4712]: I0130 16:59:24.916688 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 30 16:59:24 crc kubenswrapper[4712]: I0130 16:59:24.983297 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 30 16:59:25 crc kubenswrapper[4712]: I0130 16:59:25.029954 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 30 16:59:25 crc kubenswrapper[4712]: I0130 16:59:25.032326 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 30 16:59:25 crc kubenswrapper[4712]: I0130 16:59:25.085425 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 30 16:59:25 crc kubenswrapper[4712]: I0130 16:59:25.095928 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 30 16:59:25 crc kubenswrapper[4712]: I0130 16:59:25.271214 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 30 16:59:25 crc kubenswrapper[4712]: I0130 16:59:25.302644 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 16:59:25 crc kubenswrapper[4712]: I0130 16:59:25.337080 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 30 16:59:25 crc kubenswrapper[4712]: I0130 16:59:25.387150 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 30 16:59:25 crc kubenswrapper[4712]: I0130 16:59:25.417926 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 30 
16:59:25 crc kubenswrapper[4712]: I0130 16:59:25.462583 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 30 16:59:25 crc kubenswrapper[4712]: I0130 16:59:25.463590 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 30 16:59:25 crc kubenswrapper[4712]: I0130 16:59:25.473941 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 30 16:59:25 crc kubenswrapper[4712]: I0130 16:59:25.496049 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 30 16:59:25 crc kubenswrapper[4712]: I0130 16:59:25.499852 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 30 16:59:25 crc kubenswrapper[4712]: I0130 16:59:25.519577 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 30 16:59:25 crc kubenswrapper[4712]: I0130 16:59:25.527411 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 30 16:59:25 crc kubenswrapper[4712]: I0130 16:59:25.757741 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 30 16:59:25 crc kubenswrapper[4712]: I0130 16:59:25.764005 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 30 16:59:25 crc kubenswrapper[4712]: I0130 16:59:25.831919 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 30 16:59:25 crc kubenswrapper[4712]: I0130 16:59:25.857694 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 30 16:59:25 crc kubenswrapper[4712]: I0130 16:59:25.874222 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 30 16:59:25 crc kubenswrapper[4712]: I0130 16:59:25.925575 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 30 16:59:25 crc kubenswrapper[4712]: I0130 16:59:25.949988 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 30 16:59:26 crc kubenswrapper[4712]: I0130 16:59:26.016903 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 30 16:59:26 crc kubenswrapper[4712]: I0130 16:59:26.110958 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 30 16:59:26 crc kubenswrapper[4712]: I0130 16:59:26.157945 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 30 16:59:26 crc kubenswrapper[4712]: I0130 16:59:26.197605 4712 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 30 16:59:26 crc kubenswrapper[4712]: I0130 16:59:26.200240 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" 
podStartSLOduration=37.200225169 podStartE2EDuration="37.200225169s" podCreationTimestamp="2026-01-30 16:58:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:59:09.555494335 +0000 UTC m=+286.462503804" watchObservedRunningTime="2026-01-30 16:59:26.200225169 +0000 UTC m=+303.107234638" Jan 30 16:59:26 crc kubenswrapper[4712]: I0130 16:59:26.201931 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 16:59:26 crc kubenswrapper[4712]: I0130 16:59:26.201968 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 30 16:59:26 crc kubenswrapper[4712]: I0130 16:59:26.207279 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 30 16:59:26 crc kubenswrapper[4712]: I0130 16:59:26.221056 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=17.221023817 podStartE2EDuration="17.221023817s" podCreationTimestamp="2026-01-30 16:59:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 16:59:26.216143789 +0000 UTC m=+303.123153268" watchObservedRunningTime="2026-01-30 16:59:26.221023817 +0000 UTC m=+303.128033286" Jan 30 16:59:26 crc kubenswrapper[4712]: I0130 16:59:26.273123 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 30 16:59:26 crc kubenswrapper[4712]: I0130 16:59:26.302446 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 30 16:59:26 crc kubenswrapper[4712]: I0130 16:59:26.383602 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 30 16:59:26 crc kubenswrapper[4712]: I0130 16:59:26.525323 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 30 16:59:26 crc kubenswrapper[4712]: I0130 16:59:26.569760 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 16:59:26 crc kubenswrapper[4712]: I0130 16:59:26.608749 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 30 16:59:26 crc kubenswrapper[4712]: I0130 16:59:26.620720 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 30 16:59:26 crc kubenswrapper[4712]: I0130 16:59:26.648543 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 30 16:59:26 crc kubenswrapper[4712]: I0130 16:59:26.658107 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 30 16:59:26 crc kubenswrapper[4712]: I0130 16:59:26.717098 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 30 16:59:26 crc kubenswrapper[4712]: I0130 16:59:26.761036 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 30 16:59:26 crc 
kubenswrapper[4712]: I0130 16:59:26.763581 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 30 16:59:26 crc kubenswrapper[4712]: I0130 16:59:26.839451 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 30 16:59:26 crc kubenswrapper[4712]: I0130 16:59:26.949227 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 30 16:59:26 crc kubenswrapper[4712]: I0130 16:59:26.970736 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 30 16:59:27 crc kubenswrapper[4712]: I0130 16:59:27.010395 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 30 16:59:27 crc kubenswrapper[4712]: I0130 16:59:27.011626 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 30 16:59:27 crc kubenswrapper[4712]: I0130 16:59:27.021442 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 30 16:59:27 crc kubenswrapper[4712]: I0130 16:59:27.147411 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 30 16:59:27 crc kubenswrapper[4712]: I0130 16:59:27.213118 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 30 16:59:27 crc kubenswrapper[4712]: I0130 16:59:27.255156 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 30 16:59:27 crc kubenswrapper[4712]: I0130 16:59:27.300637 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 30 16:59:27 crc kubenswrapper[4712]: I0130 16:59:27.309833 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 30 16:59:27 crc kubenswrapper[4712]: I0130 16:59:27.356097 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 30 16:59:27 crc kubenswrapper[4712]: I0130 16:59:27.380629 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 30 16:59:27 crc kubenswrapper[4712]: I0130 16:59:27.405672 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 30 16:59:27 crc kubenswrapper[4712]: I0130 16:59:27.431195 4712 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 30 16:59:27 crc kubenswrapper[4712]: I0130 16:59:27.458864 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 30 16:59:27 crc kubenswrapper[4712]: I0130 16:59:27.546860 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 30 16:59:27 crc kubenswrapper[4712]: I0130 16:59:27.711583 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 30 16:59:27 crc kubenswrapper[4712]: 
Jan 30 16:59:27 crc kubenswrapper[4712]: I0130 16:59:27.822839 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 30 16:59:27 crc kubenswrapper[4712]: I0130 16:59:27.823233 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 30 16:59:27 crc kubenswrapper[4712]: I0130 16:59:27.826595 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 30 16:59:27 crc kubenswrapper[4712]: I0130 16:59:27.867396 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 30 16:59:27 crc kubenswrapper[4712]: I0130 16:59:27.875763 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 30 16:59:27 crc kubenswrapper[4712]: I0130 16:59:27.937464 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 30 16:59:27 crc kubenswrapper[4712]: I0130 16:59:27.978586 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 30 16:59:28 crc kubenswrapper[4712]: I0130 16:59:28.045570 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 30 16:59:28 crc kubenswrapper[4712]: I0130 16:59:28.174818 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 30 16:59:28 crc kubenswrapper[4712]: I0130 16:59:28.245487 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 30 16:59:28 crc kubenswrapper[4712]: I0130 16:59:28.317883 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Jan 30 16:59:28 crc kubenswrapper[4712]: I0130 16:59:28.326715 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 30 16:59:28 crc kubenswrapper[4712]: I0130 16:59:28.364966 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 30 16:59:28 crc kubenswrapper[4712]: I0130 16:59:28.418360 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 30 16:59:28 crc kubenswrapper[4712]: I0130 16:59:28.435719 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 30 16:59:28 crc kubenswrapper[4712]: I0130 16:59:28.452976 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Jan 30 16:59:28 crc kubenswrapper[4712]: I0130 16:59:28.523188 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 30 16:59:28 crc kubenswrapper[4712]: I0130 16:59:28.525501 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 30 16:59:28 crc kubenswrapper[4712]: I0130 16:59:28.532901 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
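[Editor's note] The burst of "Caches populated" lines above comes from client-go reflectors: for each Secret and ConfigMap consumed by a pod on this node, the kubelet runs a dedicated informer, and each line marks that informer's initial list/watch cache becoming complete (the *v1.Service and *v1.Node entries come from the shared informer factory instead of a per-object reflector). A quick way to see the shape of the burst is to tally the lines by resource type. A minimal sketch using only the Go standard library; the regex is an assumption about this journal's formatting, not an official kubelet log schema:

```go
// count_caches.go - tally "Caches populated" reflector lines by resource type.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	re := regexp.MustCompile(`Caches populated for (\S+) from`)
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		if m := re.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++ // e.g. *v1.ConfigMap, *v1.Secret, *v1.Service, *v1.Node
		}
	}
	for typ, n := range counts {
		fmt.Printf("%-15s %d\n", typ, n)
	}
}
```

Run it as `go run count_caches.go < kubelet.log`; on this section it would report a few hundred *v1.ConfigMap and *v1.Secret caches plus the node-level *v1.Service and *v1.Node informers.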
Jan 30 16:59:28 crc kubenswrapper[4712]: I0130 16:59:28.558451 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 30 16:59:28 crc kubenswrapper[4712]: I0130 16:59:28.593421 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 30 16:59:28 crc kubenswrapper[4712]: I0130 16:59:28.650889 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 30 16:59:28 crc kubenswrapper[4712]: I0130 16:59:28.890227 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 30 16:59:28 crc kubenswrapper[4712]: I0130 16:59:28.917168 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 30 16:59:28 crc kubenswrapper[4712]: I0130 16:59:28.926717 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 30 16:59:28 crc kubenswrapper[4712]: I0130 16:59:28.995211 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 30 16:59:29 crc kubenswrapper[4712]: I0130 16:59:29.002186 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 30 16:59:29 crc kubenswrapper[4712]: I0130 16:59:29.062525 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 30 16:59:29 crc kubenswrapper[4712]: I0130 16:59:29.125350 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 30 16:59:29 crc kubenswrapper[4712]: I0130 16:59:29.159785 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 30 16:59:29 crc kubenswrapper[4712]: I0130 16:59:29.171528 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 30 16:59:29 crc kubenswrapper[4712]: I0130 16:59:29.175342 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 30 16:59:29 crc kubenswrapper[4712]: I0130 16:59:29.198581 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 30 16:59:29 crc kubenswrapper[4712]: I0130 16:59:29.402498 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 30 16:59:29 crc kubenswrapper[4712]: I0130 16:59:29.514267 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 30 16:59:29 crc kubenswrapper[4712]: I0130 16:59:29.536960 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 30 16:59:29 crc kubenswrapper[4712]: I0130 16:59:29.564072 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 30 16:59:29 crc kubenswrapper[4712]: I0130 16:59:29.714174 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 30 16:59:29 crc kubenswrapper[4712]: I0130 16:59:29.725306 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Jan 30 16:59:29 crc kubenswrapper[4712]: I0130 16:59:29.832861 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 30 16:59:29 crc kubenswrapper[4712]: I0130 16:59:29.870680 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 30 16:59:30 crc kubenswrapper[4712]: I0130 16:59:30.077217 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Jan 30 16:59:30 crc kubenswrapper[4712]: I0130 16:59:30.120365 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 30 16:59:30 crc kubenswrapper[4712]: I0130 16:59:30.173227 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz"
Jan 30 16:59:30 crc kubenswrapper[4712]: I0130 16:59:30.421846 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Jan 30 16:59:30 crc kubenswrapper[4712]: I0130 16:59:30.432686 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 30 16:59:30 crc kubenswrapper[4712]: I0130 16:59:30.452654 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default"
Jan 30 16:59:30 crc kubenswrapper[4712]: I0130 16:59:30.488263 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 30 16:59:30 crc kubenswrapper[4712]: I0130 16:59:30.489209 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 30 16:59:30 crc kubenswrapper[4712]: I0130 16:59:30.521561 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 30 16:59:30 crc kubenswrapper[4712]: I0130 16:59:30.539847 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 30 16:59:30 crc kubenswrapper[4712]: I0130 16:59:30.540477 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 30 16:59:30 crc kubenswrapper[4712]: I0130 16:59:30.565932 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 30 16:59:30 crc kubenswrapper[4712]: I0130 16:59:30.567285 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 30 16:59:30 crc kubenswrapper[4712]: I0130 16:59:30.579823 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 30 16:59:30 crc kubenswrapper[4712]: I0130 16:59:30.599360 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 30 16:59:30 crc kubenswrapper[4712]: I0130 16:59:30.633417 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 30 16:59:30 crc kubenswrapper[4712]: I0130 16:59:30.699017 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 30 16:59:30 crc kubenswrapper[4712]: I0130 16:59:30.699835 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 30 16:59:30 crc kubenswrapper[4712]: I0130 16:59:30.821595 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 30 16:59:30 crc kubenswrapper[4712]: I0130 16:59:30.888849 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 30 16:59:30 crc kubenswrapper[4712]: I0130 16:59:30.904207 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 30 16:59:30 crc kubenswrapper[4712]: I0130 16:59:30.969315 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 30 16:59:31 crc kubenswrapper[4712]: I0130 16:59:31.072426 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 30 16:59:31 crc kubenswrapper[4712]: I0130 16:59:31.167713 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 30 16:59:31 crc kubenswrapper[4712]: I0130 16:59:31.224535 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 30 16:59:31 crc kubenswrapper[4712]: I0130 16:59:31.240667 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 30 16:59:31 crc kubenswrapper[4712]: I0130 16:59:31.307958 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Jan 30 16:59:31 crc kubenswrapper[4712]: I0130 16:59:31.363947 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 30 16:59:31 crc kubenswrapper[4712]: I0130 16:59:31.409561 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 30 16:59:31 crc kubenswrapper[4712]: I0130 16:59:31.447013 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 30 16:59:31 crc kubenswrapper[4712]: I0130 16:59:31.471930 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 30 16:59:31 crc kubenswrapper[4712]: I0130 16:59:31.499061 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 30 16:59:31 crc kubenswrapper[4712]: I0130 16:59:31.783130 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 30 16:59:31 crc kubenswrapper[4712]: I0130 16:59:31.871445 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 30 16:59:31 crc kubenswrapper[4712]: I0130 16:59:31.956647 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 30 16:59:32 crc kubenswrapper[4712]: I0130 16:59:32.089302 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 16:59:32 crc kubenswrapper[4712]: I0130 16:59:32.099049 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 30 16:59:32 crc kubenswrapper[4712]: I0130 16:59:32.191650 4712 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 16:59:32 crc kubenswrapper[4712]: I0130 16:59:32.191974 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://31a87cbc364daa9be2641b2aeb2682665571bce727fd470fb55b88b92b119014" gracePeriod=5 Jan 30 16:59:32 crc kubenswrapper[4712]: I0130 16:59:32.223650 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 30 16:59:32 crc kubenswrapper[4712]: I0130 16:59:32.225180 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 30 16:59:32 crc kubenswrapper[4712]: I0130 16:59:32.297073 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 30 16:59:32 crc kubenswrapper[4712]: I0130 16:59:32.457322 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 30 16:59:32 crc kubenswrapper[4712]: I0130 16:59:32.525585 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 30 16:59:32 crc kubenswrapper[4712]: I0130 16:59:32.613353 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 30 16:59:32 crc kubenswrapper[4712]: I0130 16:59:32.679485 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 30 16:59:32 crc kubenswrapper[4712]: I0130 16:59:32.711633 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 30 16:59:32 crc kubenswrapper[4712]: I0130 16:59:32.715511 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 30 16:59:32 crc kubenswrapper[4712]: I0130 16:59:32.768598 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 30 16:59:32 crc kubenswrapper[4712]: I0130 16:59:32.798028 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 30 16:59:32 crc kubenswrapper[4712]: I0130 16:59:32.806552 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 30 16:59:32 crc kubenswrapper[4712]: I0130 16:59:32.823919 4712 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-ingress"/"router-metrics-certs-default" Jan 30 16:59:32 crc kubenswrapper[4712]: I0130 16:59:32.880426 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 16:59:33 crc kubenswrapper[4712]: I0130 16:59:33.029487 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 30 16:59:33 crc kubenswrapper[4712]: I0130 16:59:33.102312 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 30 16:59:33 crc kubenswrapper[4712]: I0130 16:59:33.122846 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 30 16:59:33 crc kubenswrapper[4712]: I0130 16:59:33.159957 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 30 16:59:33 crc kubenswrapper[4712]: I0130 16:59:33.208409 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 30 16:59:33 crc kubenswrapper[4712]: I0130 16:59:33.307706 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 30 16:59:33 crc kubenswrapper[4712]: I0130 16:59:33.314662 4712 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 30 16:59:33 crc kubenswrapper[4712]: I0130 16:59:33.393550 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 30 16:59:33 crc kubenswrapper[4712]: I0130 16:59:33.447647 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 30 16:59:33 crc kubenswrapper[4712]: I0130 16:59:33.514787 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 30 16:59:33 crc kubenswrapper[4712]: I0130 16:59:33.573549 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 30 16:59:33 crc kubenswrapper[4712]: I0130 16:59:33.594488 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 16:59:33 crc kubenswrapper[4712]: I0130 16:59:33.637783 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 30 16:59:33 crc kubenswrapper[4712]: I0130 16:59:33.697399 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 30 16:59:33 crc kubenswrapper[4712]: I0130 16:59:33.713573 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 30 16:59:33 crc kubenswrapper[4712]: I0130 16:59:33.803485 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 30 16:59:33 crc kubenswrapper[4712]: I0130 16:59:33.807510 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 30 16:59:33 crc kubenswrapper[4712]: I0130 16:59:33.816503 4712 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 30 16:59:33 crc kubenswrapper[4712]: I0130 16:59:33.853736 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 30 16:59:34 crc kubenswrapper[4712]: I0130 16:59:34.095362 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 30 16:59:34 crc kubenswrapper[4712]: I0130 16:59:34.158835 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 30 16:59:34 crc kubenswrapper[4712]: I0130 16:59:34.177075 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 30 16:59:34 crc kubenswrapper[4712]: I0130 16:59:34.233769 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 30 16:59:34 crc kubenswrapper[4712]: I0130 16:59:34.240077 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 30 16:59:34 crc kubenswrapper[4712]: I0130 16:59:34.403747 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 30 16:59:34 crc kubenswrapper[4712]: I0130 16:59:34.482753 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 30 16:59:34 crc kubenswrapper[4712]: I0130 16:59:34.487769 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 30 16:59:34 crc kubenswrapper[4712]: I0130 16:59:34.506223 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 30 16:59:34 crc kubenswrapper[4712]: I0130 16:59:34.568583 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 30 16:59:34 crc kubenswrapper[4712]: I0130 16:59:34.577782 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 30 16:59:34 crc kubenswrapper[4712]: I0130 16:59:34.599048 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 30 16:59:34 crc kubenswrapper[4712]: I0130 16:59:34.653429 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 30 16:59:34 crc kubenswrapper[4712]: I0130 16:59:34.827895 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 30 16:59:34 crc kubenswrapper[4712]: I0130 16:59:34.856559 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 30 16:59:34 crc kubenswrapper[4712]: I0130 16:59:34.884476 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 30 16:59:34 crc kubenswrapper[4712]: I0130 16:59:34.905145 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 30 16:59:34 crc kubenswrapper[4712]: I0130 16:59:34.937401 4712 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"client-ca" Jan 30 16:59:35 crc kubenswrapper[4712]: I0130 16:59:35.098599 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 30 16:59:35 crc kubenswrapper[4712]: I0130 16:59:35.155007 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 30 16:59:35 crc kubenswrapper[4712]: I0130 16:59:35.177427 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 30 16:59:35 crc kubenswrapper[4712]: I0130 16:59:35.293982 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 30 16:59:35 crc kubenswrapper[4712]: I0130 16:59:35.510969 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 30 16:59:35 crc kubenswrapper[4712]: I0130 16:59:35.886937 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 30 16:59:35 crc kubenswrapper[4712]: I0130 16:59:35.993996 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 30 16:59:36 crc kubenswrapper[4712]: I0130 16:59:36.224987 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 30 16:59:36 crc kubenswrapper[4712]: I0130 16:59:36.246751 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 30 16:59:36 crc kubenswrapper[4712]: I0130 16:59:36.458949 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 30 16:59:36 crc kubenswrapper[4712]: I0130 16:59:36.611587 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 30 16:59:36 crc kubenswrapper[4712]: I0130 16:59:36.947988 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 30 16:59:37 crc kubenswrapper[4712]: I0130 16:59:37.354899 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 30 16:59:37 crc kubenswrapper[4712]: I0130 16:59:37.355248 4712 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="31a87cbc364daa9be2641b2aeb2682665571bce727fd470fb55b88b92b119014" exitCode=137 Jan 30 16:59:37 crc kubenswrapper[4712]: I0130 16:59:37.434044 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 30 16:59:37 crc kubenswrapper[4712]: I0130 16:59:37.794150 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 30 16:59:37 crc kubenswrapper[4712]: I0130 16:59:37.794232 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:59:37 crc kubenswrapper[4712]: I0130 16:59:37.812506 4712 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Jan 30 16:59:37 crc kubenswrapper[4712]: I0130 16:59:37.830763 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 16:59:37 crc kubenswrapper[4712]: I0130 16:59:37.830813 4712 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="e607bd97-23f6-4e6d-8790-b57367eab773" Jan 30 16:59:37 crc kubenswrapper[4712]: I0130 16:59:37.838053 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 30 16:59:37 crc kubenswrapper[4712]: I0130 16:59:37.838118 4712 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="e607bd97-23f6-4e6d-8790-b57367eab773" Jan 30 16:59:37 crc kubenswrapper[4712]: I0130 16:59:37.948500 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 30 16:59:37 crc kubenswrapper[4712]: I0130 16:59:37.975600 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 16:59:37 crc kubenswrapper[4712]: I0130 16:59:37.976009 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 16:59:37 crc kubenswrapper[4712]: I0130 16:59:37.976203 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 16:59:37 crc kubenswrapper[4712]: I0130 16:59:37.976357 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 16:59:37 crc kubenswrapper[4712]: I0130 16:59:37.976495 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 30 16:59:37 crc kubenswrapper[4712]: I0130 16:59:37.975714 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:59:37 crc kubenswrapper[4712]: I0130 16:59:37.976121 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:59:37 crc kubenswrapper[4712]: I0130 16:59:37.976247 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:59:37 crc kubenswrapper[4712]: I0130 16:59:37.976627 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:59:37 crc kubenswrapper[4712]: I0130 16:59:37.977270 4712 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 16:59:37 crc kubenswrapper[4712]: I0130 16:59:37.977421 4712 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 30 16:59:37 crc kubenswrapper[4712]: I0130 16:59:37.977518 4712 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 30 16:59:37 crc kubenswrapper[4712]: I0130 16:59:37.977616 4712 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 30 16:59:37 crc kubenswrapper[4712]: I0130 16:59:37.987394 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 16:59:38 crc kubenswrapper[4712]: I0130 16:59:38.070607 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 30 16:59:38 crc kubenswrapper[4712]: I0130 16:59:38.078916 4712 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 30 16:59:38 crc kubenswrapper[4712]: I0130 16:59:38.362537 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 30 16:59:38 crc kubenswrapper[4712]: I0130 16:59:38.362645 4712 scope.go:117] "RemoveContainer" containerID="31a87cbc364daa9be2641b2aeb2682665571bce727fd470fb55b88b92b119014" Jan 30 16:59:38 crc kubenswrapper[4712]: I0130 16:59:38.362714 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 30 16:59:39 crc kubenswrapper[4712]: I0130 16:59:39.808487 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 30 16:59:53 crc kubenswrapper[4712]: I0130 16:59:53.476890 4712 generic.go:334] "Generic (PLEG): container finished" podID="c9e01529-72ef-487b-ac85-e90905240355" containerID="2a2bd34f12cd978dc1ac6c6ed2d453d30a8e9b069efc0b279bf1d2e70cc0247d" exitCode=0 Jan 30 16:59:53 crc kubenswrapper[4712]: I0130 16:59:53.477063 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-v2t5z" event={"ID":"c9e01529-72ef-487b-ac85-e90905240355","Type":"ContainerDied","Data":"2a2bd34f12cd978dc1ac6c6ed2d453d30a8e9b069efc0b279bf1d2e70cc0247d"} Jan 30 16:59:53 crc kubenswrapper[4712]: I0130 16:59:53.478593 4712 scope.go:117] "RemoveContainer" containerID="2a2bd34f12cd978dc1ac6c6ed2d453d30a8e9b069efc0b279bf1d2e70cc0247d" Jan 30 16:59:54 crc kubenswrapper[4712]: I0130 16:59:54.484242 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-v2t5z" event={"ID":"c9e01529-72ef-487b-ac85-e90905240355","Type":"ContainerStarted","Data":"a46f7acf8677c1283ade8810247067a5d6e79878471006f3f0e54b58a591cc50"} Jan 30 16:59:54 crc kubenswrapper[4712]: I0130 16:59:54.484859 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-v2t5z" Jan 30 16:59:54 crc kubenswrapper[4712]: I0130 16:59:54.488376 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-v2t5z" Jan 30 17:00:00 crc kubenswrapper[4712]: I0130 17:00:00.177596 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496540-wsbpn"] Jan 30 17:00:00 crc kubenswrapper[4712]: E0130 17:00:00.179306 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 17:00:00 crc kubenswrapper[4712]: I0130 17:00:00.179410 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 30 17:00:00 crc kubenswrapper[4712]: E0130 
Jan 30 17:00:00 crc kubenswrapper[4712]: I0130 17:00:00.179566 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cd49b09-e90f-4bfd-b4a0-357240cac04d" containerName="installer"
Jan 30 17:00:00 crc kubenswrapper[4712]: I0130 17:00:00.179763 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 30 17:00:00 crc kubenswrapper[4712]: I0130 17:00:00.179922 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cd49b09-e90f-4bfd-b4a0-357240cac04d" containerName="installer"
Jan 30 17:00:00 crc kubenswrapper[4712]: I0130 17:00:00.180423 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-wsbpn"
Jan 30 17:00:00 crc kubenswrapper[4712]: I0130 17:00:00.183543 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 30 17:00:00 crc kubenswrapper[4712]: I0130 17:00:00.194220 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 30 17:00:00 crc kubenswrapper[4712]: I0130 17:00:00.199571 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496540-wsbpn"]
Jan 30 17:00:00 crc kubenswrapper[4712]: I0130 17:00:00.269305 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8ca2603d-40c8-4dc1-bc32-c4d549a66184-secret-volume\") pod \"collect-profiles-29496540-wsbpn\" (UID: \"8ca2603d-40c8-4dc1-bc32-c4d549a66184\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-wsbpn"
Jan 30 17:00:00 crc kubenswrapper[4712]: I0130 17:00:00.269432 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ca2603d-40c8-4dc1-bc32-c4d549a66184-config-volume\") pod \"collect-profiles-29496540-wsbpn\" (UID: \"8ca2603d-40c8-4dc1-bc32-c4d549a66184\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-wsbpn"
Jan 30 17:00:00 crc kubenswrapper[4712]: I0130 17:00:00.269470 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg7bq\" (UniqueName: \"kubernetes.io/projected/8ca2603d-40c8-4dc1-bc32-c4d549a66184-kube-api-access-tg7bq\") pod \"collect-profiles-29496540-wsbpn\" (UID: \"8ca2603d-40c8-4dc1-bc32-c4d549a66184\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-wsbpn"
Jan 30 17:00:00 crc kubenswrapper[4712]: I0130 17:00:00.370371 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8ca2603d-40c8-4dc1-bc32-c4d549a66184-secret-volume\") pod \"collect-profiles-29496540-wsbpn\" (UID: \"8ca2603d-40c8-4dc1-bc32-c4d549a66184\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-wsbpn"
Jan 30 17:00:00 crc kubenswrapper[4712]: I0130 17:00:00.370464 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ca2603d-40c8-4dc1-bc32-c4d549a66184-config-volume\") pod \"collect-profiles-29496540-wsbpn\" (UID: \"8ca2603d-40c8-4dc1-bc32-c4d549a66184\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-wsbpn"
\"collect-profiles-29496540-wsbpn\" (UID: \"8ca2603d-40c8-4dc1-bc32-c4d549a66184\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-wsbpn" Jan 30 17:00:00 crc kubenswrapper[4712]: I0130 17:00:00.370492 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg7bq\" (UniqueName: \"kubernetes.io/projected/8ca2603d-40c8-4dc1-bc32-c4d549a66184-kube-api-access-tg7bq\") pod \"collect-profiles-29496540-wsbpn\" (UID: \"8ca2603d-40c8-4dc1-bc32-c4d549a66184\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-wsbpn" Jan 30 17:00:00 crc kubenswrapper[4712]: I0130 17:00:00.371480 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ca2603d-40c8-4dc1-bc32-c4d549a66184-config-volume\") pod \"collect-profiles-29496540-wsbpn\" (UID: \"8ca2603d-40c8-4dc1-bc32-c4d549a66184\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-wsbpn" Jan 30 17:00:00 crc kubenswrapper[4712]: I0130 17:00:00.376890 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8ca2603d-40c8-4dc1-bc32-c4d549a66184-secret-volume\") pod \"collect-profiles-29496540-wsbpn\" (UID: \"8ca2603d-40c8-4dc1-bc32-c4d549a66184\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-wsbpn" Jan 30 17:00:00 crc kubenswrapper[4712]: I0130 17:00:00.394889 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tg7bq\" (UniqueName: \"kubernetes.io/projected/8ca2603d-40c8-4dc1-bc32-c4d549a66184-kube-api-access-tg7bq\") pod \"collect-profiles-29496540-wsbpn\" (UID: \"8ca2603d-40c8-4dc1-bc32-c4d549a66184\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-wsbpn" Jan 30 17:00:00 crc kubenswrapper[4712]: I0130 17:00:00.498041 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-wsbpn" Jan 30 17:00:00 crc kubenswrapper[4712]: I0130 17:00:00.937299 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496540-wsbpn"] Jan 30 17:00:01 crc kubenswrapper[4712]: I0130 17:00:01.523897 4712 generic.go:334] "Generic (PLEG): container finished" podID="8ca2603d-40c8-4dc1-bc32-c4d549a66184" containerID="ba5effc54563181ee3852ad78379920b530a1e62bc07e724c849cc7e59b16add" exitCode=0 Jan 30 17:00:01 crc kubenswrapper[4712]: I0130 17:00:01.524189 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-wsbpn" event={"ID":"8ca2603d-40c8-4dc1-bc32-c4d549a66184","Type":"ContainerDied","Data":"ba5effc54563181ee3852ad78379920b530a1e62bc07e724c849cc7e59b16add"} Jan 30 17:00:01 crc kubenswrapper[4712]: I0130 17:00:01.524233 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-wsbpn" event={"ID":"8ca2603d-40c8-4dc1-bc32-c4d549a66184","Type":"ContainerStarted","Data":"53c01651c4af74a3e4331d41f939cb9c5e0ce25b0a1611c53664cd3ab166a8e3"} Jan 30 17:00:02 crc kubenswrapper[4712]: I0130 17:00:02.748120 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-wsbpn" Jan 30 17:00:02 crc kubenswrapper[4712]: I0130 17:00:02.901166 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ca2603d-40c8-4dc1-bc32-c4d549a66184-config-volume\") pod \"8ca2603d-40c8-4dc1-bc32-c4d549a66184\" (UID: \"8ca2603d-40c8-4dc1-bc32-c4d549a66184\") " Jan 30 17:00:02 crc kubenswrapper[4712]: I0130 17:00:02.901249 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg7bq\" (UniqueName: \"kubernetes.io/projected/8ca2603d-40c8-4dc1-bc32-c4d549a66184-kube-api-access-tg7bq\") pod \"8ca2603d-40c8-4dc1-bc32-c4d549a66184\" (UID: \"8ca2603d-40c8-4dc1-bc32-c4d549a66184\") " Jan 30 17:00:02 crc kubenswrapper[4712]: I0130 17:00:02.901317 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8ca2603d-40c8-4dc1-bc32-c4d549a66184-secret-volume\") pod \"8ca2603d-40c8-4dc1-bc32-c4d549a66184\" (UID: \"8ca2603d-40c8-4dc1-bc32-c4d549a66184\") " Jan 30 17:00:02 crc kubenswrapper[4712]: I0130 17:00:02.902721 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ca2603d-40c8-4dc1-bc32-c4d549a66184-config-volume" (OuterVolumeSpecName: "config-volume") pod "8ca2603d-40c8-4dc1-bc32-c4d549a66184" (UID: "8ca2603d-40c8-4dc1-bc32-c4d549a66184"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:00:02 crc kubenswrapper[4712]: I0130 17:00:02.910349 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ca2603d-40c8-4dc1-bc32-c4d549a66184-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8ca2603d-40c8-4dc1-bc32-c4d549a66184" (UID: "8ca2603d-40c8-4dc1-bc32-c4d549a66184"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:00:02 crc kubenswrapper[4712]: I0130 17:00:02.923961 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ca2603d-40c8-4dc1-bc32-c4d549a66184-kube-api-access-tg7bq" (OuterVolumeSpecName: "kube-api-access-tg7bq") pod "8ca2603d-40c8-4dc1-bc32-c4d549a66184" (UID: "8ca2603d-40c8-4dc1-bc32-c4d549a66184"). InnerVolumeSpecName "kube-api-access-tg7bq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:00:03 crc kubenswrapper[4712]: I0130 17:00:03.003387 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg7bq\" (UniqueName: \"kubernetes.io/projected/8ca2603d-40c8-4dc1-bc32-c4d549a66184-kube-api-access-tg7bq\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:03 crc kubenswrapper[4712]: I0130 17:00:03.003439 4712 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8ca2603d-40c8-4dc1-bc32-c4d549a66184-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:03 crc kubenswrapper[4712]: I0130 17:00:03.003459 4712 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8ca2603d-40c8-4dc1-bc32-c4d549a66184-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:03 crc kubenswrapper[4712]: I0130 17:00:03.536839 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-wsbpn" event={"ID":"8ca2603d-40c8-4dc1-bc32-c4d549a66184","Type":"ContainerDied","Data":"53c01651c4af74a3e4331d41f939cb9c5e0ce25b0a1611c53664cd3ab166a8e3"} Jan 30 17:00:03 crc kubenswrapper[4712]: I0130 17:00:03.537112 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53c01651c4af74a3e4331d41f939cb9c5e0ce25b0a1611c53664cd3ab166a8e3" Jan 30 17:00:03 crc kubenswrapper[4712]: I0130 17:00:03.536895 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496540-wsbpn" Jan 30 17:00:04 crc kubenswrapper[4712]: I0130 17:00:04.386467 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-m96vb"] Jan 30 17:00:04 crc kubenswrapper[4712]: I0130 17:00:04.387063 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-m96vb" podUID="69f69514-00d4-42fd-b010-2b6e4bc7b2fe" containerName="controller-manager" containerID="cri-o://00eab9e34ecc007170db0d0e33bf5325bbfa75bdb29c4e7a6e09013caf180b29" gracePeriod=30 Jan 30 17:00:04 crc kubenswrapper[4712]: I0130 17:00:04.497655 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tffxt"] Jan 30 17:00:04 crc kubenswrapper[4712]: I0130 17:00:04.497926 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tffxt" podUID="b34e60ff-e00e-485a-b7e0-1dded6c68091" containerName="route-controller-manager" containerID="cri-o://53b909d5b100ed04291d275d4cce945258919fbe73b2bcfe76ab624cc5eb1972" gracePeriod=30 Jan 30 17:00:04 crc kubenswrapper[4712]: I0130 17:00:04.543327 4712 generic.go:334] "Generic (PLEG): container finished" podID="69f69514-00d4-42fd-b010-2b6e4bc7b2fe" containerID="00eab9e34ecc007170db0d0e33bf5325bbfa75bdb29c4e7a6e09013caf180b29" exitCode=0 Jan 30 17:00:04 crc kubenswrapper[4712]: I0130 17:00:04.543560 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-m96vb" event={"ID":"69f69514-00d4-42fd-b010-2b6e4bc7b2fe","Type":"ContainerDied","Data":"00eab9e34ecc007170db0d0e33bf5325bbfa75bdb29c4e7a6e09013caf180b29"} Jan 30 17:00:04 crc kubenswrapper[4712]: I0130 17:00:04.839039 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-m96vb" Jan 30 17:00:04 crc kubenswrapper[4712]: I0130 17:00:04.871662 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tffxt" Jan 30 17:00:04 crc kubenswrapper[4712]: I0130 17:00:04.929519 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69f69514-00d4-42fd-b010-2b6e4bc7b2fe-config\") pod \"69f69514-00d4-42fd-b010-2b6e4bc7b2fe\" (UID: \"69f69514-00d4-42fd-b010-2b6e4bc7b2fe\") " Jan 30 17:00:04 crc kubenswrapper[4712]: I0130 17:00:04.929580 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8zfw\" (UniqueName: \"kubernetes.io/projected/69f69514-00d4-42fd-b010-2b6e4bc7b2fe-kube-api-access-r8zfw\") pod \"69f69514-00d4-42fd-b010-2b6e4bc7b2fe\" (UID: \"69f69514-00d4-42fd-b010-2b6e4bc7b2fe\") " Jan 30 17:00:04 crc kubenswrapper[4712]: I0130 17:00:04.929642 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69f69514-00d4-42fd-b010-2b6e4bc7b2fe-proxy-ca-bundles\") pod \"69f69514-00d4-42fd-b010-2b6e4bc7b2fe\" (UID: \"69f69514-00d4-42fd-b010-2b6e4bc7b2fe\") " Jan 30 17:00:04 crc kubenswrapper[4712]: I0130 17:00:04.929688 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69f69514-00d4-42fd-b010-2b6e4bc7b2fe-serving-cert\") pod \"69f69514-00d4-42fd-b010-2b6e4bc7b2fe\" (UID: \"69f69514-00d4-42fd-b010-2b6e4bc7b2fe\") " Jan 30 17:00:04 crc kubenswrapper[4712]: I0130 17:00:04.929714 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69f69514-00d4-42fd-b010-2b6e4bc7b2fe-client-ca\") pod \"69f69514-00d4-42fd-b010-2b6e4bc7b2fe\" (UID: \"69f69514-00d4-42fd-b010-2b6e4bc7b2fe\") " Jan 30 17:00:04 crc kubenswrapper[4712]: I0130 17:00:04.931839 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69f69514-00d4-42fd-b010-2b6e4bc7b2fe-config" (OuterVolumeSpecName: "config") pod "69f69514-00d4-42fd-b010-2b6e4bc7b2fe" (UID: "69f69514-00d4-42fd-b010-2b6e4bc7b2fe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:00:04 crc kubenswrapper[4712]: I0130 17:00:04.931993 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69f69514-00d4-42fd-b010-2b6e4bc7b2fe-client-ca" (OuterVolumeSpecName: "client-ca") pod "69f69514-00d4-42fd-b010-2b6e4bc7b2fe" (UID: "69f69514-00d4-42fd-b010-2b6e4bc7b2fe"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:00:04 crc kubenswrapper[4712]: I0130 17:00:04.932124 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/69f69514-00d4-42fd-b010-2b6e4bc7b2fe-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "69f69514-00d4-42fd-b010-2b6e4bc7b2fe" (UID: "69f69514-00d4-42fd-b010-2b6e4bc7b2fe"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:00:04 crc kubenswrapper[4712]: I0130 17:00:04.941388 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/69f69514-00d4-42fd-b010-2b6e4bc7b2fe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "69f69514-00d4-42fd-b010-2b6e4bc7b2fe" (UID: "69f69514-00d4-42fd-b010-2b6e4bc7b2fe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:00:04 crc kubenswrapper[4712]: I0130 17:00:04.941610 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69f69514-00d4-42fd-b010-2b6e4bc7b2fe-kube-api-access-r8zfw" (OuterVolumeSpecName: "kube-api-access-r8zfw") pod "69f69514-00d4-42fd-b010-2b6e4bc7b2fe" (UID: "69f69514-00d4-42fd-b010-2b6e4bc7b2fe"). InnerVolumeSpecName "kube-api-access-r8zfw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.030990 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b34e60ff-e00e-485a-b7e0-1dded6c68091-serving-cert\") pod \"b34e60ff-e00e-485a-b7e0-1dded6c68091\" (UID: \"b34e60ff-e00e-485a-b7e0-1dded6c68091\") " Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.031148 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b34e60ff-e00e-485a-b7e0-1dded6c68091-config\") pod \"b34e60ff-e00e-485a-b7e0-1dded6c68091\" (UID: \"b34e60ff-e00e-485a-b7e0-1dded6c68091\") " Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.031237 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b34e60ff-e00e-485a-b7e0-1dded6c68091-client-ca\") pod \"b34e60ff-e00e-485a-b7e0-1dded6c68091\" (UID: \"b34e60ff-e00e-485a-b7e0-1dded6c68091\") " Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.031262 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghsjc\" (UniqueName: \"kubernetes.io/projected/b34e60ff-e00e-485a-b7e0-1dded6c68091-kube-api-access-ghsjc\") pod \"b34e60ff-e00e-485a-b7e0-1dded6c68091\" (UID: \"b34e60ff-e00e-485a-b7e0-1dded6c68091\") " Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.031448 4712 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/69f69514-00d4-42fd-b010-2b6e4bc7b2fe-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.031460 4712 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/69f69514-00d4-42fd-b010-2b6e4bc7b2fe-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.031468 4712 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/69f69514-00d4-42fd-b010-2b6e4bc7b2fe-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.031476 4712 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/69f69514-00d4-42fd-b010-2b6e4bc7b2fe-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.031485 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8zfw\" (UniqueName: 
\"kubernetes.io/projected/69f69514-00d4-42fd-b010-2b6e4bc7b2fe-kube-api-access-r8zfw\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.032631 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b34e60ff-e00e-485a-b7e0-1dded6c68091-client-ca" (OuterVolumeSpecName: "client-ca") pod "b34e60ff-e00e-485a-b7e0-1dded6c68091" (UID: "b34e60ff-e00e-485a-b7e0-1dded6c68091"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.032717 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b34e60ff-e00e-485a-b7e0-1dded6c68091-config" (OuterVolumeSpecName: "config") pod "b34e60ff-e00e-485a-b7e0-1dded6c68091" (UID: "b34e60ff-e00e-485a-b7e0-1dded6c68091"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.034814 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b34e60ff-e00e-485a-b7e0-1dded6c68091-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b34e60ff-e00e-485a-b7e0-1dded6c68091" (UID: "b34e60ff-e00e-485a-b7e0-1dded6c68091"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.035642 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b34e60ff-e00e-485a-b7e0-1dded6c68091-kube-api-access-ghsjc" (OuterVolumeSpecName: "kube-api-access-ghsjc") pod "b34e60ff-e00e-485a-b7e0-1dded6c68091" (UID: "b34e60ff-e00e-485a-b7e0-1dded6c68091"). InnerVolumeSpecName "kube-api-access-ghsjc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.133342 4712 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b34e60ff-e00e-485a-b7e0-1dded6c68091-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.133403 4712 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b34e60ff-e00e-485a-b7e0-1dded6c68091-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.133421 4712 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b34e60ff-e00e-485a-b7e0-1dded6c68091-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.133439 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghsjc\" (UniqueName: \"kubernetes.io/projected/b34e60ff-e00e-485a-b7e0-1dded6c68091-kube-api-access-ghsjc\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.551224 4712 generic.go:334] "Generic (PLEG): container finished" podID="b34e60ff-e00e-485a-b7e0-1dded6c68091" containerID="53b909d5b100ed04291d275d4cce945258919fbe73b2bcfe76ab624cc5eb1972" exitCode=0 Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.551279 4712 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.551349 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tffxt" event={"ID":"b34e60ff-e00e-485a-b7e0-1dded6c68091","Type":"ContainerDied","Data":"53b909d5b100ed04291d275d4cce945258919fbe73b2bcfe76ab624cc5eb1972"}
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.551403 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-tffxt" event={"ID":"b34e60ff-e00e-485a-b7e0-1dded6c68091","Type":"ContainerDied","Data":"55c8570b072853d8293cba09a2623036333c1069a40ae777b1244be2c25922e3"}
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.551438 4712 scope.go:117] "RemoveContainer" containerID="53b909d5b100ed04291d275d4cce945258919fbe73b2bcfe76ab624cc5eb1972"
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.553882 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-m96vb" event={"ID":"69f69514-00d4-42fd-b010-2b6e4bc7b2fe","Type":"ContainerDied","Data":"5c31555924514a2320bb19fbe9f8ab227decea537868f7908832d7c4673cc5aa"}
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.553912 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-m96vb"
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.574758 4712 scope.go:117] "RemoveContainer" containerID="53b909d5b100ed04291d275d4cce945258919fbe73b2bcfe76ab624cc5eb1972"
Jan 30 17:00:05 crc kubenswrapper[4712]: E0130 17:00:05.575338 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53b909d5b100ed04291d275d4cce945258919fbe73b2bcfe76ab624cc5eb1972\": container with ID starting with 53b909d5b100ed04291d275d4cce945258919fbe73b2bcfe76ab624cc5eb1972 not found: ID does not exist" containerID="53b909d5b100ed04291d275d4cce945258919fbe73b2bcfe76ab624cc5eb1972"
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.575383 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53b909d5b100ed04291d275d4cce945258919fbe73b2bcfe76ab624cc5eb1972"} err="failed to get container status \"53b909d5b100ed04291d275d4cce945258919fbe73b2bcfe76ab624cc5eb1972\": rpc error: code = NotFound desc = could not find container \"53b909d5b100ed04291d275d4cce945258919fbe73b2bcfe76ab624cc5eb1972\": container with ID starting with 53b909d5b100ed04291d275d4cce945258919fbe73b2bcfe76ab624cc5eb1972 not found: ID does not exist"
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.575410 4712 scope.go:117] "RemoveContainer" containerID="00eab9e34ecc007170db0d0e33bf5325bbfa75bdb29c4e7a6e09013caf180b29"
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.586508 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tffxt"]
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.596260 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-tffxt"]
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.611875 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-m96vb"]
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.614383 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-m96vb"]
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.704263 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-598cb7b6db-zbcpx"]
Jan 30 17:00:05 crc kubenswrapper[4712]: E0130 17:00:05.704754 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ca2603d-40c8-4dc1-bc32-c4d549a66184" containerName="collect-profiles"
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.704919 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ca2603d-40c8-4dc1-bc32-c4d549a66184" containerName="collect-profiles"
Jan 30 17:00:05 crc kubenswrapper[4712]: E0130 17:00:05.705020 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69f69514-00d4-42fd-b010-2b6e4bc7b2fe" containerName="controller-manager"
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.705096 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="69f69514-00d4-42fd-b010-2b6e4bc7b2fe" containerName="controller-manager"
Jan 30 17:00:05 crc kubenswrapper[4712]: E0130 17:00:05.705161 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b34e60ff-e00e-485a-b7e0-1dded6c68091" containerName="route-controller-manager"
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.705213 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="b34e60ff-e00e-485a-b7e0-1dded6c68091" containerName="route-controller-manager"
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.705385 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ca2603d-40c8-4dc1-bc32-c4d549a66184" containerName="collect-profiles"
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.705453 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="69f69514-00d4-42fd-b010-2b6e4bc7b2fe" containerName="controller-manager"
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.705534 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="b34e60ff-e00e-485a-b7e0-1dded6c68091" containerName="route-controller-manager"
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.706089 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-598cb7b6db-zbcpx"
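The RemoveContainer / NotFound exchange above is the kubelet tolerating a container the runtime has already deleted: the CRI ContainerStatus call fails with gRPC code NotFound, the error is logged, and the deletion is treated as complete because the desired state (container gone) already holds. A sketch of that tolerance (assumes the google.golang.org/grpc module; removeContainer and the stubbed CRI call are hypothetical, not kubelet's actual code):

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer treats a NotFound status from the runtime as success:
// the container is already gone, so there is nothing left to remove.
func removeContainer(id string, containerStatus func(string) error) error {
	if err := containerStatus(id); err != nil {
		if status.Code(err) == codes.NotFound {
			fmt.Printf("container %s already gone; treating removal as complete\n", id)
			return nil
		}
		return err // any other runtime error is real and propagated
	}
	fmt.Printf("container %s still present; proceeding with removal\n", id)
	return nil
}

func main() {
	// Stub CRI call that reproduces the failure mode in the log above.
	notFound := func(string) error {
		return status.Error(codes.NotFound, "could not find container")
	}
	_ = removeContainer("53b909d5b100ed04291d275d4cce945258919fbe73b2bcfe76ab624cc5eb1972", notFound)
}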
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.708030 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.710566 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.710597 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.714207 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.714443 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.714272 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.719224 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-598cb7b6db-zbcpx"]
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.723489 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.805680 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69f69514-00d4-42fd-b010-2b6e4bc7b2fe" path="/var/lib/kubelet/pods/69f69514-00d4-42fd-b010-2b6e4bc7b2fe/volumes"
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.806177 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b34e60ff-e00e-485a-b7e0-1dded6c68091" path="/var/lib/kubelet/pods/b34e60ff-e00e-485a-b7e0-1dded6c68091/volumes"
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.841561 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2f459a6-9981-442e-994d-df2bcbd124b1-client-ca\") pod \"controller-manager-598cb7b6db-zbcpx\" (UID: \"b2f459a6-9981-442e-994d-df2bcbd124b1\") " pod="openshift-controller-manager/controller-manager-598cb7b6db-zbcpx"
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.841601 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2f459a6-9981-442e-994d-df2bcbd124b1-serving-cert\") pod \"controller-manager-598cb7b6db-zbcpx\" (UID: \"b2f459a6-9981-442e-994d-df2bcbd124b1\") " pod="openshift-controller-manager/controller-manager-598cb7b6db-zbcpx"
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.841641 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2f459a6-9981-442e-994d-df2bcbd124b1-proxy-ca-bundles\") pod \"controller-manager-598cb7b6db-zbcpx\" (UID: \"b2f459a6-9981-442e-994d-df2bcbd124b1\") " pod="openshift-controller-manager/controller-manager-598cb7b6db-zbcpx"
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.841675 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2f459a6-9981-442e-994d-df2bcbd124b1-config\") pod \"controller-manager-598cb7b6db-zbcpx\" (UID: \"b2f459a6-9981-442e-994d-df2bcbd124b1\") " pod="openshift-controller-manager/controller-manager-598cb7b6db-zbcpx"
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.841701 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2gr8\" (UniqueName: \"kubernetes.io/projected/b2f459a6-9981-442e-994d-df2bcbd124b1-kube-api-access-l2gr8\") pod \"controller-manager-598cb7b6db-zbcpx\" (UID: \"b2f459a6-9981-442e-994d-df2bcbd124b1\") " pod="openshift-controller-manager/controller-manager-598cb7b6db-zbcpx"
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.943362 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2f459a6-9981-442e-994d-df2bcbd124b1-proxy-ca-bundles\") pod \"controller-manager-598cb7b6db-zbcpx\" (UID: \"b2f459a6-9981-442e-994d-df2bcbd124b1\") " pod="openshift-controller-manager/controller-manager-598cb7b6db-zbcpx"
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.943613 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2f459a6-9981-442e-994d-df2bcbd124b1-config\") pod \"controller-manager-598cb7b6db-zbcpx\" (UID: \"b2f459a6-9981-442e-994d-df2bcbd124b1\") " pod="openshift-controller-manager/controller-manager-598cb7b6db-zbcpx"
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.943691 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l2gr8\" (UniqueName: \"kubernetes.io/projected/b2f459a6-9981-442e-994d-df2bcbd124b1-kube-api-access-l2gr8\") pod \"controller-manager-598cb7b6db-zbcpx\" (UID: \"b2f459a6-9981-442e-994d-df2bcbd124b1\") " pod="openshift-controller-manager/controller-manager-598cb7b6db-zbcpx"
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.943782 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2f459a6-9981-442e-994d-df2bcbd124b1-client-ca\") pod \"controller-manager-598cb7b6db-zbcpx\" (UID: \"b2f459a6-9981-442e-994d-df2bcbd124b1\") " pod="openshift-controller-manager/controller-manager-598cb7b6db-zbcpx"
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.943911 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2f459a6-9981-442e-994d-df2bcbd124b1-serving-cert\") pod \"controller-manager-598cb7b6db-zbcpx\" (UID: \"b2f459a6-9981-442e-994d-df2bcbd124b1\") " pod="openshift-controller-manager/controller-manager-598cb7b6db-zbcpx"
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.944938 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2f459a6-9981-442e-994d-df2bcbd124b1-proxy-ca-bundles\") pod \"controller-manager-598cb7b6db-zbcpx\" (UID: \"b2f459a6-9981-442e-994d-df2bcbd124b1\") " pod="openshift-controller-manager/controller-manager-598cb7b6db-zbcpx"
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.945114 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2f459a6-9981-442e-994d-df2bcbd124b1-client-ca\") pod \"controller-manager-598cb7b6db-zbcpx\" (UID: \"b2f459a6-9981-442e-994d-df2bcbd124b1\") " pod="openshift-controller-manager/controller-manager-598cb7b6db-zbcpx"
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.945482 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2f459a6-9981-442e-994d-df2bcbd124b1-config\") pod \"controller-manager-598cb7b6db-zbcpx\" (UID: \"b2f459a6-9981-442e-994d-df2bcbd124b1\") " pod="openshift-controller-manager/controller-manager-598cb7b6db-zbcpx"
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.951610 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2f459a6-9981-442e-994d-df2bcbd124b1-serving-cert\") pod \"controller-manager-598cb7b6db-zbcpx\" (UID: \"b2f459a6-9981-442e-994d-df2bcbd124b1\") " pod="openshift-controller-manager/controller-manager-598cb7b6db-zbcpx"
Jan 30 17:00:05 crc kubenswrapper[4712]: I0130 17:00:05.969559 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l2gr8\" (UniqueName: \"kubernetes.io/projected/b2f459a6-9981-442e-994d-df2bcbd124b1-kube-api-access-l2gr8\") pod \"controller-manager-598cb7b6db-zbcpx\" (UID: \"b2f459a6-9981-442e-994d-df2bcbd124b1\") " pod="openshift-controller-manager/controller-manager-598cb7b6db-zbcpx"
Jan 30 17:00:06 crc kubenswrapper[4712]: I0130 17:00:06.021016 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-598cb7b6db-zbcpx"
Jan 30 17:00:06 crc kubenswrapper[4712]: I0130 17:00:06.247633 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-598cb7b6db-zbcpx"]
Jan 30 17:00:06 crc kubenswrapper[4712]: I0130 17:00:06.560619 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-598cb7b6db-zbcpx" event={"ID":"b2f459a6-9981-442e-994d-df2bcbd124b1","Type":"ContainerStarted","Data":"9007848e6137096551f7522344b5756fb56a55ff9a9a52575bbc4e8a264286c6"}
Jan 30 17:00:06 crc kubenswrapper[4712]: I0130 17:00:06.561017 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-598cb7b6db-zbcpx" event={"ID":"b2f459a6-9981-442e-994d-df2bcbd124b1","Type":"ContainerStarted","Data":"8074f569f7ec0add8fe065ae27c449351ca02acd962a9d9e21a4859cdab1bde6"}
Jan 30 17:00:06 crc kubenswrapper[4712]: I0130 17:00:06.561042 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-598cb7b6db-zbcpx"
Jan 30 17:00:06 crc kubenswrapper[4712]: I0130 17:00:06.588091 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-598cb7b6db-zbcpx" podStartSLOduration=2.588068527 podStartE2EDuration="2.588068527s" podCreationTimestamp="2026-01-30 17:00:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:00:06.584640883 +0000 UTC m=+343.491650352" watchObservedRunningTime="2026-01-30 17:00:06.588068527 +0000 UTC m=+343.495077996"
Jan 30 17:00:06 crc kubenswrapper[4712]: I0130 17:00:06.588759 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-598cb7b6db-zbcpx"
Jan 30 17:00:06 crc kubenswrapper[4712]: I0130 17:00:06.704314 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dbdff7664-nrcmh"]
pods=["openshift-route-controller-manager/route-controller-manager-7dbdff7664-nrcmh"] Jan 30 17:00:06 crc kubenswrapper[4712]: I0130 17:00:06.705123 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7dbdff7664-nrcmh" Jan 30 17:00:06 crc kubenswrapper[4712]: I0130 17:00:06.707186 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 30 17:00:06 crc kubenswrapper[4712]: I0130 17:00:06.707853 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 30 17:00:06 crc kubenswrapper[4712]: I0130 17:00:06.708389 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 30 17:00:06 crc kubenswrapper[4712]: I0130 17:00:06.708561 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 30 17:00:06 crc kubenswrapper[4712]: I0130 17:00:06.708780 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 30 17:00:06 crc kubenswrapper[4712]: I0130 17:00:06.709073 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 30 17:00:06 crc kubenswrapper[4712]: I0130 17:00:06.722321 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dbdff7664-nrcmh"] Jan 30 17:00:06 crc kubenswrapper[4712]: I0130 17:00:06.859482 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c574ca7c-3bee-4490-976f-294c58888b12-client-ca\") pod \"route-controller-manager-7dbdff7664-nrcmh\" (UID: \"c574ca7c-3bee-4490-976f-294c58888b12\") " pod="openshift-route-controller-manager/route-controller-manager-7dbdff7664-nrcmh" Jan 30 17:00:06 crc kubenswrapper[4712]: I0130 17:00:06.859533 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb54t\" (UniqueName: \"kubernetes.io/projected/c574ca7c-3bee-4490-976f-294c58888b12-kube-api-access-cb54t\") pod \"route-controller-manager-7dbdff7664-nrcmh\" (UID: \"c574ca7c-3bee-4490-976f-294c58888b12\") " pod="openshift-route-controller-manager/route-controller-manager-7dbdff7664-nrcmh" Jan 30 17:00:06 crc kubenswrapper[4712]: I0130 17:00:06.859595 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c574ca7c-3bee-4490-976f-294c58888b12-config\") pod \"route-controller-manager-7dbdff7664-nrcmh\" (UID: \"c574ca7c-3bee-4490-976f-294c58888b12\") " pod="openshift-route-controller-manager/route-controller-manager-7dbdff7664-nrcmh" Jan 30 17:00:06 crc kubenswrapper[4712]: I0130 17:00:06.859628 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c574ca7c-3bee-4490-976f-294c58888b12-serving-cert\") pod \"route-controller-manager-7dbdff7664-nrcmh\" (UID: \"c574ca7c-3bee-4490-976f-294c58888b12\") " pod="openshift-route-controller-manager/route-controller-manager-7dbdff7664-nrcmh" Jan 30 17:00:06 crc kubenswrapper[4712]: I0130 17:00:06.960920 4712 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c574ca7c-3bee-4490-976f-294c58888b12-serving-cert\") pod \"route-controller-manager-7dbdff7664-nrcmh\" (UID: \"c574ca7c-3bee-4490-976f-294c58888b12\") " pod="openshift-route-controller-manager/route-controller-manager-7dbdff7664-nrcmh" Jan 30 17:00:06 crc kubenswrapper[4712]: I0130 17:00:06.961258 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c574ca7c-3bee-4490-976f-294c58888b12-client-ca\") pod \"route-controller-manager-7dbdff7664-nrcmh\" (UID: \"c574ca7c-3bee-4490-976f-294c58888b12\") " pod="openshift-route-controller-manager/route-controller-manager-7dbdff7664-nrcmh" Jan 30 17:00:06 crc kubenswrapper[4712]: I0130 17:00:06.961410 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb54t\" (UniqueName: \"kubernetes.io/projected/c574ca7c-3bee-4490-976f-294c58888b12-kube-api-access-cb54t\") pod \"route-controller-manager-7dbdff7664-nrcmh\" (UID: \"c574ca7c-3bee-4490-976f-294c58888b12\") " pod="openshift-route-controller-manager/route-controller-manager-7dbdff7664-nrcmh" Jan 30 17:00:06 crc kubenswrapper[4712]: I0130 17:00:06.961609 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c574ca7c-3bee-4490-976f-294c58888b12-config\") pod \"route-controller-manager-7dbdff7664-nrcmh\" (UID: \"c574ca7c-3bee-4490-976f-294c58888b12\") " pod="openshift-route-controller-manager/route-controller-manager-7dbdff7664-nrcmh" Jan 30 17:00:06 crc kubenswrapper[4712]: I0130 17:00:06.962230 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c574ca7c-3bee-4490-976f-294c58888b12-client-ca\") pod \"route-controller-manager-7dbdff7664-nrcmh\" (UID: \"c574ca7c-3bee-4490-976f-294c58888b12\") " pod="openshift-route-controller-manager/route-controller-manager-7dbdff7664-nrcmh" Jan 30 17:00:06 crc kubenswrapper[4712]: I0130 17:00:06.962572 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c574ca7c-3bee-4490-976f-294c58888b12-config\") pod \"route-controller-manager-7dbdff7664-nrcmh\" (UID: \"c574ca7c-3bee-4490-976f-294c58888b12\") " pod="openshift-route-controller-manager/route-controller-manager-7dbdff7664-nrcmh" Jan 30 17:00:06 crc kubenswrapper[4712]: I0130 17:00:06.965714 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c574ca7c-3bee-4490-976f-294c58888b12-serving-cert\") pod \"route-controller-manager-7dbdff7664-nrcmh\" (UID: \"c574ca7c-3bee-4490-976f-294c58888b12\") " pod="openshift-route-controller-manager/route-controller-manager-7dbdff7664-nrcmh" Jan 30 17:00:06 crc kubenswrapper[4712]: I0130 17:00:06.981740 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb54t\" (UniqueName: \"kubernetes.io/projected/c574ca7c-3bee-4490-976f-294c58888b12-kube-api-access-cb54t\") pod \"route-controller-manager-7dbdff7664-nrcmh\" (UID: \"c574ca7c-3bee-4490-976f-294c58888b12\") " pod="openshift-route-controller-manager/route-controller-manager-7dbdff7664-nrcmh" Jan 30 17:00:07 crc kubenswrapper[4712]: I0130 17:00:07.019871 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7dbdff7664-nrcmh" Jan 30 17:00:07 crc kubenswrapper[4712]: I0130 17:00:07.230521 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dbdff7664-nrcmh"] Jan 30 17:00:07 crc kubenswrapper[4712]: W0130 17:00:07.247448 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc574ca7c_3bee_4490_976f_294c58888b12.slice/crio-3fb11c4fe699ffe5667cf3a41d366ebe5aa659840843a10525aac67ce67c5260 WatchSource:0}: Error finding container 3fb11c4fe699ffe5667cf3a41d366ebe5aa659840843a10525aac67ce67c5260: Status 404 returned error can't find the container with id 3fb11c4fe699ffe5667cf3a41d366ebe5aa659840843a10525aac67ce67c5260 Jan 30 17:00:07 crc kubenswrapper[4712]: I0130 17:00:07.570617 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7dbdff7664-nrcmh" event={"ID":"c574ca7c-3bee-4490-976f-294c58888b12","Type":"ContainerStarted","Data":"28c83b09a83ce4723092d752e87347fee0a12c61f878296dedb981f48a584bf5"} Jan 30 17:00:07 crc kubenswrapper[4712]: I0130 17:00:07.571031 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7dbdff7664-nrcmh" event={"ID":"c574ca7c-3bee-4490-976f-294c58888b12","Type":"ContainerStarted","Data":"3fb11c4fe699ffe5667cf3a41d366ebe5aa659840843a10525aac67ce67c5260"} Jan 30 17:00:07 crc kubenswrapper[4712]: I0130 17:00:07.588862 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7dbdff7664-nrcmh" podStartSLOduration=3.588842951 podStartE2EDuration="3.588842951s" podCreationTimestamp="2026-01-30 17:00:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:00:07.586106363 +0000 UTC m=+344.493115832" watchObservedRunningTime="2026-01-30 17:00:07.588842951 +0000 UTC m=+344.495852420" Jan 30 17:00:08 crc kubenswrapper[4712]: I0130 17:00:08.577335 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7dbdff7664-nrcmh" Jan 30 17:00:08 crc kubenswrapper[4712]: I0130 17:00:08.582560 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7dbdff7664-nrcmh" Jan 30 17:00:24 crc kubenswrapper[4712]: I0130 17:00:24.383413 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-598cb7b6db-zbcpx"] Jan 30 17:00:24 crc kubenswrapper[4712]: I0130 17:00:24.384107 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-598cb7b6db-zbcpx" podUID="b2f459a6-9981-442e-994d-df2bcbd124b1" containerName="controller-manager" containerID="cri-o://9007848e6137096551f7522344b5756fb56a55ff9a9a52575bbc4e8a264286c6" gracePeriod=30 Jan 30 17:00:24 crc kubenswrapper[4712]: I0130 17:00:24.388634 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dbdff7664-nrcmh"] Jan 30 17:00:24 crc kubenswrapper[4712]: I0130 17:00:24.388905 4712 kuberuntime_container.go:808] "Killing container with a grace period" 
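The W-level entry above records a benign race: the watcher saw an event for the new crio-<id> cgroup, but the container was gone again by the time it could be inspected, hence the 404. A sketch of absorbing such a race instead of failing the watch loop (the /sys/fs/cgroup prefix and handleWatchEvent are illustrative assumptions, not the actual cadvisor code):

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

// handleWatchEvent tolerates the event-vs-inspection race: if the cgroup named
// by the watch event no longer exists, the container already exited and was
// cleaned up, so the event is logged and skipped rather than treated as fatal.
func handleWatchEvent(cgroupPath string) error {
	if _, err := os.Stat(cgroupPath); err != nil {
		if errors.Is(err, fs.ErrNotExist) {
			fmt.Printf("Failed to process watch event: %s already gone, skipping\n", cgroupPath)
			return nil // vanished between the event and the inspection
		}
		return err // any other error is a real failure
	}
	fmt.Printf("start watching %s\n", cgroupPath)
	return nil
}

func main() {
	// Path assembled from the watch event in the log above (prefix assumed).
	_ = handleWatchEvent("/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc574ca7c_3bee_4490_976f_294c58888b12.slice/crio-3fb11c4fe699ffe5667cf3a41d366ebe5aa659840843a10525aac67ce67c5260")
}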
pod="openshift-route-controller-manager/route-controller-manager-7dbdff7664-nrcmh" podUID="c574ca7c-3bee-4490-976f-294c58888b12" containerName="route-controller-manager" containerID="cri-o://28c83b09a83ce4723092d752e87347fee0a12c61f878296dedb981f48a584bf5" gracePeriod=30 Jan 30 17:00:24 crc kubenswrapper[4712]: I0130 17:00:24.675124 4712 generic.go:334] "Generic (PLEG): container finished" podID="c574ca7c-3bee-4490-976f-294c58888b12" containerID="28c83b09a83ce4723092d752e87347fee0a12c61f878296dedb981f48a584bf5" exitCode=0 Jan 30 17:00:24 crc kubenswrapper[4712]: I0130 17:00:24.675227 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7dbdff7664-nrcmh" event={"ID":"c574ca7c-3bee-4490-976f-294c58888b12","Type":"ContainerDied","Data":"28c83b09a83ce4723092d752e87347fee0a12c61f878296dedb981f48a584bf5"} Jan 30 17:00:24 crc kubenswrapper[4712]: I0130 17:00:24.677630 4712 generic.go:334] "Generic (PLEG): container finished" podID="b2f459a6-9981-442e-994d-df2bcbd124b1" containerID="9007848e6137096551f7522344b5756fb56a55ff9a9a52575bbc4e8a264286c6" exitCode=0 Jan 30 17:00:24 crc kubenswrapper[4712]: I0130 17:00:24.677692 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-598cb7b6db-zbcpx" event={"ID":"b2f459a6-9981-442e-994d-df2bcbd124b1","Type":"ContainerDied","Data":"9007848e6137096551f7522344b5756fb56a55ff9a9a52575bbc4e8a264286c6"} Jan 30 17:00:24 crc kubenswrapper[4712]: I0130 17:00:24.866739 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7dbdff7664-nrcmh" Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.015026 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c574ca7c-3bee-4490-976f-294c58888b12-serving-cert\") pod \"c574ca7c-3bee-4490-976f-294c58888b12\" (UID: \"c574ca7c-3bee-4490-976f-294c58888b12\") " Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.015082 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cb54t\" (UniqueName: \"kubernetes.io/projected/c574ca7c-3bee-4490-976f-294c58888b12-kube-api-access-cb54t\") pod \"c574ca7c-3bee-4490-976f-294c58888b12\" (UID: \"c574ca7c-3bee-4490-976f-294c58888b12\") " Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.015149 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c574ca7c-3bee-4490-976f-294c58888b12-config\") pod \"c574ca7c-3bee-4490-976f-294c58888b12\" (UID: \"c574ca7c-3bee-4490-976f-294c58888b12\") " Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.015194 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c574ca7c-3bee-4490-976f-294c58888b12-client-ca\") pod \"c574ca7c-3bee-4490-976f-294c58888b12\" (UID: \"c574ca7c-3bee-4490-976f-294c58888b12\") " Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.016058 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c574ca7c-3bee-4490-976f-294c58888b12-client-ca" (OuterVolumeSpecName: "client-ca") pod "c574ca7c-3bee-4490-976f-294c58888b12" (UID: "c574ca7c-3bee-4490-976f-294c58888b12"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.016674 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c574ca7c-3bee-4490-976f-294c58888b12-config" (OuterVolumeSpecName: "config") pod "c574ca7c-3bee-4490-976f-294c58888b12" (UID: "c574ca7c-3bee-4490-976f-294c58888b12"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.020902 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-598cb7b6db-zbcpx" Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.021623 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c574ca7c-3bee-4490-976f-294c58888b12-kube-api-access-cb54t" (OuterVolumeSpecName: "kube-api-access-cb54t") pod "c574ca7c-3bee-4490-976f-294c58888b12" (UID: "c574ca7c-3bee-4490-976f-294c58888b12"). InnerVolumeSpecName "kube-api-access-cb54t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.021638 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c574ca7c-3bee-4490-976f-294c58888b12-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c574ca7c-3bee-4490-976f-294c58888b12" (UID: "c574ca7c-3bee-4490-976f-294c58888b12"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.116442 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2f459a6-9981-442e-994d-df2bcbd124b1-serving-cert\") pod \"b2f459a6-9981-442e-994d-df2bcbd124b1\" (UID: \"b2f459a6-9981-442e-994d-df2bcbd124b1\") " Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.116527 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2f459a6-9981-442e-994d-df2bcbd124b1-proxy-ca-bundles\") pod \"b2f459a6-9981-442e-994d-df2bcbd124b1\" (UID: \"b2f459a6-9981-442e-994d-df2bcbd124b1\") " Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.116558 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2f459a6-9981-442e-994d-df2bcbd124b1-client-ca\") pod \"b2f459a6-9981-442e-994d-df2bcbd124b1\" (UID: \"b2f459a6-9981-442e-994d-df2bcbd124b1\") " Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.116592 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2gr8\" (UniqueName: \"kubernetes.io/projected/b2f459a6-9981-442e-994d-df2bcbd124b1-kube-api-access-l2gr8\") pod \"b2f459a6-9981-442e-994d-df2bcbd124b1\" (UID: \"b2f459a6-9981-442e-994d-df2bcbd124b1\") " Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.116715 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2f459a6-9981-442e-994d-df2bcbd124b1-config\") pod \"b2f459a6-9981-442e-994d-df2bcbd124b1\" (UID: \"b2f459a6-9981-442e-994d-df2bcbd124b1\") " Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.116986 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cb54t\" (UniqueName: 
\"kubernetes.io/projected/c574ca7c-3bee-4490-976f-294c58888b12-kube-api-access-cb54t\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.117004 4712 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c574ca7c-3bee-4490-976f-294c58888b12-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.117016 4712 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c574ca7c-3bee-4490-976f-294c58888b12-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.117028 4712 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c574ca7c-3bee-4490-976f-294c58888b12-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.117621 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2f459a6-9981-442e-994d-df2bcbd124b1-client-ca" (OuterVolumeSpecName: "client-ca") pod "b2f459a6-9981-442e-994d-df2bcbd124b1" (UID: "b2f459a6-9981-442e-994d-df2bcbd124b1"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.118095 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2f459a6-9981-442e-994d-df2bcbd124b1-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b2f459a6-9981-442e-994d-df2bcbd124b1" (UID: "b2f459a6-9981-442e-994d-df2bcbd124b1"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.118180 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b2f459a6-9981-442e-994d-df2bcbd124b1-config" (OuterVolumeSpecName: "config") pod "b2f459a6-9981-442e-994d-df2bcbd124b1" (UID: "b2f459a6-9981-442e-994d-df2bcbd124b1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.120226 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2f459a6-9981-442e-994d-df2bcbd124b1-kube-api-access-l2gr8" (OuterVolumeSpecName: "kube-api-access-l2gr8") pod "b2f459a6-9981-442e-994d-df2bcbd124b1" (UID: "b2f459a6-9981-442e-994d-df2bcbd124b1"). InnerVolumeSpecName "kube-api-access-l2gr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.120859 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b2f459a6-9981-442e-994d-df2bcbd124b1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b2f459a6-9981-442e-994d-df2bcbd124b1" (UID: "b2f459a6-9981-442e-994d-df2bcbd124b1"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.218132 4712 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2f459a6-9981-442e-994d-df2bcbd124b1-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.218505 4712 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2f459a6-9981-442e-994d-df2bcbd124b1-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.218530 4712 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b2f459a6-9981-442e-994d-df2bcbd124b1-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.218557 4712 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b2f459a6-9981-442e-994d-df2bcbd124b1-client-ca\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.218585 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l2gr8\" (UniqueName: \"kubernetes.io/projected/b2f459a6-9981-442e-994d-df2bcbd124b1-kube-api-access-l2gr8\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.684453 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7dbdff7664-nrcmh" event={"ID":"c574ca7c-3bee-4490-976f-294c58888b12","Type":"ContainerDied","Data":"3fb11c4fe699ffe5667cf3a41d366ebe5aa659840843a10525aac67ce67c5260"} Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.685561 4712 scope.go:117] "RemoveContainer" containerID="28c83b09a83ce4723092d752e87347fee0a12c61f878296dedb981f48a584bf5" Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.684519 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7dbdff7664-nrcmh" Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.688652 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-598cb7b6db-zbcpx" event={"ID":"b2f459a6-9981-442e-994d-df2bcbd124b1","Type":"ContainerDied","Data":"8074f569f7ec0add8fe065ae27c449351ca02acd962a9d9e21a4859cdab1bde6"} Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.688727 4712 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.726602 4712 scope.go:117] "RemoveContainer" containerID="9007848e6137096551f7522344b5756fb56a55ff9a9a52575bbc4e8a264286c6"
Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.736922 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-598cb7b6db-zbcpx"]
Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.751663 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-66b89bc4-d4xn6"]
Jan 30 17:00:25 crc kubenswrapper[4712]: E0130 17:00:25.751957 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c574ca7c-3bee-4490-976f-294c58888b12" containerName="route-controller-manager"
Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.751972 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="c574ca7c-3bee-4490-976f-294c58888b12" containerName="route-controller-manager"
Jan 30 17:00:25 crc kubenswrapper[4712]: E0130 17:00:25.751987 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2f459a6-9981-442e-994d-df2bcbd124b1" containerName="controller-manager"
Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.751994 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2f459a6-9981-442e-994d-df2bcbd124b1" containerName="controller-manager"
Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.752119 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="c574ca7c-3bee-4490-976f-294c58888b12" containerName="route-controller-manager"
Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.752141 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2f459a6-9981-442e-994d-df2bcbd124b1" containerName="controller-manager"
Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.752633 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66b89bc4-d4xn6"
Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.758218 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cc4566cb7-hdbqk"]
Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.759081 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cc4566cb7-hdbqk"
Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.764089 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.764171 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.764320 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.764406 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.764656 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.764678 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.764824 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.764865 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.764871 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.764948 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.764996 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.766857 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-598cb7b6db-zbcpx"]
Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.772006 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cc4566cb7-hdbqk"]
Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.773249 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dbdff7664-nrcmh"]
Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.777861 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.778866 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-66b89bc4-d4xn6"]
Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.782763 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dbdff7664-nrcmh"]
Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.804272 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.816626 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2f459a6-9981-442e-994d-df2bcbd124b1" path="/var/lib/kubelet/pods/b2f459a6-9981-442e-994d-df2bcbd124b1/volumes"
Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.817224 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c574ca7c-3bee-4490-976f-294c58888b12" path="/var/lib/kubelet/pods/c574ca7c-3bee-4490-976f-294c58888b12/volumes"
Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.936789 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b178aab0-e591-4638-8415-bcf4638a6a21-serving-cert\") pod \"controller-manager-66b89bc4-d4xn6\" (UID: \"b178aab0-e591-4638-8415-bcf4638a6a21\") " pod="openshift-controller-manager/controller-manager-66b89bc4-d4xn6"
Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.937006 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9-serving-cert\") pod \"route-controller-manager-cc4566cb7-hdbqk\" (UID: \"0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9\") " pod="openshift-route-controller-manager/route-controller-manager-cc4566cb7-hdbqk"
Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.937084 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b178aab0-e591-4638-8415-bcf4638a6a21-client-ca\") pod \"controller-manager-66b89bc4-d4xn6\" (UID: \"b178aab0-e591-4638-8415-bcf4638a6a21\") " pod="openshift-controller-manager/controller-manager-66b89bc4-d4xn6"
Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.937141 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9-config\") pod \"route-controller-manager-cc4566cb7-hdbqk\" (UID: \"0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9\") " pod="openshift-route-controller-manager/route-controller-manager-cc4566cb7-hdbqk"
Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.937155 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9-client-ca\") pod \"route-controller-manager-cc4566cb7-hdbqk\" (UID: \"0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9\") " pod="openshift-route-controller-manager/route-controller-manager-cc4566cb7-hdbqk"
Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.937219 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmvz9\" (UniqueName: \"kubernetes.io/projected/0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9-kube-api-access-tmvz9\") pod \"route-controller-manager-cc4566cb7-hdbqk\" (UID: \"0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9\") " pod="openshift-route-controller-manager/route-controller-manager-cc4566cb7-hdbqk"
Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.937267 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b178aab0-e591-4638-8415-bcf4638a6a21-proxy-ca-bundles\") pod \"controller-manager-66b89bc4-d4xn6\" (UID: \"b178aab0-e591-4638-8415-bcf4638a6a21\") " pod="openshift-controller-manager/controller-manager-66b89bc4-d4xn6"
Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.937286 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b178aab0-e591-4638-8415-bcf4638a6a21-config\") pod \"controller-manager-66b89bc4-d4xn6\" (UID: \"b178aab0-e591-4638-8415-bcf4638a6a21\") " pod="openshift-controller-manager/controller-manager-66b89bc4-d4xn6"
Jan 30 17:00:25 crc kubenswrapper[4712]: I0130 17:00:25.937318 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr79z\" (UniqueName: \"kubernetes.io/projected/b178aab0-e591-4638-8415-bcf4638a6a21-kube-api-access-dr79z\") pod \"controller-manager-66b89bc4-d4xn6\" (UID: \"b178aab0-e591-4638-8415-bcf4638a6a21\") " pod="openshift-controller-manager/controller-manager-66b89bc4-d4xn6"
Jan 30 17:00:26 crc kubenswrapper[4712]: I0130 17:00:26.038835 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmvz9\" (UniqueName: \"kubernetes.io/projected/0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9-kube-api-access-tmvz9\") pod \"route-controller-manager-cc4566cb7-hdbqk\" (UID: \"0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9\") " pod="openshift-route-controller-manager/route-controller-manager-cc4566cb7-hdbqk"
Jan 30 17:00:26 crc kubenswrapper[4712]: I0130 17:00:26.038890 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b178aab0-e591-4638-8415-bcf4638a6a21-proxy-ca-bundles\") pod \"controller-manager-66b89bc4-d4xn6\" (UID: \"b178aab0-e591-4638-8415-bcf4638a6a21\") " pod="openshift-controller-manager/controller-manager-66b89bc4-d4xn6"
Jan 30 17:00:26 crc kubenswrapper[4712]: I0130 17:00:26.038913 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b178aab0-e591-4638-8415-bcf4638a6a21-config\") pod \"controller-manager-66b89bc4-d4xn6\" (UID: \"b178aab0-e591-4638-8415-bcf4638a6a21\") " pod="openshift-controller-manager/controller-manager-66b89bc4-d4xn6"
Jan 30 17:00:26 crc kubenswrapper[4712]: I0130 17:00:26.038942 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr79z\" (UniqueName: \"kubernetes.io/projected/b178aab0-e591-4638-8415-bcf4638a6a21-kube-api-access-dr79z\") pod \"controller-manager-66b89bc4-d4xn6\" (UID: \"b178aab0-e591-4638-8415-bcf4638a6a21\") " pod="openshift-controller-manager/controller-manager-66b89bc4-d4xn6"
Jan 30 17:00:26 crc kubenswrapper[4712]: I0130 17:00:26.038964 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b178aab0-e591-4638-8415-bcf4638a6a21-serving-cert\") pod \"controller-manager-66b89bc4-d4xn6\" (UID: \"b178aab0-e591-4638-8415-bcf4638a6a21\") " pod="openshift-controller-manager/controller-manager-66b89bc4-d4xn6"
Jan 30 17:00:26 crc kubenswrapper[4712]: I0130 17:00:26.038987 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9-serving-cert\") pod \"route-controller-manager-cc4566cb7-hdbqk\" (UID: \"0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9\") " pod="openshift-route-controller-manager/route-controller-manager-cc4566cb7-hdbqk"
Jan 30 17:00:26 crc kubenswrapper[4712]: I0130 17:00:26.040294 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b178aab0-e591-4638-8415-bcf4638a6a21-client-ca\") pod \"controller-manager-66b89bc4-d4xn6\" (UID: \"b178aab0-e591-4638-8415-bcf4638a6a21\") " pod="openshift-controller-manager/controller-manager-66b89bc4-d4xn6"
Jan 30 17:00:26 crc kubenswrapper[4712]: I0130 17:00:26.040336 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9-config\") pod \"route-controller-manager-cc4566cb7-hdbqk\" (UID: \"0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9\") " pod="openshift-route-controller-manager/route-controller-manager-cc4566cb7-hdbqk"
Jan 30 17:00:26 crc kubenswrapper[4712]: I0130 17:00:26.040351 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9-client-ca\") pod \"route-controller-manager-cc4566cb7-hdbqk\" (UID: \"0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9\") " pod="openshift-route-controller-manager/route-controller-manager-cc4566cb7-hdbqk"
Jan 30 17:00:26 crc kubenswrapper[4712]: I0130 17:00:26.040422 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b178aab0-e591-4638-8415-bcf4638a6a21-config\") pod \"controller-manager-66b89bc4-d4xn6\" (UID: \"b178aab0-e591-4638-8415-bcf4638a6a21\") " pod="openshift-controller-manager/controller-manager-66b89bc4-d4xn6"
Jan 30 17:00:26 crc kubenswrapper[4712]: I0130 17:00:26.040057 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b178aab0-e591-4638-8415-bcf4638a6a21-proxy-ca-bundles\") pod \"controller-manager-66b89bc4-d4xn6\" (UID: \"b178aab0-e591-4638-8415-bcf4638a6a21\") " pod="openshift-controller-manager/controller-manager-66b89bc4-d4xn6"
Jan 30 17:00:26 crc kubenswrapper[4712]: I0130 17:00:26.041173 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9-client-ca\") pod \"route-controller-manager-cc4566cb7-hdbqk\" (UID: \"0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9\") " pod="openshift-route-controller-manager/route-controller-manager-cc4566cb7-hdbqk"
Jan 30 17:00:26 crc kubenswrapper[4712]: I0130 17:00:26.044078 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b178aab0-e591-4638-8415-bcf4638a6a21-serving-cert\") pod \"controller-manager-66b89bc4-d4xn6\" (UID: \"b178aab0-e591-4638-8415-bcf4638a6a21\") " pod="openshift-controller-manager/controller-manager-66b89bc4-d4xn6"
Jan 30 17:00:26 crc kubenswrapper[4712]: I0130 17:00:26.049694 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9-serving-cert\") pod \"route-controller-manager-cc4566cb7-hdbqk\" (UID: \"0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9\") " pod="openshift-route-controller-manager/route-controller-manager-cc4566cb7-hdbqk"
Jan 30 17:00:26 crc kubenswrapper[4712]: I0130 17:00:26.053205 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b178aab0-e591-4638-8415-bcf4638a6a21-client-ca\") pod \"controller-manager-66b89bc4-d4xn6\" (UID: \"b178aab0-e591-4638-8415-bcf4638a6a21\") " pod="openshift-controller-manager/controller-manager-66b89bc4-d4xn6"
Jan 30 17:00:26 crc kubenswrapper[4712]: I0130 17:00:26.056600 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9-config\") pod \"route-controller-manager-cc4566cb7-hdbqk\" (UID: \"0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9\") " pod="openshift-route-controller-manager/route-controller-manager-cc4566cb7-hdbqk"
Jan 30 17:00:26 crc kubenswrapper[4712]: I0130 17:00:26.059465 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmvz9\" (UniqueName: \"kubernetes.io/projected/0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9-kube-api-access-tmvz9\") pod \"route-controller-manager-cc4566cb7-hdbqk\" (UID: \"0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9\") " pod="openshift-route-controller-manager/route-controller-manager-cc4566cb7-hdbqk"
Jan 30 17:00:26 crc kubenswrapper[4712]: I0130 17:00:26.071639 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr79z\" (UniqueName: \"kubernetes.io/projected/b178aab0-e591-4638-8415-bcf4638a6a21-kube-api-access-dr79z\") pod \"controller-manager-66b89bc4-d4xn6\" (UID: \"b178aab0-e591-4638-8415-bcf4638a6a21\") " pod="openshift-controller-manager/controller-manager-66b89bc4-d4xn6"
Jan 30 17:00:26 crc kubenswrapper[4712]: I0130 17:00:26.085715 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66b89bc4-d4xn6"
Jan 30 17:00:26 crc kubenswrapper[4712]: I0130 17:00:26.112568 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cc4566cb7-hdbqk"
Jan 30 17:00:26 crc kubenswrapper[4712]: I0130 17:00:26.497366 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-66b89bc4-d4xn6"]
Jan 30 17:00:26 crc kubenswrapper[4712]: I0130 17:00:26.556337 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cc4566cb7-hdbqk"]
Jan 30 17:00:26 crc kubenswrapper[4712]: W0130 17:00:26.561029 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0bac5fb5_2cdd_4133_bea1_1fa0e112c3a9.slice/crio-e497ee55805bc3c695f39bf8ede5ffcd982c098047a02d070000cd9931665b9d WatchSource:0}: Error finding container e497ee55805bc3c695f39bf8ede5ffcd982c098047a02d070000cd9931665b9d: Status 404 returned error can't find the container with id e497ee55805bc3c695f39bf8ede5ffcd982c098047a02d070000cd9931665b9d
Jan 30 17:00:26 crc kubenswrapper[4712]: I0130 17:00:26.696664 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cc4566cb7-hdbqk" event={"ID":"0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9","Type":"ContainerStarted","Data":"e497ee55805bc3c695f39bf8ede5ffcd982c098047a02d070000cd9931665b9d"}
Jan 30 17:00:26 crc kubenswrapper[4712]: I0130 17:00:26.700661 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66b89bc4-d4xn6" event={"ID":"b178aab0-e591-4638-8415-bcf4638a6a21","Type":"ContainerStarted","Data":"6af00d7f2c115b1a7deef29373a37251f70f5c8f0ee8cc034125fb4b6c72ae53"}
Jan 30 17:00:26 crc kubenswrapper[4712]: I0130 17:00:26.700709 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66b89bc4-d4xn6" event={"ID":"b178aab0-e591-4638-8415-bcf4638a6a21","Type":"ContainerStarted","Data":"d33f81eeabb0964f5f67d64c6e132b4789b305ab613266a9b29748c54cff4f4c"}
Jan 30 17:00:26 crc kubenswrapper[4712]: I0130 17:00:26.701781 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-66b89bc4-d4xn6"
Jan 30 17:00:26 crc kubenswrapper[4712]: I0130 17:00:26.703965 4712 patch_prober.go:28] interesting pod/controller-manager-66b89bc4-d4xn6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body=
Jan 30 17:00:26 crc kubenswrapper[4712]: I0130 17:00:26.704004 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-66b89bc4-d4xn6" podUID="b178aab0-e591-4638-8415-bcf4638a6a21" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused"
Jan 30 17:00:26 crc kubenswrapper[4712]: I0130 17:00:26.721928 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-66b89bc4-d4xn6" podStartSLOduration=2.721909726 podStartE2EDuration="2.721909726s" podCreationTimestamp="2026-01-30 17:00:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:00:26.717975726 +0000 UTC m=+363.624985215" watchObservedRunningTime="2026-01-30 17:00:26.721909726 +0000 UTC m=+363.628919195"
Jan 30 17:00:27 crc kubenswrapper[4712]: I0130 17:00:27.709987 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cc4566cb7-hdbqk" event={"ID":"0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9","Type":"ContainerStarted","Data":"0ac3b7cd53e125bf6446c9760f238a54e3eb3a6c0e6f3fd7bbad3af493b00746"}
Jan 30 17:00:27 crc kubenswrapper[4712]: I0130 17:00:27.716218 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-66b89bc4-d4xn6"
Jan 30 17:00:27 crc kubenswrapper[4712]: I0130 17:00:27.730043 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-cc4566cb7-hdbqk" podStartSLOduration=3.730024184 podStartE2EDuration="3.730024184s" podCreationTimestamp="2026-01-30 17:00:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:00:27.728243166 +0000 UTC m=+364.635252635" watchObservedRunningTime="2026-01-30 17:00:27.730024184 +0000 UTC m=+364.637033653"
Jan 30 17:00:28 crc kubenswrapper[4712]: I0130 17:00:28.715117 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-cc4566cb7-hdbqk"
Jan 30 17:00:28 crc kubenswrapper[4712]: I0130 17:00:28.721360 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-cc4566cb7-hdbqk"
Jan 30 17:00:36 crc kubenswrapper[4712]: I0130 17:00:36.270869 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:00:36 crc kubenswrapper[4712]: I0130 17:00:36.271450 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:00:44 crc kubenswrapper[4712]: I0130 17:00:44.361379 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-t6xlq"] Jan 30 17:00:44 crc kubenswrapper[4712]: I0130 17:00:44.485302 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-66b89bc4-d4xn6"] Jan 30 17:00:44 crc kubenswrapper[4712]: I0130 17:00:44.485539 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-66b89bc4-d4xn6" podUID="b178aab0-e591-4638-8415-bcf4638a6a21" containerName="controller-manager" containerID="cri-o://6af00d7f2c115b1a7deef29373a37251f70f5c8f0ee8cc034125fb4b6c72ae53" gracePeriod=30 Jan 30 17:00:44 crc kubenswrapper[4712]: I0130 17:00:44.593651 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cc4566cb7-hdbqk"] Jan 30 17:00:44 crc kubenswrapper[4712]: I0130 17:00:44.593906 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-cc4566cb7-hdbqk" podUID="0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9" containerName="route-controller-manager" containerID="cri-o://0ac3b7cd53e125bf6446c9760f238a54e3eb3a6c0e6f3fd7bbad3af493b00746" gracePeriod=30 Jan 30 17:00:44 crc kubenswrapper[4712]: I0130 17:00:44.824736 4712 generic.go:334] "Generic (PLEG): container finished" podID="b178aab0-e591-4638-8415-bcf4638a6a21" containerID="6af00d7f2c115b1a7deef29373a37251f70f5c8f0ee8cc034125fb4b6c72ae53" exitCode=0 Jan 30 17:00:44 crc kubenswrapper[4712]: I0130 17:00:44.825012 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66b89bc4-d4xn6" event={"ID":"b178aab0-e591-4638-8415-bcf4638a6a21","Type":"ContainerDied","Data":"6af00d7f2c115b1a7deef29373a37251f70f5c8f0ee8cc034125fb4b6c72ae53"} Jan 30 17:00:44 crc kubenswrapper[4712]: I0130 17:00:44.828133 4712 generic.go:334] "Generic (PLEG): container finished" podID="0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9" containerID="0ac3b7cd53e125bf6446c9760f238a54e3eb3a6c0e6f3fd7bbad3af493b00746" exitCode=0 Jan 30 17:00:44 crc kubenswrapper[4712]: I0130 17:00:44.828181 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cc4566cb7-hdbqk" event={"ID":"0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9","Type":"ContainerDied","Data":"0ac3b7cd53e125bf6446c9760f238a54e3eb3a6c0e6f3fd7bbad3af493b00746"} Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.050199 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cc4566cb7-hdbqk" Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.056897 4712 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.094783 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b178aab0-e591-4638-8415-bcf4638a6a21-config\") pod \"b178aab0-e591-4638-8415-bcf4638a6a21\" (UID: \"b178aab0-e591-4638-8415-bcf4638a6a21\") "
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.094891 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9-serving-cert\") pod \"0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9\" (UID: \"0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9\") "
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.094929 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b178aab0-e591-4638-8415-bcf4638a6a21-serving-cert\") pod \"b178aab0-e591-4638-8415-bcf4638a6a21\" (UID: \"b178aab0-e591-4638-8415-bcf4638a6a21\") "
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.094986 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b178aab0-e591-4638-8415-bcf4638a6a21-proxy-ca-bundles\") pod \"b178aab0-e591-4638-8415-bcf4638a6a21\" (UID: \"b178aab0-e591-4638-8415-bcf4638a6a21\") "
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.095038 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmvz9\" (UniqueName: \"kubernetes.io/projected/0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9-kube-api-access-tmvz9\") pod \"0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9\" (UID: \"0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9\") "
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.095084 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9-client-ca\") pod \"0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9\" (UID: \"0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9\") "
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.095116 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9-config\") pod \"0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9\" (UID: \"0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9\") "
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.095150 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b178aab0-e591-4638-8415-bcf4638a6a21-client-ca\") pod \"b178aab0-e591-4638-8415-bcf4638a6a21\" (UID: \"b178aab0-e591-4638-8415-bcf4638a6a21\") "
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.095179 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dr79z\" (UniqueName: \"kubernetes.io/projected/b178aab0-e591-4638-8415-bcf4638a6a21-kube-api-access-dr79z\") pod \"b178aab0-e591-4638-8415-bcf4638a6a21\" (UID: \"b178aab0-e591-4638-8415-bcf4638a6a21\") "
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.098482 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9-client-ca" (OuterVolumeSpecName: "client-ca") pod "0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9" (UID: "0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.099100 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9-config" (OuterVolumeSpecName: "config") pod "0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9" (UID: "0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.099460 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b178aab0-e591-4638-8415-bcf4638a6a21-config" (OuterVolumeSpecName: "config") pod "b178aab0-e591-4638-8415-bcf4638a6a21" (UID: "b178aab0-e591-4638-8415-bcf4638a6a21"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.100579 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b178aab0-e591-4638-8415-bcf4638a6a21-client-ca" (OuterVolumeSpecName: "client-ca") pod "b178aab0-e591-4638-8415-bcf4638a6a21" (UID: "b178aab0-e591-4638-8415-bcf4638a6a21"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.101939 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9" (UID: "0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.103288 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9-kube-api-access-tmvz9" (OuterVolumeSpecName: "kube-api-access-tmvz9") pod "0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9" (UID: "0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9"). InnerVolumeSpecName "kube-api-access-tmvz9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.104597 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b178aab0-e591-4638-8415-bcf4638a6a21-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b178aab0-e591-4638-8415-bcf4638a6a21" (UID: "b178aab0-e591-4638-8415-bcf4638a6a21"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.115092 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b178aab0-e591-4638-8415-bcf4638a6a21-kube-api-access-dr79z" (OuterVolumeSpecName: "kube-api-access-dr79z") pod "b178aab0-e591-4638-8415-bcf4638a6a21" (UID: "b178aab0-e591-4638-8415-bcf4638a6a21"). InnerVolumeSpecName "kube-api-access-dr79z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.115466 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b178aab0-e591-4638-8415-bcf4638a6a21-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b178aab0-e591-4638-8415-bcf4638a6a21" (UID: "b178aab0-e591-4638-8415-bcf4638a6a21"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.196418 4712 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b178aab0-e591-4638-8415-bcf4638a6a21-client-ca\") on node \"crc\" DevicePath \"\""
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.196458 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dr79z\" (UniqueName: \"kubernetes.io/projected/b178aab0-e591-4638-8415-bcf4638a6a21-kube-api-access-dr79z\") on node \"crc\" DevicePath \"\""
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.196473 4712 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b178aab0-e591-4638-8415-bcf4638a6a21-config\") on node \"crc\" DevicePath \"\""
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.196487 4712 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.196498 4712 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b178aab0-e591-4638-8415-bcf4638a6a21-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.196511 4712 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b178aab0-e591-4638-8415-bcf4638a6a21-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.196523 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmvz9\" (UniqueName: \"kubernetes.io/projected/0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9-kube-api-access-tmvz9\") on node \"crc\" DevicePath \"\""
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.196534 4712 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9-client-ca\") on node \"crc\" DevicePath \"\""
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.196546 4712 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9-config\") on node \"crc\" DevicePath \"\""
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.744251 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7449c76d86-5ljsq"]
Jan 30 17:00:45 crc kubenswrapper[4712]: E0130 17:00:45.744651 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9" containerName="route-controller-manager"
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.744668 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9" containerName="route-controller-manager"
Jan 30 17:00:45 crc kubenswrapper[4712]: E0130 17:00:45.744679 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b178aab0-e591-4638-8415-bcf4638a6a21" containerName="controller-manager"
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.744687 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="b178aab0-e591-4638-8415-bcf4638a6a21" containerName="controller-manager"
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.744858 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9" containerName="route-controller-manager"
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.744883 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="b178aab0-e591-4638-8415-bcf4638a6a21" containerName="controller-manager"
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.745436 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7449c76d86-5ljsq"
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.751507 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7854896cc8-wc7q4"]
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.752600 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7854896cc8-wc7q4"
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.768011 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7449c76d86-5ljsq"]
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.774464 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7854896cc8-wc7q4"]
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.835436 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cc4566cb7-hdbqk"
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.835427 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cc4566cb7-hdbqk" event={"ID":"0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9","Type":"ContainerDied","Data":"e497ee55805bc3c695f39bf8ede5ffcd982c098047a02d070000cd9931665b9d"}
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.835676 4712 scope.go:117] "RemoveContainer" containerID="0ac3b7cd53e125bf6446c9760f238a54e3eb3a6c0e6f3fd7bbad3af493b00746"
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.839699 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-66b89bc4-d4xn6" event={"ID":"b178aab0-e591-4638-8415-bcf4638a6a21","Type":"ContainerDied","Data":"d33f81eeabb0964f5f67d64c6e132b4789b305ab613266a9b29748c54cff4f4c"}
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.839932 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-66b89bc4-d4xn6"
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.856401 4712 scope.go:117] "RemoveContainer" containerID="6af00d7f2c115b1a7deef29373a37251f70f5c8f0ee8cc034125fb4b6c72ae53"
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.870345 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-66b89bc4-d4xn6"]
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.876089 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-66b89bc4-d4xn6"]
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.882908 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cc4566cb7-hdbqk"]
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.888351 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cc4566cb7-hdbqk"]
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.936869 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18f1f168-60eb-4666-9d2f-7455021a946c-serving-cert\") pod \"route-controller-manager-7449c76d86-5ljsq\" (UID: \"18f1f168-60eb-4666-9d2f-7455021a946c\") " pod="openshift-route-controller-manager/route-controller-manager-7449c76d86-5ljsq"
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.937060 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr74p\" (UniqueName: \"kubernetes.io/projected/18f1f168-60eb-4666-9d2f-7455021a946c-kube-api-access-wr74p\") pod \"route-controller-manager-7449c76d86-5ljsq\" (UID: \"18f1f168-60eb-4666-9d2f-7455021a946c\") " pod="openshift-route-controller-manager/route-controller-manager-7449c76d86-5ljsq"
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.937153 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18f1f168-60eb-4666-9d2f-7455021a946c-config\") pod \"route-controller-manager-7449c76d86-5ljsq\" (UID: \"18f1f168-60eb-4666-9d2f-7455021a946c\") " pod="openshift-route-controller-manager/route-controller-manager-7449c76d86-5ljsq"
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.937211 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48377da3-e59b-4d8e-96df-e71697486469-config\") pod \"controller-manager-7854896cc8-wc7q4\" (UID: \"48377da3-e59b-4d8e-96df-e71697486469\") " pod="openshift-controller-manager/controller-manager-7854896cc8-wc7q4"
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.937236 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48377da3-e59b-4d8e-96df-e71697486469-client-ca\") pod \"controller-manager-7854896cc8-wc7q4\" (UID: \"48377da3-e59b-4d8e-96df-e71697486469\") " pod="openshift-controller-manager/controller-manager-7854896cc8-wc7q4"
Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.937645 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18f1f168-60eb-4666-9d2f-7455021a946c-client-ca\") pod \"route-controller-manager-7449c76d86-5ljsq\" (UID: \"18f1f168-60eb-4666-9d2f-7455021a946c\") " pod="openshift-route-controller-manager/route-controller-manager-7449c76d86-5ljsq"
\"route-controller-manager-7449c76d86-5ljsq\" (UID: \"18f1f168-60eb-4666-9d2f-7455021a946c\") " pod="openshift-route-controller-manager/route-controller-manager-7449c76d86-5ljsq" Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.937823 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48377da3-e59b-4d8e-96df-e71697486469-proxy-ca-bundles\") pod \"controller-manager-7854896cc8-wc7q4\" (UID: \"48377da3-e59b-4d8e-96df-e71697486469\") " pod="openshift-controller-manager/controller-manager-7854896cc8-wc7q4" Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.938021 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48377da3-e59b-4d8e-96df-e71697486469-serving-cert\") pod \"controller-manager-7854896cc8-wc7q4\" (UID: \"48377da3-e59b-4d8e-96df-e71697486469\") " pod="openshift-controller-manager/controller-manager-7854896cc8-wc7q4" Jan 30 17:00:45 crc kubenswrapper[4712]: I0130 17:00:45.938127 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vzb2\" (UniqueName: \"kubernetes.io/projected/48377da3-e59b-4d8e-96df-e71697486469-kube-api-access-7vzb2\") pod \"controller-manager-7854896cc8-wc7q4\" (UID: \"48377da3-e59b-4d8e-96df-e71697486469\") " pod="openshift-controller-manager/controller-manager-7854896cc8-wc7q4" Jan 30 17:00:46 crc kubenswrapper[4712]: I0130 17:00:46.044104 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wr74p\" (UniqueName: \"kubernetes.io/projected/18f1f168-60eb-4666-9d2f-7455021a946c-kube-api-access-wr74p\") pod \"route-controller-manager-7449c76d86-5ljsq\" (UID: \"18f1f168-60eb-4666-9d2f-7455021a946c\") " pod="openshift-route-controller-manager/route-controller-manager-7449c76d86-5ljsq" Jan 30 17:00:46 crc kubenswrapper[4712]: I0130 17:00:46.044158 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18f1f168-60eb-4666-9d2f-7455021a946c-config\") pod \"route-controller-manager-7449c76d86-5ljsq\" (UID: \"18f1f168-60eb-4666-9d2f-7455021a946c\") " pod="openshift-route-controller-manager/route-controller-manager-7449c76d86-5ljsq" Jan 30 17:00:46 crc kubenswrapper[4712]: I0130 17:00:46.044185 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48377da3-e59b-4d8e-96df-e71697486469-config\") pod \"controller-manager-7854896cc8-wc7q4\" (UID: \"48377da3-e59b-4d8e-96df-e71697486469\") " pod="openshift-controller-manager/controller-manager-7854896cc8-wc7q4" Jan 30 17:00:46 crc kubenswrapper[4712]: I0130 17:00:46.044206 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48377da3-e59b-4d8e-96df-e71697486469-client-ca\") pod \"controller-manager-7854896cc8-wc7q4\" (UID: \"48377da3-e59b-4d8e-96df-e71697486469\") " pod="openshift-controller-manager/controller-manager-7854896cc8-wc7q4" Jan 30 17:00:46 crc kubenswrapper[4712]: I0130 17:00:46.044251 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18f1f168-60eb-4666-9d2f-7455021a946c-client-ca\") pod \"route-controller-manager-7449c76d86-5ljsq\" (UID: \"18f1f168-60eb-4666-9d2f-7455021a946c\") 
" pod="openshift-route-controller-manager/route-controller-manager-7449c76d86-5ljsq" Jan 30 17:00:46 crc kubenswrapper[4712]: I0130 17:00:46.044281 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48377da3-e59b-4d8e-96df-e71697486469-proxy-ca-bundles\") pod \"controller-manager-7854896cc8-wc7q4\" (UID: \"48377da3-e59b-4d8e-96df-e71697486469\") " pod="openshift-controller-manager/controller-manager-7854896cc8-wc7q4" Jan 30 17:00:46 crc kubenswrapper[4712]: I0130 17:00:46.044507 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48377da3-e59b-4d8e-96df-e71697486469-serving-cert\") pod \"controller-manager-7854896cc8-wc7q4\" (UID: \"48377da3-e59b-4d8e-96df-e71697486469\") " pod="openshift-controller-manager/controller-manager-7854896cc8-wc7q4" Jan 30 17:00:46 crc kubenswrapper[4712]: I0130 17:00:46.044537 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vzb2\" (UniqueName: \"kubernetes.io/projected/48377da3-e59b-4d8e-96df-e71697486469-kube-api-access-7vzb2\") pod \"controller-manager-7854896cc8-wc7q4\" (UID: \"48377da3-e59b-4d8e-96df-e71697486469\") " pod="openshift-controller-manager/controller-manager-7854896cc8-wc7q4" Jan 30 17:00:46 crc kubenswrapper[4712]: I0130 17:00:46.044565 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18f1f168-60eb-4666-9d2f-7455021a946c-serving-cert\") pod \"route-controller-manager-7449c76d86-5ljsq\" (UID: \"18f1f168-60eb-4666-9d2f-7455021a946c\") " pod="openshift-route-controller-manager/route-controller-manager-7449c76d86-5ljsq" Jan 30 17:00:46 crc kubenswrapper[4712]: I0130 17:00:46.046118 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/48377da3-e59b-4d8e-96df-e71697486469-client-ca\") pod \"controller-manager-7854896cc8-wc7q4\" (UID: \"48377da3-e59b-4d8e-96df-e71697486469\") " pod="openshift-controller-manager/controller-manager-7854896cc8-wc7q4" Jan 30 17:00:46 crc kubenswrapper[4712]: I0130 17:00:46.047281 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/48377da3-e59b-4d8e-96df-e71697486469-proxy-ca-bundles\") pod \"controller-manager-7854896cc8-wc7q4\" (UID: \"48377da3-e59b-4d8e-96df-e71697486469\") " pod="openshift-controller-manager/controller-manager-7854896cc8-wc7q4" Jan 30 17:00:46 crc kubenswrapper[4712]: I0130 17:00:46.047502 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/18f1f168-60eb-4666-9d2f-7455021a946c-client-ca\") pod \"route-controller-manager-7449c76d86-5ljsq\" (UID: \"18f1f168-60eb-4666-9d2f-7455021a946c\") " pod="openshift-route-controller-manager/route-controller-manager-7449c76d86-5ljsq" Jan 30 17:00:46 crc kubenswrapper[4712]: I0130 17:00:46.048129 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/18f1f168-60eb-4666-9d2f-7455021a946c-config\") pod \"route-controller-manager-7449c76d86-5ljsq\" (UID: \"18f1f168-60eb-4666-9d2f-7455021a946c\") " pod="openshift-route-controller-manager/route-controller-manager-7449c76d86-5ljsq" Jan 30 17:00:46 crc kubenswrapper[4712]: I0130 17:00:46.049770 4712 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/48377da3-e59b-4d8e-96df-e71697486469-config\") pod \"controller-manager-7854896cc8-wc7q4\" (UID: \"48377da3-e59b-4d8e-96df-e71697486469\") " pod="openshift-controller-manager/controller-manager-7854896cc8-wc7q4" Jan 30 17:00:46 crc kubenswrapper[4712]: I0130 17:00:46.053883 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/18f1f168-60eb-4666-9d2f-7455021a946c-serving-cert\") pod \"route-controller-manager-7449c76d86-5ljsq\" (UID: \"18f1f168-60eb-4666-9d2f-7455021a946c\") " pod="openshift-route-controller-manager/route-controller-manager-7449c76d86-5ljsq" Jan 30 17:00:46 crc kubenswrapper[4712]: I0130 17:00:46.067203 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/48377da3-e59b-4d8e-96df-e71697486469-serving-cert\") pod \"controller-manager-7854896cc8-wc7q4\" (UID: \"48377da3-e59b-4d8e-96df-e71697486469\") " pod="openshift-controller-manager/controller-manager-7854896cc8-wc7q4" Jan 30 17:00:46 crc kubenswrapper[4712]: I0130 17:00:46.077027 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wr74p\" (UniqueName: \"kubernetes.io/projected/18f1f168-60eb-4666-9d2f-7455021a946c-kube-api-access-wr74p\") pod \"route-controller-manager-7449c76d86-5ljsq\" (UID: \"18f1f168-60eb-4666-9d2f-7455021a946c\") " pod="openshift-route-controller-manager/route-controller-manager-7449c76d86-5ljsq" Jan 30 17:00:46 crc kubenswrapper[4712]: I0130 17:00:46.077446 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vzb2\" (UniqueName: \"kubernetes.io/projected/48377da3-e59b-4d8e-96df-e71697486469-kube-api-access-7vzb2\") pod \"controller-manager-7854896cc8-wc7q4\" (UID: \"48377da3-e59b-4d8e-96df-e71697486469\") " pod="openshift-controller-manager/controller-manager-7854896cc8-wc7q4" Jan 30 17:00:46 crc kubenswrapper[4712]: I0130 17:00:46.080505 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7854896cc8-wc7q4" Jan 30 17:00:46 crc kubenswrapper[4712]: I0130 17:00:46.278299 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7854896cc8-wc7q4"] Jan 30 17:00:46 crc kubenswrapper[4712]: W0130 17:00:46.286718 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod48377da3_e59b_4d8e_96df_e71697486469.slice/crio-ff8bd0652f9093520f3071cb78f8a260a2e79573954ef3c91fc1699834263c50 WatchSource:0}: Error finding container ff8bd0652f9093520f3071cb78f8a260a2e79573954ef3c91fc1699834263c50: Status 404 returned error can't find the container with id ff8bd0652f9093520f3071cb78f8a260a2e79573954ef3c91fc1699834263c50 Jan 30 17:00:46 crc kubenswrapper[4712]: I0130 17:00:46.365620 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7449c76d86-5ljsq" Jan 30 17:00:46 crc kubenswrapper[4712]: I0130 17:00:46.567763 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7449c76d86-5ljsq"] Jan 30 17:00:46 crc kubenswrapper[4712]: W0130 17:00:46.574610 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod18f1f168_60eb_4666_9d2f_7455021a946c.slice/crio-bb988908614c006f09fef8f55a918e808143c90c3a4b982f6a73e6551afe13b9 WatchSource:0}: Error finding container bb988908614c006f09fef8f55a918e808143c90c3a4b982f6a73e6551afe13b9: Status 404 returned error can't find the container with id bb988908614c006f09fef8f55a918e808143c90c3a4b982f6a73e6551afe13b9 Jan 30 17:00:46 crc kubenswrapper[4712]: I0130 17:00:46.846733 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7449c76d86-5ljsq" event={"ID":"18f1f168-60eb-4666-9d2f-7455021a946c","Type":"ContainerStarted","Data":"8f32cca356368e1d90f906c7b065989ca60b1ed76d2d68439d0e10e71b432710"} Jan 30 17:00:46 crc kubenswrapper[4712]: I0130 17:00:46.846783 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7449c76d86-5ljsq" event={"ID":"18f1f168-60eb-4666-9d2f-7455021a946c","Type":"ContainerStarted","Data":"bb988908614c006f09fef8f55a918e808143c90c3a4b982f6a73e6551afe13b9"} Jan 30 17:00:46 crc kubenswrapper[4712]: I0130 17:00:46.847229 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7449c76d86-5ljsq" Jan 30 17:00:46 crc kubenswrapper[4712]: I0130 17:00:46.848417 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7854896cc8-wc7q4" event={"ID":"48377da3-e59b-4d8e-96df-e71697486469","Type":"ContainerStarted","Data":"05f6854f90ffa10a27ff5351f9fa3c08a2daedb83745bb726fd7c092aaf91363"} Jan 30 17:00:46 crc kubenswrapper[4712]: I0130 17:00:46.848441 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7854896cc8-wc7q4" event={"ID":"48377da3-e59b-4d8e-96df-e71697486469","Type":"ContainerStarted","Data":"ff8bd0652f9093520f3071cb78f8a260a2e79573954ef3c91fc1699834263c50"} Jan 30 17:00:46 crc kubenswrapper[4712]: I0130 17:00:46.849018 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7854896cc8-wc7q4" Jan 30 17:00:46 crc kubenswrapper[4712]: I0130 17:00:46.853613 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7854896cc8-wc7q4" Jan 30 17:00:46 crc kubenswrapper[4712]: I0130 17:00:46.871488 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7449c76d86-5ljsq" podStartSLOduration=2.871470583 podStartE2EDuration="2.871470583s" podCreationTimestamp="2026-01-30 17:00:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:00:46.87012401 +0000 UTC m=+383.777133479" watchObservedRunningTime="2026-01-30 17:00:46.871470583 +0000 UTC m=+383.778480052" Jan 30 17:00:46 crc kubenswrapper[4712]: I0130 17:00:46.895685 4712 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7854896cc8-wc7q4" podStartSLOduration=2.895668406 podStartE2EDuration="2.895668406s" podCreationTimestamp="2026-01-30 17:00:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:00:46.895651106 +0000 UTC m=+383.802660575" watchObservedRunningTime="2026-01-30 17:00:46.895668406 +0000 UTC m=+383.802677875" Jan 30 17:00:47 crc kubenswrapper[4712]: I0130 17:00:47.106609 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7449c76d86-5ljsq" Jan 30 17:00:47 crc kubenswrapper[4712]: I0130 17:00:47.809934 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9" path="/var/lib/kubelet/pods/0bac5fb5-2cdd-4133-bea1-1fa0e112c3a9/volumes" Jan 30 17:00:47 crc kubenswrapper[4712]: I0130 17:00:47.811181 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b178aab0-e591-4638-8415-bcf4638a6a21" path="/var/lib/kubelet/pods/b178aab0-e591-4638-8415-bcf4638a6a21/volumes" Jan 30 17:00:53 crc kubenswrapper[4712]: I0130 17:00:53.581877 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4f5w5"] Jan 30 17:00:53 crc kubenswrapper[4712]: I0130 17:00:53.582822 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4f5w5" podUID="fda2fdd1-0c89-4398-8e0a-545311fe5ae9" containerName="registry-server" containerID="cri-o://c516827b18fc293b250a7445e45356829e42c68cec9d9c06f7b819553b51ac2d" gracePeriod=2 Jan 30 17:00:53 crc kubenswrapper[4712]: I0130 17:00:53.904592 4712 generic.go:334] "Generic (PLEG): container finished" podID="fda2fdd1-0c89-4398-8e0a-545311fe5ae9" containerID="c516827b18fc293b250a7445e45356829e42c68cec9d9c06f7b819553b51ac2d" exitCode=0 Jan 30 17:00:53 crc kubenswrapper[4712]: I0130 17:00:53.904758 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4f5w5" event={"ID":"fda2fdd1-0c89-4398-8e0a-545311fe5ae9","Type":"ContainerDied","Data":"c516827b18fc293b250a7445e45356829e42c68cec9d9c06f7b819553b51ac2d"} Jan 30 17:00:54 crc kubenswrapper[4712]: I0130 17:00:54.089110 4712 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 17:00:54 crc kubenswrapper[4712]: I0130 17:00:54.257953 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rk9t6\" (UniqueName: \"kubernetes.io/projected/fda2fdd1-0c89-4398-8e0a-545311fe5ae9-kube-api-access-rk9t6\") pod \"fda2fdd1-0c89-4398-8e0a-545311fe5ae9\" (UID: \"fda2fdd1-0c89-4398-8e0a-545311fe5ae9\") "
Jan 30 17:00:54 crc kubenswrapper[4712]: I0130 17:00:54.258046 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fda2fdd1-0c89-4398-8e0a-545311fe5ae9-catalog-content\") pod \"fda2fdd1-0c89-4398-8e0a-545311fe5ae9\" (UID: \"fda2fdd1-0c89-4398-8e0a-545311fe5ae9\") "
Jan 30 17:00:54 crc kubenswrapper[4712]: I0130 17:00:54.258197 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fda2fdd1-0c89-4398-8e0a-545311fe5ae9-utilities\") pod \"fda2fdd1-0c89-4398-8e0a-545311fe5ae9\" (UID: \"fda2fdd1-0c89-4398-8e0a-545311fe5ae9\") "
Jan 30 17:00:54 crc kubenswrapper[4712]: I0130 17:00:54.260062 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fda2fdd1-0c89-4398-8e0a-545311fe5ae9-utilities" (OuterVolumeSpecName: "utilities") pod "fda2fdd1-0c89-4398-8e0a-545311fe5ae9" (UID: "fda2fdd1-0c89-4398-8e0a-545311fe5ae9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:00:54 crc kubenswrapper[4712]: I0130 17:00:54.263963 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda2fdd1-0c89-4398-8e0a-545311fe5ae9-kube-api-access-rk9t6" (OuterVolumeSpecName: "kube-api-access-rk9t6") pod "fda2fdd1-0c89-4398-8e0a-545311fe5ae9" (UID: "fda2fdd1-0c89-4398-8e0a-545311fe5ae9"). InnerVolumeSpecName "kube-api-access-rk9t6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:00:54 crc kubenswrapper[4712]: I0130 17:00:54.317601 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fda2fdd1-0c89-4398-8e0a-545311fe5ae9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fda2fdd1-0c89-4398-8e0a-545311fe5ae9" (UID: "fda2fdd1-0c89-4398-8e0a-545311fe5ae9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:00:54 crc kubenswrapper[4712]: I0130 17:00:54.359972 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rk9t6\" (UniqueName: \"kubernetes.io/projected/fda2fdd1-0c89-4398-8e0a-545311fe5ae9-kube-api-access-rk9t6\") on node \"crc\" DevicePath \"\""
Jan 30 17:00:54 crc kubenswrapper[4712]: I0130 17:00:54.360029 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fda2fdd1-0c89-4398-8e0a-545311fe5ae9-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 17:00:54 crc kubenswrapper[4712]: I0130 17:00:54.360042 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fda2fdd1-0c89-4398-8e0a-545311fe5ae9-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 17:00:54 crc kubenswrapper[4712]: I0130 17:00:54.913420 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4f5w5" event={"ID":"fda2fdd1-0c89-4398-8e0a-545311fe5ae9","Type":"ContainerDied","Data":"b4f96ff36261969d5e1744037152cb5bda934c47d381c8e575261b8ae9c7a832"}
Jan 30 17:00:54 crc kubenswrapper[4712]: I0130 17:00:54.913488 4712 scope.go:117] "RemoveContainer" containerID="c516827b18fc293b250a7445e45356829e42c68cec9d9c06f7b819553b51ac2d"
Jan 30 17:00:54 crc kubenswrapper[4712]: I0130 17:00:54.913523 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4f5w5"
Jan 30 17:00:54 crc kubenswrapper[4712]: I0130 17:00:54.936946 4712 scope.go:117] "RemoveContainer" containerID="65fb9d57bcd444f16dbba66a7afdefa1c7a37cb175c6902c8e974d2ecabb7ea7"
Jan 30 17:00:54 crc kubenswrapper[4712]: I0130 17:00:54.962400 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4f5w5"]
Jan 30 17:00:54 crc kubenswrapper[4712]: I0130 17:00:54.977596 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4f5w5"]
Jan 30 17:00:54 crc kubenswrapper[4712]: I0130 17:00:54.977722 4712 scope.go:117] "RemoveContainer" containerID="7b3fa34cdb2d09333e616c13e38233606086220b5db4e12aa76b3f9d77a3c16b"
Jan 30 17:00:55 crc kubenswrapper[4712]: I0130 17:00:55.375034 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l4hp7"]
Jan 30 17:00:55 crc kubenswrapper[4712]: I0130 17:00:55.375413 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l4hp7" podUID="1efcd5ba-0391-4427-aaa0-9cef2b10a48c" containerName="registry-server" containerID="cri-o://f547e13b56cf10155f9b1e29c215e581be33f98f830856e15ad224b81b461f02" gracePeriod=2
Jan 30 17:00:55 crc kubenswrapper[4712]: I0130 17:00:55.810528 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda2fdd1-0c89-4398-8e0a-545311fe5ae9" path="/var/lib/kubelet/pods/fda2fdd1-0c89-4398-8e0a-545311fe5ae9/volumes"
Jan 30 17:00:55 crc kubenswrapper[4712]: I0130 17:00:55.877376 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l4hp7"
Jan 30 17:00:55 crc kubenswrapper[4712]: I0130 17:00:55.882623 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1efcd5ba-0391-4427-aaa0-9cef2b10a48c-catalog-content\") pod \"1efcd5ba-0391-4427-aaa0-9cef2b10a48c\" (UID: \"1efcd5ba-0391-4427-aaa0-9cef2b10a48c\") "
Jan 30 17:00:55 crc kubenswrapper[4712]: I0130 17:00:55.882699 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gt64c\" (UniqueName: \"kubernetes.io/projected/1efcd5ba-0391-4427-aaa0-9cef2b10a48c-kube-api-access-gt64c\") pod \"1efcd5ba-0391-4427-aaa0-9cef2b10a48c\" (UID: \"1efcd5ba-0391-4427-aaa0-9cef2b10a48c\") "
Jan 30 17:00:55 crc kubenswrapper[4712]: I0130 17:00:55.882784 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1efcd5ba-0391-4427-aaa0-9cef2b10a48c-utilities\") pod \"1efcd5ba-0391-4427-aaa0-9cef2b10a48c\" (UID: \"1efcd5ba-0391-4427-aaa0-9cef2b10a48c\") "
Jan 30 17:00:55 crc kubenswrapper[4712]: I0130 17:00:55.885034 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1efcd5ba-0391-4427-aaa0-9cef2b10a48c-utilities" (OuterVolumeSpecName: "utilities") pod "1efcd5ba-0391-4427-aaa0-9cef2b10a48c" (UID: "1efcd5ba-0391-4427-aaa0-9cef2b10a48c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:00:55 crc kubenswrapper[4712]: I0130 17:00:55.889002 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1efcd5ba-0391-4427-aaa0-9cef2b10a48c-kube-api-access-gt64c" (OuterVolumeSpecName: "kube-api-access-gt64c") pod "1efcd5ba-0391-4427-aaa0-9cef2b10a48c" (UID: "1efcd5ba-0391-4427-aaa0-9cef2b10a48c"). InnerVolumeSpecName "kube-api-access-gt64c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:00:55 crc kubenswrapper[4712]: I0130 17:00:55.921317 4712 generic.go:334] "Generic (PLEG): container finished" podID="1efcd5ba-0391-4427-aaa0-9cef2b10a48c" containerID="f547e13b56cf10155f9b1e29c215e581be33f98f830856e15ad224b81b461f02" exitCode=0
Jan 30 17:00:55 crc kubenswrapper[4712]: I0130 17:00:55.921406 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l4hp7" event={"ID":"1efcd5ba-0391-4427-aaa0-9cef2b10a48c","Type":"ContainerDied","Data":"f547e13b56cf10155f9b1e29c215e581be33f98f830856e15ad224b81b461f02"}
Jan 30 17:00:55 crc kubenswrapper[4712]: I0130 17:00:55.921488 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l4hp7" event={"ID":"1efcd5ba-0391-4427-aaa0-9cef2b10a48c","Type":"ContainerDied","Data":"2bf10f102e2e4d318ff9ce6a799f3bd507f16aa4b8078672b8ebb15e4152a8d5"}
Jan 30 17:00:55 crc kubenswrapper[4712]: I0130 17:00:55.921548 4712 scope.go:117] "RemoveContainer" containerID="f547e13b56cf10155f9b1e29c215e581be33f98f830856e15ad224b81b461f02"
Jan 30 17:00:55 crc kubenswrapper[4712]: I0130 17:00:55.921443 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l4hp7"
Jan 30 17:00:55 crc kubenswrapper[4712]: I0130 17:00:55.921883 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1efcd5ba-0391-4427-aaa0-9cef2b10a48c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1efcd5ba-0391-4427-aaa0-9cef2b10a48c" (UID: "1efcd5ba-0391-4427-aaa0-9cef2b10a48c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:00:55 crc kubenswrapper[4712]: I0130 17:00:55.941447 4712 scope.go:117] "RemoveContainer" containerID="b3e32a9e83ccdacb3b89467221da3a64b6fac01526af98a0fdbc9eaf5e8a7c3e"
Jan 30 17:00:55 crc kubenswrapper[4712]: I0130 17:00:55.958549 4712 scope.go:117] "RemoveContainer" containerID="8435c564567c06246f852bfee4bcd70e209ea6ecf17c32facdbba0db41263c25"
Jan 30 17:00:55 crc kubenswrapper[4712]: I0130 17:00:55.976511 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hzqrq"]
Jan 30 17:00:55 crc kubenswrapper[4712]: I0130 17:00:55.976971 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hzqrq" podUID="fc1192c4-3b0c-4421-8e71-17e8731ffe34" containerName="registry-server" containerID="cri-o://c3b6de8405a52677f3708c4498b18088d86bb736dc15e718da0385a6e087fe6d" gracePeriod=2
Jan 30 17:00:55 crc kubenswrapper[4712]: I0130 17:00:55.984339 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1efcd5ba-0391-4427-aaa0-9cef2b10a48c-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 17:00:55 crc kubenswrapper[4712]: I0130 17:00:55.984421 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1efcd5ba-0391-4427-aaa0-9cef2b10a48c-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 17:00:55 crc kubenswrapper[4712]: I0130 17:00:55.984441 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gt64c\" (UniqueName: \"kubernetes.io/projected/1efcd5ba-0391-4427-aaa0-9cef2b10a48c-kube-api-access-gt64c\") on node \"crc\" DevicePath \"\""
Jan 30 17:00:55 crc kubenswrapper[4712]: I0130 17:00:55.996914 4712 scope.go:117] "RemoveContainer" containerID="f547e13b56cf10155f9b1e29c215e581be33f98f830856e15ad224b81b461f02"
Jan 30 17:00:55 crc kubenswrapper[4712]: E0130 17:00:55.997501 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f547e13b56cf10155f9b1e29c215e581be33f98f830856e15ad224b81b461f02\": container with ID starting with f547e13b56cf10155f9b1e29c215e581be33f98f830856e15ad224b81b461f02 not found: ID does not exist" containerID="f547e13b56cf10155f9b1e29c215e581be33f98f830856e15ad224b81b461f02"
Jan 30 17:00:55 crc kubenswrapper[4712]: I0130 17:00:55.997541 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f547e13b56cf10155f9b1e29c215e581be33f98f830856e15ad224b81b461f02"} err="failed to get container status \"f547e13b56cf10155f9b1e29c215e581be33f98f830856e15ad224b81b461f02\": rpc error: code = NotFound desc = could not find container \"f547e13b56cf10155f9b1e29c215e581be33f98f830856e15ad224b81b461f02\": container with ID starting with f547e13b56cf10155f9b1e29c215e581be33f98f830856e15ad224b81b461f02 not found: ID does not exist"
Jan 30 17:00:55 crc kubenswrapper[4712]: I0130 17:00:55.997566 4712 scope.go:117] "RemoveContainer" containerID="b3e32a9e83ccdacb3b89467221da3a64b6fac01526af98a0fdbc9eaf5e8a7c3e"
"RemoveContainer" containerID="b3e32a9e83ccdacb3b89467221da3a64b6fac01526af98a0fdbc9eaf5e8a7c3e" Jan 30 17:00:55 crc kubenswrapper[4712]: E0130 17:00:55.998023 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3e32a9e83ccdacb3b89467221da3a64b6fac01526af98a0fdbc9eaf5e8a7c3e\": container with ID starting with b3e32a9e83ccdacb3b89467221da3a64b6fac01526af98a0fdbc9eaf5e8a7c3e not found: ID does not exist" containerID="b3e32a9e83ccdacb3b89467221da3a64b6fac01526af98a0fdbc9eaf5e8a7c3e" Jan 30 17:00:55 crc kubenswrapper[4712]: I0130 17:00:55.998054 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b3e32a9e83ccdacb3b89467221da3a64b6fac01526af98a0fdbc9eaf5e8a7c3e"} err="failed to get container status \"b3e32a9e83ccdacb3b89467221da3a64b6fac01526af98a0fdbc9eaf5e8a7c3e\": rpc error: code = NotFound desc = could not find container \"b3e32a9e83ccdacb3b89467221da3a64b6fac01526af98a0fdbc9eaf5e8a7c3e\": container with ID starting with b3e32a9e83ccdacb3b89467221da3a64b6fac01526af98a0fdbc9eaf5e8a7c3e not found: ID does not exist" Jan 30 17:00:55 crc kubenswrapper[4712]: I0130 17:00:55.998072 4712 scope.go:117] "RemoveContainer" containerID="8435c564567c06246f852bfee4bcd70e209ea6ecf17c32facdbba0db41263c25" Jan 30 17:00:55 crc kubenswrapper[4712]: E0130 17:00:55.998473 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8435c564567c06246f852bfee4bcd70e209ea6ecf17c32facdbba0db41263c25\": container with ID starting with 8435c564567c06246f852bfee4bcd70e209ea6ecf17c32facdbba0db41263c25 not found: ID does not exist" containerID="8435c564567c06246f852bfee4bcd70e209ea6ecf17c32facdbba0db41263c25" Jan 30 17:00:55 crc kubenswrapper[4712]: I0130 17:00:55.998504 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8435c564567c06246f852bfee4bcd70e209ea6ecf17c32facdbba0db41263c25"} err="failed to get container status \"8435c564567c06246f852bfee4bcd70e209ea6ecf17c32facdbba0db41263c25\": rpc error: code = NotFound desc = could not find container \"8435c564567c06246f852bfee4bcd70e209ea6ecf17c32facdbba0db41263c25\": container with ID starting with 8435c564567c06246f852bfee4bcd70e209ea6ecf17c32facdbba0db41263c25 not found: ID does not exist" Jan 30 17:00:56 crc kubenswrapper[4712]: I0130 17:00:56.248545 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l4hp7"] Jan 30 17:00:56 crc kubenswrapper[4712]: I0130 17:00:56.252335 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l4hp7"] Jan 30 17:00:56 crc kubenswrapper[4712]: I0130 17:00:56.421177 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hzqrq" Jan 30 17:00:56 crc kubenswrapper[4712]: I0130 17:00:56.588606 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc1192c4-3b0c-4421-8e71-17e8731ffe34-utilities\") pod \"fc1192c4-3b0c-4421-8e71-17e8731ffe34\" (UID: \"fc1192c4-3b0c-4421-8e71-17e8731ffe34\") " Jan 30 17:00:56 crc kubenswrapper[4712]: I0130 17:00:56.588684 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc1192c4-3b0c-4421-8e71-17e8731ffe34-catalog-content\") pod \"fc1192c4-3b0c-4421-8e71-17e8731ffe34\" (UID: \"fc1192c4-3b0c-4421-8e71-17e8731ffe34\") " Jan 30 17:00:56 crc kubenswrapper[4712]: I0130 17:00:56.588707 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmg4t\" (UniqueName: \"kubernetes.io/projected/fc1192c4-3b0c-4421-8e71-17e8731ffe34-kube-api-access-tmg4t\") pod \"fc1192c4-3b0c-4421-8e71-17e8731ffe34\" (UID: \"fc1192c4-3b0c-4421-8e71-17e8731ffe34\") " Jan 30 17:00:56 crc kubenswrapper[4712]: I0130 17:00:56.589339 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc1192c4-3b0c-4421-8e71-17e8731ffe34-utilities" (OuterVolumeSpecName: "utilities") pod "fc1192c4-3b0c-4421-8e71-17e8731ffe34" (UID: "fc1192c4-3b0c-4421-8e71-17e8731ffe34"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:00:56 crc kubenswrapper[4712]: I0130 17:00:56.594033 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc1192c4-3b0c-4421-8e71-17e8731ffe34-kube-api-access-tmg4t" (OuterVolumeSpecName: "kube-api-access-tmg4t") pod "fc1192c4-3b0c-4421-8e71-17e8731ffe34" (UID: "fc1192c4-3b0c-4421-8e71-17e8731ffe34"). InnerVolumeSpecName "kube-api-access-tmg4t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:00:56 crc kubenswrapper[4712]: I0130 17:00:56.690303 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fc1192c4-3b0c-4421-8e71-17e8731ffe34-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:56 crc kubenswrapper[4712]: I0130 17:00:56.690349 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmg4t\" (UniqueName: \"kubernetes.io/projected/fc1192c4-3b0c-4421-8e71-17e8731ffe34-kube-api-access-tmg4t\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:56 crc kubenswrapper[4712]: I0130 17:00:56.734884 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc1192c4-3b0c-4421-8e71-17e8731ffe34-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fc1192c4-3b0c-4421-8e71-17e8731ffe34" (UID: "fc1192c4-3b0c-4421-8e71-17e8731ffe34"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:00:56 crc kubenswrapper[4712]: I0130 17:00:56.791591 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fc1192c4-3b0c-4421-8e71-17e8731ffe34-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:00:56 crc kubenswrapper[4712]: I0130 17:00:56.933632 4712 generic.go:334] "Generic (PLEG): container finished" podID="fc1192c4-3b0c-4421-8e71-17e8731ffe34" containerID="c3b6de8405a52677f3708c4498b18088d86bb736dc15e718da0385a6e087fe6d" exitCode=0 Jan 30 17:00:56 crc kubenswrapper[4712]: I0130 17:00:56.933909 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hzqrq" event={"ID":"fc1192c4-3b0c-4421-8e71-17e8731ffe34","Type":"ContainerDied","Data":"c3b6de8405a52677f3708c4498b18088d86bb736dc15e718da0385a6e087fe6d"} Jan 30 17:00:56 crc kubenswrapper[4712]: I0130 17:00:56.934821 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hzqrq" event={"ID":"fc1192c4-3b0c-4421-8e71-17e8731ffe34","Type":"ContainerDied","Data":"9ce6f8da3abf3812c2cba3fa53e19c1e54283ad5df15a1a57eb6b66d70bb109e"} Jan 30 17:00:56 crc kubenswrapper[4712]: I0130 17:00:56.934912 4712 scope.go:117] "RemoveContainer" containerID="c3b6de8405a52677f3708c4498b18088d86bb736dc15e718da0385a6e087fe6d" Jan 30 17:00:56 crc kubenswrapper[4712]: I0130 17:00:56.933998 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hzqrq" Jan 30 17:00:56 crc kubenswrapper[4712]: I0130 17:00:56.957617 4712 scope.go:117] "RemoveContainer" containerID="acbb8cd0158d7e4391a035ba8299e61657f10e17fc98619349834f87e7c01dc4" Jan 30 17:00:56 crc kubenswrapper[4712]: I0130 17:00:56.972071 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hzqrq"] Jan 30 17:00:56 crc kubenswrapper[4712]: I0130 17:00:56.976846 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hzqrq"] Jan 30 17:00:56 crc kubenswrapper[4712]: I0130 17:00:56.991742 4712 scope.go:117] "RemoveContainer" containerID="2564f2d7d05bac340ab5c24c818a46144519731e6616580b790b441295620b44" Jan 30 17:00:57 crc kubenswrapper[4712]: I0130 17:00:57.012464 4712 scope.go:117] "RemoveContainer" containerID="c3b6de8405a52677f3708c4498b18088d86bb736dc15e718da0385a6e087fe6d" Jan 30 17:00:57 crc kubenswrapper[4712]: E0130 17:00:57.012935 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3b6de8405a52677f3708c4498b18088d86bb736dc15e718da0385a6e087fe6d\": container with ID starting with c3b6de8405a52677f3708c4498b18088d86bb736dc15e718da0385a6e087fe6d not found: ID does not exist" containerID="c3b6de8405a52677f3708c4498b18088d86bb736dc15e718da0385a6e087fe6d" Jan 30 17:00:57 crc kubenswrapper[4712]: I0130 17:00:57.012983 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3b6de8405a52677f3708c4498b18088d86bb736dc15e718da0385a6e087fe6d"} err="failed to get container status \"c3b6de8405a52677f3708c4498b18088d86bb736dc15e718da0385a6e087fe6d\": rpc error: code = NotFound desc = could not find container \"c3b6de8405a52677f3708c4498b18088d86bb736dc15e718da0385a6e087fe6d\": container with ID starting with c3b6de8405a52677f3708c4498b18088d86bb736dc15e718da0385a6e087fe6d not found: ID does not exist" Jan 30 17:00:57 crc 
kubenswrapper[4712]: I0130 17:00:57.013010 4712 scope.go:117] "RemoveContainer" containerID="acbb8cd0158d7e4391a035ba8299e61657f10e17fc98619349834f87e7c01dc4"
Jan 30 17:00:57 crc kubenswrapper[4712]: E0130 17:00:57.013436 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acbb8cd0158d7e4391a035ba8299e61657f10e17fc98619349834f87e7c01dc4\": container with ID starting with acbb8cd0158d7e4391a035ba8299e61657f10e17fc98619349834f87e7c01dc4 not found: ID does not exist" containerID="acbb8cd0158d7e4391a035ba8299e61657f10e17fc98619349834f87e7c01dc4"
Jan 30 17:00:57 crc kubenswrapper[4712]: I0130 17:00:57.013530 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acbb8cd0158d7e4391a035ba8299e61657f10e17fc98619349834f87e7c01dc4"} err="failed to get container status \"acbb8cd0158d7e4391a035ba8299e61657f10e17fc98619349834f87e7c01dc4\": rpc error: code = NotFound desc = could not find container \"acbb8cd0158d7e4391a035ba8299e61657f10e17fc98619349834f87e7c01dc4\": container with ID starting with acbb8cd0158d7e4391a035ba8299e61657f10e17fc98619349834f87e7c01dc4 not found: ID does not exist"
Jan 30 17:00:57 crc kubenswrapper[4712]: I0130 17:00:57.013833 4712 scope.go:117] "RemoveContainer" containerID="2564f2d7d05bac340ab5c24c818a46144519731e6616580b790b441295620b44"
Jan 30 17:00:57 crc kubenswrapper[4712]: E0130 17:00:57.014313 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2564f2d7d05bac340ab5c24c818a46144519731e6616580b790b441295620b44\": container with ID starting with 2564f2d7d05bac340ab5c24c818a46144519731e6616580b790b441295620b44 not found: ID does not exist" containerID="2564f2d7d05bac340ab5c24c818a46144519731e6616580b790b441295620b44"
Jan 30 17:00:57 crc kubenswrapper[4712]: I0130 17:00:57.014336 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2564f2d7d05bac340ab5c24c818a46144519731e6616580b790b441295620b44"} err="failed to get container status \"2564f2d7d05bac340ab5c24c818a46144519731e6616580b790b441295620b44\": rpc error: code = NotFound desc = could not find container \"2564f2d7d05bac340ab5c24c818a46144519731e6616580b790b441295620b44\": container with ID starting with 2564f2d7d05bac340ab5c24c818a46144519731e6616580b790b441295620b44 not found: ID does not exist"
Jan 30 17:00:57 crc kubenswrapper[4712]: I0130 17:00:57.808222 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1efcd5ba-0391-4427-aaa0-9cef2b10a48c" path="/var/lib/kubelet/pods/1efcd5ba-0391-4427-aaa0-9cef2b10a48c/volumes"
Jan 30 17:00:57 crc kubenswrapper[4712]: I0130 17:00:57.809924 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc1192c4-3b0c-4421-8e71-17e8731ffe34" path="/var/lib/kubelet/pods/fc1192c4-3b0c-4421-8e71-17e8731ffe34/volumes"
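The paired "RemoveContainer" / NotFound entries above are the usual benign race during pod teardown: by the time the kubelet asks CRI-O for the status of a container it just deleted, the runtime has already dropped it, so the lookup fails with NotFound and nothing is actually lost. A minimal sketch for confirming that from a log like this one (assumptions: the log arrives as plain text on stdin; the script name check_remove_races.py is illustrative):

import re
import sys

# Container IDs the kubelet asked the runtime to remove, and IDs the
# runtime later reported as already gone (NotFound).
remove_re = re.compile(r'"RemoveContainer" containerID="([0-9a-f]{64})"')
notfound_re = re.compile(r'could not find container \\"([0-9a-f]{64})')

removed, gone = set(), set()
for line in sys.stdin:
    removed.update(remove_re.findall(line))
    gone.update(notfound_re.findall(line))

# An ID in both sets is the benign delete race seen above; an ID only in
# `gone` would be worth a closer look.
for cid in sorted(gone - removed):
    print("NotFound without a matching RemoveContainer:", cid[:13])
print(f"{len(removed & gone)} of {len(removed)} removals raced with runtime GC")

Run as python3 check_remove_races.py < kubelet.log; on the entries above it would report the race for, e.g., all three containers of redhat-marketplace-l4hp7.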
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.386410 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" podUID="b23672ef-c640-4ba4-9303-26955cec21d6" containerName="oauth-openshift" containerID="cri-o://6d39d6a3e969e7f20a78a48296a4b7f8efefe0ad698bea3d32802ba82925ea90" gracePeriod=15 Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.842327 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.877559 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-544b887855-ts8md"] Jan 30 17:01:09 crc kubenswrapper[4712]: E0130 17:01:09.877810 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fda2fdd1-0c89-4398-8e0a-545311fe5ae9" containerName="extract-utilities" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.877823 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="fda2fdd1-0c89-4398-8e0a-545311fe5ae9" containerName="extract-utilities" Jan 30 17:01:09 crc kubenswrapper[4712]: E0130 17:01:09.877838 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc1192c4-3b0c-4421-8e71-17e8731ffe34" containerName="registry-server" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.877845 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc1192c4-3b0c-4421-8e71-17e8731ffe34" containerName="registry-server" Jan 30 17:01:09 crc kubenswrapper[4712]: E0130 17:01:09.877854 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b23672ef-c640-4ba4-9303-26955cec21d6" containerName="oauth-openshift" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.877861 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="b23672ef-c640-4ba4-9303-26955cec21d6" containerName="oauth-openshift" Jan 30 17:01:09 crc kubenswrapper[4712]: E0130 17:01:09.877870 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1efcd5ba-0391-4427-aaa0-9cef2b10a48c" containerName="extract-utilities" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.877877 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="1efcd5ba-0391-4427-aaa0-9cef2b10a48c" containerName="extract-utilities" Jan 30 17:01:09 crc kubenswrapper[4712]: E0130 17:01:09.877886 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fda2fdd1-0c89-4398-8e0a-545311fe5ae9" containerName="extract-content" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.877892 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="fda2fdd1-0c89-4398-8e0a-545311fe5ae9" containerName="extract-content" Jan 30 17:01:09 crc kubenswrapper[4712]: E0130 17:01:09.877904 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc1192c4-3b0c-4421-8e71-17e8731ffe34" containerName="extract-content" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.877912 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc1192c4-3b0c-4421-8e71-17e8731ffe34" containerName="extract-content" Jan 30 17:01:09 crc kubenswrapper[4712]: E0130 17:01:09.878247 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1efcd5ba-0391-4427-aaa0-9cef2b10a48c" containerName="extract-content" Jan 30 17:01:09 crc 
Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.878267 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="1efcd5ba-0391-4427-aaa0-9cef2b10a48c" containerName="extract-content"
Jan 30 17:01:09 crc kubenswrapper[4712]: E0130 17:01:09.878326 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc1192c4-3b0c-4421-8e71-17e8731ffe34" containerName="extract-utilities"
Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.878337 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc1192c4-3b0c-4421-8e71-17e8731ffe34" containerName="extract-utilities"
Jan 30 17:01:09 crc kubenswrapper[4712]: E0130 17:01:09.878352 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fda2fdd1-0c89-4398-8e0a-545311fe5ae9" containerName="registry-server"
Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.878365 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="fda2fdd1-0c89-4398-8e0a-545311fe5ae9" containerName="registry-server"
Jan 30 17:01:09 crc kubenswrapper[4712]: E0130 17:01:09.878375 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1efcd5ba-0391-4427-aaa0-9cef2b10a48c" containerName="registry-server"
Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.878382 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="1efcd5ba-0391-4427-aaa0-9cef2b10a48c" containerName="registry-server"
Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.878492 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="b23672ef-c640-4ba4-9303-26955cec21d6" containerName="oauth-openshift"
Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.878505 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc1192c4-3b0c-4421-8e71-17e8731ffe34" containerName="registry-server"
Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.878512 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="1efcd5ba-0391-4427-aaa0-9cef2b10a48c" containerName="registry-server"
Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.878525 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="fda2fdd1-0c89-4398-8e0a-545311fe5ae9" containerName="registry-server"
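Despite their E severity, the cpu_manager and memory_manager entries above are routine: before admitting the replacement oauth-openshift pod, the kubelet purges leftover CPU- and memory-manager state for containers of pods that have already been deleted (the marketplace catalog pods and the old oauth replica). A rough grouping of which pods had state purged, in the same sketch style as above (stdin input; the name group_stale_state.py is illustrative):

import re
import sys
from collections import defaultdict

# Every purge entry names the stale pod UID and container; group them to
# see which exited pods still had resource-manager bookkeeping on the node.
pat = re.compile(r'RemoveStaleState.*?podUID="([0-9a-f-]{36})" containerName="([^"]+)"')

stale = defaultdict(set)
for line in sys.stdin:
    for pod_uid, container in pat.findall(line):
        stale[pod_uid].add(container)

for pod_uid in sorted(stale):
    print(pod_uid, "->", ", ".join(sorted(stale[pod_uid])))

For the block above this prints four pod UIDs, each with the containers (extract-utilities, extract-content, registry-server, or oauth-openshift) whose assignments were dropped.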
Need to start a new one" pod="openshift-authentication/oauth-openshift-544b887855-ts8md" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.896592 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-544b887855-ts8md"] Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.967170 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-system-ocp-branding-template\") pod \"b23672ef-c640-4ba4-9303-26955cec21d6\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.967237 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b23672ef-c640-4ba4-9303-26955cec21d6-audit-policies\") pod \"b23672ef-c640-4ba4-9303-26955cec21d6\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.967268 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-system-cliconfig\") pod \"b23672ef-c640-4ba4-9303-26955cec21d6\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.967342 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-system-session\") pod \"b23672ef-c640-4ba4-9303-26955cec21d6\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.968069 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "b23672ef-c640-4ba4-9303-26955cec21d6" (UID: "b23672ef-c640-4ba4-9303-26955cec21d6"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.968300 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-user-template-error\") pod \"b23672ef-c640-4ba4-9303-26955cec21d6\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.968348 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-system-service-ca\") pod \"b23672ef-c640-4ba4-9303-26955cec21d6\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.968371 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-system-router-certs\") pod \"b23672ef-c640-4ba4-9303-26955cec21d6\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.968389 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-user-template-provider-selection\") pod \"b23672ef-c640-4ba4-9303-26955cec21d6\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.968915 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b23672ef-c640-4ba4-9303-26955cec21d6-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "b23672ef-c640-4ba4-9303-26955cec21d6" (UID: "b23672ef-c640-4ba4-9303-26955cec21d6"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.969159 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "b23672ef-c640-4ba4-9303-26955cec21d6" (UID: "b23672ef-c640-4ba4-9303-26955cec21d6"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.969432 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqlzw\" (UniqueName: \"kubernetes.io/projected/b23672ef-c640-4ba4-9303-26955cec21d6-kube-api-access-nqlzw\") pod \"b23672ef-c640-4ba4-9303-26955cec21d6\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.969600 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-user-idp-0-file-data\") pod \"b23672ef-c640-4ba4-9303-26955cec21d6\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.969746 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-user-template-login\") pod \"b23672ef-c640-4ba4-9303-26955cec21d6\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.969927 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-system-trusted-ca-bundle\") pod \"b23672ef-c640-4ba4-9303-26955cec21d6\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.970092 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-system-serving-cert\") pod \"b23672ef-c640-4ba4-9303-26955cec21d6\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.970119 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b23672ef-c640-4ba4-9303-26955cec21d6-audit-dir\") pod \"b23672ef-c640-4ba4-9303-26955cec21d6\" (UID: \"b23672ef-c640-4ba4-9303-26955cec21d6\") " Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.970576 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/385118bd-7569-4940-89a0-ac41cf3395a2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-544b887855-ts8md\" (UID: \"385118bd-7569-4940-89a0-ac41cf3395a2\") " pod="openshift-authentication/oauth-openshift-544b887855-ts8md" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.970747 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/385118bd-7569-4940-89a0-ac41cf3395a2-v4-0-config-user-template-error\") pod \"oauth-openshift-544b887855-ts8md\" (UID: \"385118bd-7569-4940-89a0-ac41cf3395a2\") " pod="openshift-authentication/oauth-openshift-544b887855-ts8md" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.970916 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/385118bd-7569-4940-89a0-ac41cf3395a2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-544b887855-ts8md\" (UID: \"385118bd-7569-4940-89a0-ac41cf3395a2\") " pod="openshift-authentication/oauth-openshift-544b887855-ts8md" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.971083 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/385118bd-7569-4940-89a0-ac41cf3395a2-v4-0-config-system-session\") pod \"oauth-openshift-544b887855-ts8md\" (UID: \"385118bd-7569-4940-89a0-ac41cf3395a2\") " pod="openshift-authentication/oauth-openshift-544b887855-ts8md" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.971340 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/385118bd-7569-4940-89a0-ac41cf3395a2-audit-dir\") pod \"oauth-openshift-544b887855-ts8md\" (UID: \"385118bd-7569-4940-89a0-ac41cf3395a2\") " pod="openshift-authentication/oauth-openshift-544b887855-ts8md" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.971511 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/385118bd-7569-4940-89a0-ac41cf3395a2-v4-0-config-system-service-ca\") pod \"oauth-openshift-544b887855-ts8md\" (UID: \"385118bd-7569-4940-89a0-ac41cf3395a2\") " pod="openshift-authentication/oauth-openshift-544b887855-ts8md" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.971680 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xww7k\" (UniqueName: \"kubernetes.io/projected/385118bd-7569-4940-89a0-ac41cf3395a2-kube-api-access-xww7k\") pod \"oauth-openshift-544b887855-ts8md\" (UID: \"385118bd-7569-4940-89a0-ac41cf3395a2\") " pod="openshift-authentication/oauth-openshift-544b887855-ts8md" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.972051 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b23672ef-c640-4ba4-9303-26955cec21d6-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "b23672ef-c640-4ba4-9303-26955cec21d6" (UID: "b23672ef-c640-4ba4-9303-26955cec21d6"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.971975 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/385118bd-7569-4940-89a0-ac41cf3395a2-audit-policies\") pod \"oauth-openshift-544b887855-ts8md\" (UID: \"385118bd-7569-4940-89a0-ac41cf3395a2\") " pod="openshift-authentication/oauth-openshift-544b887855-ts8md" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.972694 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/385118bd-7569-4940-89a0-ac41cf3395a2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-544b887855-ts8md\" (UID: \"385118bd-7569-4940-89a0-ac41cf3395a2\") " pod="openshift-authentication/oauth-openshift-544b887855-ts8md" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.972573 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "b23672ef-c640-4ba4-9303-26955cec21d6" (UID: "b23672ef-c640-4ba4-9303-26955cec21d6"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.972880 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/385118bd-7569-4940-89a0-ac41cf3395a2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-544b887855-ts8md\" (UID: \"385118bd-7569-4940-89a0-ac41cf3395a2\") " pod="openshift-authentication/oauth-openshift-544b887855-ts8md" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.973126 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/385118bd-7569-4940-89a0-ac41cf3395a2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-544b887855-ts8md\" (UID: \"385118bd-7569-4940-89a0-ac41cf3395a2\") " pod="openshift-authentication/oauth-openshift-544b887855-ts8md" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.973601 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/385118bd-7569-4940-89a0-ac41cf3395a2-v4-0-config-system-router-certs\") pod \"oauth-openshift-544b887855-ts8md\" (UID: \"385118bd-7569-4940-89a0-ac41cf3395a2\") " pod="openshift-authentication/oauth-openshift-544b887855-ts8md" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.973745 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/385118bd-7569-4940-89a0-ac41cf3395a2-v4-0-config-user-template-login\") pod \"oauth-openshift-544b887855-ts8md\" (UID: \"385118bd-7569-4940-89a0-ac41cf3395a2\") " pod="openshift-authentication/oauth-openshift-544b887855-ts8md" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.973537 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "b23672ef-c640-4ba4-9303-26955cec21d6" (UID: "b23672ef-c640-4ba4-9303-26955cec21d6"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.974028 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "b23672ef-c640-4ba4-9303-26955cec21d6" (UID: "b23672ef-c640-4ba4-9303-26955cec21d6"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.973951 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "b23672ef-c640-4ba4-9303-26955cec21d6" (UID: "b23672ef-c640-4ba4-9303-26955cec21d6"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.974172 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "b23672ef-c640-4ba4-9303-26955cec21d6" (UID: "b23672ef-c640-4ba4-9303-26955cec21d6"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.973942 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/385118bd-7569-4940-89a0-ac41cf3395a2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-544b887855-ts8md\" (UID: \"385118bd-7569-4940-89a0-ac41cf3395a2\") " pod="openshift-authentication/oauth-openshift-544b887855-ts8md" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.974510 4712 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.974607 4712 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b23672ef-c640-4ba4-9303-26955cec21d6-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.974701 4712 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.974858 4712 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b23672ef-c640-4ba4-9303-26955cec21d6-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.974960 4712 
reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.975059 4712 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.975177 4712 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.975271 4712 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.975363 4712 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.976398 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "b23672ef-c640-4ba4-9303-26955cec21d6" (UID: "b23672ef-c640-4ba4-9303-26955cec21d6"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.977105 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "b23672ef-c640-4ba4-9303-26955cec21d6" (UID: "b23672ef-c640-4ba4-9303-26955cec21d6"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.977617 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "b23672ef-c640-4ba4-9303-26955cec21d6" (UID: "b23672ef-c640-4ba4-9303-26955cec21d6"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.977928 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "b23672ef-c640-4ba4-9303-26955cec21d6" (UID: "b23672ef-c640-4ba4-9303-26955cec21d6"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:01:09 crc kubenswrapper[4712]: I0130 17:01:09.979971 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b23672ef-c640-4ba4-9303-26955cec21d6-kube-api-access-nqlzw" (OuterVolumeSpecName: "kube-api-access-nqlzw") pod "b23672ef-c640-4ba4-9303-26955cec21d6" (UID: "b23672ef-c640-4ba4-9303-26955cec21d6"). InnerVolumeSpecName "kube-api-access-nqlzw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:01:10 crc kubenswrapper[4712]: I0130 17:01:10.035244 4712 generic.go:334] "Generic (PLEG): container finished" podID="b23672ef-c640-4ba4-9303-26955cec21d6" containerID="6d39d6a3e969e7f20a78a48296a4b7f8efefe0ad698bea3d32802ba82925ea90" exitCode=0 Jan 30 17:01:10 crc kubenswrapper[4712]: I0130 17:01:10.035309 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" event={"ID":"b23672ef-c640-4ba4-9303-26955cec21d6","Type":"ContainerDied","Data":"6d39d6a3e969e7f20a78a48296a4b7f8efefe0ad698bea3d32802ba82925ea90"} Jan 30 17:01:10 crc kubenswrapper[4712]: I0130 17:01:10.035346 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" event={"ID":"b23672ef-c640-4ba4-9303-26955cec21d6","Type":"ContainerDied","Data":"fa43ed8af52910a961c7ebcfaee77aacda9b0520113f9d8d7e59c50aa6807b2c"} Jan 30 17:01:10 crc kubenswrapper[4712]: I0130 17:01:10.035368 4712 scope.go:117] "RemoveContainer" containerID="6d39d6a3e969e7f20a78a48296a4b7f8efefe0ad698bea3d32802ba82925ea90" Jan 30 17:01:10 crc kubenswrapper[4712]: I0130 17:01:10.035701 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-t6xlq" Jan 30 17:01:10 crc kubenswrapper[4712]: I0130 17:01:10.050456 4712 scope.go:117] "RemoveContainer" containerID="6d39d6a3e969e7f20a78a48296a4b7f8efefe0ad698bea3d32802ba82925ea90" Jan 30 17:01:10 crc kubenswrapper[4712]: E0130 17:01:10.051379 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d39d6a3e969e7f20a78a48296a4b7f8efefe0ad698bea3d32802ba82925ea90\": container with ID starting with 6d39d6a3e969e7f20a78a48296a4b7f8efefe0ad698bea3d32802ba82925ea90 not found: ID does not exist" containerID="6d39d6a3e969e7f20a78a48296a4b7f8efefe0ad698bea3d32802ba82925ea90" Jan 30 17:01:10 crc kubenswrapper[4712]: I0130 17:01:10.051421 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d39d6a3e969e7f20a78a48296a4b7f8efefe0ad698bea3d32802ba82925ea90"} err="failed to get container status \"6d39d6a3e969e7f20a78a48296a4b7f8efefe0ad698bea3d32802ba82925ea90\": rpc error: code = NotFound desc = could not find container \"6d39d6a3e969e7f20a78a48296a4b7f8efefe0ad698bea3d32802ba82925ea90\": container with ID starting with 6d39d6a3e969e7f20a78a48296a4b7f8efefe0ad698bea3d32802ba82925ea90 not found: ID does not exist" Jan 30 17:01:10 crc kubenswrapper[4712]: I0130 17:01:10.072302 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-t6xlq"] Jan 30 17:01:10 crc kubenswrapper[4712]: I0130 17:01:10.075457 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-t6xlq"] Jan 30 17:01:10 crc kubenswrapper[4712]: I0130 17:01:10.076020 4712 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/385118bd-7569-4940-89a0-ac41cf3395a2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-544b887855-ts8md\" (UID: \"385118bd-7569-4940-89a0-ac41cf3395a2\") " pod="openshift-authentication/oauth-openshift-544b887855-ts8md" Jan 30 17:01:10 crc kubenswrapper[4712]: I0130 17:01:10.076093 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/385118bd-7569-4940-89a0-ac41cf3395a2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-544b887855-ts8md\" (UID: \"385118bd-7569-4940-89a0-ac41cf3395a2\") " pod="openshift-authentication/oauth-openshift-544b887855-ts8md" Jan 30 17:01:10 crc kubenswrapper[4712]: I0130 17:01:10.076123 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/385118bd-7569-4940-89a0-ac41cf3395a2-v4-0-config-user-template-error\") pod \"oauth-openshift-544b887855-ts8md\" (UID: \"385118bd-7569-4940-89a0-ac41cf3395a2\") " pod="openshift-authentication/oauth-openshift-544b887855-ts8md" Jan 30 17:01:10 crc kubenswrapper[4712]: I0130 17:01:10.076138 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/385118bd-7569-4940-89a0-ac41cf3395a2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-544b887855-ts8md\" (UID: \"385118bd-7569-4940-89a0-ac41cf3395a2\") " pod="openshift-authentication/oauth-openshift-544b887855-ts8md" Jan 30 17:01:10 crc kubenswrapper[4712]: I0130 17:01:10.076298 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/385118bd-7569-4940-89a0-ac41cf3395a2-v4-0-config-system-session\") pod \"oauth-openshift-544b887855-ts8md\" (UID: \"385118bd-7569-4940-89a0-ac41cf3395a2\") " pod="openshift-authentication/oauth-openshift-544b887855-ts8md" Jan 30 17:01:10 crc kubenswrapper[4712]: I0130 17:01:10.076675 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/385118bd-7569-4940-89a0-ac41cf3395a2-audit-dir\") pod \"oauth-openshift-544b887855-ts8md\" (UID: \"385118bd-7569-4940-89a0-ac41cf3395a2\") " pod="openshift-authentication/oauth-openshift-544b887855-ts8md" Jan 30 17:01:10 crc kubenswrapper[4712]: I0130 17:01:10.076852 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/385118bd-7569-4940-89a0-ac41cf3395a2-v4-0-config-system-cliconfig\") pod \"oauth-openshift-544b887855-ts8md\" (UID: \"385118bd-7569-4940-89a0-ac41cf3395a2\") " pod="openshift-authentication/oauth-openshift-544b887855-ts8md" Jan 30 17:01:10 crc kubenswrapper[4712]: I0130 17:01:10.076846 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/385118bd-7569-4940-89a0-ac41cf3395a2-audit-dir\") pod \"oauth-openshift-544b887855-ts8md\" (UID: \"385118bd-7569-4940-89a0-ac41cf3395a2\") " pod="openshift-authentication/oauth-openshift-544b887855-ts8md" Jan 30 17:01:10 crc kubenswrapper[4712]: I0130 17:01:10.076997 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/385118bd-7569-4940-89a0-ac41cf3395a2-v4-0-config-system-service-ca\") pod \"oauth-openshift-544b887855-ts8md\" (UID: \"385118bd-7569-4940-89a0-ac41cf3395a2\") " pod="openshift-authentication/oauth-openshift-544b887855-ts8md" Jan 30 17:01:10 crc kubenswrapper[4712]: I0130 17:01:10.077036 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xww7k\" (UniqueName: \"kubernetes.io/projected/385118bd-7569-4940-89a0-ac41cf3395a2-kube-api-access-xww7k\") pod \"oauth-openshift-544b887855-ts8md\" (UID: \"385118bd-7569-4940-89a0-ac41cf3395a2\") " pod="openshift-authentication/oauth-openshift-544b887855-ts8md" Jan 30 17:01:10 crc kubenswrapper[4712]: I0130 17:01:10.077147 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/385118bd-7569-4940-89a0-ac41cf3395a2-audit-policies\") pod \"oauth-openshift-544b887855-ts8md\" (UID: \"385118bd-7569-4940-89a0-ac41cf3395a2\") " pod="openshift-authentication/oauth-openshift-544b887855-ts8md" Jan 30 17:01:10 crc kubenswrapper[4712]: I0130 17:01:10.077173 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/385118bd-7569-4940-89a0-ac41cf3395a2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-544b887855-ts8md\" (UID: \"385118bd-7569-4940-89a0-ac41cf3395a2\") " pod="openshift-authentication/oauth-openshift-544b887855-ts8md" Jan 30 17:01:10 crc kubenswrapper[4712]: I0130 17:01:10.077193 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/385118bd-7569-4940-89a0-ac41cf3395a2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-544b887855-ts8md\" (UID: \"385118bd-7569-4940-89a0-ac41cf3395a2\") " pod="openshift-authentication/oauth-openshift-544b887855-ts8md" Jan 30 17:01:10 crc kubenswrapper[4712]: I0130 17:01:10.077216 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/385118bd-7569-4940-89a0-ac41cf3395a2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-544b887855-ts8md\" (UID: \"385118bd-7569-4940-89a0-ac41cf3395a2\") " pod="openshift-authentication/oauth-openshift-544b887855-ts8md" Jan 30 17:01:10 crc kubenswrapper[4712]: I0130 17:01:10.077236 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/385118bd-7569-4940-89a0-ac41cf3395a2-v4-0-config-system-router-certs\") pod \"oauth-openshift-544b887855-ts8md\" (UID: \"385118bd-7569-4940-89a0-ac41cf3395a2\") " pod="openshift-authentication/oauth-openshift-544b887855-ts8md" Jan 30 17:01:10 crc kubenswrapper[4712]: I0130 17:01:10.077254 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/385118bd-7569-4940-89a0-ac41cf3395a2-v4-0-config-user-template-login\") pod \"oauth-openshift-544b887855-ts8md\" (UID: \"385118bd-7569-4940-89a0-ac41cf3395a2\") " pod="openshift-authentication/oauth-openshift-544b887855-ts8md" Jan 30 17:01:10 crc kubenswrapper[4712]: I0130 17:01:10.077309 4712 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 17:01:10 crc kubenswrapper[4712]: I0130 17:01:10.077327 4712 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 30 17:01:10 crc kubenswrapper[4712]: I0130 17:01:10.077343 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqlzw\" (UniqueName: \"kubernetes.io/projected/b23672ef-c640-4ba4-9303-26955cec21d6-kube-api-access-nqlzw\") on node \"crc\" DevicePath \"\"" Jan 30 17:01:10 crc kubenswrapper[4712]: I0130 17:01:10.077358 4712 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:01:10 crc kubenswrapper[4712]: I0130 17:01:10.077371 4712 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/b23672ef-c640-4ba4-9303-26955cec21d6-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 30 17:01:10 crc kubenswrapper[4712]: I0130 17:01:10.077314 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/385118bd-7569-4940-89a0-ac41cf3395a2-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-544b887855-ts8md\" (UID: \"385118bd-7569-4940-89a0-ac41cf3395a2\") " pod="openshift-authentication/oauth-openshift-544b887855-ts8md" Jan 30 17:01:10 crc kubenswrapper[4712]: I0130 17:01:10.077981 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/385118bd-7569-4940-89a0-ac41cf3395a2-v4-0-config-system-service-ca\") pod \"oauth-openshift-544b887855-ts8md\" (UID: \"385118bd-7569-4940-89a0-ac41cf3395a2\") " pod="openshift-authentication/oauth-openshift-544b887855-ts8md" Jan 30 17:01:10 crc kubenswrapper[4712]: I0130 17:01:10.078596 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/385118bd-7569-4940-89a0-ac41cf3395a2-audit-policies\") pod \"oauth-openshift-544b887855-ts8md\" (UID: \"385118bd-7569-4940-89a0-ac41cf3395a2\") " pod="openshift-authentication/oauth-openshift-544b887855-ts8md" Jan 30 17:01:10 crc kubenswrapper[4712]: I0130 17:01:10.080652 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/385118bd-7569-4940-89a0-ac41cf3395a2-v4-0-config-system-serving-cert\") pod \"oauth-openshift-544b887855-ts8md\" (UID: \"385118bd-7569-4940-89a0-ac41cf3395a2\") " pod="openshift-authentication/oauth-openshift-544b887855-ts8md" Jan 30 17:01:10 crc kubenswrapper[4712]: I0130 17:01:10.080820 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/385118bd-7569-4940-89a0-ac41cf3395a2-v4-0-config-user-template-error\") pod \"oauth-openshift-544b887855-ts8md\" (UID: \"385118bd-7569-4940-89a0-ac41cf3395a2\") " pod="openshift-authentication/oauth-openshift-544b887855-ts8md" Jan 30 17:01:10 crc kubenswrapper[4712]: I0130 17:01:10.080933 
4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/385118bd-7569-4940-89a0-ac41cf3395a2-v4-0-config-system-session\") pod \"oauth-openshift-544b887855-ts8md\" (UID: \"385118bd-7569-4940-89a0-ac41cf3395a2\") " pod="openshift-authentication/oauth-openshift-544b887855-ts8md"
Jan 30 17:01:10 crc kubenswrapper[4712]: I0130 17:01:10.081656 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/385118bd-7569-4940-89a0-ac41cf3395a2-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-544b887855-ts8md\" (UID: \"385118bd-7569-4940-89a0-ac41cf3395a2\") " pod="openshift-authentication/oauth-openshift-544b887855-ts8md"
Jan 30 17:01:10 crc kubenswrapper[4712]: I0130 17:01:10.081866 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/385118bd-7569-4940-89a0-ac41cf3395a2-v4-0-config-user-template-login\") pod \"oauth-openshift-544b887855-ts8md\" (UID: \"385118bd-7569-4940-89a0-ac41cf3395a2\") " pod="openshift-authentication/oauth-openshift-544b887855-ts8md"
Jan 30 17:01:10 crc kubenswrapper[4712]: I0130 17:01:10.081970 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/385118bd-7569-4940-89a0-ac41cf3395a2-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-544b887855-ts8md\" (UID: \"385118bd-7569-4940-89a0-ac41cf3395a2\") " pod="openshift-authentication/oauth-openshift-544b887855-ts8md"
Jan 30 17:01:10 crc kubenswrapper[4712]: I0130 17:01:10.085534 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/385118bd-7569-4940-89a0-ac41cf3395a2-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-544b887855-ts8md\" (UID: \"385118bd-7569-4940-89a0-ac41cf3395a2\") " pod="openshift-authentication/oauth-openshift-544b887855-ts8md"
Jan 30 17:01:10 crc kubenswrapper[4712]: I0130 17:01:10.085879 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/385118bd-7569-4940-89a0-ac41cf3395a2-v4-0-config-system-router-certs\") pod \"oauth-openshift-544b887855-ts8md\" (UID: \"385118bd-7569-4940-89a0-ac41cf3395a2\") " pod="openshift-authentication/oauth-openshift-544b887855-ts8md"
Jan 30 17:01:10 crc kubenswrapper[4712]: I0130 17:01:10.092746 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xww7k\" (UniqueName: \"kubernetes.io/projected/385118bd-7569-4940-89a0-ac41cf3395a2-kube-api-access-xww7k\") pod \"oauth-openshift-544b887855-ts8md\" (UID: \"385118bd-7569-4940-89a0-ac41cf3395a2\") " pod="openshift-authentication/oauth-openshift-544b887855-ts8md"
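With the last volume for the replacement pod mounted, the entries that follow walk through the rest of the rollout: a new sandbox, two ContainerStarted events, the readiness probe flipping from "" to "ready", and a podStartSLOduration observation of roughly 27 s measured from the pod's creation at 17:00:44. A small sketch for collecting those startup observations from a log on stdin (illustrative only; the field names are copied from the tracker entry below, and extract_startup_slo.py is an assumed name):

import re
import sys

# "Observed pod startup duration" entries carry the SLO-relevant startup
# latency; collect them so slow rollouts stand out.
pat = re.compile(r'"Observed pod startup duration" pod="([^"]+)" podStartSLOduration=([0-9.]+)')

for line in sys.stdin:
    for pod, seconds in pat.findall(line):
        print(f"{float(seconds):8.2f}s  {pod}")

Against the entry below it prints 27.07s for openshift-authentication/oauth-openshift-544b887855-ts8md.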
Need to start a new one" pod="openshift-authentication/oauth-openshift-544b887855-ts8md" Jan 30 17:01:10 crc kubenswrapper[4712]: I0130 17:01:10.634338 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-544b887855-ts8md"] Jan 30 17:01:11 crc kubenswrapper[4712]: I0130 17:01:11.042270 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-544b887855-ts8md" event={"ID":"385118bd-7569-4940-89a0-ac41cf3395a2","Type":"ContainerStarted","Data":"f5df26d7b20199b29f5f02ae9072e67d9ad062c2504c3e9a63e6d35fbf439a10"} Jan 30 17:01:11 crc kubenswrapper[4712]: I0130 17:01:11.042558 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-544b887855-ts8md" Jan 30 17:01:11 crc kubenswrapper[4712]: I0130 17:01:11.042573 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-544b887855-ts8md" event={"ID":"385118bd-7569-4940-89a0-ac41cf3395a2","Type":"ContainerStarted","Data":"6c6be4d275c58bc78ef8921c860b2fe03801f95e8b0bbb54e2cd94ad1fff989e"} Jan 30 17:01:11 crc kubenswrapper[4712]: I0130 17:01:11.069733 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-544b887855-ts8md" podStartSLOduration=27.0697081 podStartE2EDuration="27.0697081s" podCreationTimestamp="2026-01-30 17:00:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:01:11.062054064 +0000 UTC m=+407.969063543" watchObservedRunningTime="2026-01-30 17:01:11.0697081 +0000 UTC m=+407.976717579" Jan 30 17:01:11 crc kubenswrapper[4712]: I0130 17:01:11.195151 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-544b887855-ts8md" Jan 30 17:01:11 crc kubenswrapper[4712]: I0130 17:01:11.813519 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b23672ef-c640-4ba4-9303-26955cec21d6" path="/var/lib/kubelet/pods/b23672ef-c640-4ba4-9303-26955cec21d6/volumes" Jan 30 17:01:26 crc kubenswrapper[4712]: I0130 17:01:26.418559 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qcfwq"] Jan 30 17:01:26 crc kubenswrapper[4712]: I0130 17:01:26.419580 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-qcfwq" podUID="41581f8f-2b7b-4a20-9f3b-a28c0914b093" containerName="registry-server" containerID="cri-o://407828f09cdf9f94d0974d2b1f4377deab2028294a4b23dad9b0370c1832cd80" gracePeriod=30 Jan 30 17:01:26 crc kubenswrapper[4712]: I0130 17:01:26.433949 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dlkwf"] Jan 30 17:01:26 crc kubenswrapper[4712]: I0130 17:01:26.434194 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dlkwf" podUID="be58da2a-7470-403f-a094-ca2bac2dbccd" containerName="registry-server" containerID="cri-o://a4c5509f14aabecabcfd6aa93012f8d0d83e2a14c0b5bb64ee439354cec44f7b" gracePeriod=30 Jan 30 17:01:26 crc kubenswrapper[4712]: I0130 17:01:26.438110 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-v2t5z"] Jan 30 17:01:26 crc kubenswrapper[4712]: I0130 17:01:26.438443 4712 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-v2t5z" podUID="c9e01529-72ef-487b-ac85-e90905240355" containerName="marketplace-operator" containerID="cri-o://a46f7acf8677c1283ade8810247067a5d6e79878471006f3f0e54b58a591cc50" gracePeriod=30 Jan 30 17:01:26 crc kubenswrapper[4712]: I0130 17:01:26.448619 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jmc9f"] Jan 30 17:01:26 crc kubenswrapper[4712]: I0130 17:01:26.455562 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jmc9f" podUID="0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3" containerName="registry-server" containerID="cri-o://f5081164073ba573f2cd9e2593232cf760a0699b3ed2bcf27e1f4f8d59b22d3d" gracePeriod=30 Jan 30 17:01:26 crc kubenswrapper[4712]: I0130 17:01:26.459955 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k4mgv"] Jan 30 17:01:26 crc kubenswrapper[4712]: I0130 17:01:26.460624 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-k4mgv" Jan 30 17:01:26 crc kubenswrapper[4712]: I0130 17:01:26.462260 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pz9vb"] Jan 30 17:01:26 crc kubenswrapper[4712]: I0130 17:01:26.462475 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pz9vb" podUID="b1773095-5051-4668-ae41-1d6c41c43a43" containerName="registry-server" containerID="cri-o://9a282f05117053f8c5ef035c19561f2966ec168d1b959f686fd46ddd1c945183" gracePeriod=30 Jan 30 17:01:26 crc kubenswrapper[4712]: I0130 17:01:26.487192 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k4mgv"] Jan 30 17:01:26 crc kubenswrapper[4712]: I0130 17:01:26.508116 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f757484a-48c2-4b6e-9a6b-1e01fe951ae5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-k4mgv\" (UID: \"f757484a-48c2-4b6e-9a6b-1e01fe951ae5\") " pod="openshift-marketplace/marketplace-operator-79b997595-k4mgv" Jan 30 17:01:26 crc kubenswrapper[4712]: I0130 17:01:26.508163 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f757484a-48c2-4b6e-9a6b-1e01fe951ae5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-k4mgv\" (UID: \"f757484a-48c2-4b6e-9a6b-1e01fe951ae5\") " pod="openshift-marketplace/marketplace-operator-79b997595-k4mgv" Jan 30 17:01:26 crc kubenswrapper[4712]: I0130 17:01:26.508215 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rw48l\" (UniqueName: \"kubernetes.io/projected/f757484a-48c2-4b6e-9a6b-1e01fe951ae5-kube-api-access-rw48l\") pod \"marketplace-operator-79b997595-k4mgv\" (UID: \"f757484a-48c2-4b6e-9a6b-1e01fe951ae5\") " pod="openshift-marketplace/marketplace-operator-79b997595-k4mgv" Jan 30 17:01:26 crc kubenswrapper[4712]: I0130 17:01:26.608876 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/f757484a-48c2-4b6e-9a6b-1e01fe951ae5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-k4mgv\" (UID: \"f757484a-48c2-4b6e-9a6b-1e01fe951ae5\") " pod="openshift-marketplace/marketplace-operator-79b997595-k4mgv" Jan 30 17:01:26 crc kubenswrapper[4712]: I0130 17:01:26.609217 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f757484a-48c2-4b6e-9a6b-1e01fe951ae5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-k4mgv\" (UID: \"f757484a-48c2-4b6e-9a6b-1e01fe951ae5\") " pod="openshift-marketplace/marketplace-operator-79b997595-k4mgv" Jan 30 17:01:26 crc kubenswrapper[4712]: I0130 17:01:26.609260 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rw48l\" (UniqueName: \"kubernetes.io/projected/f757484a-48c2-4b6e-9a6b-1e01fe951ae5-kube-api-access-rw48l\") pod \"marketplace-operator-79b997595-k4mgv\" (UID: \"f757484a-48c2-4b6e-9a6b-1e01fe951ae5\") " pod="openshift-marketplace/marketplace-operator-79b997595-k4mgv" Jan 30 17:01:26 crc kubenswrapper[4712]: I0130 17:01:26.610353 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/f757484a-48c2-4b6e-9a6b-1e01fe951ae5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-k4mgv\" (UID: \"f757484a-48c2-4b6e-9a6b-1e01fe951ae5\") " pod="openshift-marketplace/marketplace-operator-79b997595-k4mgv" Jan 30 17:01:26 crc kubenswrapper[4712]: I0130 17:01:26.625655 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/f757484a-48c2-4b6e-9a6b-1e01fe951ae5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-k4mgv\" (UID: \"f757484a-48c2-4b6e-9a6b-1e01fe951ae5\") " pod="openshift-marketplace/marketplace-operator-79b997595-k4mgv" Jan 30 17:01:26 crc kubenswrapper[4712]: I0130 17:01:26.631392 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rw48l\" (UniqueName: \"kubernetes.io/projected/f757484a-48c2-4b6e-9a6b-1e01fe951ae5-kube-api-access-rw48l\") pod \"marketplace-operator-79b997595-k4mgv\" (UID: \"f757484a-48c2-4b6e-9a6b-1e01fe951ae5\") " pod="openshift-marketplace/marketplace-operator-79b997595-k4mgv" Jan 30 17:01:26 crc kubenswrapper[4712]: I0130 17:01:26.786143 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-k4mgv" Jan 30 17:01:26 crc kubenswrapper[4712]: E0130 17:01:26.814482 4712 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9a282f05117053f8c5ef035c19561f2966ec168d1b959f686fd46ddd1c945183 is running failed: container process not found" containerID="9a282f05117053f8c5ef035c19561f2966ec168d1b959f686fd46ddd1c945183" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 17:01:26 crc kubenswrapper[4712]: E0130 17:01:26.814975 4712 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9a282f05117053f8c5ef035c19561f2966ec168d1b959f686fd46ddd1c945183 is running failed: container process not found" containerID="9a282f05117053f8c5ef035c19561f2966ec168d1b959f686fd46ddd1c945183" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 17:01:26 crc kubenswrapper[4712]: E0130 17:01:26.815276 4712 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9a282f05117053f8c5ef035c19561f2966ec168d1b959f686fd46ddd1c945183 is running failed: container process not found" containerID="9a282f05117053f8c5ef035c19561f2966ec168d1b959f686fd46ddd1c945183" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 17:01:26 crc kubenswrapper[4712]: E0130 17:01:26.815314 4712 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9a282f05117053f8c5ef035c19561f2966ec168d1b959f686fd46ddd1c945183 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-pz9vb" podUID="b1773095-5051-4668-ae41-1d6c41c43a43" containerName="registry-server" Jan 30 17:01:26 crc kubenswrapper[4712]: I0130 17:01:26.898088 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dlkwf" Jan 30 17:01:26 crc kubenswrapper[4712]: I0130 17:01:26.911784 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5x8s\" (UniqueName: \"kubernetes.io/projected/be58da2a-7470-403f-a094-ca2bac2dbccd-kube-api-access-x5x8s\") pod \"be58da2a-7470-403f-a094-ca2bac2dbccd\" (UID: \"be58da2a-7470-403f-a094-ca2bac2dbccd\") " Jan 30 17:01:26 crc kubenswrapper[4712]: I0130 17:01:26.911858 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be58da2a-7470-403f-a094-ca2bac2dbccd-utilities\") pod \"be58da2a-7470-403f-a094-ca2bac2dbccd\" (UID: \"be58da2a-7470-403f-a094-ca2bac2dbccd\") " Jan 30 17:01:26 crc kubenswrapper[4712]: I0130 17:01:26.911914 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be58da2a-7470-403f-a094-ca2bac2dbccd-catalog-content\") pod \"be58da2a-7470-403f-a094-ca2bac2dbccd\" (UID: \"be58da2a-7470-403f-a094-ca2bac2dbccd\") " Jan 30 17:01:26 crc kubenswrapper[4712]: I0130 17:01:26.913103 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be58da2a-7470-403f-a094-ca2bac2dbccd-utilities" (OuterVolumeSpecName: "utilities") pod "be58da2a-7470-403f-a094-ca2bac2dbccd" (UID: "be58da2a-7470-403f-a094-ca2bac2dbccd"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:01:26 crc kubenswrapper[4712]: I0130 17:01:26.918338 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be58da2a-7470-403f-a094-ca2bac2dbccd-kube-api-access-x5x8s" (OuterVolumeSpecName: "kube-api-access-x5x8s") pod "be58da2a-7470-403f-a094-ca2bac2dbccd" (UID: "be58da2a-7470-403f-a094-ca2bac2dbccd"). InnerVolumeSpecName "kube-api-access-x5x8s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:01:26 crc kubenswrapper[4712]: I0130 17:01:26.979752 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-v2t5z" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.012983 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c9e01529-72ef-487b-ac85-e90905240355-marketplace-trusted-ca\") pod \"c9e01529-72ef-487b-ac85-e90905240355\" (UID: \"c9e01529-72ef-487b-ac85-e90905240355\") " Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.013045 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c9e01529-72ef-487b-ac85-e90905240355-marketplace-operator-metrics\") pod \"c9e01529-72ef-487b-ac85-e90905240355\" (UID: \"c9e01529-72ef-487b-ac85-e90905240355\") " Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.013102 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62kzl\" (UniqueName: \"kubernetes.io/projected/c9e01529-72ef-487b-ac85-e90905240355-kube-api-access-62kzl\") pod \"c9e01529-72ef-487b-ac85-e90905240355\" (UID: \"c9e01529-72ef-487b-ac85-e90905240355\") " Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.013567 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5x8s\" (UniqueName: \"kubernetes.io/projected/be58da2a-7470-403f-a094-ca2bac2dbccd-kube-api-access-x5x8s\") on node \"crc\" DevicePath \"\"" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.013580 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/be58da2a-7470-403f-a094-ca2bac2dbccd-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.014354 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9e01529-72ef-487b-ac85-e90905240355-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "c9e01529-72ef-487b-ac85-e90905240355" (UID: "c9e01529-72ef-487b-ac85-e90905240355"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.016138 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be58da2a-7470-403f-a094-ca2bac2dbccd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "be58da2a-7470-403f-a094-ca2bac2dbccd" (UID: "be58da2a-7470-403f-a094-ca2bac2dbccd"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.017898 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9e01529-72ef-487b-ac85-e90905240355-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "c9e01529-72ef-487b-ac85-e90905240355" (UID: "c9e01529-72ef-487b-ac85-e90905240355"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.021130 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qcfwq" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.026626 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9e01529-72ef-487b-ac85-e90905240355-kube-api-access-62kzl" (OuterVolumeSpecName: "kube-api-access-62kzl") pod "c9e01529-72ef-487b-ac85-e90905240355" (UID: "c9e01529-72ef-487b-ac85-e90905240355"). InnerVolumeSpecName "kube-api-access-62kzl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.037034 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-pz9vb" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.066815 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jmc9f" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.114020 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3-utilities\") pod \"0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3\" (UID: \"0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3\") " Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.114060 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nsts\" (UniqueName: \"kubernetes.io/projected/b1773095-5051-4668-ae41-1d6c41c43a43-kube-api-access-5nsts\") pod \"b1773095-5051-4668-ae41-1d6c41c43a43\" (UID: \"b1773095-5051-4668-ae41-1d6c41c43a43\") " Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.114083 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1773095-5051-4668-ae41-1d6c41c43a43-catalog-content\") pod \"b1773095-5051-4668-ae41-1d6c41c43a43\" (UID: \"b1773095-5051-4668-ae41-1d6c41c43a43\") " Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.114102 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41581f8f-2b7b-4a20-9f3b-a28c0914b093-catalog-content\") pod \"41581f8f-2b7b-4a20-9f3b-a28c0914b093\" (UID: \"41581f8f-2b7b-4a20-9f3b-a28c0914b093\") " Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.114144 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3-catalog-content\") pod \"0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3\" (UID: \"0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3\") " Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.114215 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b1773095-5051-4668-ae41-1d6c41c43a43-utilities\") pod \"b1773095-5051-4668-ae41-1d6c41c43a43\" (UID: \"b1773095-5051-4668-ae41-1d6c41c43a43\") " Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.114261 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ms5bj\" (UniqueName: \"kubernetes.io/projected/41581f8f-2b7b-4a20-9f3b-a28c0914b093-kube-api-access-ms5bj\") pod \"41581f8f-2b7b-4a20-9f3b-a28c0914b093\" (UID: \"41581f8f-2b7b-4a20-9f3b-a28c0914b093\") " Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.114284 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swznj\" (UniqueName: \"kubernetes.io/projected/0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3-kube-api-access-swznj\") pod \"0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3\" (UID: \"0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3\") " Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.114313 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41581f8f-2b7b-4a20-9f3b-a28c0914b093-utilities\") pod \"41581f8f-2b7b-4a20-9f3b-a28c0914b093\" (UID: \"41581f8f-2b7b-4a20-9f3b-a28c0914b093\") " Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.114545 4712 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c9e01529-72ef-487b-ac85-e90905240355-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.114564 4712 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c9e01529-72ef-487b-ac85-e90905240355-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.114602 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62kzl\" (UniqueName: \"kubernetes.io/projected/c9e01529-72ef-487b-ac85-e90905240355-kube-api-access-62kzl\") on node \"crc\" DevicePath \"\"" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.114615 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/be58da2a-7470-403f-a094-ca2bac2dbccd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.115560 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41581f8f-2b7b-4a20-9f3b-a28c0914b093-utilities" (OuterVolumeSpecName: "utilities") pod "41581f8f-2b7b-4a20-9f3b-a28c0914b093" (UID: "41581f8f-2b7b-4a20-9f3b-a28c0914b093"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.116079 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1773095-5051-4668-ae41-1d6c41c43a43-utilities" (OuterVolumeSpecName: "utilities") pod "b1773095-5051-4668-ae41-1d6c41c43a43" (UID: "b1773095-5051-4668-ae41-1d6c41c43a43"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.116535 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1773095-5051-4668-ae41-1d6c41c43a43-kube-api-access-5nsts" (OuterVolumeSpecName: "kube-api-access-5nsts") pod "b1773095-5051-4668-ae41-1d6c41c43a43" (UID: "b1773095-5051-4668-ae41-1d6c41c43a43"). InnerVolumeSpecName "kube-api-access-5nsts". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.118244 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41581f8f-2b7b-4a20-9f3b-a28c0914b093-kube-api-access-ms5bj" (OuterVolumeSpecName: "kube-api-access-ms5bj") pod "41581f8f-2b7b-4a20-9f3b-a28c0914b093" (UID: "41581f8f-2b7b-4a20-9f3b-a28c0914b093"). InnerVolumeSpecName "kube-api-access-ms5bj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.135992 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3-kube-api-access-swznj" (OuterVolumeSpecName: "kube-api-access-swznj") pod "0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3" (UID: "0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3"). InnerVolumeSpecName "kube-api-access-swznj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.136701 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3-utilities" (OuterVolumeSpecName: "utilities") pod "0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3" (UID: "0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.155675 4712 generic.go:334] "Generic (PLEG): container finished" podID="be58da2a-7470-403f-a094-ca2bac2dbccd" containerID="a4c5509f14aabecabcfd6aa93012f8d0d83e2a14c0b5bb64ee439354cec44f7b" exitCode=0 Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.155712 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlkwf" event={"ID":"be58da2a-7470-403f-a094-ca2bac2dbccd","Type":"ContainerDied","Data":"a4c5509f14aabecabcfd6aa93012f8d0d83e2a14c0b5bb64ee439354cec44f7b"} Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.155755 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dlkwf" event={"ID":"be58da2a-7470-403f-a094-ca2bac2dbccd","Type":"ContainerDied","Data":"0515a6a8677c10d8232565e8b28a7293a456246298199d83ac9da1863e872115"} Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.155778 4712 scope.go:117] "RemoveContainer" containerID="a4c5509f14aabecabcfd6aa93012f8d0d83e2a14c0b5bb64ee439354cec44f7b" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.155864 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dlkwf" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.158909 4712 generic.go:334] "Generic (PLEG): container finished" podID="41581f8f-2b7b-4a20-9f3b-a28c0914b093" containerID="407828f09cdf9f94d0974d2b1f4377deab2028294a4b23dad9b0370c1832cd80" exitCode=0 Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.158956 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qcfwq" event={"ID":"41581f8f-2b7b-4a20-9f3b-a28c0914b093","Type":"ContainerDied","Data":"407828f09cdf9f94d0974d2b1f4377deab2028294a4b23dad9b0370c1832cd80"} Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.158972 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qcfwq" event={"ID":"41581f8f-2b7b-4a20-9f3b-a28c0914b093","Type":"ContainerDied","Data":"d1c080093a9151abd0f023c056e6ccd843867a5d01a84b910afad2ac0302aa9b"} Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.159022 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qcfwq" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.161637 4712 generic.go:334] "Generic (PLEG): container finished" podID="c9e01529-72ef-487b-ac85-e90905240355" containerID="a46f7acf8677c1283ade8810247067a5d6e79878471006f3f0e54b58a591cc50" exitCode=0 Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.161684 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-v2t5z" event={"ID":"c9e01529-72ef-487b-ac85-e90905240355","Type":"ContainerDied","Data":"a46f7acf8677c1283ade8810247067a5d6e79878471006f3f0e54b58a591cc50"} Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.161703 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-v2t5z" event={"ID":"c9e01529-72ef-487b-ac85-e90905240355","Type":"ContainerDied","Data":"7d83fbc8ed27c1615dd107e5c67678f7e0f68d852ec17bc17af351848200b3ed"} Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.161747 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-v2t5z" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.171211 4712 generic.go:334] "Generic (PLEG): container finished" podID="b1773095-5051-4668-ae41-1d6c41c43a43" containerID="9a282f05117053f8c5ef035c19561f2966ec168d1b959f686fd46ddd1c945183" exitCode=0 Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.171291 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pz9vb" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.171384 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pz9vb" event={"ID":"b1773095-5051-4668-ae41-1d6c41c43a43","Type":"ContainerDied","Data":"9a282f05117053f8c5ef035c19561f2966ec168d1b959f686fd46ddd1c945183"} Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.171526 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pz9vb" event={"ID":"b1773095-5051-4668-ae41-1d6c41c43a43","Type":"ContainerDied","Data":"918b2382987759542552b98e65cbb9d1a69f240f857ed6e1e5539ef2d1cd4d60"} Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.174935 4712 generic.go:334] "Generic (PLEG): container finished" podID="0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3" containerID="f5081164073ba573f2cd9e2593232cf760a0699b3ed2bcf27e1f4f8d59b22d3d" exitCode=0 Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.174972 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jmc9f" event={"ID":"0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3","Type":"ContainerDied","Data":"f5081164073ba573f2cd9e2593232cf760a0699b3ed2bcf27e1f4f8d59b22d3d"} Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.174996 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jmc9f" event={"ID":"0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3","Type":"ContainerDied","Data":"d9f4391c7c62d83081571bba9de5f2dcd5bbd6a5f62f5738232b16cb833f7983"} Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.175059 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jmc9f" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.175426 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3" (UID: "0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.182375 4712 scope.go:117] "RemoveContainer" containerID="cb3a7e2f867d3c6f7457ae22fdf12af88f72b9cfd0db8b65dbc2d94c811f9b5b" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.204646 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41581f8f-2b7b-4a20-9f3b-a28c0914b093-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "41581f8f-2b7b-4a20-9f3b-a28c0914b093" (UID: "41581f8f-2b7b-4a20-9f3b-a28c0914b093"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.206678 4712 scope.go:117] "RemoveContainer" containerID="f1f67eaa6ad0a986c90acc1001b02ae9fcab577b6dda6b8fca615bcffef1859a" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.208372 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-v2t5z"] Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.215113 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.215270 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1773095-5051-4668-ae41-1d6c41c43a43-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.215339 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ms5bj\" (UniqueName: \"kubernetes.io/projected/41581f8f-2b7b-4a20-9f3b-a28c0914b093-kube-api-access-ms5bj\") on node \"crc\" DevicePath \"\"" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.215650 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-swznj\" (UniqueName: \"kubernetes.io/projected/0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3-kube-api-access-swznj\") on node \"crc\" DevicePath \"\"" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.215723 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41581f8f-2b7b-4a20-9f3b-a28c0914b093-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.215786 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.215869 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nsts\" (UniqueName: \"kubernetes.io/projected/b1773095-5051-4668-ae41-1d6c41c43a43-kube-api-access-5nsts\") on node \"crc\" DevicePath \"\"" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.215935 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41581f8f-2b7b-4a20-9f3b-a28c0914b093-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.215305 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-v2t5z"] Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.229431 4712 scope.go:117] "RemoveContainer" containerID="a4c5509f14aabecabcfd6aa93012f8d0d83e2a14c0b5bb64ee439354cec44f7b" Jan 30 17:01:27 crc kubenswrapper[4712]: E0130 17:01:27.230162 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4c5509f14aabecabcfd6aa93012f8d0d83e2a14c0b5bb64ee439354cec44f7b\": container with ID starting with a4c5509f14aabecabcfd6aa93012f8d0d83e2a14c0b5bb64ee439354cec44f7b not found: ID does not exist" containerID="a4c5509f14aabecabcfd6aa93012f8d0d83e2a14c0b5bb64ee439354cec44f7b" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.230204 4712 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a4c5509f14aabecabcfd6aa93012f8d0d83e2a14c0b5bb64ee439354cec44f7b"} err="failed to get container status \"a4c5509f14aabecabcfd6aa93012f8d0d83e2a14c0b5bb64ee439354cec44f7b\": rpc error: code = NotFound desc = could not find container \"a4c5509f14aabecabcfd6aa93012f8d0d83e2a14c0b5bb64ee439354cec44f7b\": container with ID starting with a4c5509f14aabecabcfd6aa93012f8d0d83e2a14c0b5bb64ee439354cec44f7b not found: ID does not exist" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.230230 4712 scope.go:117] "RemoveContainer" containerID="cb3a7e2f867d3c6f7457ae22fdf12af88f72b9cfd0db8b65dbc2d94c811f9b5b" Jan 30 17:01:27 crc kubenswrapper[4712]: E0130 17:01:27.230563 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb3a7e2f867d3c6f7457ae22fdf12af88f72b9cfd0db8b65dbc2d94c811f9b5b\": container with ID starting with cb3a7e2f867d3c6f7457ae22fdf12af88f72b9cfd0db8b65dbc2d94c811f9b5b not found: ID does not exist" containerID="cb3a7e2f867d3c6f7457ae22fdf12af88f72b9cfd0db8b65dbc2d94c811f9b5b" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.230999 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb3a7e2f867d3c6f7457ae22fdf12af88f72b9cfd0db8b65dbc2d94c811f9b5b"} err="failed to get container status \"cb3a7e2f867d3c6f7457ae22fdf12af88f72b9cfd0db8b65dbc2d94c811f9b5b\": rpc error: code = NotFound desc = could not find container \"cb3a7e2f867d3c6f7457ae22fdf12af88f72b9cfd0db8b65dbc2d94c811f9b5b\": container with ID starting with cb3a7e2f867d3c6f7457ae22fdf12af88f72b9cfd0db8b65dbc2d94c811f9b5b not found: ID does not exist" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.231081 4712 scope.go:117] "RemoveContainer" containerID="f1f67eaa6ad0a986c90acc1001b02ae9fcab577b6dda6b8fca615bcffef1859a" Jan 30 17:01:27 crc kubenswrapper[4712]: E0130 17:01:27.231385 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1f67eaa6ad0a986c90acc1001b02ae9fcab577b6dda6b8fca615bcffef1859a\": container with ID starting with f1f67eaa6ad0a986c90acc1001b02ae9fcab577b6dda6b8fca615bcffef1859a not found: ID does not exist" containerID="f1f67eaa6ad0a986c90acc1001b02ae9fcab577b6dda6b8fca615bcffef1859a" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.231411 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1f67eaa6ad0a986c90acc1001b02ae9fcab577b6dda6b8fca615bcffef1859a"} err="failed to get container status \"f1f67eaa6ad0a986c90acc1001b02ae9fcab577b6dda6b8fca615bcffef1859a\": rpc error: code = NotFound desc = could not find container \"f1f67eaa6ad0a986c90acc1001b02ae9fcab577b6dda6b8fca615bcffef1859a\": container with ID starting with f1f67eaa6ad0a986c90acc1001b02ae9fcab577b6dda6b8fca615bcffef1859a not found: ID does not exist" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.231427 4712 scope.go:117] "RemoveContainer" containerID="407828f09cdf9f94d0974d2b1f4377deab2028294a4b23dad9b0370c1832cd80" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.252711 4712 scope.go:117] "RemoveContainer" containerID="065348e4159f1b0c991ac4fc57e593586da10f0f4a2d6fcef9ca3776c4d0f853" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.257158 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dlkwf"] Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.260662 4712 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dlkwf"] Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.264729 4712 scope.go:117] "RemoveContainer" containerID="7404f661b8c8eaa3259e5b573d346fe189bea86469581c4b58546c78459934e6" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.277819 4712 scope.go:117] "RemoveContainer" containerID="407828f09cdf9f94d0974d2b1f4377deab2028294a4b23dad9b0370c1832cd80" Jan 30 17:01:27 crc kubenswrapper[4712]: E0130 17:01:27.278181 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"407828f09cdf9f94d0974d2b1f4377deab2028294a4b23dad9b0370c1832cd80\": container with ID starting with 407828f09cdf9f94d0974d2b1f4377deab2028294a4b23dad9b0370c1832cd80 not found: ID does not exist" containerID="407828f09cdf9f94d0974d2b1f4377deab2028294a4b23dad9b0370c1832cd80" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.278227 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"407828f09cdf9f94d0974d2b1f4377deab2028294a4b23dad9b0370c1832cd80"} err="failed to get container status \"407828f09cdf9f94d0974d2b1f4377deab2028294a4b23dad9b0370c1832cd80\": rpc error: code = NotFound desc = could not find container \"407828f09cdf9f94d0974d2b1f4377deab2028294a4b23dad9b0370c1832cd80\": container with ID starting with 407828f09cdf9f94d0974d2b1f4377deab2028294a4b23dad9b0370c1832cd80 not found: ID does not exist" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.278263 4712 scope.go:117] "RemoveContainer" containerID="065348e4159f1b0c991ac4fc57e593586da10f0f4a2d6fcef9ca3776c4d0f853" Jan 30 17:01:27 crc kubenswrapper[4712]: E0130 17:01:27.278686 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"065348e4159f1b0c991ac4fc57e593586da10f0f4a2d6fcef9ca3776c4d0f853\": container with ID starting with 065348e4159f1b0c991ac4fc57e593586da10f0f4a2d6fcef9ca3776c4d0f853 not found: ID does not exist" containerID="065348e4159f1b0c991ac4fc57e593586da10f0f4a2d6fcef9ca3776c4d0f853" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.278712 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"065348e4159f1b0c991ac4fc57e593586da10f0f4a2d6fcef9ca3776c4d0f853"} err="failed to get container status \"065348e4159f1b0c991ac4fc57e593586da10f0f4a2d6fcef9ca3776c4d0f853\": rpc error: code = NotFound desc = could not find container \"065348e4159f1b0c991ac4fc57e593586da10f0f4a2d6fcef9ca3776c4d0f853\": container with ID starting with 065348e4159f1b0c991ac4fc57e593586da10f0f4a2d6fcef9ca3776c4d0f853 not found: ID does not exist" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.278735 4712 scope.go:117] "RemoveContainer" containerID="7404f661b8c8eaa3259e5b573d346fe189bea86469581c4b58546c78459934e6" Jan 30 17:01:27 crc kubenswrapper[4712]: E0130 17:01:27.278974 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7404f661b8c8eaa3259e5b573d346fe189bea86469581c4b58546c78459934e6\": container with ID starting with 7404f661b8c8eaa3259e5b573d346fe189bea86469581c4b58546c78459934e6 not found: ID does not exist" containerID="7404f661b8c8eaa3259e5b573d346fe189bea86469581c4b58546c78459934e6" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.278992 4712 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"7404f661b8c8eaa3259e5b573d346fe189bea86469581c4b58546c78459934e6"} err="failed to get container status \"7404f661b8c8eaa3259e5b573d346fe189bea86469581c4b58546c78459934e6\": rpc error: code = NotFound desc = could not find container \"7404f661b8c8eaa3259e5b573d346fe189bea86469581c4b58546c78459934e6\": container with ID starting with 7404f661b8c8eaa3259e5b573d346fe189bea86469581c4b58546c78459934e6 not found: ID does not exist" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.279029 4712 scope.go:117] "RemoveContainer" containerID="a46f7acf8677c1283ade8810247067a5d6e79878471006f3f0e54b58a591cc50" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.293099 4712 scope.go:117] "RemoveContainer" containerID="2a2bd34f12cd978dc1ac6c6ed2d453d30a8e9b069efc0b279bf1d2e70cc0247d" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.305717 4712 scope.go:117] "RemoveContainer" containerID="a46f7acf8677c1283ade8810247067a5d6e79878471006f3f0e54b58a591cc50" Jan 30 17:01:27 crc kubenswrapper[4712]: E0130 17:01:27.306603 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a46f7acf8677c1283ade8810247067a5d6e79878471006f3f0e54b58a591cc50\": container with ID starting with a46f7acf8677c1283ade8810247067a5d6e79878471006f3f0e54b58a591cc50 not found: ID does not exist" containerID="a46f7acf8677c1283ade8810247067a5d6e79878471006f3f0e54b58a591cc50" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.306642 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a46f7acf8677c1283ade8810247067a5d6e79878471006f3f0e54b58a591cc50"} err="failed to get container status \"a46f7acf8677c1283ade8810247067a5d6e79878471006f3f0e54b58a591cc50\": rpc error: code = NotFound desc = could not find container \"a46f7acf8677c1283ade8810247067a5d6e79878471006f3f0e54b58a591cc50\": container with ID starting with a46f7acf8677c1283ade8810247067a5d6e79878471006f3f0e54b58a591cc50 not found: ID does not exist" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.306666 4712 scope.go:117] "RemoveContainer" containerID="2a2bd34f12cd978dc1ac6c6ed2d453d30a8e9b069efc0b279bf1d2e70cc0247d" Jan 30 17:01:27 crc kubenswrapper[4712]: E0130 17:01:27.307486 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a2bd34f12cd978dc1ac6c6ed2d453d30a8e9b069efc0b279bf1d2e70cc0247d\": container with ID starting with 2a2bd34f12cd978dc1ac6c6ed2d453d30a8e9b069efc0b279bf1d2e70cc0247d not found: ID does not exist" containerID="2a2bd34f12cd978dc1ac6c6ed2d453d30a8e9b069efc0b279bf1d2e70cc0247d" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.307712 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a2bd34f12cd978dc1ac6c6ed2d453d30a8e9b069efc0b279bf1d2e70cc0247d"} err="failed to get container status \"2a2bd34f12cd978dc1ac6c6ed2d453d30a8e9b069efc0b279bf1d2e70cc0247d\": rpc error: code = NotFound desc = could not find container \"2a2bd34f12cd978dc1ac6c6ed2d453d30a8e9b069efc0b279bf1d2e70cc0247d\": container with ID starting with 2a2bd34f12cd978dc1ac6c6ed2d453d30a8e9b069efc0b279bf1d2e70cc0247d not found: ID does not exist" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.307883 4712 scope.go:117] "RemoveContainer" containerID="9a282f05117053f8c5ef035c19561f2966ec168d1b959f686fd46ddd1c945183" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.310449 4712 
Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.316674 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1773095-5051-4668-ae41-1d6c41c43a43-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.320473 4712 scope.go:117] "RemoveContainer" containerID="e085799df15886fb0653f05d22b19f1b410633ee63b8dcb426be7310a94c59e7"
Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.338425 4712 scope.go:117] "RemoveContainer" containerID="8850ad1276b1be2e08c572e79a49d6209b3a99c9567c3557661bb4418a7ce8c0"
Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.351615 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-k4mgv"]
Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.360822 4712 scope.go:117] "RemoveContainer" containerID="9a282f05117053f8c5ef035c19561f2966ec168d1b959f686fd46ddd1c945183"
Jan 30 17:01:27 crc kubenswrapper[4712]: E0130 17:01:27.361210 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a282f05117053f8c5ef035c19561f2966ec168d1b959f686fd46ddd1c945183\": container with ID starting with 9a282f05117053f8c5ef035c19561f2966ec168d1b959f686fd46ddd1c945183 not found: ID does not exist" containerID="9a282f05117053f8c5ef035c19561f2966ec168d1b959f686fd46ddd1c945183"
Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.361270 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a282f05117053f8c5ef035c19561f2966ec168d1b959f686fd46ddd1c945183"} err="failed to get container status \"9a282f05117053f8c5ef035c19561f2966ec168d1b959f686fd46ddd1c945183\": rpc error: code = NotFound desc = could not find container \"9a282f05117053f8c5ef035c19561f2966ec168d1b959f686fd46ddd1c945183\": container with ID starting with 9a282f05117053f8c5ef035c19561f2966ec168d1b959f686fd46ddd1c945183 not found: ID does not exist"
Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.361297 4712 scope.go:117] "RemoveContainer" containerID="e085799df15886fb0653f05d22b19f1b410633ee63b8dcb426be7310a94c59e7"
Jan 30 17:01:27 crc kubenswrapper[4712]: E0130 17:01:27.361695 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e085799df15886fb0653f05d22b19f1b410633ee63b8dcb426be7310a94c59e7\": container with ID starting with e085799df15886fb0653f05d22b19f1b410633ee63b8dcb426be7310a94c59e7 not found: ID does not exist" containerID="e085799df15886fb0653f05d22b19f1b410633ee63b8dcb426be7310a94c59e7"
Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.361726 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e085799df15886fb0653f05d22b19f1b410633ee63b8dcb426be7310a94c59e7"} err="failed to get container status \"e085799df15886fb0653f05d22b19f1b410633ee63b8dcb426be7310a94c59e7\": rpc error: code = NotFound desc = could not find container \"e085799df15886fb0653f05d22b19f1b410633ee63b8dcb426be7310a94c59e7\": container with ID starting with e085799df15886fb0653f05d22b19f1b410633ee63b8dcb426be7310a94c59e7 not found: ID does not exist"
Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.361751 4712 scope.go:117] "RemoveContainer" containerID="8850ad1276b1be2e08c572e79a49d6209b3a99c9567c3557661bb4418a7ce8c0"
Jan 30 17:01:27 crc kubenswrapper[4712]: E0130 17:01:27.362109 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8850ad1276b1be2e08c572e79a49d6209b3a99c9567c3557661bb4418a7ce8c0\": container with ID starting with 8850ad1276b1be2e08c572e79a49d6209b3a99c9567c3557661bb4418a7ce8c0 not found: ID does not exist" containerID="8850ad1276b1be2e08c572e79a49d6209b3a99c9567c3557661bb4418a7ce8c0"
Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.362133 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8850ad1276b1be2e08c572e79a49d6209b3a99c9567c3557661bb4418a7ce8c0"} err="failed to get container status \"8850ad1276b1be2e08c572e79a49d6209b3a99c9567c3557661bb4418a7ce8c0\": rpc error: code = NotFound desc = could not find container \"8850ad1276b1be2e08c572e79a49d6209b3a99c9567c3557661bb4418a7ce8c0\": container with ID starting with 8850ad1276b1be2e08c572e79a49d6209b3a99c9567c3557661bb4418a7ce8c0 not found: ID does not exist"
Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.362149 4712 scope.go:117] "RemoveContainer" containerID="f5081164073ba573f2cd9e2593232cf760a0699b3ed2bcf27e1f4f8d59b22d3d"
Jan 30 17:01:27 crc kubenswrapper[4712]: W0130 17:01:27.364855 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf757484a_48c2_4b6e_9a6b_1e01fe951ae5.slice/crio-2f852292ddc8c653024dbf668656cc30b6d00aa639c463b2b031d6d930d07742 WatchSource:0}: Error finding container 2f852292ddc8c653024dbf668656cc30b6d00aa639c463b2b031d6d930d07742: Status 404 returned error can't find the container with id 2f852292ddc8c653024dbf668656cc30b6d00aa639c463b2b031d6d930d07742
Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.374116 4712 scope.go:117] "RemoveContainer" containerID="18cf1d50fac095bdbb05ffe8e671602be9456c39a8a24a86eb38829986319e87"
Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.393837 4712 scope.go:117] "RemoveContainer" containerID="c9fdba01edebcb279eb1ea8c7f3733a958a8b9e66f8f606e4dfa836e0695f6b2"
Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.406977 4712 scope.go:117] "RemoveContainer" containerID="f5081164073ba573f2cd9e2593232cf760a0699b3ed2bcf27e1f4f8d59b22d3d"
Jan 30 17:01:27 crc kubenswrapper[4712]: E0130 17:01:27.408071 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5081164073ba573f2cd9e2593232cf760a0699b3ed2bcf27e1f4f8d59b22d3d\": container with ID starting with f5081164073ba573f2cd9e2593232cf760a0699b3ed2bcf27e1f4f8d59b22d3d not found: ID does not exist" containerID="f5081164073ba573f2cd9e2593232cf760a0699b3ed2bcf27e1f4f8d59b22d3d"
Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.408106 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5081164073ba573f2cd9e2593232cf760a0699b3ed2bcf27e1f4f8d59b22d3d"} err="failed to get container status \"f5081164073ba573f2cd9e2593232cf760a0699b3ed2bcf27e1f4f8d59b22d3d\": rpc error: code = NotFound desc = could not find container \"f5081164073ba573f2cd9e2593232cf760a0699b3ed2bcf27e1f4f8d59b22d3d\": container with ID starting with f5081164073ba573f2cd9e2593232cf760a0699b3ed2bcf27e1f4f8d59b22d3d not found: ID does not exist"
Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.408134 4712 scope.go:117] "RemoveContainer" containerID="18cf1d50fac095bdbb05ffe8e671602be9456c39a8a24a86eb38829986319e87"
Jan 30 17:01:27 crc kubenswrapper[4712]: E0130 17:01:27.408387 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18cf1d50fac095bdbb05ffe8e671602be9456c39a8a24a86eb38829986319e87\": container with ID starting with 18cf1d50fac095bdbb05ffe8e671602be9456c39a8a24a86eb38829986319e87 not found: ID does not exist" containerID="18cf1d50fac095bdbb05ffe8e671602be9456c39a8a24a86eb38829986319e87"
Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.408450 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18cf1d50fac095bdbb05ffe8e671602be9456c39a8a24a86eb38829986319e87"} err="failed to get container status \"18cf1d50fac095bdbb05ffe8e671602be9456c39a8a24a86eb38829986319e87\": rpc error: code = NotFound desc = could not find container \"18cf1d50fac095bdbb05ffe8e671602be9456c39a8a24a86eb38829986319e87\": container with ID starting with 18cf1d50fac095bdbb05ffe8e671602be9456c39a8a24a86eb38829986319e87 not found: ID does not exist"
Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.408476 4712 scope.go:117] "RemoveContainer" containerID="c9fdba01edebcb279eb1ea8c7f3733a958a8b9e66f8f606e4dfa836e0695f6b2"
Jan 30 17:01:27 crc kubenswrapper[4712]: E0130 17:01:27.408708 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9fdba01edebcb279eb1ea8c7f3733a958a8b9e66f8f606e4dfa836e0695f6b2\": container with ID starting with c9fdba01edebcb279eb1ea8c7f3733a958a8b9e66f8f606e4dfa836e0695f6b2 not found: ID does not exist" containerID="c9fdba01edebcb279eb1ea8c7f3733a958a8b9e66f8f606e4dfa836e0695f6b2"
Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.408782 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9fdba01edebcb279eb1ea8c7f3733a958a8b9e66f8f606e4dfa836e0695f6b2"} err="failed to get container status \"c9fdba01edebcb279eb1ea8c7f3733a958a8b9e66f8f606e4dfa836e0695f6b2\": rpc error: code = NotFound desc = could not find container \"c9fdba01edebcb279eb1ea8c7f3733a958a8b9e66f8f606e4dfa836e0695f6b2\": container with ID starting with c9fdba01edebcb279eb1ea8c7f3733a958a8b9e66f8f606e4dfa836e0695f6b2 not found: ID does not exist"
Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.488171 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-qcfwq"]
Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.498824 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-qcfwq"]
Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.508558 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jmc9f"]
Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.512823 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jmc9f"]
Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.532745 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pz9vb"]
Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.532824 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pz9vb"]
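Once the API objects are gone (the SyncLoop DELETE/REMOVE pairs above), the kubelet garbage-collects what is left on disk; the entries that follow show it pruning /var/lib/kubelet/pods/<uid>/volumes. A stripped-down sketch of that kind of orphan sweep, where the paths and the liveness map are placeholders:

```go
package main

import (
	"log"
	"os"
	"path/filepath"
)

// cleanupOrphanedPodDirs removes per-pod volume directories whose pod UID
// is no longer known to the kubelet -- a simplified take on the
// "Cleaned up orphaned pod volumes dir" entries below.
func cleanupOrphanedPodDirs(podsRoot string, active map[string]bool) error {
	entries, err := os.ReadDir(podsRoot)
	if err != nil {
		return err
	}
	for _, e := range entries {
		if !e.IsDir() || active[e.Name()] {
			continue // still a live pod; leave it alone
		}
		dir := filepath.Join(podsRoot, e.Name(), "volumes")
		log.Printf("cleaning up orphaned pod volumes dir %s", dir)
		if err := os.RemoveAll(dir); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	_ = cleanupOrphanedPodDirs("/var/lib/kubelet/pods", map[string]bool{})
}
```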
source="api" pods=["openshift-marketplace/redhat-operators-pz9vb"] Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.805954 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3" path="/var/lib/kubelet/pods/0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3/volumes" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.806545 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41581f8f-2b7b-4a20-9f3b-a28c0914b093" path="/var/lib/kubelet/pods/41581f8f-2b7b-4a20-9f3b-a28c0914b093/volumes" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.807160 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1773095-5051-4668-ae41-1d6c41c43a43" path="/var/lib/kubelet/pods/b1773095-5051-4668-ae41-1d6c41c43a43/volumes" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.808188 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be58da2a-7470-403f-a094-ca2bac2dbccd" path="/var/lib/kubelet/pods/be58da2a-7470-403f-a094-ca2bac2dbccd/volumes" Jan 30 17:01:27 crc kubenswrapper[4712]: I0130 17:01:27.808874 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9e01529-72ef-487b-ac85-e90905240355" path="/var/lib/kubelet/pods/c9e01529-72ef-487b-ac85-e90905240355/volumes" Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.184519 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k4mgv" event={"ID":"f757484a-48c2-4b6e-9a6b-1e01fe951ae5","Type":"ContainerStarted","Data":"482cb071017dbe649c256712df62fd07cd771647136f39b5bb50893927b48ca2"} Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.185210 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-k4mgv" Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.185374 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k4mgv" event={"ID":"f757484a-48c2-4b6e-9a6b-1e01fe951ae5","Type":"ContainerStarted","Data":"2f852292ddc8c653024dbf668656cc30b6d00aa639c463b2b031d6d930d07742"} Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.192007 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-k4mgv" Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.203460 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-k4mgv" podStartSLOduration=2.203441523 podStartE2EDuration="2.203441523s" podCreationTimestamp="2026-01-30 17:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:01:28.200291082 +0000 UTC m=+425.107300561" watchObservedRunningTime="2026-01-30 17:01:28.203441523 +0000 UTC m=+425.110450992" Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.636229 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-dnfsb"] Jan 30 17:01:28 crc kubenswrapper[4712]: E0130 17:01:28.637392 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41581f8f-2b7b-4a20-9f3b-a28c0914b093" containerName="registry-server" Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.637479 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="41581f8f-2b7b-4a20-9f3b-a28c0914b093" containerName="registry-server" Jan 30 
Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.637634 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9e01529-72ef-487b-ac85-e90905240355" containerName="marketplace-operator"
Jan 30 17:01:28 crc kubenswrapper[4712]: E0130 17:01:28.637691 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3" containerName="extract-utilities"
Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.637751 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3" containerName="extract-utilities"
Jan 30 17:01:28 crc kubenswrapper[4712]: E0130 17:01:28.637830 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3" containerName="registry-server"
Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.637888 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3" containerName="registry-server"
Jan 30 17:01:28 crc kubenswrapper[4712]: E0130 17:01:28.637943 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41581f8f-2b7b-4a20-9f3b-a28c0914b093" containerName="extract-utilities"
Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.638010 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="41581f8f-2b7b-4a20-9f3b-a28c0914b093" containerName="extract-utilities"
Jan 30 17:01:28 crc kubenswrapper[4712]: E0130 17:01:28.638069 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be58da2a-7470-403f-a094-ca2bac2dbccd" containerName="extract-utilities"
Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.638122 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="be58da2a-7470-403f-a094-ca2bac2dbccd" containerName="extract-utilities"
Jan 30 17:01:28 crc kubenswrapper[4712]: E0130 17:01:28.638177 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3" containerName="extract-content"
Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.638238 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3" containerName="extract-content"
Jan 30 17:01:28 crc kubenswrapper[4712]: E0130 17:01:28.638309 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be58da2a-7470-403f-a094-ca2bac2dbccd" containerName="registry-server"
Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.638389 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="be58da2a-7470-403f-a094-ca2bac2dbccd" containerName="registry-server"
Jan 30 17:01:28 crc kubenswrapper[4712]: E0130 17:01:28.638456 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1773095-5051-4668-ae41-1d6c41c43a43" containerName="registry-server"
Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.638509 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1773095-5051-4668-ae41-1d6c41c43a43" containerName="registry-server"
Jan 30 17:01:28 crc kubenswrapper[4712]: E0130 17:01:28.638570 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41581f8f-2b7b-4a20-9f3b-a28c0914b093" containerName="extract-content"
Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.638624 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="41581f8f-2b7b-4a20-9f3b-a28c0914b093" containerName="extract-content"
podUID="41581f8f-2b7b-4a20-9f3b-a28c0914b093" containerName="extract-content" Jan 30 17:01:28 crc kubenswrapper[4712]: E0130 17:01:28.638679 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1773095-5051-4668-ae41-1d6c41c43a43" containerName="extract-utilities" Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.638740 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1773095-5051-4668-ae41-1d6c41c43a43" containerName="extract-utilities" Jan 30 17:01:28 crc kubenswrapper[4712]: E0130 17:01:28.638816 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be58da2a-7470-403f-a094-ca2bac2dbccd" containerName="extract-content" Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.638874 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="be58da2a-7470-403f-a094-ca2bac2dbccd" containerName="extract-content" Jan 30 17:01:28 crc kubenswrapper[4712]: E0130 17:01:28.638930 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9e01529-72ef-487b-ac85-e90905240355" containerName="marketplace-operator" Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.638983 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9e01529-72ef-487b-ac85-e90905240355" containerName="marketplace-operator" Jan 30 17:01:28 crc kubenswrapper[4712]: E0130 17:01:28.639039 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1773095-5051-4668-ae41-1d6c41c43a43" containerName="extract-content" Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.639240 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1773095-5051-4668-ae41-1d6c41c43a43" containerName="extract-content" Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.639417 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9e01529-72ef-487b-ac85-e90905240355" containerName="marketplace-operator" Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.641353 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="be58da2a-7470-403f-a094-ca2bac2dbccd" containerName="registry-server" Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.641469 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ab4e5ea-f2f5-4d9a-9288-a5c7e63412c3" containerName="registry-server" Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.641581 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1773095-5051-4668-ae41-1d6c41c43a43" containerName="registry-server" Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.641673 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="41581f8f-2b7b-4a20-9f3b-a28c0914b093" containerName="registry-server" Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.641918 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9e01529-72ef-487b-ac85-e90905240355" containerName="marketplace-operator" Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.642525 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dnfsb" Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.645042 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dnfsb"] Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.645193 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.836762 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fe1585c-9bff-482c-a2b9-ccbb10a11300-catalog-content\") pod \"redhat-marketplace-dnfsb\" (UID: \"7fe1585c-9bff-482c-a2b9-ccbb10a11300\") " pod="openshift-marketplace/redhat-marketplace-dnfsb" Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.837162 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fe1585c-9bff-482c-a2b9-ccbb10a11300-utilities\") pod \"redhat-marketplace-dnfsb\" (UID: \"7fe1585c-9bff-482c-a2b9-ccbb10a11300\") " pod="openshift-marketplace/redhat-marketplace-dnfsb" Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.837297 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85crn\" (UniqueName: \"kubernetes.io/projected/7fe1585c-9bff-482c-a2b9-ccbb10a11300-kube-api-access-85crn\") pod \"redhat-marketplace-dnfsb\" (UID: \"7fe1585c-9bff-482c-a2b9-ccbb10a11300\") " pod="openshift-marketplace/redhat-marketplace-dnfsb" Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.836820 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pdgsh"] Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.838385 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pdgsh" Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.843589 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.860128 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pdgsh"] Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.938596 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fe1585c-9bff-482c-a2b9-ccbb10a11300-utilities\") pod \"redhat-marketplace-dnfsb\" (UID: \"7fe1585c-9bff-482c-a2b9-ccbb10a11300\") " pod="openshift-marketplace/redhat-marketplace-dnfsb" Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.938675 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85crn\" (UniqueName: \"kubernetes.io/projected/7fe1585c-9bff-482c-a2b9-ccbb10a11300-kube-api-access-85crn\") pod \"redhat-marketplace-dnfsb\" (UID: \"7fe1585c-9bff-482c-a2b9-ccbb10a11300\") " pod="openshift-marketplace/redhat-marketplace-dnfsb" Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.938745 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/150a284f-86ca-495d-ad65-096b9213b93a-catalog-content\") pod \"certified-operators-pdgsh\" (UID: \"150a284f-86ca-495d-ad65-096b9213b93a\") " pod="openshift-marketplace/certified-operators-pdgsh" Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.938820 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snftf\" (UniqueName: \"kubernetes.io/projected/150a284f-86ca-495d-ad65-096b9213b93a-kube-api-access-snftf\") pod \"certified-operators-pdgsh\" (UID: \"150a284f-86ca-495d-ad65-096b9213b93a\") " pod="openshift-marketplace/certified-operators-pdgsh" Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.938845 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/150a284f-86ca-495d-ad65-096b9213b93a-utilities\") pod \"certified-operators-pdgsh\" (UID: \"150a284f-86ca-495d-ad65-096b9213b93a\") " pod="openshift-marketplace/certified-operators-pdgsh" Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.938871 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fe1585c-9bff-482c-a2b9-ccbb10a11300-catalog-content\") pod \"redhat-marketplace-dnfsb\" (UID: \"7fe1585c-9bff-482c-a2b9-ccbb10a11300\") " pod="openshift-marketplace/redhat-marketplace-dnfsb" Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.939187 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7fe1585c-9bff-482c-a2b9-ccbb10a11300-utilities\") pod \"redhat-marketplace-dnfsb\" (UID: \"7fe1585c-9bff-482c-a2b9-ccbb10a11300\") " pod="openshift-marketplace/redhat-marketplace-dnfsb" Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.939399 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7fe1585c-9bff-482c-a2b9-ccbb10a11300-catalog-content\") pod \"redhat-marketplace-dnfsb\" (UID: 
\"7fe1585c-9bff-482c-a2b9-ccbb10a11300\") " pod="openshift-marketplace/redhat-marketplace-dnfsb" Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.959099 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85crn\" (UniqueName: \"kubernetes.io/projected/7fe1585c-9bff-482c-a2b9-ccbb10a11300-kube-api-access-85crn\") pod \"redhat-marketplace-dnfsb\" (UID: \"7fe1585c-9bff-482c-a2b9-ccbb10a11300\") " pod="openshift-marketplace/redhat-marketplace-dnfsb" Jan 30 17:01:28 crc kubenswrapper[4712]: I0130 17:01:28.964732 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-dnfsb" Jan 30 17:01:29 crc kubenswrapper[4712]: I0130 17:01:29.039712 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snftf\" (UniqueName: \"kubernetes.io/projected/150a284f-86ca-495d-ad65-096b9213b93a-kube-api-access-snftf\") pod \"certified-operators-pdgsh\" (UID: \"150a284f-86ca-495d-ad65-096b9213b93a\") " pod="openshift-marketplace/certified-operators-pdgsh" Jan 30 17:01:29 crc kubenswrapper[4712]: I0130 17:01:29.040034 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/150a284f-86ca-495d-ad65-096b9213b93a-utilities\") pod \"certified-operators-pdgsh\" (UID: \"150a284f-86ca-495d-ad65-096b9213b93a\") " pod="openshift-marketplace/certified-operators-pdgsh" Jan 30 17:01:29 crc kubenswrapper[4712]: I0130 17:01:29.040099 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/150a284f-86ca-495d-ad65-096b9213b93a-catalog-content\") pod \"certified-operators-pdgsh\" (UID: \"150a284f-86ca-495d-ad65-096b9213b93a\") " pod="openshift-marketplace/certified-operators-pdgsh" Jan 30 17:01:29 crc kubenswrapper[4712]: I0130 17:01:29.040616 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/150a284f-86ca-495d-ad65-096b9213b93a-catalog-content\") pod \"certified-operators-pdgsh\" (UID: \"150a284f-86ca-495d-ad65-096b9213b93a\") " pod="openshift-marketplace/certified-operators-pdgsh" Jan 30 17:01:29 crc kubenswrapper[4712]: I0130 17:01:29.040662 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/150a284f-86ca-495d-ad65-096b9213b93a-utilities\") pod \"certified-operators-pdgsh\" (UID: \"150a284f-86ca-495d-ad65-096b9213b93a\") " pod="openshift-marketplace/certified-operators-pdgsh" Jan 30 17:01:29 crc kubenswrapper[4712]: I0130 17:01:29.062753 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snftf\" (UniqueName: \"kubernetes.io/projected/150a284f-86ca-495d-ad65-096b9213b93a-kube-api-access-snftf\") pod \"certified-operators-pdgsh\" (UID: \"150a284f-86ca-495d-ad65-096b9213b93a\") " pod="openshift-marketplace/certified-operators-pdgsh" Jan 30 17:01:29 crc kubenswrapper[4712]: I0130 17:01:29.157344 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pdgsh" Jan 30 17:01:29 crc kubenswrapper[4712]: I0130 17:01:29.375048 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-dnfsb"] Jan 30 17:01:29 crc kubenswrapper[4712]: W0130 17:01:29.380037 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7fe1585c_9bff_482c_a2b9_ccbb10a11300.slice/crio-ee11712992b127ac809141ab3e298a2d4168556dee6d1dcc9f2b530bf5e32503 WatchSource:0}: Error finding container ee11712992b127ac809141ab3e298a2d4168556dee6d1dcc9f2b530bf5e32503: Status 404 returned error can't find the container with id ee11712992b127ac809141ab3e298a2d4168556dee6d1dcc9f2b530bf5e32503 Jan 30 17:01:29 crc kubenswrapper[4712]: I0130 17:01:29.569013 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pdgsh"] Jan 30 17:01:29 crc kubenswrapper[4712]: W0130 17:01:29.578864 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod150a284f_86ca_495d_ad65_096b9213b93a.slice/crio-7a59f362253d04578a82c4395546aa95c458ec796b769ca9a58bae0b099cfdb9 WatchSource:0}: Error finding container 7a59f362253d04578a82c4395546aa95c458ec796b769ca9a58bae0b099cfdb9: Status 404 returned error can't find the container with id 7a59f362253d04578a82c4395546aa95c458ec796b769ca9a58bae0b099cfdb9 Jan 30 17:01:30 crc kubenswrapper[4712]: I0130 17:01:30.199414 4712 generic.go:334] "Generic (PLEG): container finished" podID="7fe1585c-9bff-482c-a2b9-ccbb10a11300" containerID="0ba1629d851f9c94f56453efc3e3bd7d59a2dc81d9ccb5ae0d9aaa9f9e30a66a" exitCode=0 Jan 30 17:01:30 crc kubenswrapper[4712]: I0130 17:01:30.199483 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dnfsb" event={"ID":"7fe1585c-9bff-482c-a2b9-ccbb10a11300","Type":"ContainerDied","Data":"0ba1629d851f9c94f56453efc3e3bd7d59a2dc81d9ccb5ae0d9aaa9f9e30a66a"} Jan 30 17:01:30 crc kubenswrapper[4712]: I0130 17:01:30.199513 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dnfsb" event={"ID":"7fe1585c-9bff-482c-a2b9-ccbb10a11300","Type":"ContainerStarted","Data":"ee11712992b127ac809141ab3e298a2d4168556dee6d1dcc9f2b530bf5e32503"} Jan 30 17:01:30 crc kubenswrapper[4712]: I0130 17:01:30.204165 4712 generic.go:334] "Generic (PLEG): container finished" podID="150a284f-86ca-495d-ad65-096b9213b93a" containerID="cbe036c22f52068e17c400a44fe85d529cdb492c3e22e6f4463eb87d56007363" exitCode=0 Jan 30 17:01:30 crc kubenswrapper[4712]: I0130 17:01:30.205036 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdgsh" event={"ID":"150a284f-86ca-495d-ad65-096b9213b93a","Type":"ContainerDied","Data":"cbe036c22f52068e17c400a44fe85d529cdb492c3e22e6f4463eb87d56007363"} Jan 30 17:01:30 crc kubenswrapper[4712]: I0130 17:01:30.205072 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdgsh" event={"ID":"150a284f-86ca-495d-ad65-096b9213b93a","Type":"ContainerStarted","Data":"7a59f362253d04578a82c4395546aa95c458ec796b769ca9a58bae0b099cfdb9"} Jan 30 17:01:31 crc kubenswrapper[4712]: I0130 17:01:31.039650 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-kpb2d"] Jan 30 17:01:31 crc kubenswrapper[4712]: I0130 17:01:31.040780 4712 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kpb2d" Jan 30 17:01:31 crc kubenswrapper[4712]: I0130 17:01:31.046446 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 30 17:01:31 crc kubenswrapper[4712]: I0130 17:01:31.054565 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kpb2d"] Jan 30 17:01:31 crc kubenswrapper[4712]: I0130 17:01:31.168973 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05eaea30-d33b-4173-a1a3-d5a52ea53da9-catalog-content\") pod \"redhat-operators-kpb2d\" (UID: \"05eaea30-d33b-4173-a1a3-d5a52ea53da9\") " pod="openshift-marketplace/redhat-operators-kpb2d" Jan 30 17:01:31 crc kubenswrapper[4712]: I0130 17:01:31.169227 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qvfhz\" (UniqueName: \"kubernetes.io/projected/05eaea30-d33b-4173-a1a3-d5a52ea53da9-kube-api-access-qvfhz\") pod \"redhat-operators-kpb2d\" (UID: \"05eaea30-d33b-4173-a1a3-d5a52ea53da9\") " pod="openshift-marketplace/redhat-operators-kpb2d" Jan 30 17:01:31 crc kubenswrapper[4712]: I0130 17:01:31.169346 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05eaea30-d33b-4173-a1a3-d5a52ea53da9-utilities\") pod \"redhat-operators-kpb2d\" (UID: \"05eaea30-d33b-4173-a1a3-d5a52ea53da9\") " pod="openshift-marketplace/redhat-operators-kpb2d" Jan 30 17:01:31 crc kubenswrapper[4712]: I0130 17:01:31.210296 4712 generic.go:334] "Generic (PLEG): container finished" podID="7fe1585c-9bff-482c-a2b9-ccbb10a11300" containerID="c10042c410a108a8f864fe94f0be7bef3edd33cc7d93ae5bf548cf030318e444" exitCode=0 Jan 30 17:01:31 crc kubenswrapper[4712]: I0130 17:01:31.210477 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dnfsb" event={"ID":"7fe1585c-9bff-482c-a2b9-ccbb10a11300","Type":"ContainerDied","Data":"c10042c410a108a8f864fe94f0be7bef3edd33cc7d93ae5bf548cf030318e444"} Jan 30 17:01:31 crc kubenswrapper[4712]: I0130 17:01:31.235137 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fp9sk"] Jan 30 17:01:31 crc kubenswrapper[4712]: I0130 17:01:31.236172 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fp9sk" Jan 30 17:01:31 crc kubenswrapper[4712]: I0130 17:01:31.237556 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 30 17:01:31 crc kubenswrapper[4712]: I0130 17:01:31.250982 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fp9sk"] Jan 30 17:01:31 crc kubenswrapper[4712]: I0130 17:01:31.269906 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/240ba5c6-eb36-4da8-913a-f2b61d13293b-catalog-content\") pod \"community-operators-fp9sk\" (UID: \"240ba5c6-eb36-4da8-913a-f2b61d13293b\") " pod="openshift-marketplace/community-operators-fp9sk" Jan 30 17:01:31 crc kubenswrapper[4712]: I0130 17:01:31.270008 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh44d\" (UniqueName: \"kubernetes.io/projected/240ba5c6-eb36-4da8-913a-f2b61d13293b-kube-api-access-gh44d\") pod \"community-operators-fp9sk\" (UID: \"240ba5c6-eb36-4da8-913a-f2b61d13293b\") " pod="openshift-marketplace/community-operators-fp9sk" Jan 30 17:01:31 crc kubenswrapper[4712]: I0130 17:01:31.270053 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05eaea30-d33b-4173-a1a3-d5a52ea53da9-catalog-content\") pod \"redhat-operators-kpb2d\" (UID: \"05eaea30-d33b-4173-a1a3-d5a52ea53da9\") " pod="openshift-marketplace/redhat-operators-kpb2d" Jan 30 17:01:31 crc kubenswrapper[4712]: I0130 17:01:31.270101 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qvfhz\" (UniqueName: \"kubernetes.io/projected/05eaea30-d33b-4173-a1a3-d5a52ea53da9-kube-api-access-qvfhz\") pod \"redhat-operators-kpb2d\" (UID: \"05eaea30-d33b-4173-a1a3-d5a52ea53da9\") " pod="openshift-marketplace/redhat-operators-kpb2d" Jan 30 17:01:31 crc kubenswrapper[4712]: I0130 17:01:31.270138 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05eaea30-d33b-4173-a1a3-d5a52ea53da9-utilities\") pod \"redhat-operators-kpb2d\" (UID: \"05eaea30-d33b-4173-a1a3-d5a52ea53da9\") " pod="openshift-marketplace/redhat-operators-kpb2d" Jan 30 17:01:31 crc kubenswrapper[4712]: I0130 17:01:31.270164 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/240ba5c6-eb36-4da8-913a-f2b61d13293b-utilities\") pod \"community-operators-fp9sk\" (UID: \"240ba5c6-eb36-4da8-913a-f2b61d13293b\") " pod="openshift-marketplace/community-operators-fp9sk" Jan 30 17:01:31 crc kubenswrapper[4712]: I0130 17:01:31.270627 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05eaea30-d33b-4173-a1a3-d5a52ea53da9-catalog-content\") pod \"redhat-operators-kpb2d\" (UID: \"05eaea30-d33b-4173-a1a3-d5a52ea53da9\") " pod="openshift-marketplace/redhat-operators-kpb2d" Jan 30 17:01:31 crc kubenswrapper[4712]: I0130 17:01:31.271564 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05eaea30-d33b-4173-a1a3-d5a52ea53da9-utilities\") pod \"redhat-operators-kpb2d\" (UID: 
\"05eaea30-d33b-4173-a1a3-d5a52ea53da9\") " pod="openshift-marketplace/redhat-operators-kpb2d" Jan 30 17:01:31 crc kubenswrapper[4712]: I0130 17:01:31.291819 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qvfhz\" (UniqueName: \"kubernetes.io/projected/05eaea30-d33b-4173-a1a3-d5a52ea53da9-kube-api-access-qvfhz\") pod \"redhat-operators-kpb2d\" (UID: \"05eaea30-d33b-4173-a1a3-d5a52ea53da9\") " pod="openshift-marketplace/redhat-operators-kpb2d" Jan 30 17:01:31 crc kubenswrapper[4712]: I0130 17:01:31.362408 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kpb2d" Jan 30 17:01:31 crc kubenswrapper[4712]: I0130 17:01:31.371119 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/240ba5c6-eb36-4da8-913a-f2b61d13293b-catalog-content\") pod \"community-operators-fp9sk\" (UID: \"240ba5c6-eb36-4da8-913a-f2b61d13293b\") " pod="openshift-marketplace/community-operators-fp9sk" Jan 30 17:01:31 crc kubenswrapper[4712]: I0130 17:01:31.371210 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gh44d\" (UniqueName: \"kubernetes.io/projected/240ba5c6-eb36-4da8-913a-f2b61d13293b-kube-api-access-gh44d\") pod \"community-operators-fp9sk\" (UID: \"240ba5c6-eb36-4da8-913a-f2b61d13293b\") " pod="openshift-marketplace/community-operators-fp9sk" Jan 30 17:01:31 crc kubenswrapper[4712]: I0130 17:01:31.371276 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/240ba5c6-eb36-4da8-913a-f2b61d13293b-utilities\") pod \"community-operators-fp9sk\" (UID: \"240ba5c6-eb36-4da8-913a-f2b61d13293b\") " pod="openshift-marketplace/community-operators-fp9sk" Jan 30 17:01:31 crc kubenswrapper[4712]: I0130 17:01:31.371657 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/240ba5c6-eb36-4da8-913a-f2b61d13293b-catalog-content\") pod \"community-operators-fp9sk\" (UID: \"240ba5c6-eb36-4da8-913a-f2b61d13293b\") " pod="openshift-marketplace/community-operators-fp9sk" Jan 30 17:01:31 crc kubenswrapper[4712]: I0130 17:01:31.371697 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/240ba5c6-eb36-4da8-913a-f2b61d13293b-utilities\") pod \"community-operators-fp9sk\" (UID: \"240ba5c6-eb36-4da8-913a-f2b61d13293b\") " pod="openshift-marketplace/community-operators-fp9sk" Jan 30 17:01:31 crc kubenswrapper[4712]: I0130 17:01:31.390816 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gh44d\" (UniqueName: \"kubernetes.io/projected/240ba5c6-eb36-4da8-913a-f2b61d13293b-kube-api-access-gh44d\") pod \"community-operators-fp9sk\" (UID: \"240ba5c6-eb36-4da8-913a-f2b61d13293b\") " pod="openshift-marketplace/community-operators-fp9sk" Jan 30 17:01:31 crc kubenswrapper[4712]: I0130 17:01:31.549924 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fp9sk" Jan 30 17:01:31 crc kubenswrapper[4712]: I0130 17:01:31.763304 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fp9sk"] Jan 30 17:01:31 crc kubenswrapper[4712]: W0130 17:01:31.775289 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod240ba5c6_eb36_4da8_913a_f2b61d13293b.slice/crio-3d952a879b60ebe0d5b4698800cd6890ddb9ce8d5e4ec8060febe11cd90f170d WatchSource:0}: Error finding container 3d952a879b60ebe0d5b4698800cd6890ddb9ce8d5e4ec8060febe11cd90f170d: Status 404 returned error can't find the container with id 3d952a879b60ebe0d5b4698800cd6890ddb9ce8d5e4ec8060febe11cd90f170d Jan 30 17:01:31 crc kubenswrapper[4712]: I0130 17:01:31.779057 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kpb2d"] Jan 30 17:01:31 crc kubenswrapper[4712]: W0130 17:01:31.792952 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05eaea30_d33b_4173_a1a3_d5a52ea53da9.slice/crio-4710c4bbe76442ca042202827524611bd292f4dda6918f2090fc85001f4e0d0c WatchSource:0}: Error finding container 4710c4bbe76442ca042202827524611bd292f4dda6918f2090fc85001f4e0d0c: Status 404 returned error can't find the container with id 4710c4bbe76442ca042202827524611bd292f4dda6918f2090fc85001f4e0d0c Jan 30 17:01:32 crc kubenswrapper[4712]: I0130 17:01:32.224464 4712 generic.go:334] "Generic (PLEG): container finished" podID="240ba5c6-eb36-4da8-913a-f2b61d13293b" containerID="76fba079cca0dc332331e6aca88c19d7d98c98fa96fb33454bf8fc730b02a98c" exitCode=0 Jan 30 17:01:32 crc kubenswrapper[4712]: I0130 17:01:32.224867 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fp9sk" event={"ID":"240ba5c6-eb36-4da8-913a-f2b61d13293b","Type":"ContainerDied","Data":"76fba079cca0dc332331e6aca88c19d7d98c98fa96fb33454bf8fc730b02a98c"} Jan 30 17:01:32 crc kubenswrapper[4712]: I0130 17:01:32.224898 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fp9sk" event={"ID":"240ba5c6-eb36-4da8-913a-f2b61d13293b","Type":"ContainerStarted","Data":"3d952a879b60ebe0d5b4698800cd6890ddb9ce8d5e4ec8060febe11cd90f170d"} Jan 30 17:01:32 crc kubenswrapper[4712]: I0130 17:01:32.228931 4712 generic.go:334] "Generic (PLEG): container finished" podID="05eaea30-d33b-4173-a1a3-d5a52ea53da9" containerID="e1870d98edf84c1ea3ea3a1a8ae3e5ac81764991a56f98a2735934d679b914b2" exitCode=0 Jan 30 17:01:32 crc kubenswrapper[4712]: I0130 17:01:32.229027 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kpb2d" event={"ID":"05eaea30-d33b-4173-a1a3-d5a52ea53da9","Type":"ContainerDied","Data":"e1870d98edf84c1ea3ea3a1a8ae3e5ac81764991a56f98a2735934d679b914b2"} Jan 30 17:01:32 crc kubenswrapper[4712]: I0130 17:01:32.229053 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kpb2d" event={"ID":"05eaea30-d33b-4173-a1a3-d5a52ea53da9","Type":"ContainerStarted","Data":"4710c4bbe76442ca042202827524611bd292f4dda6918f2090fc85001f4e0d0c"} Jan 30 17:01:32 crc kubenswrapper[4712]: I0130 17:01:32.231877 4712 generic.go:334] "Generic (PLEG): container finished" podID="150a284f-86ca-495d-ad65-096b9213b93a" containerID="8247f4938601d2f2e93ee5f451671e8b9f0441c5b131ca07311d9d4f0611b851" exitCode=0 
Jan 30 17:01:32 crc kubenswrapper[4712]: I0130 17:01:32.231920 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdgsh" event={"ID":"150a284f-86ca-495d-ad65-096b9213b93a","Type":"ContainerDied","Data":"8247f4938601d2f2e93ee5f451671e8b9f0441c5b131ca07311d9d4f0611b851"}
Jan 30 17:01:33 crc kubenswrapper[4712]: I0130 17:01:33.238450 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-dnfsb" event={"ID":"7fe1585c-9bff-482c-a2b9-ccbb10a11300","Type":"ContainerStarted","Data":"ef0d334979c1605ffd6f9002a8eacc9b54efe3b702f4e8307fe62425cba5bc84"}
Jan 30 17:01:33 crc kubenswrapper[4712]: I0130 17:01:33.257658 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-dnfsb" podStartSLOduration=2.5018566509999998 podStartE2EDuration="5.257638101s" podCreationTimestamp="2026-01-30 17:01:28 +0000 UTC" firstStartedPulling="2026-01-30 17:01:30.201474741 +0000 UTC m=+427.108484210" lastFinishedPulling="2026-01-30 17:01:32.957256191 +0000 UTC m=+429.864265660" observedRunningTime="2026-01-30 17:01:33.256947024 +0000 UTC m=+430.163956503" watchObservedRunningTime="2026-01-30 17:01:33.257638101 +0000 UTC m=+430.164647570"
Jan 30 17:01:34 crc kubenswrapper[4712]: I0130 17:01:34.249276 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdgsh" event={"ID":"150a284f-86ca-495d-ad65-096b9213b93a","Type":"ContainerStarted","Data":"9465ea723a4d3ef5ebe8acfd87c07170d0e8c0d9f08fb67735f7cafdf8d529b1"}
Jan 30 17:01:34 crc kubenswrapper[4712]: I0130 17:01:34.258175 4712 generic.go:334] "Generic (PLEG): container finished" podID="240ba5c6-eb36-4da8-913a-f2b61d13293b" containerID="2495f089ab6bad9f29fdcd074f1decf7e803bd38a1188426dd20671187a92bf4" exitCode=0
Jan 30 17:01:34 crc kubenswrapper[4712]: I0130 17:01:34.258252 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fp9sk" event={"ID":"240ba5c6-eb36-4da8-913a-f2b61d13293b","Type":"ContainerDied","Data":"2495f089ab6bad9f29fdcd074f1decf7e803bd38a1188426dd20671187a92bf4"}
Jan 30 17:01:34 crc kubenswrapper[4712]: I0130 17:01:34.261571 4712 generic.go:334] "Generic (PLEG): container finished" podID="05eaea30-d33b-4173-a1a3-d5a52ea53da9" containerID="abc9c6b4c407bf05d8c0b7e048a4566e1b6f934ebdfc5684e76bda1ffbbbb53a" exitCode=0
Jan 30 17:01:34 crc kubenswrapper[4712]: I0130 17:01:34.262432 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kpb2d" event={"ID":"05eaea30-d33b-4173-a1a3-d5a52ea53da9","Type":"ContainerDied","Data":"abc9c6b4c407bf05d8c0b7e048a4566e1b6f934ebdfc5684e76bda1ffbbbb53a"}
Jan 30 17:01:34 crc kubenswrapper[4712]: I0130 17:01:34.296817 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pdgsh" podStartSLOduration=3.482879579 podStartE2EDuration="6.296786397s" podCreationTimestamp="2026-01-30 17:01:28 +0000 UTC" firstStartedPulling="2026-01-30 17:01:30.206248663 +0000 UTC m=+427.113258132" lastFinishedPulling="2026-01-30 17:01:33.020155481 +0000 UTC m=+429.927164950" observedRunningTime="2026-01-30 17:01:34.272514276 +0000 UTC m=+431.179523735" watchObservedRunningTime="2026-01-30 17:01:34.296786397 +0000 UTC m=+431.203795866"
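The pod_startup_latency_tracker entries report two figures: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick check of that arithmetic against the certified-operators-pdgsh timestamps logged above:

package main

import (
	"fmt"
	"time"
)

// Layout matching the timestamps in the log entries above.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	// Timestamps copied verbatim from the certified-operators-pdgsh entry.
	created := mustParse("2026-01-30 17:01:28 +0000 UTC")
	firstPull := mustParse("2026-01-30 17:01:30.206248663 +0000 UTC")
	lastPull := mustParse("2026-01-30 17:01:33.020155481 +0000 UTC")
	observed := mustParse("2026-01-30 17:01:34.296786397 +0000 UTC")

	e2e := observed.Sub(created)         // matches podStartE2EDuration="6.296786397s"
	slo := e2e - lastPull.Sub(firstPull) // matches podStartSLOduration=3.482879579
	fmt.Println("E2E:", e2e, "SLO:", slo)
}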
pod="openshift-marketplace/redhat-operators-kpb2d" event={"ID":"05eaea30-d33b-4173-a1a3-d5a52ea53da9","Type":"ContainerStarted","Data":"ed50832769d7bc5e6e03993d5fe9c8d1737e3fb93172cec693284d1a3a0f6fc8"} Jan 30 17:01:35 crc kubenswrapper[4712]: I0130 17:01:35.271751 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fp9sk" event={"ID":"240ba5c6-eb36-4da8-913a-f2b61d13293b","Type":"ContainerStarted","Data":"b028db128147be261d390fa20851e6f35d28475149d5f6359ba819400047dd75"} Jan 30 17:01:35 crc kubenswrapper[4712]: I0130 17:01:35.289136 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kpb2d" podStartSLOduration=1.684448765 podStartE2EDuration="4.289115555s" podCreationTimestamp="2026-01-30 17:01:31 +0000 UTC" firstStartedPulling="2026-01-30 17:01:32.229943028 +0000 UTC m=+429.136952507" lastFinishedPulling="2026-01-30 17:01:34.834609828 +0000 UTC m=+431.741619297" observedRunningTime="2026-01-30 17:01:35.285060541 +0000 UTC m=+432.192070030" watchObservedRunningTime="2026-01-30 17:01:35.289115555 +0000 UTC m=+432.196125024" Jan 30 17:01:35 crc kubenswrapper[4712]: I0130 17:01:35.305025 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fp9sk" podStartSLOduration=1.798580667 podStartE2EDuration="4.304985171s" podCreationTimestamp="2026-01-30 17:01:31 +0000 UTC" firstStartedPulling="2026-01-30 17:01:32.227064674 +0000 UTC m=+429.134074143" lastFinishedPulling="2026-01-30 17:01:34.733469178 +0000 UTC m=+431.640478647" observedRunningTime="2026-01-30 17:01:35.300479976 +0000 UTC m=+432.207489455" watchObservedRunningTime="2026-01-30 17:01:35.304985171 +0000 UTC m=+432.211994640" Jan 30 17:01:36 crc kubenswrapper[4712]: I0130 17:01:36.271567 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:01:36 crc kubenswrapper[4712]: I0130 17:01:36.271910 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:01:36 crc kubenswrapper[4712]: I0130 17:01:36.271957 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" Jan 30 17:01:36 crc kubenswrapper[4712]: I0130 17:01:36.272473 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d7b3cacd3abb88020219dba30c60b6f2729cab9aeaf86d8f857517015ac6486b"} pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 17:01:36 crc kubenswrapper[4712]: I0130 17:01:36.272532 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" containerID="cri-o://d7b3cacd3abb88020219dba30c60b6f2729cab9aeaf86d8f857517015ac6486b" 
gracePeriod=600 Jan 30 17:01:37 crc kubenswrapper[4712]: I0130 17:01:37.286828 4712 generic.go:334] "Generic (PLEG): container finished" podID="75ff6334-72a0-4748-bba6-0efb493c8033" containerID="d7b3cacd3abb88020219dba30c60b6f2729cab9aeaf86d8f857517015ac6486b" exitCode=0 Jan 30 17:01:37 crc kubenswrapper[4712]: I0130 17:01:37.287026 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerDied","Data":"d7b3cacd3abb88020219dba30c60b6f2729cab9aeaf86d8f857517015ac6486b"} Jan 30 17:01:37 crc kubenswrapper[4712]: I0130 17:01:37.287150 4712 scope.go:117] "RemoveContainer" containerID="08eedfacb6117293825bc73e3f6062abf53c687dc485a4ed6ec1b46c324424b5" Jan 30 17:01:38 crc kubenswrapper[4712]: I0130 17:01:38.294997 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerStarted","Data":"203c45c35096b1ae0165edae10567c9ba80cfb23bd72c48e4423c2b2e84eb646"} Jan 30 17:01:38 crc kubenswrapper[4712]: I0130 17:01:38.965419 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-dnfsb" Jan 30 17:01:38 crc kubenswrapper[4712]: I0130 17:01:38.965757 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-dnfsb" Jan 30 17:01:39 crc kubenswrapper[4712]: I0130 17:01:39.024023 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-dnfsb" Jan 30 17:01:39 crc kubenswrapper[4712]: I0130 17:01:39.157843 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pdgsh" Jan 30 17:01:39 crc kubenswrapper[4712]: I0130 17:01:39.157901 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pdgsh" Jan 30 17:01:39 crc kubenswrapper[4712]: I0130 17:01:39.198649 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pdgsh" Jan 30 17:01:39 crc kubenswrapper[4712]: I0130 17:01:39.337341 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pdgsh" Jan 30 17:01:39 crc kubenswrapper[4712]: I0130 17:01:39.396020 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-dnfsb" Jan 30 17:01:41 crc kubenswrapper[4712]: I0130 17:01:41.362904 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kpb2d" Jan 30 17:01:41 crc kubenswrapper[4712]: I0130 17:01:41.363226 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kpb2d" Jan 30 17:01:41 crc kubenswrapper[4712]: I0130 17:01:41.400355 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kpb2d" Jan 30 17:01:41 crc kubenswrapper[4712]: I0130 17:01:41.550160 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fp9sk" Jan 30 17:01:41 crc kubenswrapper[4712]: I0130 17:01:41.551623 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-fp9sk" Jan 30 17:01:41 crc kubenswrapper[4712]: I0130 17:01:41.616992 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fp9sk" Jan 30 17:01:42 crc kubenswrapper[4712]: I0130 17:01:42.354726 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-kpb2d" Jan 30 17:01:42 crc kubenswrapper[4712]: I0130 17:01:42.365587 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fp9sk" Jan 30 17:04:06 crc kubenswrapper[4712]: I0130 17:04:06.270994 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:04:06 crc kubenswrapper[4712]: I0130 17:04:06.271727 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:04:36 crc kubenswrapper[4712]: I0130 17:04:36.271093 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:04:36 crc kubenswrapper[4712]: I0130 17:04:36.272026 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:05:06 crc kubenswrapper[4712]: I0130 17:05:06.271166 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:05:06 crc kubenswrapper[4712]: I0130 17:05:06.271830 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:05:06 crc kubenswrapper[4712]: I0130 17:05:06.271881 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" Jan 30 17:05:06 crc kubenswrapper[4712]: I0130 17:05:06.272450 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"203c45c35096b1ae0165edae10567c9ba80cfb23bd72c48e4423c2b2e84eb646"} pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 17:05:06 crc kubenswrapper[4712]: I0130 
Jan 30 17:05:06 crc kubenswrapper[4712]: I0130 17:05:06.272509 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" containerID="cri-o://203c45c35096b1ae0165edae10567c9ba80cfb23bd72c48e4423c2b2e84eb646" gracePeriod=600
Jan 30 17:05:06 crc kubenswrapper[4712]: I0130 17:05:06.503519 4712 generic.go:334] "Generic (PLEG): container finished" podID="75ff6334-72a0-4748-bba6-0efb493c8033" containerID="203c45c35096b1ae0165edae10567c9ba80cfb23bd72c48e4423c2b2e84eb646" exitCode=0
Jan 30 17:05:06 crc kubenswrapper[4712]: I0130 17:05:06.503559 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerDied","Data":"203c45c35096b1ae0165edae10567c9ba80cfb23bd72c48e4423c2b2e84eb646"}
Jan 30 17:05:06 crc kubenswrapper[4712]: I0130 17:05:06.503592 4712 scope.go:117] "RemoveContainer" containerID="d7b3cacd3abb88020219dba30c60b6f2729cab9aeaf86d8f857517015ac6486b"
Jan 30 17:05:07 crc kubenswrapper[4712]: I0130 17:05:07.510281 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerStarted","Data":"2ccb7e72de28daa8c77382ed0d9f3fcdc643489cf9e4bd09a65cf85b38be2156"}
Jan 30 17:06:39 crc kubenswrapper[4712]: I0130 17:06:39.745875 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fszw7"]
Jan 30 17:06:39 crc kubenswrapper[4712]: I0130 17:06:39.747026 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-fszw7"
Jan 30 17:06:39 crc kubenswrapper[4712]: I0130 17:06:39.760162 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fszw7"]
Jan 30 17:06:39 crc kubenswrapper[4712]: I0130 17:06:39.898511 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eeb21027-903d-4899-8a05-7c9086a5c95e-trusted-ca\") pod \"image-registry-66df7c8f76-fszw7\" (UID: \"eeb21027-903d-4899-8a05-7c9086a5c95e\") " pod="openshift-image-registry/image-registry-66df7c8f76-fszw7"
Jan 30 17:06:39 crc kubenswrapper[4712]: I0130 17:06:39.898562 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/eeb21027-903d-4899-8a05-7c9086a5c95e-bound-sa-token\") pod \"image-registry-66df7c8f76-fszw7\" (UID: \"eeb21027-903d-4899-8a05-7c9086a5c95e\") " pod="openshift-image-registry/image-registry-66df7c8f76-fszw7"
Jan 30 17:06:39 crc kubenswrapper[4712]: I0130 17:06:39.898593 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-fszw7\" (UID: \"eeb21027-903d-4899-8a05-7c9086a5c95e\") " pod="openshift-image-registry/image-registry-66df7c8f76-fszw7"
Jan 30 17:06:39 crc kubenswrapper[4712]: I0130 17:06:39.898613 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName:
\"kubernetes.io/projected/eeb21027-903d-4899-8a05-7c9086a5c95e-registry-tls\") pod \"image-registry-66df7c8f76-fszw7\" (UID: \"eeb21027-903d-4899-8a05-7c9086a5c95e\") " pod="openshift-image-registry/image-registry-66df7c8f76-fszw7" Jan 30 17:06:39 crc kubenswrapper[4712]: I0130 17:06:39.898659 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqcks\" (UniqueName: \"kubernetes.io/projected/eeb21027-903d-4899-8a05-7c9086a5c95e-kube-api-access-vqcks\") pod \"image-registry-66df7c8f76-fszw7\" (UID: \"eeb21027-903d-4899-8a05-7c9086a5c95e\") " pod="openshift-image-registry/image-registry-66df7c8f76-fszw7" Jan 30 17:06:39 crc kubenswrapper[4712]: I0130 17:06:39.898784 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/eeb21027-903d-4899-8a05-7c9086a5c95e-registry-certificates\") pod \"image-registry-66df7c8f76-fszw7\" (UID: \"eeb21027-903d-4899-8a05-7c9086a5c95e\") " pod="openshift-image-registry/image-registry-66df7c8f76-fszw7" Jan 30 17:06:39 crc kubenswrapper[4712]: I0130 17:06:39.898918 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/eeb21027-903d-4899-8a05-7c9086a5c95e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fszw7\" (UID: \"eeb21027-903d-4899-8a05-7c9086a5c95e\") " pod="openshift-image-registry/image-registry-66df7c8f76-fszw7" Jan 30 17:06:39 crc kubenswrapper[4712]: I0130 17:06:39.898942 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/eeb21027-903d-4899-8a05-7c9086a5c95e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fszw7\" (UID: \"eeb21027-903d-4899-8a05-7c9086a5c95e\") " pod="openshift-image-registry/image-registry-66df7c8f76-fszw7" Jan 30 17:06:39 crc kubenswrapper[4712]: I0130 17:06:39.918150 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-fszw7\" (UID: \"eeb21027-903d-4899-8a05-7c9086a5c95e\") " pod="openshift-image-registry/image-registry-66df7c8f76-fszw7" Jan 30 17:06:40 crc kubenswrapper[4712]: I0130 17:06:39.999917 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/eeb21027-903d-4899-8a05-7c9086a5c95e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fszw7\" (UID: \"eeb21027-903d-4899-8a05-7c9086a5c95e\") " pod="openshift-image-registry/image-registry-66df7c8f76-fszw7" Jan 30 17:06:40 crc kubenswrapper[4712]: I0130 17:06:39.999958 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/eeb21027-903d-4899-8a05-7c9086a5c95e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fszw7\" (UID: \"eeb21027-903d-4899-8a05-7c9086a5c95e\") " pod="openshift-image-registry/image-registry-66df7c8f76-fszw7" Jan 30 17:06:40 crc kubenswrapper[4712]: I0130 17:06:39.999980 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eeb21027-903d-4899-8a05-7c9086a5c95e-trusted-ca\") pod 
\"image-registry-66df7c8f76-fszw7\" (UID: \"eeb21027-903d-4899-8a05-7c9086a5c95e\") " pod="openshift-image-registry/image-registry-66df7c8f76-fszw7" Jan 30 17:06:40 crc kubenswrapper[4712]: I0130 17:06:40.000002 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/eeb21027-903d-4899-8a05-7c9086a5c95e-bound-sa-token\") pod \"image-registry-66df7c8f76-fszw7\" (UID: \"eeb21027-903d-4899-8a05-7c9086a5c95e\") " pod="openshift-image-registry/image-registry-66df7c8f76-fszw7" Jan 30 17:06:40 crc kubenswrapper[4712]: I0130 17:06:40.000138 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/eeb21027-903d-4899-8a05-7c9086a5c95e-registry-tls\") pod \"image-registry-66df7c8f76-fszw7\" (UID: \"eeb21027-903d-4899-8a05-7c9086a5c95e\") " pod="openshift-image-registry/image-registry-66df7c8f76-fszw7" Jan 30 17:06:40 crc kubenswrapper[4712]: I0130 17:06:40.001084 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/eeb21027-903d-4899-8a05-7c9086a5c95e-ca-trust-extracted\") pod \"image-registry-66df7c8f76-fszw7\" (UID: \"eeb21027-903d-4899-8a05-7c9086a5c95e\") " pod="openshift-image-registry/image-registry-66df7c8f76-fszw7" Jan 30 17:06:40 crc kubenswrapper[4712]: I0130 17:06:40.001200 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/eeb21027-903d-4899-8a05-7c9086a5c95e-trusted-ca\") pod \"image-registry-66df7c8f76-fszw7\" (UID: \"eeb21027-903d-4899-8a05-7c9086a5c95e\") " pod="openshift-image-registry/image-registry-66df7c8f76-fszw7" Jan 30 17:06:40 crc kubenswrapper[4712]: I0130 17:06:40.001237 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqcks\" (UniqueName: \"kubernetes.io/projected/eeb21027-903d-4899-8a05-7c9086a5c95e-kube-api-access-vqcks\") pod \"image-registry-66df7c8f76-fszw7\" (UID: \"eeb21027-903d-4899-8a05-7c9086a5c95e\") " pod="openshift-image-registry/image-registry-66df7c8f76-fszw7" Jan 30 17:06:40 crc kubenswrapper[4712]: I0130 17:06:40.001489 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/eeb21027-903d-4899-8a05-7c9086a5c95e-registry-certificates\") pod \"image-registry-66df7c8f76-fszw7\" (UID: \"eeb21027-903d-4899-8a05-7c9086a5c95e\") " pod="openshift-image-registry/image-registry-66df7c8f76-fszw7" Jan 30 17:06:40 crc kubenswrapper[4712]: I0130 17:06:40.002331 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/eeb21027-903d-4899-8a05-7c9086a5c95e-registry-certificates\") pod \"image-registry-66df7c8f76-fszw7\" (UID: \"eeb21027-903d-4899-8a05-7c9086a5c95e\") " pod="openshift-image-registry/image-registry-66df7c8f76-fszw7" Jan 30 17:06:40 crc kubenswrapper[4712]: I0130 17:06:40.006623 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/eeb21027-903d-4899-8a05-7c9086a5c95e-installation-pull-secrets\") pod \"image-registry-66df7c8f76-fszw7\" (UID: \"eeb21027-903d-4899-8a05-7c9086a5c95e\") " pod="openshift-image-registry/image-registry-66df7c8f76-fszw7" Jan 30 17:06:40 crc kubenswrapper[4712]: I0130 17:06:40.006695 4712 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/eeb21027-903d-4899-8a05-7c9086a5c95e-registry-tls\") pod \"image-registry-66df7c8f76-fszw7\" (UID: \"eeb21027-903d-4899-8a05-7c9086a5c95e\") " pod="openshift-image-registry/image-registry-66df7c8f76-fszw7" Jan 30 17:06:40 crc kubenswrapper[4712]: I0130 17:06:40.018316 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/eeb21027-903d-4899-8a05-7c9086a5c95e-bound-sa-token\") pod \"image-registry-66df7c8f76-fszw7\" (UID: \"eeb21027-903d-4899-8a05-7c9086a5c95e\") " pod="openshift-image-registry/image-registry-66df7c8f76-fszw7" Jan 30 17:06:40 crc kubenswrapper[4712]: I0130 17:06:40.022521 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqcks\" (UniqueName: \"kubernetes.io/projected/eeb21027-903d-4899-8a05-7c9086a5c95e-kube-api-access-vqcks\") pod \"image-registry-66df7c8f76-fszw7\" (UID: \"eeb21027-903d-4899-8a05-7c9086a5c95e\") " pod="openshift-image-registry/image-registry-66df7c8f76-fszw7" Jan 30 17:06:40 crc kubenswrapper[4712]: I0130 17:06:40.062203 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-fszw7" Jan 30 17:06:40 crc kubenswrapper[4712]: I0130 17:06:40.494400 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-fszw7"] Jan 30 17:06:40 crc kubenswrapper[4712]: I0130 17:06:40.515957 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-fszw7" event={"ID":"eeb21027-903d-4899-8a05-7c9086a5c95e","Type":"ContainerStarted","Data":"2aa8aa8c02f1129485826818a69332fe12a24a46dea9dc4610f986c3d123ccb6"} Jan 30 17:06:41 crc kubenswrapper[4712]: I0130 17:06:41.527210 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-fszw7" event={"ID":"eeb21027-903d-4899-8a05-7c9086a5c95e","Type":"ContainerStarted","Data":"666a05ed2e2b39340aa8b91d3aefd21acfbabcd9ee1735055342a0af83a8d986"} Jan 30 17:06:41 crc kubenswrapper[4712]: I0130 17:06:41.527438 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-fszw7" Jan 30 17:06:41 crc kubenswrapper[4712]: I0130 17:06:41.547477 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-fszw7" podStartSLOduration=2.547455796 podStartE2EDuration="2.547455796s" podCreationTimestamp="2026-01-30 17:06:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:06:41.544133766 +0000 UTC m=+738.451143255" watchObservedRunningTime="2026-01-30 17:06:41.547455796 +0000 UTC m=+738.454465265" Jan 30 17:07:00 crc kubenswrapper[4712]: I0130 17:07:00.068756 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-fszw7" Jan 30 17:07:00 crc kubenswrapper[4712]: I0130 17:07:00.121176 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ddc2j"] Jan 30 17:07:01 crc kubenswrapper[4712]: I0130 17:07:01.812316 4712 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 30 17:07:06 crc 
kubenswrapper[4712]: I0130 17:07:06.270540 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:07:06 crc kubenswrapper[4712]: I0130 17:07:06.270894 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:07:25 crc kubenswrapper[4712]: I0130 17:07:25.183871 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" podUID="42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5" containerName="registry" containerID="cri-o://57dd66fd83a95962c131854f70a33f2e87c4c82d7d16377478aca51f6a2a0878" gracePeriod=30 Jan 30 17:07:25 crc kubenswrapper[4712]: I0130 17:07:25.529475 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 17:07:25 crc kubenswrapper[4712]: I0130 17:07:25.640150 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5-registry-certificates\") pod \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " Jan 30 17:07:25 crc kubenswrapper[4712]: I0130 17:07:25.640200 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5-bound-sa-token\") pod \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " Jan 30 17:07:25 crc kubenswrapper[4712]: I0130 17:07:25.640225 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5-registry-tls\") pod \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " Jan 30 17:07:25 crc kubenswrapper[4712]: I0130 17:07:25.640265 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5-ca-trust-extracted\") pod \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " Jan 30 17:07:25 crc kubenswrapper[4712]: I0130 17:07:25.640322 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gv8l5\" (UniqueName: \"kubernetes.io/projected/42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5-kube-api-access-gv8l5\") pod \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " Jan 30 17:07:25 crc kubenswrapper[4712]: I0130 17:07:25.640350 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5-trusted-ca\") pod \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " Jan 30 17:07:25 crc kubenswrapper[4712]: I0130 17:07:25.640443 4712 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " Jan 30 17:07:25 crc kubenswrapper[4712]: I0130 17:07:25.640471 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5-installation-pull-secrets\") pod \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\" (UID: \"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5\") " Jan 30 17:07:25 crc kubenswrapper[4712]: I0130 17:07:25.641370 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:07:25 crc kubenswrapper[4712]: I0130 17:07:25.641396 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:07:25 crc kubenswrapper[4712]: I0130 17:07:25.645994 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:07:25 crc kubenswrapper[4712]: I0130 17:07:25.646188 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:07:25 crc kubenswrapper[4712]: I0130 17:07:25.646301 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:07:25 crc kubenswrapper[4712]: I0130 17:07:25.650694 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5-kube-api-access-gv8l5" (OuterVolumeSpecName: "kube-api-access-gv8l5") pod "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5"). InnerVolumeSpecName "kube-api-access-gv8l5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:07:25 crc kubenswrapper[4712]: I0130 17:07:25.655316 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 30 17:07:25 crc kubenswrapper[4712]: I0130 17:07:25.670936 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5" (UID: "42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:07:25 crc kubenswrapper[4712]: I0130 17:07:25.741756 4712 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 30 17:07:25 crc kubenswrapper[4712]: I0130 17:07:25.741835 4712 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 30 17:07:25 crc kubenswrapper[4712]: I0130 17:07:25.741856 4712 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 30 17:07:25 crc kubenswrapper[4712]: I0130 17:07:25.741874 4712 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 30 17:07:25 crc kubenswrapper[4712]: I0130 17:07:25.741892 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gv8l5\" (UniqueName: \"kubernetes.io/projected/42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5-kube-api-access-gv8l5\") on node \"crc\" DevicePath \"\"" Jan 30 17:07:25 crc kubenswrapper[4712]: I0130 17:07:25.741908 4712 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 30 17:07:25 crc kubenswrapper[4712]: I0130 17:07:25.741924 4712 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 30 17:07:25 crc kubenswrapper[4712]: I0130 17:07:25.779521 4712 generic.go:334] "Generic (PLEG): container finished" podID="42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5" containerID="57dd66fd83a95962c131854f70a33f2e87c4c82d7d16377478aca51f6a2a0878" exitCode=0 Jan 30 17:07:25 crc kubenswrapper[4712]: I0130 17:07:25.779566 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" event={"ID":"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5","Type":"ContainerDied","Data":"57dd66fd83a95962c131854f70a33f2e87c4c82d7d16377478aca51f6a2a0878"} Jan 30 17:07:25 crc kubenswrapper[4712]: I0130 
17:07:25.779594 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" event={"ID":"42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5","Type":"ContainerDied","Data":"7e1a135128dcf0fed21bf5a5482d5b3bc720860f1e68eab1a0ac119e14adbf7e"} Jan 30 17:07:25 crc kubenswrapper[4712]: I0130 17:07:25.779613 4712 scope.go:117] "RemoveContainer" containerID="57dd66fd83a95962c131854f70a33f2e87c4c82d7d16377478aca51f6a2a0878" Jan 30 17:07:25 crc kubenswrapper[4712]: I0130 17:07:25.780820 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-ddc2j" Jan 30 17:07:25 crc kubenswrapper[4712]: I0130 17:07:25.807045 4712 scope.go:117] "RemoveContainer" containerID="57dd66fd83a95962c131854f70a33f2e87c4c82d7d16377478aca51f6a2a0878" Jan 30 17:07:25 crc kubenswrapper[4712]: E0130 17:07:25.809840 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57dd66fd83a95962c131854f70a33f2e87c4c82d7d16377478aca51f6a2a0878\": container with ID starting with 57dd66fd83a95962c131854f70a33f2e87c4c82d7d16377478aca51f6a2a0878 not found: ID does not exist" containerID="57dd66fd83a95962c131854f70a33f2e87c4c82d7d16377478aca51f6a2a0878" Jan 30 17:07:25 crc kubenswrapper[4712]: I0130 17:07:25.809962 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"57dd66fd83a95962c131854f70a33f2e87c4c82d7d16377478aca51f6a2a0878"} err="failed to get container status \"57dd66fd83a95962c131854f70a33f2e87c4c82d7d16377478aca51f6a2a0878\": rpc error: code = NotFound desc = could not find container \"57dd66fd83a95962c131854f70a33f2e87c4c82d7d16377478aca51f6a2a0878\": container with ID starting with 57dd66fd83a95962c131854f70a33f2e87c4c82d7d16377478aca51f6a2a0878 not found: ID does not exist" Jan 30 17:07:25 crc kubenswrapper[4712]: I0130 17:07:25.847984 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ddc2j"] Jan 30 17:07:25 crc kubenswrapper[4712]: I0130 17:07:25.858421 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-ddc2j"] Jan 30 17:07:27 crc kubenswrapper[4712]: I0130 17:07:27.808238 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5" path="/var/lib/kubelet/pods/42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5/volumes" Jan 30 17:07:36 crc kubenswrapper[4712]: I0130 17:07:36.271052 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:07:36 crc kubenswrapper[4712]: I0130 17:07:36.271533 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:07:39 crc kubenswrapper[4712]: I0130 17:07:39.821067 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-z55v5"] Jan 30 17:07:39 crc kubenswrapper[4712]: E0130 17:07:39.821623 4712 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5" containerName="registry" Jan 30 17:07:39 crc kubenswrapper[4712]: I0130 17:07:39.821636 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5" containerName="registry" Jan 30 17:07:39 crc kubenswrapper[4712]: I0130 17:07:39.821738 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="42e31bd2-5a3c-4c3b-83bf-8e85b9a0f3b5" containerName="registry" Jan 30 17:07:39 crc kubenswrapper[4712]: I0130 17:07:39.822199 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-z55v5" Jan 30 17:07:39 crc kubenswrapper[4712]: I0130 17:07:39.824104 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-9887h"] Jan 30 17:07:39 crc kubenswrapper[4712]: I0130 17:07:39.824979 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9887h" Jan 30 17:07:39 crc kubenswrapper[4712]: I0130 17:07:39.832316 4712 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-vwdr8" Jan 30 17:07:39 crc kubenswrapper[4712]: I0130 17:07:39.838706 4712 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-28sm9" Jan 30 17:07:39 crc kubenswrapper[4712]: I0130 17:07:39.839000 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 30 17:07:39 crc kubenswrapper[4712]: I0130 17:07:39.849598 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 30 17:07:39 crc kubenswrapper[4712]: I0130 17:07:39.855552 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-z55v5"] Jan 30 17:07:39 crc kubenswrapper[4712]: I0130 17:07:39.870290 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-2xxnh"] Jan 30 17:07:39 crc kubenswrapper[4712]: I0130 17:07:39.871134 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-2xxnh" Jan 30 17:07:39 crc kubenswrapper[4712]: I0130 17:07:39.873194 4712 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-6hsqx" Jan 30 17:07:39 crc kubenswrapper[4712]: I0130 17:07:39.897134 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-9887h"] Jan 30 17:07:39 crc kubenswrapper[4712]: I0130 17:07:39.918929 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-2xxnh"] Jan 30 17:07:40 crc kubenswrapper[4712]: I0130 17:07:40.008750 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trm55\" (UniqueName: \"kubernetes.io/projected/52a11f64-b007-48ea-943a-0dc87304b75d-kube-api-access-trm55\") pod \"cert-manager-cainjector-cf98fcc89-9887h\" (UID: \"52a11f64-b007-48ea-943a-0dc87304b75d\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-9887h" Jan 30 17:07:40 crc kubenswrapper[4712]: I0130 17:07:40.008875 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29xvc\" (UniqueName: \"kubernetes.io/projected/b8cf7519-5513-43e8-98bb-b81e8d7c65e3-kube-api-access-29xvc\") pod \"cert-manager-webhook-687f57d79b-2xxnh\" (UID: \"b8cf7519-5513-43e8-98bb-b81e8d7c65e3\") " pod="cert-manager/cert-manager-webhook-687f57d79b-2xxnh" Jan 30 17:07:40 crc kubenswrapper[4712]: I0130 17:07:40.008942 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqdtw\" (UniqueName: \"kubernetes.io/projected/e2596ab3-5e15-4f02-b27f-36787aa5ebd8-kube-api-access-jqdtw\") pod \"cert-manager-858654f9db-z55v5\" (UID: \"e2596ab3-5e15-4f02-b27f-36787aa5ebd8\") " pod="cert-manager/cert-manager-858654f9db-z55v5" Jan 30 17:07:40 crc kubenswrapper[4712]: I0130 17:07:40.110515 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqdtw\" (UniqueName: \"kubernetes.io/projected/e2596ab3-5e15-4f02-b27f-36787aa5ebd8-kube-api-access-jqdtw\") pod \"cert-manager-858654f9db-z55v5\" (UID: \"e2596ab3-5e15-4f02-b27f-36787aa5ebd8\") " pod="cert-manager/cert-manager-858654f9db-z55v5" Jan 30 17:07:40 crc kubenswrapper[4712]: I0130 17:07:40.110671 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-trm55\" (UniqueName: \"kubernetes.io/projected/52a11f64-b007-48ea-943a-0dc87304b75d-kube-api-access-trm55\") pod \"cert-manager-cainjector-cf98fcc89-9887h\" (UID: \"52a11f64-b007-48ea-943a-0dc87304b75d\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-9887h" Jan 30 17:07:40 crc kubenswrapper[4712]: I0130 17:07:40.110706 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29xvc\" (UniqueName: \"kubernetes.io/projected/b8cf7519-5513-43e8-98bb-b81e8d7c65e3-kube-api-access-29xvc\") pod \"cert-manager-webhook-687f57d79b-2xxnh\" (UID: \"b8cf7519-5513-43e8-98bb-b81e8d7c65e3\") " pod="cert-manager/cert-manager-webhook-687f57d79b-2xxnh" Jan 30 17:07:40 crc kubenswrapper[4712]: I0130 17:07:40.128576 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29xvc\" (UniqueName: \"kubernetes.io/projected/b8cf7519-5513-43e8-98bb-b81e8d7c65e3-kube-api-access-29xvc\") pod \"cert-manager-webhook-687f57d79b-2xxnh\" (UID: \"b8cf7519-5513-43e8-98bb-b81e8d7c65e3\") 
" pod="cert-manager/cert-manager-webhook-687f57d79b-2xxnh" Jan 30 17:07:40 crc kubenswrapper[4712]: I0130 17:07:40.130323 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqdtw\" (UniqueName: \"kubernetes.io/projected/e2596ab3-5e15-4f02-b27f-36787aa5ebd8-kube-api-access-jqdtw\") pod \"cert-manager-858654f9db-z55v5\" (UID: \"e2596ab3-5e15-4f02-b27f-36787aa5ebd8\") " pod="cert-manager/cert-manager-858654f9db-z55v5" Jan 30 17:07:40 crc kubenswrapper[4712]: I0130 17:07:40.141369 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-z55v5" Jan 30 17:07:40 crc kubenswrapper[4712]: I0130 17:07:40.143233 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-trm55\" (UniqueName: \"kubernetes.io/projected/52a11f64-b007-48ea-943a-0dc87304b75d-kube-api-access-trm55\") pod \"cert-manager-cainjector-cf98fcc89-9887h\" (UID: \"52a11f64-b007-48ea-943a-0dc87304b75d\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-9887h" Jan 30 17:07:40 crc kubenswrapper[4712]: I0130 17:07:40.151483 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9887h" Jan 30 17:07:40 crc kubenswrapper[4712]: I0130 17:07:40.194531 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-2xxnh" Jan 30 17:07:40 crc kubenswrapper[4712]: I0130 17:07:40.456573 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-2xxnh"] Jan 30 17:07:40 crc kubenswrapper[4712]: I0130 17:07:40.465137 4712 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 17:07:40 crc kubenswrapper[4712]: I0130 17:07:40.591341 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-z55v5"] Jan 30 17:07:40 crc kubenswrapper[4712]: W0130 17:07:40.595301 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2596ab3_5e15_4f02_b27f_36787aa5ebd8.slice/crio-5b1c30fd0d45547327149c33c31856bda76d52aa2c7cfc3eea763a3e115eb0f6 WatchSource:0}: Error finding container 5b1c30fd0d45547327149c33c31856bda76d52aa2c7cfc3eea763a3e115eb0f6: Status 404 returned error can't find the container with id 5b1c30fd0d45547327149c33c31856bda76d52aa2c7cfc3eea763a3e115eb0f6 Jan 30 17:07:40 crc kubenswrapper[4712]: I0130 17:07:40.599237 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-9887h"] Jan 30 17:07:40 crc kubenswrapper[4712]: W0130 17:07:40.601877 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod52a11f64_b007_48ea_943a_0dc87304b75d.slice/crio-1b27113afe6e54bb020bef8a9cd3a6ce1170525739deb8b113b89425d8391ef6 WatchSource:0}: Error finding container 1b27113afe6e54bb020bef8a9cd3a6ce1170525739deb8b113b89425d8391ef6: Status 404 returned error can't find the container with id 1b27113afe6e54bb020bef8a9cd3a6ce1170525739deb8b113b89425d8391ef6 Jan 30 17:07:40 crc kubenswrapper[4712]: I0130 17:07:40.865438 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9887h" 
event={"ID":"52a11f64-b007-48ea-943a-0dc87304b75d","Type":"ContainerStarted","Data":"1b27113afe6e54bb020bef8a9cd3a6ce1170525739deb8b113b89425d8391ef6"} Jan 30 17:07:40 crc kubenswrapper[4712]: I0130 17:07:40.866528 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-2xxnh" event={"ID":"b8cf7519-5513-43e8-98bb-b81e8d7c65e3","Type":"ContainerStarted","Data":"369314abbe0231e9bcdaf3a9f28eec0eee63dec056041c727712b5929d369974"} Jan 30 17:07:40 crc kubenswrapper[4712]: I0130 17:07:40.867999 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-z55v5" event={"ID":"e2596ab3-5e15-4f02-b27f-36787aa5ebd8","Type":"ContainerStarted","Data":"5b1c30fd0d45547327149c33c31856bda76d52aa2c7cfc3eea763a3e115eb0f6"} Jan 30 17:07:45 crc kubenswrapper[4712]: I0130 17:07:45.895125 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-2xxnh" event={"ID":"b8cf7519-5513-43e8-98bb-b81e8d7c65e3","Type":"ContainerStarted","Data":"0b41665e227e26d19f86b22e78a9ded3e779d852ece66a9ca0daa628b2f6978d"} Jan 30 17:07:45 crc kubenswrapper[4712]: I0130 17:07:45.896195 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-2xxnh" Jan 30 17:07:45 crc kubenswrapper[4712]: I0130 17:07:45.896279 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-z55v5" event={"ID":"e2596ab3-5e15-4f02-b27f-36787aa5ebd8","Type":"ContainerStarted","Data":"208684ab4c63d0d7166838ebc46c74c75fe92e2c38f31579751c666a58dc1ff4"} Jan 30 17:07:45 crc kubenswrapper[4712]: I0130 17:07:45.897462 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9887h" event={"ID":"52a11f64-b007-48ea-943a-0dc87304b75d","Type":"ContainerStarted","Data":"16af4de7493274f62d7db1400d86569c2538190eeaf7a44e0b7a64f17448ec3d"} Jan 30 17:07:45 crc kubenswrapper[4712]: I0130 17:07:45.911352 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-2xxnh" podStartSLOduration=1.807027967 podStartE2EDuration="6.911335088s" podCreationTimestamp="2026-01-30 17:07:39 +0000 UTC" firstStartedPulling="2026-01-30 17:07:40.464960174 +0000 UTC m=+797.371969643" lastFinishedPulling="2026-01-30 17:07:45.569267295 +0000 UTC m=+802.476276764" observedRunningTime="2026-01-30 17:07:45.908684469 +0000 UTC m=+802.815693938" watchObservedRunningTime="2026-01-30 17:07:45.911335088 +0000 UTC m=+802.818344557" Jan 30 17:07:45 crc kubenswrapper[4712]: I0130 17:07:45.924418 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-9887h" podStartSLOduration=2.089782986 podStartE2EDuration="6.924389583s" podCreationTimestamp="2026-01-30 17:07:39 +0000 UTC" firstStartedPulling="2026-01-30 17:07:40.604305847 +0000 UTC m=+797.511315316" lastFinishedPulling="2026-01-30 17:07:45.438912454 +0000 UTC m=+802.345921913" observedRunningTime="2026-01-30 17:07:45.922618308 +0000 UTC m=+802.829627777" watchObservedRunningTime="2026-01-30 17:07:45.924389583 +0000 UTC m=+802.831399052" Jan 30 17:07:45 crc kubenswrapper[4712]: I0130 17:07:45.947916 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-z55v5" podStartSLOduration=2.10630174 podStartE2EDuration="6.947898037s" podCreationTimestamp="2026-01-30 17:07:39 +0000 UTC" 
firstStartedPulling="2026-01-30 17:07:40.597268606 +0000 UTC m=+797.504278075" lastFinishedPulling="2026-01-30 17:07:45.438864913 +0000 UTC m=+802.345874372" observedRunningTime="2026-01-30 17:07:45.945023574 +0000 UTC m=+802.852033043" watchObservedRunningTime="2026-01-30 17:07:45.947898037 +0000 UTC m=+802.854907506" Jan 30 17:07:48 crc kubenswrapper[4712]: I0130 17:07:48.904948 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-228xs"] Jan 30 17:07:48 crc kubenswrapper[4712]: I0130 17:07:48.905721 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="ovn-controller" containerID="cri-o://c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517" gracePeriod=30 Jan 30 17:07:48 crc kubenswrapper[4712]: I0130 17:07:48.905847 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="nbdb" containerID="cri-o://155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e" gracePeriod=30 Jan 30 17:07:48 crc kubenswrapper[4712]: I0130 17:07:48.905882 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="ovn-acl-logging" containerID="cri-o://b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e" gracePeriod=30 Jan 30 17:07:48 crc kubenswrapper[4712]: I0130 17:07:48.905967 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="kube-rbac-proxy-node" containerID="cri-o://0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96" gracePeriod=30 Jan 30 17:07:48 crc kubenswrapper[4712]: I0130 17:07:48.905953 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637" gracePeriod=30 Jan 30 17:07:48 crc kubenswrapper[4712]: I0130 17:07:48.906120 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="sbdb" containerID="cri-o://f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af" gracePeriod=30 Jan 30 17:07:48 crc kubenswrapper[4712]: I0130 17:07:48.906168 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="northd" containerID="cri-o://68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098" gracePeriod=30 Jan 30 17:07:48 crc kubenswrapper[4712]: I0130 17:07:48.944063 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="ovnkube-controller" containerID="cri-o://7c646b9dd5f5e5ac69308e3a6eb8f53ffda989232f3787fb92f73a3dd5aafc19" gracePeriod=30 Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.244473 4712 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-228xs_93651476-fd00-4a9e-934a-73537f1d103e/ovnkube-controller/3.log" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.246629 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-228xs_93651476-fd00-4a9e-934a-73537f1d103e/ovn-acl-logging/0.log" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.247276 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-228xs_93651476-fd00-4a9e-934a-73537f1d103e/ovn-controller/0.log" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.247830 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.307353 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-tkqgt"] Jan 30 17:07:49 crc kubenswrapper[4712]: E0130 17:07:49.307629 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="ovnkube-controller" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.307644 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="ovnkube-controller" Jan 30 17:07:49 crc kubenswrapper[4712]: E0130 17:07:49.307657 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="sbdb" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.307665 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="sbdb" Jan 30 17:07:49 crc kubenswrapper[4712]: E0130 17:07:49.307672 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="ovnkube-controller" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.307683 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="ovnkube-controller" Jan 30 17:07:49 crc kubenswrapper[4712]: E0130 17:07:49.307693 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="ovnkube-controller" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.307701 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="ovnkube-controller" Jan 30 17:07:49 crc kubenswrapper[4712]: E0130 17:07:49.307712 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="kube-rbac-proxy-node" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.307719 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="kube-rbac-proxy-node" Jan 30 17:07:49 crc kubenswrapper[4712]: E0130 17:07:49.307733 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="northd" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.307740 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="northd" Jan 30 17:07:49 crc kubenswrapper[4712]: E0130 17:07:49.307750 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 17:07:49 
crc kubenswrapper[4712]: I0130 17:07:49.307757 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 17:07:49 crc kubenswrapper[4712]: E0130 17:07:49.307767 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="ovn-acl-logging" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.307774 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="ovn-acl-logging" Jan 30 17:07:49 crc kubenswrapper[4712]: E0130 17:07:49.307784 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="nbdb" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.307812 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="nbdb" Jan 30 17:07:49 crc kubenswrapper[4712]: E0130 17:07:49.307823 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="ovnkube-controller" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.307831 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="ovnkube-controller" Jan 30 17:07:49 crc kubenswrapper[4712]: E0130 17:07:49.307840 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="ovn-controller" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.307847 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="ovn-controller" Jan 30 17:07:49 crc kubenswrapper[4712]: E0130 17:07:49.307860 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="kubecfg-setup" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.307867 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="kubecfg-setup" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.307979 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="kube-rbac-proxy-node" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.307993 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="nbdb" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.308003 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="kube-rbac-proxy-ovn-metrics" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.308011 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="ovnkube-controller" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.308020 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="ovnkube-controller" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.308029 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="ovnkube-controller" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.308041 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="93651476-fd00-4a9e-934a-73537f1d103e" 
containerName="ovn-acl-logging" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.308048 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="northd" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.308056 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="ovn-controller" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.308065 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="sbdb" Jan 30 17:07:49 crc kubenswrapper[4712]: E0130 17:07:49.308179 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="ovnkube-controller" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.308188 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="ovnkube-controller" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.308305 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="ovnkube-controller" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.308538 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="93651476-fd00-4a9e-934a-73537f1d103e" containerName="ovnkube-controller" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.310337 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.325681 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-host-slash\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.325725 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-node-log\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.325742 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-run-ovn\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.325763 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-etc-openvswitch\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.325783 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-systemd-units\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.325826 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.325917 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-host-kubelet\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.325993 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-host-run-ovn-kubernetes\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.326020 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9cdb51da-87ec-417f-903e-1e238fea01ed-ovnkube-config\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.326038 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-host-run-netns\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.326062 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9cdb51da-87ec-417f-903e-1e238fea01ed-ovnkube-script-lib\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.326083 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9cdb51da-87ec-417f-903e-1e238fea01ed-ovn-node-metrics-cert\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.326107 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-run-systemd\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.326128 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-host-cni-bin\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.326151 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-log-socket\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.326180 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9cdb51da-87ec-417f-903e-1e238fea01ed-env-overrides\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.326259 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-var-lib-openvswitch\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.326283 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-host-cni-netd\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.326307 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-run-openvswitch\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.326330 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwzf7\" (UniqueName: \"kubernetes.io/projected/9cdb51da-87ec-417f-903e-1e238fea01ed-kube-api-access-lwzf7\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.426763 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-host-kubelet\") pod \"93651476-fd00-4a9e-934a-73537f1d103e\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427015 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/93651476-fd00-4a9e-934a-73537f1d103e-ovnkube-script-lib\") pod \"93651476-fd00-4a9e-934a-73537f1d103e\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.426916 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "93651476-fd00-4a9e-934a-73537f1d103e" (UID: "93651476-fd00-4a9e-934a-73537f1d103e"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427087 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-host-slash\") pod \"93651476-fd00-4a9e-934a-73537f1d103e\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427163 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/93651476-fd00-4a9e-934a-73537f1d103e-env-overrides\") pod \"93651476-fd00-4a9e-934a-73537f1d103e\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427197 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/93651476-fd00-4a9e-934a-73537f1d103e-ovnkube-config\") pod \"93651476-fd00-4a9e-934a-73537f1d103e\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427212 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-host-cni-bin\") pod \"93651476-fd00-4a9e-934a-73537f1d103e\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427232 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-host-run-netns\") pod \"93651476-fd00-4a9e-934a-73537f1d103e\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427258 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-run-ovn\") pod \"93651476-fd00-4a9e-934a-73537f1d103e\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427291 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-host-var-lib-cni-networks-ovn-kubernetes\") pod \"93651476-fd00-4a9e-934a-73537f1d103e\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427320 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-var-lib-openvswitch\") pod \"93651476-fd00-4a9e-934a-73537f1d103e\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427317 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "93651476-fd00-4a9e-934a-73537f1d103e" (UID: "93651476-fd00-4a9e-934a-73537f1d103e"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427338 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/93651476-fd00-4a9e-934a-73537f1d103e-ovn-node-metrics-cert\") pod \"93651476-fd00-4a9e-934a-73537f1d103e\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427354 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-node-log\") pod \"93651476-fd00-4a9e-934a-73537f1d103e\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427369 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "93651476-fd00-4a9e-934a-73537f1d103e" (UID: "93651476-fd00-4a9e-934a-73537f1d103e"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427392 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rxzgm\" (UniqueName: \"kubernetes.io/projected/93651476-fd00-4a9e-934a-73537f1d103e-kube-api-access-rxzgm\") pod \"93651476-fd00-4a9e-934a-73537f1d103e\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427406 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "93651476-fd00-4a9e-934a-73537f1d103e" (UID: "93651476-fd00-4a9e-934a-73537f1d103e"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427414 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-systemd-units\") pod \"93651476-fd00-4a9e-934a-73537f1d103e\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427432 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "93651476-fd00-4a9e-934a-73537f1d103e" (UID: "93651476-fd00-4a9e-934a-73537f1d103e"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427458 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-run-systemd\") pod \"93651476-fd00-4a9e-934a-73537f1d103e\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427484 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-etc-openvswitch\") pod \"93651476-fd00-4a9e-934a-73537f1d103e\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427510 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-log-socket\") pod \"93651476-fd00-4a9e-934a-73537f1d103e\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427530 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-host-cni-netd\") pod \"93651476-fd00-4a9e-934a-73537f1d103e\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427554 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-run-openvswitch\") pod \"93651476-fd00-4a9e-934a-73537f1d103e\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427477 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "93651476-fd00-4a9e-934a-73537f1d103e" (UID: "93651476-fd00-4a9e-934a-73537f1d103e"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427584 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-host-run-ovn-kubernetes\") pod \"93651476-fd00-4a9e-934a-73537f1d103e\" (UID: \"93651476-fd00-4a9e-934a-73537f1d103e\") " Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427685 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-host-slash\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427723 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-node-log\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427747 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-run-ovn\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427764 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-host-slash\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427779 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-etc-openvswitch\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427835 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-etc-openvswitch\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427843 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-systemd-units\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427587 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93651476-fd00-4a9e-934a-73537f1d103e-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "93651476-fd00-4a9e-934a-73537f1d103e" (UID: "93651476-fd00-4a9e-934a-73537f1d103e"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427599 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "93651476-fd00-4a9e-934a-73537f1d103e" (UID: "93651476-fd00-4a9e-934a-73537f1d103e"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427851 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-node-log\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427889 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-run-ovn\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427614 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-log-socket" (OuterVolumeSpecName: "log-socket") pod "93651476-fd00-4a9e-934a-73537f1d103e" (UID: "93651476-fd00-4a9e-934a-73537f1d103e"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427888 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427570 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93651476-fd00-4a9e-934a-73537f1d103e-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "93651476-fd00-4a9e-934a-73537f1d103e" (UID: "93651476-fd00-4a9e-934a-73537f1d103e"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427916 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427655 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93651476-fd00-4a9e-934a-73537f1d103e-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "93651476-fd00-4a9e-934a-73537f1d103e" (UID: "93651476-fd00-4a9e-934a-73537f1d103e"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427931 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-host-kubelet\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427955 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-host-kubelet\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427962 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-systemd-units\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427974 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-host-run-ovn-kubernetes\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.428012 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9cdb51da-87ec-417f-903e-1e238fea01ed-ovnkube-config\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.428037 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-host-run-netns\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.428011 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-host-run-ovn-kubernetes\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.428064 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9cdb51da-87ec-417f-903e-1e238fea01ed-ovnkube-script-lib\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.428087 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9cdb51da-87ec-417f-903e-1e238fea01ed-ovn-node-metrics-cert\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc 
kubenswrapper[4712]: I0130 17:07:49.428111 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-host-run-netns\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.428115 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-run-systemd\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.428148 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-run-systemd\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427643 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-node-log" (OuterVolumeSpecName: "node-log") pod "93651476-fd00-4a9e-934a-73537f1d103e" (UID: "93651476-fd00-4a9e-934a-73537f1d103e"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427691 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "93651476-fd00-4a9e-934a-73537f1d103e" (UID: "93651476-fd00-4a9e-934a-73537f1d103e"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427701 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "93651476-fd00-4a9e-934a-73537f1d103e" (UID: "93651476-fd00-4a9e-934a-73537f1d103e"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427716 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "93651476-fd00-4a9e-934a-73537f1d103e" (UID: "93651476-fd00-4a9e-934a-73537f1d103e"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.427733 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "93651476-fd00-4a9e-934a-73537f1d103e" (UID: "93651476-fd00-4a9e-934a-73537f1d103e"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.428195 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-host-cni-bin\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.428238 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-log-socket\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.428283 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-log-socket\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.428305 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-host-cni-bin\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.428370 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9cdb51da-87ec-417f-903e-1e238fea01ed-env-overrides\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.428471 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-var-lib-openvswitch\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.428494 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-host-cni-netd\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.428525 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-run-openvswitch\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.428558 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwzf7\" (UniqueName: \"kubernetes.io/projected/9cdb51da-87ec-417f-903e-1e238fea01ed-kube-api-access-lwzf7\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.428695 4712 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-etc-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.428708 4712 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-log-socket\") on node \"crc\" DevicePath \"\""
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.428718 4712 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-host-cni-netd\") on node \"crc\" DevicePath \"\""
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.428727 4712 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-run-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.428740 4712 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.428751 4712 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-host-kubelet\") on node \"crc\" DevicePath \"\""
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.428762 4712 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/93651476-fd00-4a9e-934a-73537f1d103e-ovnkube-script-lib\") on node \"crc\" DevicePath \"\""
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.428771 4712 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/93651476-fd00-4a9e-934a-73537f1d103e-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.428782 4712 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/93651476-fd00-4a9e-934a-73537f1d103e-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.428810 4712 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-host-cni-bin\") on node \"crc\" DevicePath \"\""
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.428821 4712 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-host-run-netns\") on node \"crc\" DevicePath \"\""
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.428831 4712 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.428842 4712 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\""
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.428852 4712 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.428853 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9cdb51da-87ec-417f-903e-1e238fea01ed-ovnkube-config\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt"
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.428916 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9cdb51da-87ec-417f-903e-1e238fea01ed-ovnkube-script-lib\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt"
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.428863 4712 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-node-log\") on node \"crc\" DevicePath \"\""
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.428974 4712 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-systemd-units\") on node \"crc\" DevicePath \"\""
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.429005 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-host-cni-netd\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt"
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.429033 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-var-lib-openvswitch\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt"
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.429059 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9cdb51da-87ec-417f-903e-1e238fea01ed-run-openvswitch\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt"
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.429457 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9cdb51da-87ec-417f-903e-1e238fea01ed-env-overrides\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt"
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.429982 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-host-slash" (OuterVolumeSpecName: "host-slash") pod "93651476-fd00-4a9e-934a-73537f1d103e" (UID: "93651476-fd00-4a9e-934a-73537f1d103e"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.432346 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9cdb51da-87ec-417f-903e-1e238fea01ed-ovn-node-metrics-cert\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt"
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.432972 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93651476-fd00-4a9e-934a-73537f1d103e-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "93651476-fd00-4a9e-934a-73537f1d103e" (UID: "93651476-fd00-4a9e-934a-73537f1d103e"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.433369 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93651476-fd00-4a9e-934a-73537f1d103e-kube-api-access-rxzgm" (OuterVolumeSpecName: "kube-api-access-rxzgm") pod "93651476-fd00-4a9e-934a-73537f1d103e" (UID: "93651476-fd00-4a9e-934a-73537f1d103e"). InnerVolumeSpecName "kube-api-access-rxzgm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.442145 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "93651476-fd00-4a9e-934a-73537f1d103e" (UID: "93651476-fd00-4a9e-934a-73537f1d103e"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.449089 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwzf7\" (UniqueName: \"kubernetes.io/projected/9cdb51da-87ec-417f-903e-1e238fea01ed-kube-api-access-lwzf7\") pod \"ovnkube-node-tkqgt\" (UID: \"9cdb51da-87ec-417f-903e-1e238fea01ed\") " pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt"
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.530157 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rxzgm\" (UniqueName: \"kubernetes.io/projected/93651476-fd00-4a9e-934a-73537f1d103e-kube-api-access-rxzgm\") on node \"crc\" DevicePath \"\""
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.530201 4712 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-run-systemd\") on node \"crc\" DevicePath \"\""
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.530221 4712 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/93651476-fd00-4a9e-934a-73537f1d103e-host-slash\") on node \"crc\" DevicePath \"\""
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.530234 4712 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/93651476-fd00-4a9e-934a-73537f1d103e-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.623496 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt"
Jan 30 17:07:49 crc kubenswrapper[4712]: W0130 17:07:49.672997 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9cdb51da_87ec_417f_903e_1e238fea01ed.slice/crio-f30debdc2f13f3b30c5eb213c95b6795d388ecc6be4221544ded5f0eba9b8b47 WatchSource:0}: Error finding container f30debdc2f13f3b30c5eb213c95b6795d388ecc6be4221544ded5f0eba9b8b47: Status 404 returned error can't find the container with id f30debdc2f13f3b30c5eb213c95b6795d388ecc6be4221544ded5f0eba9b8b47
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.924338 4712 generic.go:334] "Generic (PLEG): container finished" podID="9cdb51da-87ec-417f-903e-1e238fea01ed" containerID="a7fac1ccff1c114aa136d88b0f5b04ce0afebe879bd3b97912b368261c6a0acd" exitCode=0
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.924442 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" event={"ID":"9cdb51da-87ec-417f-903e-1e238fea01ed","Type":"ContainerDied","Data":"a7fac1ccff1c114aa136d88b0f5b04ce0afebe879bd3b97912b368261c6a0acd"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.924864 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" event={"ID":"9cdb51da-87ec-417f-903e-1e238fea01ed","Type":"ContainerStarted","Data":"f30debdc2f13f3b30c5eb213c95b6795d388ecc6be4221544ded5f0eba9b8b47"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.926755 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9vnxv_dcd71c7c-942c-4c29-969e-45d946f356c8/kube-multus/2.log"
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.927299 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-9vnxv_dcd71c7c-942c-4c29-969e-45d946f356c8/kube-multus/1.log"
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.927351 4712 generic.go:334] "Generic (PLEG): container finished" podID="dcd71c7c-942c-4c29-969e-45d946f356c8" containerID="58d9e6895e721bf0d4cfb7b391d4273fbf98d44cab746f53b51c0dab20ad4c4b" exitCode=2
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.927382 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9vnxv" event={"ID":"dcd71c7c-942c-4c29-969e-45d946f356c8","Type":"ContainerDied","Data":"58d9e6895e721bf0d4cfb7b391d4273fbf98d44cab746f53b51c0dab20ad4c4b"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.927427 4712 scope.go:117] "RemoveContainer" containerID="383cb9db140e32a25c872a2355da98c9b6e39191bc10d76b4420e18580464c00"
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.928025 4712 scope.go:117] "RemoveContainer" containerID="58d9e6895e721bf0d4cfb7b391d4273fbf98d44cab746f53b51c0dab20ad4c4b"
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.935114 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-228xs_93651476-fd00-4a9e-934a-73537f1d103e/ovnkube-controller/3.log"
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.945333 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-228xs_93651476-fd00-4a9e-934a-73537f1d103e/ovn-acl-logging/0.log"
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.945900 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-228xs_93651476-fd00-4a9e-934a-73537f1d103e/ovn-controller/0.log"
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946307 4712 generic.go:334] "Generic (PLEG): container finished" podID="93651476-fd00-4a9e-934a-73537f1d103e" containerID="7c646b9dd5f5e5ac69308e3a6eb8f53ffda989232f3787fb92f73a3dd5aafc19" exitCode=0
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946331 4712 generic.go:334] "Generic (PLEG): container finished" podID="93651476-fd00-4a9e-934a-73537f1d103e" containerID="f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af" exitCode=0
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946338 4712 generic.go:334] "Generic (PLEG): container finished" podID="93651476-fd00-4a9e-934a-73537f1d103e" containerID="155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e" exitCode=0
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946345 4712 generic.go:334] "Generic (PLEG): container finished" podID="93651476-fd00-4a9e-934a-73537f1d103e" containerID="68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098" exitCode=0
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946352 4712 generic.go:334] "Generic (PLEG): container finished" podID="93651476-fd00-4a9e-934a-73537f1d103e" containerID="3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637" exitCode=0
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946358 4712 generic.go:334] "Generic (PLEG): container finished" podID="93651476-fd00-4a9e-934a-73537f1d103e" containerID="0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96" exitCode=0
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946364 4712 generic.go:334] "Generic (PLEG): container finished" podID="93651476-fd00-4a9e-934a-73537f1d103e" containerID="b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e" exitCode=143
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946370 4712 generic.go:334] "Generic (PLEG): container finished" podID="93651476-fd00-4a9e-934a-73537f1d103e" containerID="c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517" exitCode=143
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946390 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" event={"ID":"93651476-fd00-4a9e-934a-73537f1d103e","Type":"ContainerDied","Data":"7c646b9dd5f5e5ac69308e3a6eb8f53ffda989232f3787fb92f73a3dd5aafc19"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946414 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" event={"ID":"93651476-fd00-4a9e-934a-73537f1d103e","Type":"ContainerDied","Data":"f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946423 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" event={"ID":"93651476-fd00-4a9e-934a-73537f1d103e","Type":"ContainerDied","Data":"155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946432 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" event={"ID":"93651476-fd00-4a9e-934a-73537f1d103e","Type":"ContainerDied","Data":"68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946441 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" event={"ID":"93651476-fd00-4a9e-934a-73537f1d103e","Type":"ContainerDied","Data":"3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946449 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" event={"ID":"93651476-fd00-4a9e-934a-73537f1d103e","Type":"ContainerDied","Data":"0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946460 4712 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7c646b9dd5f5e5ac69308e3a6eb8f53ffda989232f3787fb92f73a3dd5aafc19"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946483 4712 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946488 4712 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946494 4712 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946499 4712 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946504 4712 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946509 4712 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946514 4712 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946519 4712 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946524 4712 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946531 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" event={"ID":"93651476-fd00-4a9e-934a-73537f1d103e","Type":"ContainerDied","Data":"b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946538 4712 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7c646b9dd5f5e5ac69308e3a6eb8f53ffda989232f3787fb92f73a3dd5aafc19"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946543 4712 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946548 4712 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946554 4712 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946560 4712 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946566 4712 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946571 4712 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946575 4712 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946581 4712 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946585 4712 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946592 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" event={"ID":"93651476-fd00-4a9e-934a-73537f1d103e","Type":"ContainerDied","Data":"c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946603 4712 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7c646b9dd5f5e5ac69308e3a6eb8f53ffda989232f3787fb92f73a3dd5aafc19"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946609 4712 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946614 4712 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946620 4712 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946625 4712 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946630 4712 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946635 4712 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946640 4712 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946645 4712 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946651 4712 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946658 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-228xs" event={"ID":"93651476-fd00-4a9e-934a-73537f1d103e","Type":"ContainerDied","Data":"da7cda9c930e78f721bfcb83b8fcf25c1e8d9e6c5a59141c005af665adcf7f87"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946666 4712 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7c646b9dd5f5e5ac69308e3a6eb8f53ffda989232f3787fb92f73a3dd5aafc19"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946672 4712 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946677 4712 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946682 4712 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946689 4712 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946694 4712 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946699 4712 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946703 4712 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946708 4712 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946713 4712 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5"}
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.946789 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-228xs"
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.979669 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-228xs"]
Jan 30 17:07:49 crc kubenswrapper[4712]: I0130 17:07:49.986240 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-228xs"]
Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.093758 4712 scope.go:117] "RemoveContainer" containerID="7c646b9dd5f5e5ac69308e3a6eb8f53ffda989232f3787fb92f73a3dd5aafc19"
Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.111662 4712 scope.go:117] "RemoveContainer" containerID="12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a"
Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.134894 4712 scope.go:117] "RemoveContainer" containerID="f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af"
Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.157202 4712 scope.go:117] "RemoveContainer" containerID="155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e"
Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.169066 4712 scope.go:117] "RemoveContainer" containerID="68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098"
Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.179448 4712 scope.go:117] "RemoveContainer" containerID="3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637"
Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.190310 4712 scope.go:117] "RemoveContainer" containerID="0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96"
Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.196942 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-2xxnh"
Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.310766 4712 scope.go:117] "RemoveContainer" containerID="b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e"
Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.325002 4712 scope.go:117] "RemoveContainer" containerID="c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517"
Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.358290 4712 scope.go:117] "RemoveContainer" containerID="8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5"
Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.376670 4712 scope.go:117] "RemoveContainer" containerID="7c646b9dd5f5e5ac69308e3a6eb8f53ffda989232f3787fb92f73a3dd5aafc19"
err="rpc error: code = NotFound desc = could not find container \"7c646b9dd5f5e5ac69308e3a6eb8f53ffda989232f3787fb92f73a3dd5aafc19\": container with ID starting with 7c646b9dd5f5e5ac69308e3a6eb8f53ffda989232f3787fb92f73a3dd5aafc19 not found: ID does not exist" containerID="7c646b9dd5f5e5ac69308e3a6eb8f53ffda989232f3787fb92f73a3dd5aafc19" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.377349 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c646b9dd5f5e5ac69308e3a6eb8f53ffda989232f3787fb92f73a3dd5aafc19"} err="failed to get container status \"7c646b9dd5f5e5ac69308e3a6eb8f53ffda989232f3787fb92f73a3dd5aafc19\": rpc error: code = NotFound desc = could not find container \"7c646b9dd5f5e5ac69308e3a6eb8f53ffda989232f3787fb92f73a3dd5aafc19\": container with ID starting with 7c646b9dd5f5e5ac69308e3a6eb8f53ffda989232f3787fb92f73a3dd5aafc19 not found: ID does not exist" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.377393 4712 scope.go:117] "RemoveContainer" containerID="12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a" Jan 30 17:07:50 crc kubenswrapper[4712]: E0130 17:07:50.377952 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a\": container with ID starting with 12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a not found: ID does not exist" containerID="12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.377986 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a"} err="failed to get container status \"12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a\": rpc error: code = NotFound desc = could not find container \"12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a\": container with ID starting with 12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a not found: ID does not exist" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.378004 4712 scope.go:117] "RemoveContainer" containerID="f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af" Jan 30 17:07:50 crc kubenswrapper[4712]: E0130 17:07:50.378398 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af\": container with ID starting with f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af not found: ID does not exist" containerID="f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.378429 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af"} err="failed to get container status \"f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af\": rpc error: code = NotFound desc = could not find container \"f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af\": container with ID starting with f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af not found: ID does not exist" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.378445 4712 scope.go:117] "RemoveContainer" 
containerID="155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e" Jan 30 17:07:50 crc kubenswrapper[4712]: E0130 17:07:50.378855 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e\": container with ID starting with 155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e not found: ID does not exist" containerID="155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.378883 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e"} err="failed to get container status \"155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e\": rpc error: code = NotFound desc = could not find container \"155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e\": container with ID starting with 155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e not found: ID does not exist" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.378900 4712 scope.go:117] "RemoveContainer" containerID="68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098" Jan 30 17:07:50 crc kubenswrapper[4712]: E0130 17:07:50.379291 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098\": container with ID starting with 68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098 not found: ID does not exist" containerID="68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.379314 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098"} err="failed to get container status \"68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098\": rpc error: code = NotFound desc = could not find container \"68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098\": container with ID starting with 68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098 not found: ID does not exist" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.379332 4712 scope.go:117] "RemoveContainer" containerID="3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637" Jan 30 17:07:50 crc kubenswrapper[4712]: E0130 17:07:50.379620 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637\": container with ID starting with 3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637 not found: ID does not exist" containerID="3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.379652 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637"} err="failed to get container status \"3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637\": rpc error: code = NotFound desc = could not find container \"3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637\": container with ID starting with 
3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637 not found: ID does not exist" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.379668 4712 scope.go:117] "RemoveContainer" containerID="0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96" Jan 30 17:07:50 crc kubenswrapper[4712]: E0130 17:07:50.380025 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96\": container with ID starting with 0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96 not found: ID does not exist" containerID="0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.380052 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96"} err="failed to get container status \"0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96\": rpc error: code = NotFound desc = could not find container \"0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96\": container with ID starting with 0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96 not found: ID does not exist" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.380068 4712 scope.go:117] "RemoveContainer" containerID="b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e" Jan 30 17:07:50 crc kubenswrapper[4712]: E0130 17:07:50.380430 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e\": container with ID starting with b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e not found: ID does not exist" containerID="b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.380459 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e"} err="failed to get container status \"b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e\": rpc error: code = NotFound desc = could not find container \"b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e\": container with ID starting with b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e not found: ID does not exist" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.380480 4712 scope.go:117] "RemoveContainer" containerID="c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517" Jan 30 17:07:50 crc kubenswrapper[4712]: E0130 17:07:50.380811 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517\": container with ID starting with c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517 not found: ID does not exist" containerID="c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.380841 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517"} err="failed to get container status \"c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517\": rpc 
error: code = NotFound desc = could not find container \"c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517\": container with ID starting with c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517 not found: ID does not exist" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.380857 4712 scope.go:117] "RemoveContainer" containerID="8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5" Jan 30 17:07:50 crc kubenswrapper[4712]: E0130 17:07:50.381264 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\": container with ID starting with 8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5 not found: ID does not exist" containerID="8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.381291 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5"} err="failed to get container status \"8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\": rpc error: code = NotFound desc = could not find container \"8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\": container with ID starting with 8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5 not found: ID does not exist" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.381306 4712 scope.go:117] "RemoveContainer" containerID="7c646b9dd5f5e5ac69308e3a6eb8f53ffda989232f3787fb92f73a3dd5aafc19" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.381634 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c646b9dd5f5e5ac69308e3a6eb8f53ffda989232f3787fb92f73a3dd5aafc19"} err="failed to get container status \"7c646b9dd5f5e5ac69308e3a6eb8f53ffda989232f3787fb92f73a3dd5aafc19\": rpc error: code = NotFound desc = could not find container \"7c646b9dd5f5e5ac69308e3a6eb8f53ffda989232f3787fb92f73a3dd5aafc19\": container with ID starting with 7c646b9dd5f5e5ac69308e3a6eb8f53ffda989232f3787fb92f73a3dd5aafc19 not found: ID does not exist" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.381654 4712 scope.go:117] "RemoveContainer" containerID="12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.382075 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a"} err="failed to get container status \"12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a\": rpc error: code = NotFound desc = could not find container \"12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a\": container with ID starting with 12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a not found: ID does not exist" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.382096 4712 scope.go:117] "RemoveContainer" containerID="f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.382384 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af"} err="failed to get container status \"f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af\": rpc 
error: code = NotFound desc = could not find container \"f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af\": container with ID starting with f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af not found: ID does not exist" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.382406 4712 scope.go:117] "RemoveContainer" containerID="155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.382684 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e"} err="failed to get container status \"155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e\": rpc error: code = NotFound desc = could not find container \"155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e\": container with ID starting with 155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e not found: ID does not exist" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.382702 4712 scope.go:117] "RemoveContainer" containerID="68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.382954 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098"} err="failed to get container status \"68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098\": rpc error: code = NotFound desc = could not find container \"68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098\": container with ID starting with 68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098 not found: ID does not exist" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.382975 4712 scope.go:117] "RemoveContainer" containerID="3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.383220 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637"} err="failed to get container status \"3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637\": rpc error: code = NotFound desc = could not find container \"3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637\": container with ID starting with 3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637 not found: ID does not exist" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.383240 4712 scope.go:117] "RemoveContainer" containerID="0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.383514 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96"} err="failed to get container status \"0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96\": rpc error: code = NotFound desc = could not find container \"0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96\": container with ID starting with 0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96 not found: ID does not exist" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.383537 4712 scope.go:117] "RemoveContainer" containerID="b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e" Jan 30 17:07:50 crc 
kubenswrapper[4712]: I0130 17:07:50.383839 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e"} err="failed to get container status \"b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e\": rpc error: code = NotFound desc = could not find container \"b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e\": container with ID starting with b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e not found: ID does not exist" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.383866 4712 scope.go:117] "RemoveContainer" containerID="c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.384114 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517"} err="failed to get container status \"c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517\": rpc error: code = NotFound desc = could not find container \"c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517\": container with ID starting with c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517 not found: ID does not exist" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.384133 4712 scope.go:117] "RemoveContainer" containerID="8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.384576 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5"} err="failed to get container status \"8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\": rpc error: code = NotFound desc = could not find container \"8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\": container with ID starting with 8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5 not found: ID does not exist" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.384599 4712 scope.go:117] "RemoveContainer" containerID="7c646b9dd5f5e5ac69308e3a6eb8f53ffda989232f3787fb92f73a3dd5aafc19" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.385099 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c646b9dd5f5e5ac69308e3a6eb8f53ffda989232f3787fb92f73a3dd5aafc19"} err="failed to get container status \"7c646b9dd5f5e5ac69308e3a6eb8f53ffda989232f3787fb92f73a3dd5aafc19\": rpc error: code = NotFound desc = could not find container \"7c646b9dd5f5e5ac69308e3a6eb8f53ffda989232f3787fb92f73a3dd5aafc19\": container with ID starting with 7c646b9dd5f5e5ac69308e3a6eb8f53ffda989232f3787fb92f73a3dd5aafc19 not found: ID does not exist" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.385146 4712 scope.go:117] "RemoveContainer" containerID="12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.385534 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a"} err="failed to get container status \"12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a\": rpc error: code = NotFound desc = could not find container \"12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a\": container with ID 
starting with 12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a not found: ID does not exist" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.385564 4712 scope.go:117] "RemoveContainer" containerID="f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.385888 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af"} err="failed to get container status \"f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af\": rpc error: code = NotFound desc = could not find container \"f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af\": container with ID starting with f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af not found: ID does not exist" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.385914 4712 scope.go:117] "RemoveContainer" containerID="155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.386240 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e"} err="failed to get container status \"155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e\": rpc error: code = NotFound desc = could not find container \"155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e\": container with ID starting with 155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e not found: ID does not exist" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.386266 4712 scope.go:117] "RemoveContainer" containerID="68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.386491 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098"} err="failed to get container status \"68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098\": rpc error: code = NotFound desc = could not find container \"68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098\": container with ID starting with 68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098 not found: ID does not exist" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.386508 4712 scope.go:117] "RemoveContainer" containerID="3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.386792 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637"} err="failed to get container status \"3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637\": rpc error: code = NotFound desc = could not find container \"3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637\": container with ID starting with 3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637 not found: ID does not exist" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.386821 4712 scope.go:117] "RemoveContainer" containerID="0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.387118 4712 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96"} err="failed to get container status \"0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96\": rpc error: code = NotFound desc = could not find container \"0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96\": container with ID starting with 0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96 not found: ID does not exist" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.387150 4712 scope.go:117] "RemoveContainer" containerID="b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.387530 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e"} err="failed to get container status \"b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e\": rpc error: code = NotFound desc = could not find container \"b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e\": container with ID starting with b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e not found: ID does not exist" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.387559 4712 scope.go:117] "RemoveContainer" containerID="c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.388376 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517"} err="failed to get container status \"c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517\": rpc error: code = NotFound desc = could not find container \"c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517\": container with ID starting with c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517 not found: ID does not exist" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.388419 4712 scope.go:117] "RemoveContainer" containerID="8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.388766 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5"} err="failed to get container status \"8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\": rpc error: code = NotFound desc = could not find container \"8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\": container with ID starting with 8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5 not found: ID does not exist" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.388846 4712 scope.go:117] "RemoveContainer" containerID="7c646b9dd5f5e5ac69308e3a6eb8f53ffda989232f3787fb92f73a3dd5aafc19" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.389206 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c646b9dd5f5e5ac69308e3a6eb8f53ffda989232f3787fb92f73a3dd5aafc19"} err="failed to get container status \"7c646b9dd5f5e5ac69308e3a6eb8f53ffda989232f3787fb92f73a3dd5aafc19\": rpc error: code = NotFound desc = could not find container \"7c646b9dd5f5e5ac69308e3a6eb8f53ffda989232f3787fb92f73a3dd5aafc19\": container with ID starting with 7c646b9dd5f5e5ac69308e3a6eb8f53ffda989232f3787fb92f73a3dd5aafc19 not found: ID does not exist" Jan 
30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.389240 4712 scope.go:117] "RemoveContainer" containerID="12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.389501 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a"} err="failed to get container status \"12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a\": rpc error: code = NotFound desc = could not find container \"12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a\": container with ID starting with 12fbba37359e054d06f64b1458a264cbe0c885567f33a867929e350a0de0e52a not found: ID does not exist" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.389528 4712 scope.go:117] "RemoveContainer" containerID="f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.389857 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af"} err="failed to get container status \"f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af\": rpc error: code = NotFound desc = could not find container \"f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af\": container with ID starting with f9b5ca38721359ce499f8d93d97b92e1e61aee79e43e7f385fcd573c98dcf9af not found: ID does not exist" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.389886 4712 scope.go:117] "RemoveContainer" containerID="155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.394982 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e"} err="failed to get container status \"155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e\": rpc error: code = NotFound desc = could not find container \"155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e\": container with ID starting with 155e1afbba827d34b13c7d77a6565e7a5974eefa2eea2e521f29db64e7cc653e not found: ID does not exist" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.395035 4712 scope.go:117] "RemoveContainer" containerID="68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.395462 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098"} err="failed to get container status \"68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098\": rpc error: code = NotFound desc = could not find container \"68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098\": container with ID starting with 68749138bfa8a1623a8b933f99509b87424dacb936da270acb7abc9514819098 not found: ID does not exist" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.395504 4712 scope.go:117] "RemoveContainer" containerID="3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.395928 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637"} err="failed to get container status 
\"3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637\": rpc error: code = NotFound desc = could not find container \"3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637\": container with ID starting with 3a9ead8ae662f48ed8063b3b830d47cc63d4c52eaf48a7be3b640f9be078e637 not found: ID does not exist" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.395957 4712 scope.go:117] "RemoveContainer" containerID="0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.396263 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96"} err="failed to get container status \"0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96\": rpc error: code = NotFound desc = could not find container \"0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96\": container with ID starting with 0211c84e21a906c1fe3f120b0229bef0a9ca9f2bf1f947e9ce60ec5506b63b96 not found: ID does not exist" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.396294 4712 scope.go:117] "RemoveContainer" containerID="b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.396576 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e"} err="failed to get container status \"b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e\": rpc error: code = NotFound desc = could not find container \"b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e\": container with ID starting with b75e6059699f458a9a2747f2a4f2504dbb18aabbdaed96ff34baef9427aa047e not found: ID does not exist" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.396597 4712 scope.go:117] "RemoveContainer" containerID="c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.396952 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517"} err="failed to get container status \"c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517\": rpc error: code = NotFound desc = could not find container \"c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517\": container with ID starting with c802e811b4dafc1e9a7aa2644d8d706bb334dfff937804f62840e841616b6517 not found: ID does not exist" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.396975 4712 scope.go:117] "RemoveContainer" containerID="8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.397220 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5"} err="failed to get container status \"8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\": rpc error: code = NotFound desc = could not find container \"8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5\": container with ID starting with 8abf083baefb5ca057b281084876d060be8178127d97fd162781494e8628a4f5 not found: ID does not exist" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.954994 4712 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-9vnxv_dcd71c7c-942c-4c29-969e-45d946f356c8/kube-multus/2.log" Jan 30 17:07:50 crc kubenswrapper[4712]: I0130 17:07:50.956146 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-9vnxv" event={"ID":"dcd71c7c-942c-4c29-969e-45d946f356c8","Type":"ContainerStarted","Data":"62dee00c0763ea95342857339ce523b45305b015f00074f46e3888d12a6cc6e7"} Jan 30 17:07:51 crc kubenswrapper[4712]: I0130 17:07:51.806697 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93651476-fd00-4a9e-934a-73537f1d103e" path="/var/lib/kubelet/pods/93651476-fd00-4a9e-934a-73537f1d103e/volumes" Jan 30 17:07:51 crc kubenswrapper[4712]: I0130 17:07:51.967020 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" event={"ID":"9cdb51da-87ec-417f-903e-1e238fea01ed","Type":"ContainerStarted","Data":"18a15cb48448ae0b8b280d50f4964759484c56e6274ae758da4a36080cfed7ee"} Jan 30 17:07:51 crc kubenswrapper[4712]: I0130 17:07:51.967057 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" event={"ID":"9cdb51da-87ec-417f-903e-1e238fea01ed","Type":"ContainerStarted","Data":"df5053cc24959bdf75b334eb9c73b18a630694d59fada75cae2d0026e74e43da"} Jan 30 17:07:51 crc kubenswrapper[4712]: I0130 17:07:51.967068 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" event={"ID":"9cdb51da-87ec-417f-903e-1e238fea01ed","Type":"ContainerStarted","Data":"903bfb3e7c06712f281c3d230c782eb7281523315aa284ccd87da766f908c2d4"} Jan 30 17:07:51 crc kubenswrapper[4712]: I0130 17:07:51.967078 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" event={"ID":"9cdb51da-87ec-417f-903e-1e238fea01ed","Type":"ContainerStarted","Data":"94b13a88b8e6c180c62f93e7e125d823a2c4b041cc6f8ee8e3b8f3e8bc976e17"} Jan 30 17:07:52 crc kubenswrapper[4712]: I0130 17:07:52.977244 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" event={"ID":"9cdb51da-87ec-417f-903e-1e238fea01ed","Type":"ContainerStarted","Data":"e400e08769e655bea6fd0aa282e5a8b9ddbf61306278b5a503816579673145fb"} Jan 30 17:07:52 crc kubenswrapper[4712]: I0130 17:07:52.978415 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" event={"ID":"9cdb51da-87ec-417f-903e-1e238fea01ed","Type":"ContainerStarted","Data":"56b41101186d7346b823a657d2c8bc9848b1419bb09471cd3a5466b8d27b0fb3"} Jan 30 17:07:55 crc kubenswrapper[4712]: I0130 17:07:55.004052 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" event={"ID":"9cdb51da-87ec-417f-903e-1e238fea01ed","Type":"ContainerStarted","Data":"592f588e8e5fe9ca9e15f7c42526aae139ef1fc217cc89087477d4eeb2dd2601"} Jan 30 17:07:57 crc kubenswrapper[4712]: I0130 17:07:57.018050 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" event={"ID":"9cdb51da-87ec-417f-903e-1e238fea01ed","Type":"ContainerStarted","Data":"8550fa41249ba914f9ee550ffdbd8a3630a662c92c1c3f0296e773662dd77c55"} Jan 30 17:07:57 crc kubenswrapper[4712]: I0130 17:07:57.055619 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" podStartSLOduration=8.055598209 podStartE2EDuration="8.055598209s" podCreationTimestamp="2026-01-30 17:07:49 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:07:57.049746908 +0000 UTC m=+813.956756397" watchObservedRunningTime="2026-01-30 17:07:57.055598209 +0000 UTC m=+813.962607678" Jan 30 17:07:58 crc kubenswrapper[4712]: I0130 17:07:58.023887 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:58 crc kubenswrapper[4712]: I0130 17:07:58.024210 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:58 crc kubenswrapper[4712]: I0130 17:07:58.024221 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:58 crc kubenswrapper[4712]: I0130 17:07:58.049629 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:07:58 crc kubenswrapper[4712]: I0130 17:07:58.050612 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:08:06 crc kubenswrapper[4712]: I0130 17:08:06.273200 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:08:06 crc kubenswrapper[4712]: I0130 17:08:06.273917 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:08:06 crc kubenswrapper[4712]: I0130 17:08:06.273966 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" Jan 30 17:08:06 crc kubenswrapper[4712]: I0130 17:08:06.274781 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2ccb7e72de28daa8c77382ed0d9f3fcdc643489cf9e4bd09a65cf85b38be2156"} pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 17:08:06 crc kubenswrapper[4712]: I0130 17:08:06.274856 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" containerID="cri-o://2ccb7e72de28daa8c77382ed0d9f3fcdc643489cf9e4bd09a65cf85b38be2156" gracePeriod=600 Jan 30 17:08:07 crc kubenswrapper[4712]: I0130 17:08:07.089290 4712 generic.go:334] "Generic (PLEG): container finished" podID="75ff6334-72a0-4748-bba6-0efb493c8033" containerID="2ccb7e72de28daa8c77382ed0d9f3fcdc643489cf9e4bd09a65cf85b38be2156" exitCode=0 Jan 30 17:08:07 crc kubenswrapper[4712]: I0130 17:08:07.089651 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" 
event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerDied","Data":"2ccb7e72de28daa8c77382ed0d9f3fcdc643489cf9e4bd09a65cf85b38be2156"} Jan 30 17:08:07 crc kubenswrapper[4712]: I0130 17:08:07.089691 4712 scope.go:117] "RemoveContainer" containerID="203c45c35096b1ae0165edae10567c9ba80cfb23bd72c48e4423c2b2e84eb646" Jan 30 17:08:08 crc kubenswrapper[4712]: I0130 17:08:08.096870 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerStarted","Data":"1f74eb8e5d1037eaec314ae58dc333985d1e77823d3293834609e8af2e98478d"} Jan 30 17:08:19 crc kubenswrapper[4712]: I0130 17:08:19.643548 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-tkqgt" Jan 30 17:08:28 crc kubenswrapper[4712]: I0130 17:08:28.700579 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xf25g"] Jan 30 17:08:28 crc kubenswrapper[4712]: I0130 17:08:28.703163 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xf25g" Jan 30 17:08:28 crc kubenswrapper[4712]: I0130 17:08:28.719445 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xf25g"] Jan 30 17:08:28 crc kubenswrapper[4712]: I0130 17:08:28.719836 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 17:08:28 crc kubenswrapper[4712]: I0130 17:08:28.840990 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55fa88c7-5d3f-4787-ae79-b4237a68e191-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xf25g\" (UID: \"55fa88c7-5d3f-4787-ae79-b4237a68e191\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xf25g" Jan 30 17:08:28 crc kubenswrapper[4712]: I0130 17:08:28.841086 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psw2b\" (UniqueName: \"kubernetes.io/projected/55fa88c7-5d3f-4787-ae79-b4237a68e191-kube-api-access-psw2b\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xf25g\" (UID: \"55fa88c7-5d3f-4787-ae79-b4237a68e191\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xf25g" Jan 30 17:08:28 crc kubenswrapper[4712]: I0130 17:08:28.841491 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55fa88c7-5d3f-4787-ae79-b4237a68e191-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xf25g\" (UID: \"55fa88c7-5d3f-4787-ae79-b4237a68e191\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xf25g" Jan 30 17:08:28 crc kubenswrapper[4712]: I0130 17:08:28.943126 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psw2b\" (UniqueName: \"kubernetes.io/projected/55fa88c7-5d3f-4787-ae79-b4237a68e191-kube-api-access-psw2b\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xf25g\" (UID: \"55fa88c7-5d3f-4787-ae79-b4237a68e191\") " 
pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xf25g" Jan 30 17:08:28 crc kubenswrapper[4712]: I0130 17:08:28.943216 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55fa88c7-5d3f-4787-ae79-b4237a68e191-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xf25g\" (UID: \"55fa88c7-5d3f-4787-ae79-b4237a68e191\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xf25g" Jan 30 17:08:28 crc kubenswrapper[4712]: I0130 17:08:28.943270 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55fa88c7-5d3f-4787-ae79-b4237a68e191-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xf25g\" (UID: \"55fa88c7-5d3f-4787-ae79-b4237a68e191\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xf25g" Jan 30 17:08:28 crc kubenswrapper[4712]: I0130 17:08:28.943983 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55fa88c7-5d3f-4787-ae79-b4237a68e191-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xf25g\" (UID: \"55fa88c7-5d3f-4787-ae79-b4237a68e191\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xf25g" Jan 30 17:08:28 crc kubenswrapper[4712]: I0130 17:08:28.944047 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55fa88c7-5d3f-4787-ae79-b4237a68e191-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xf25g\" (UID: \"55fa88c7-5d3f-4787-ae79-b4237a68e191\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xf25g" Jan 30 17:08:28 crc kubenswrapper[4712]: I0130 17:08:28.977654 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psw2b\" (UniqueName: \"kubernetes.io/projected/55fa88c7-5d3f-4787-ae79-b4237a68e191-kube-api-access-psw2b\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xf25g\" (UID: \"55fa88c7-5d3f-4787-ae79-b4237a68e191\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xf25g" Jan 30 17:08:29 crc kubenswrapper[4712]: I0130 17:08:29.022245 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xf25g" Jan 30 17:08:29 crc kubenswrapper[4712]: I0130 17:08:29.216805 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xf25g"] Jan 30 17:08:30 crc kubenswrapper[4712]: I0130 17:08:30.209457 4712 generic.go:334] "Generic (PLEG): container finished" podID="55fa88c7-5d3f-4787-ae79-b4237a68e191" containerID="3a41ca1f207792915d8e5356a449c12831237a307d1165adcb6a149ef7786944" exitCode=0 Jan 30 17:08:30 crc kubenswrapper[4712]: I0130 17:08:30.209701 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xf25g" event={"ID":"55fa88c7-5d3f-4787-ae79-b4237a68e191","Type":"ContainerDied","Data":"3a41ca1f207792915d8e5356a449c12831237a307d1165adcb6a149ef7786944"} Jan 30 17:08:30 crc kubenswrapper[4712]: I0130 17:08:30.209744 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xf25g" event={"ID":"55fa88c7-5d3f-4787-ae79-b4237a68e191","Type":"ContainerStarted","Data":"76506323728c37cd2cb1348f6869b238532e18b66dbed01556c5baafe72093c4"} Jan 30 17:08:31 crc kubenswrapper[4712]: I0130 17:08:31.063696 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-c6vhb"] Jan 30 17:08:31 crc kubenswrapper[4712]: I0130 17:08:31.065214 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c6vhb" Jan 30 17:08:31 crc kubenswrapper[4712]: I0130 17:08:31.071467 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c6vhb"] Jan 30 17:08:31 crc kubenswrapper[4712]: I0130 17:08:31.071513 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6lvl\" (UniqueName: \"kubernetes.io/projected/bf5d562a-7404-4053-85dc-05429d82026c-kube-api-access-b6lvl\") pod \"redhat-operators-c6vhb\" (UID: \"bf5d562a-7404-4053-85dc-05429d82026c\") " pod="openshift-marketplace/redhat-operators-c6vhb" Jan 30 17:08:31 crc kubenswrapper[4712]: I0130 17:08:31.071583 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf5d562a-7404-4053-85dc-05429d82026c-catalog-content\") pod \"redhat-operators-c6vhb\" (UID: \"bf5d562a-7404-4053-85dc-05429d82026c\") " pod="openshift-marketplace/redhat-operators-c6vhb" Jan 30 17:08:31 crc kubenswrapper[4712]: I0130 17:08:31.071610 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf5d562a-7404-4053-85dc-05429d82026c-utilities\") pod \"redhat-operators-c6vhb\" (UID: \"bf5d562a-7404-4053-85dc-05429d82026c\") " pod="openshift-marketplace/redhat-operators-c6vhb" Jan 30 17:08:31 crc kubenswrapper[4712]: I0130 17:08:31.172342 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6lvl\" (UniqueName: \"kubernetes.io/projected/bf5d562a-7404-4053-85dc-05429d82026c-kube-api-access-b6lvl\") pod \"redhat-operators-c6vhb\" (UID: \"bf5d562a-7404-4053-85dc-05429d82026c\") " pod="openshift-marketplace/redhat-operators-c6vhb" Jan 30 17:08:31 crc kubenswrapper[4712]: I0130 17:08:31.172400 4712 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf5d562a-7404-4053-85dc-05429d82026c-catalog-content\") pod \"redhat-operators-c6vhb\" (UID: \"bf5d562a-7404-4053-85dc-05429d82026c\") " pod="openshift-marketplace/redhat-operators-c6vhb" Jan 30 17:08:31 crc kubenswrapper[4712]: I0130 17:08:31.172421 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf5d562a-7404-4053-85dc-05429d82026c-utilities\") pod \"redhat-operators-c6vhb\" (UID: \"bf5d562a-7404-4053-85dc-05429d82026c\") " pod="openshift-marketplace/redhat-operators-c6vhb" Jan 30 17:08:31 crc kubenswrapper[4712]: I0130 17:08:31.172813 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf5d562a-7404-4053-85dc-05429d82026c-utilities\") pod \"redhat-operators-c6vhb\" (UID: \"bf5d562a-7404-4053-85dc-05429d82026c\") " pod="openshift-marketplace/redhat-operators-c6vhb" Jan 30 17:08:31 crc kubenswrapper[4712]: I0130 17:08:31.172900 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf5d562a-7404-4053-85dc-05429d82026c-catalog-content\") pod \"redhat-operators-c6vhb\" (UID: \"bf5d562a-7404-4053-85dc-05429d82026c\") " pod="openshift-marketplace/redhat-operators-c6vhb" Jan 30 17:08:31 crc kubenswrapper[4712]: I0130 17:08:31.189981 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6lvl\" (UniqueName: \"kubernetes.io/projected/bf5d562a-7404-4053-85dc-05429d82026c-kube-api-access-b6lvl\") pod \"redhat-operators-c6vhb\" (UID: \"bf5d562a-7404-4053-85dc-05429d82026c\") " pod="openshift-marketplace/redhat-operators-c6vhb" Jan 30 17:08:31 crc kubenswrapper[4712]: I0130 17:08:31.384975 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c6vhb" Jan 30 17:08:31 crc kubenswrapper[4712]: I0130 17:08:31.623733 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c6vhb"] Jan 30 17:08:31 crc kubenswrapper[4712]: W0130 17:08:31.627224 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf5d562a_7404_4053_85dc_05429d82026c.slice/crio-2597a7451d0eb8b4ec21b5f33108bfcae1334f9e6c1a4e08b9d404e7d38a9238 WatchSource:0}: Error finding container 2597a7451d0eb8b4ec21b5f33108bfcae1334f9e6c1a4e08b9d404e7d38a9238: Status 404 returned error can't find the container with id 2597a7451d0eb8b4ec21b5f33108bfcae1334f9e6c1a4e08b9d404e7d38a9238 Jan 30 17:08:32 crc kubenswrapper[4712]: I0130 17:08:32.224031 4712 generic.go:334] "Generic (PLEG): container finished" podID="bf5d562a-7404-4053-85dc-05429d82026c" containerID="f8d9e2a50be1c90147e5f7f33c86ab87c0b0932de13bbd2be921f5010fc91ce7" exitCode=0 Jan 30 17:08:32 crc kubenswrapper[4712]: I0130 17:08:32.224307 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6vhb" event={"ID":"bf5d562a-7404-4053-85dc-05429d82026c","Type":"ContainerDied","Data":"f8d9e2a50be1c90147e5f7f33c86ab87c0b0932de13bbd2be921f5010fc91ce7"} Jan 30 17:08:32 crc kubenswrapper[4712]: I0130 17:08:32.224333 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6vhb" event={"ID":"bf5d562a-7404-4053-85dc-05429d82026c","Type":"ContainerStarted","Data":"2597a7451d0eb8b4ec21b5f33108bfcae1334f9e6c1a4e08b9d404e7d38a9238"} Jan 30 17:08:32 crc kubenswrapper[4712]: I0130 17:08:32.227786 4712 generic.go:334] "Generic (PLEG): container finished" podID="55fa88c7-5d3f-4787-ae79-b4237a68e191" containerID="008ab807b5e9f101514461aafe7e0f60d6008af1561e30ad9253061c4eed5ad5" exitCode=0 Jan 30 17:08:32 crc kubenswrapper[4712]: I0130 17:08:32.227836 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xf25g" event={"ID":"55fa88c7-5d3f-4787-ae79-b4237a68e191","Type":"ContainerDied","Data":"008ab807b5e9f101514461aafe7e0f60d6008af1561e30ad9253061c4eed5ad5"} Jan 30 17:08:33 crc kubenswrapper[4712]: I0130 17:08:33.240711 4712 generic.go:334] "Generic (PLEG): container finished" podID="55fa88c7-5d3f-4787-ae79-b4237a68e191" containerID="376753a4dba96728b86fbcfad95c8b4ddb218d856dfe70e85746a264cff4de3f" exitCode=0 Jan 30 17:08:33 crc kubenswrapper[4712]: I0130 17:08:33.240767 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xf25g" event={"ID":"55fa88c7-5d3f-4787-ae79-b4237a68e191","Type":"ContainerDied","Data":"376753a4dba96728b86fbcfad95c8b4ddb218d856dfe70e85746a264cff4de3f"} Jan 30 17:08:34 crc kubenswrapper[4712]: I0130 17:08:34.248791 4712 generic.go:334] "Generic (PLEG): container finished" podID="bf5d562a-7404-4053-85dc-05429d82026c" containerID="4f11b047c0c37780927cec66620cda06dd465ac0bf32144240ce6ea5929a4875" exitCode=0 Jan 30 17:08:34 crc kubenswrapper[4712]: I0130 17:08:34.249040 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6vhb" event={"ID":"bf5d562a-7404-4053-85dc-05429d82026c","Type":"ContainerDied","Data":"4f11b047c0c37780927cec66620cda06dd465ac0bf32144240ce6ea5929a4875"} Jan 30 17:08:34 crc kubenswrapper[4712]: I0130 17:08:34.462434 4712 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xf25g" Jan 30 17:08:34 crc kubenswrapper[4712]: I0130 17:08:34.626381 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55fa88c7-5d3f-4787-ae79-b4237a68e191-util\") pod \"55fa88c7-5d3f-4787-ae79-b4237a68e191\" (UID: \"55fa88c7-5d3f-4787-ae79-b4237a68e191\") " Jan 30 17:08:34 crc kubenswrapper[4712]: I0130 17:08:34.626672 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55fa88c7-5d3f-4787-ae79-b4237a68e191-bundle\") pod \"55fa88c7-5d3f-4787-ae79-b4237a68e191\" (UID: \"55fa88c7-5d3f-4787-ae79-b4237a68e191\") " Jan 30 17:08:34 crc kubenswrapper[4712]: I0130 17:08:34.626726 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psw2b\" (UniqueName: \"kubernetes.io/projected/55fa88c7-5d3f-4787-ae79-b4237a68e191-kube-api-access-psw2b\") pod \"55fa88c7-5d3f-4787-ae79-b4237a68e191\" (UID: \"55fa88c7-5d3f-4787-ae79-b4237a68e191\") " Jan 30 17:08:34 crc kubenswrapper[4712]: I0130 17:08:34.627363 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55fa88c7-5d3f-4787-ae79-b4237a68e191-bundle" (OuterVolumeSpecName: "bundle") pod "55fa88c7-5d3f-4787-ae79-b4237a68e191" (UID: "55fa88c7-5d3f-4787-ae79-b4237a68e191"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:08:34 crc kubenswrapper[4712]: I0130 17:08:34.636271 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55fa88c7-5d3f-4787-ae79-b4237a68e191-kube-api-access-psw2b" (OuterVolumeSpecName: "kube-api-access-psw2b") pod "55fa88c7-5d3f-4787-ae79-b4237a68e191" (UID: "55fa88c7-5d3f-4787-ae79-b4237a68e191"). InnerVolumeSpecName "kube-api-access-psw2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:08:34 crc kubenswrapper[4712]: I0130 17:08:34.639231 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55fa88c7-5d3f-4787-ae79-b4237a68e191-util" (OuterVolumeSpecName: "util") pod "55fa88c7-5d3f-4787-ae79-b4237a68e191" (UID: "55fa88c7-5d3f-4787-ae79-b4237a68e191"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:08:34 crc kubenswrapper[4712]: I0130 17:08:34.728602 4712 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/55fa88c7-5d3f-4787-ae79-b4237a68e191-util\") on node \"crc\" DevicePath \"\"" Jan 30 17:08:34 crc kubenswrapper[4712]: I0130 17:08:34.728639 4712 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/55fa88c7-5d3f-4787-ae79-b4237a68e191-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:08:34 crc kubenswrapper[4712]: I0130 17:08:34.728648 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-psw2b\" (UniqueName: \"kubernetes.io/projected/55fa88c7-5d3f-4787-ae79-b4237a68e191-kube-api-access-psw2b\") on node \"crc\" DevicePath \"\"" Jan 30 17:08:35 crc kubenswrapper[4712]: I0130 17:08:35.256086 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6vhb" event={"ID":"bf5d562a-7404-4053-85dc-05429d82026c","Type":"ContainerStarted","Data":"17a5fb8aa65f861d9e9210536f64dd9cbc7a5f7ea335c024d7796f44c64afd5b"} Jan 30 17:08:35 crc kubenswrapper[4712]: I0130 17:08:35.259168 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xf25g" event={"ID":"55fa88c7-5d3f-4787-ae79-b4237a68e191","Type":"ContainerDied","Data":"76506323728c37cd2cb1348f6869b238532e18b66dbed01556c5baafe72093c4"} Jan 30 17:08:35 crc kubenswrapper[4712]: I0130 17:08:35.259199 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76506323728c37cd2cb1348f6869b238532e18b66dbed01556c5baafe72093c4" Jan 30 17:08:35 crc kubenswrapper[4712]: I0130 17:08:35.259260 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xf25g" Jan 30 17:08:35 crc kubenswrapper[4712]: I0130 17:08:35.282370 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-c6vhb" podStartSLOduration=1.642555091 podStartE2EDuration="4.282348379s" podCreationTimestamp="2026-01-30 17:08:31 +0000 UTC" firstStartedPulling="2026-01-30 17:08:32.225988203 +0000 UTC m=+849.132997672" lastFinishedPulling="2026-01-30 17:08:34.865781471 +0000 UTC m=+851.772790960" observedRunningTime="2026-01-30 17:08:35.278041538 +0000 UTC m=+852.185051007" watchObservedRunningTime="2026-01-30 17:08:35.282348379 +0000 UTC m=+852.189357848" Jan 30 17:08:39 crc kubenswrapper[4712]: I0130 17:08:39.159058 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-ngc72"] Jan 30 17:08:39 crc kubenswrapper[4712]: E0130 17:08:39.159787 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55fa88c7-5d3f-4787-ae79-b4237a68e191" containerName="pull" Jan 30 17:08:39 crc kubenswrapper[4712]: I0130 17:08:39.159821 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="55fa88c7-5d3f-4787-ae79-b4237a68e191" containerName="pull" Jan 30 17:08:39 crc kubenswrapper[4712]: E0130 17:08:39.159850 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55fa88c7-5d3f-4787-ae79-b4237a68e191" containerName="extract" Jan 30 17:08:39 crc kubenswrapper[4712]: I0130 17:08:39.159858 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="55fa88c7-5d3f-4787-ae79-b4237a68e191" containerName="extract" Jan 30 17:08:39 crc kubenswrapper[4712]: E0130 17:08:39.159870 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55fa88c7-5d3f-4787-ae79-b4237a68e191" containerName="util" Jan 30 17:08:39 crc kubenswrapper[4712]: I0130 17:08:39.159877 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="55fa88c7-5d3f-4787-ae79-b4237a68e191" containerName="util" Jan 30 17:08:39 crc kubenswrapper[4712]: I0130 17:08:39.159997 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="55fa88c7-5d3f-4787-ae79-b4237a68e191" containerName="extract" Jan 30 17:08:39 crc kubenswrapper[4712]: I0130 17:08:39.160445 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-ngc72" Jan 30 17:08:39 crc kubenswrapper[4712]: I0130 17:08:39.164419 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 30 17:08:39 crc kubenswrapper[4712]: I0130 17:08:39.164478 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 30 17:08:39 crc kubenswrapper[4712]: I0130 17:08:39.164542 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-8rvcd" Jan 30 17:08:39 crc kubenswrapper[4712]: I0130 17:08:39.169916 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-ngc72"] Jan 30 17:08:39 crc kubenswrapper[4712]: I0130 17:08:39.195887 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk2m2\" (UniqueName: \"kubernetes.io/projected/043c21c8-23c1-4c11-b636-b5f34f6aa30b-kube-api-access-qk2m2\") pod \"nmstate-operator-646758c888-ngc72\" (UID: \"043c21c8-23c1-4c11-b636-b5f34f6aa30b\") " pod="openshift-nmstate/nmstate-operator-646758c888-ngc72" Jan 30 17:08:39 crc kubenswrapper[4712]: I0130 17:08:39.297870 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qk2m2\" (UniqueName: \"kubernetes.io/projected/043c21c8-23c1-4c11-b636-b5f34f6aa30b-kube-api-access-qk2m2\") pod \"nmstate-operator-646758c888-ngc72\" (UID: \"043c21c8-23c1-4c11-b636-b5f34f6aa30b\") " pod="openshift-nmstate/nmstate-operator-646758c888-ngc72" Jan 30 17:08:39 crc kubenswrapper[4712]: I0130 17:08:39.316351 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk2m2\" (UniqueName: \"kubernetes.io/projected/043c21c8-23c1-4c11-b636-b5f34f6aa30b-kube-api-access-qk2m2\") pod \"nmstate-operator-646758c888-ngc72\" (UID: \"043c21c8-23c1-4c11-b636-b5f34f6aa30b\") " pod="openshift-nmstate/nmstate-operator-646758c888-ngc72" Jan 30 17:08:39 crc kubenswrapper[4712]: I0130 17:08:39.480612 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-ngc72" Jan 30 17:08:39 crc kubenswrapper[4712]: I0130 17:08:39.971743 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-ngc72"] Jan 30 17:08:40 crc kubenswrapper[4712]: I0130 17:08:40.290581 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-ngc72" event={"ID":"043c21c8-23c1-4c11-b636-b5f34f6aa30b","Type":"ContainerStarted","Data":"af80980ba63cd4f5fe916a02f2e0c8e7a54e86d6efe16cd07b5b8bddf9cde781"} Jan 30 17:08:41 crc kubenswrapper[4712]: I0130 17:08:41.385836 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-c6vhb" Jan 30 17:08:41 crc kubenswrapper[4712]: I0130 17:08:41.386151 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-c6vhb" Jan 30 17:08:41 crc kubenswrapper[4712]: I0130 17:08:41.432414 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-c6vhb" Jan 30 17:08:42 crc kubenswrapper[4712]: I0130 17:08:42.348601 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-c6vhb" Jan 30 17:08:44 crc kubenswrapper[4712]: I0130 17:08:44.049398 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c6vhb"] Jan 30 17:08:44 crc kubenswrapper[4712]: I0130 17:08:44.311650 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-ngc72" event={"ID":"043c21c8-23c1-4c11-b636-b5f34f6aa30b","Type":"ContainerStarted","Data":"3729de825d51e969966a6f4d4419ca5cf607f0dd21fc5dffaf1c99dd74b63bfe"} Jan 30 17:08:44 crc kubenswrapper[4712]: I0130 17:08:44.311909 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-c6vhb" podUID="bf5d562a-7404-4053-85dc-05429d82026c" containerName="registry-server" containerID="cri-o://17a5fb8aa65f861d9e9210536f64dd9cbc7a5f7ea335c024d7796f44c64afd5b" gracePeriod=2 Jan 30 17:08:44 crc kubenswrapper[4712]: I0130 17:08:44.758144 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c6vhb" Jan 30 17:08:44 crc kubenswrapper[4712]: I0130 17:08:44.778021 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-ngc72" podStartSLOduration=2.274349339 podStartE2EDuration="5.778004134s" podCreationTimestamp="2026-01-30 17:08:39 +0000 UTC" firstStartedPulling="2026-01-30 17:08:39.989596633 +0000 UTC m=+856.896606102" lastFinishedPulling="2026-01-30 17:08:43.493251428 +0000 UTC m=+860.400260897" observedRunningTime="2026-01-30 17:08:44.339375549 +0000 UTC m=+861.246385018" watchObservedRunningTime="2026-01-30 17:08:44.778004134 +0000 UTC m=+861.685013603" Jan 30 17:08:44 crc kubenswrapper[4712]: I0130 17:08:44.806881 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b6lvl\" (UniqueName: \"kubernetes.io/projected/bf5d562a-7404-4053-85dc-05429d82026c-kube-api-access-b6lvl\") pod \"bf5d562a-7404-4053-85dc-05429d82026c\" (UID: \"bf5d562a-7404-4053-85dc-05429d82026c\") " Jan 30 17:08:44 crc kubenswrapper[4712]: I0130 17:08:44.807033 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf5d562a-7404-4053-85dc-05429d82026c-utilities\") pod \"bf5d562a-7404-4053-85dc-05429d82026c\" (UID: \"bf5d562a-7404-4053-85dc-05429d82026c\") " Jan 30 17:08:44 crc kubenswrapper[4712]: I0130 17:08:44.807097 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf5d562a-7404-4053-85dc-05429d82026c-catalog-content\") pod \"bf5d562a-7404-4053-85dc-05429d82026c\" (UID: \"bf5d562a-7404-4053-85dc-05429d82026c\") " Jan 30 17:08:44 crc kubenswrapper[4712]: I0130 17:08:44.808380 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf5d562a-7404-4053-85dc-05429d82026c-utilities" (OuterVolumeSpecName: "utilities") pod "bf5d562a-7404-4053-85dc-05429d82026c" (UID: "bf5d562a-7404-4053-85dc-05429d82026c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:08:44 crc kubenswrapper[4712]: I0130 17:08:44.829155 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf5d562a-7404-4053-85dc-05429d82026c-kube-api-access-b6lvl" (OuterVolumeSpecName: "kube-api-access-b6lvl") pod "bf5d562a-7404-4053-85dc-05429d82026c" (UID: "bf5d562a-7404-4053-85dc-05429d82026c"). InnerVolumeSpecName "kube-api-access-b6lvl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:08:44 crc kubenswrapper[4712]: I0130 17:08:44.907885 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bf5d562a-7404-4053-85dc-05429d82026c-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:08:44 crc kubenswrapper[4712]: I0130 17:08:44.907917 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b6lvl\" (UniqueName: \"kubernetes.io/projected/bf5d562a-7404-4053-85dc-05429d82026c-kube-api-access-b6lvl\") on node \"crc\" DevicePath \"\"" Jan 30 17:08:45 crc kubenswrapper[4712]: I0130 17:08:45.317815 4712 generic.go:334] "Generic (PLEG): container finished" podID="bf5d562a-7404-4053-85dc-05429d82026c" containerID="17a5fb8aa65f861d9e9210536f64dd9cbc7a5f7ea335c024d7796f44c64afd5b" exitCode=0 Jan 30 17:08:45 crc kubenswrapper[4712]: I0130 17:08:45.317892 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c6vhb" Jan 30 17:08:45 crc kubenswrapper[4712]: I0130 17:08:45.317922 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6vhb" event={"ID":"bf5d562a-7404-4053-85dc-05429d82026c","Type":"ContainerDied","Data":"17a5fb8aa65f861d9e9210536f64dd9cbc7a5f7ea335c024d7796f44c64afd5b"} Jan 30 17:08:45 crc kubenswrapper[4712]: I0130 17:08:45.318692 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6vhb" event={"ID":"bf5d562a-7404-4053-85dc-05429d82026c","Type":"ContainerDied","Data":"2597a7451d0eb8b4ec21b5f33108bfcae1334f9e6c1a4e08b9d404e7d38a9238"} Jan 30 17:08:45 crc kubenswrapper[4712]: I0130 17:08:45.318734 4712 scope.go:117] "RemoveContainer" containerID="17a5fb8aa65f861d9e9210536f64dd9cbc7a5f7ea335c024d7796f44c64afd5b" Jan 30 17:08:45 crc kubenswrapper[4712]: I0130 17:08:45.335316 4712 scope.go:117] "RemoveContainer" containerID="4f11b047c0c37780927cec66620cda06dd465ac0bf32144240ce6ea5929a4875" Jan 30 17:08:45 crc kubenswrapper[4712]: I0130 17:08:45.351411 4712 scope.go:117] "RemoveContainer" containerID="f8d9e2a50be1c90147e5f7f33c86ab87c0b0932de13bbd2be921f5010fc91ce7" Jan 30 17:08:45 crc kubenswrapper[4712]: I0130 17:08:45.364943 4712 scope.go:117] "RemoveContainer" containerID="17a5fb8aa65f861d9e9210536f64dd9cbc7a5f7ea335c024d7796f44c64afd5b" Jan 30 17:08:45 crc kubenswrapper[4712]: E0130 17:08:45.365354 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17a5fb8aa65f861d9e9210536f64dd9cbc7a5f7ea335c024d7796f44c64afd5b\": container with ID starting with 17a5fb8aa65f861d9e9210536f64dd9cbc7a5f7ea335c024d7796f44c64afd5b not found: ID does not exist" containerID="17a5fb8aa65f861d9e9210536f64dd9cbc7a5f7ea335c024d7796f44c64afd5b" Jan 30 17:08:45 crc kubenswrapper[4712]: I0130 17:08:45.365400 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17a5fb8aa65f861d9e9210536f64dd9cbc7a5f7ea335c024d7796f44c64afd5b"} err="failed to get container status \"17a5fb8aa65f861d9e9210536f64dd9cbc7a5f7ea335c024d7796f44c64afd5b\": rpc error: code = NotFound desc = could not find container \"17a5fb8aa65f861d9e9210536f64dd9cbc7a5f7ea335c024d7796f44c64afd5b\": container with ID starting with 17a5fb8aa65f861d9e9210536f64dd9cbc7a5f7ea335c024d7796f44c64afd5b not found: ID does not exist" Jan 30 17:08:45 crc kubenswrapper[4712]: I0130 17:08:45.365420 4712 scope.go:117] 
"RemoveContainer" containerID="4f11b047c0c37780927cec66620cda06dd465ac0bf32144240ce6ea5929a4875" Jan 30 17:08:45 crc kubenswrapper[4712]: E0130 17:08:45.365741 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f11b047c0c37780927cec66620cda06dd465ac0bf32144240ce6ea5929a4875\": container with ID starting with 4f11b047c0c37780927cec66620cda06dd465ac0bf32144240ce6ea5929a4875 not found: ID does not exist" containerID="4f11b047c0c37780927cec66620cda06dd465ac0bf32144240ce6ea5929a4875" Jan 30 17:08:45 crc kubenswrapper[4712]: I0130 17:08:45.365764 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f11b047c0c37780927cec66620cda06dd465ac0bf32144240ce6ea5929a4875"} err="failed to get container status \"4f11b047c0c37780927cec66620cda06dd465ac0bf32144240ce6ea5929a4875\": rpc error: code = NotFound desc = could not find container \"4f11b047c0c37780927cec66620cda06dd465ac0bf32144240ce6ea5929a4875\": container with ID starting with 4f11b047c0c37780927cec66620cda06dd465ac0bf32144240ce6ea5929a4875 not found: ID does not exist" Jan 30 17:08:45 crc kubenswrapper[4712]: I0130 17:08:45.365776 4712 scope.go:117] "RemoveContainer" containerID="f8d9e2a50be1c90147e5f7f33c86ab87c0b0932de13bbd2be921f5010fc91ce7" Jan 30 17:08:45 crc kubenswrapper[4712]: E0130 17:08:45.366016 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f8d9e2a50be1c90147e5f7f33c86ab87c0b0932de13bbd2be921f5010fc91ce7\": container with ID starting with f8d9e2a50be1c90147e5f7f33c86ab87c0b0932de13bbd2be921f5010fc91ce7 not found: ID does not exist" containerID="f8d9e2a50be1c90147e5f7f33c86ab87c0b0932de13bbd2be921f5010fc91ce7" Jan 30 17:08:45 crc kubenswrapper[4712]: I0130 17:08:45.366042 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f8d9e2a50be1c90147e5f7f33c86ab87c0b0932de13bbd2be921f5010fc91ce7"} err="failed to get container status \"f8d9e2a50be1c90147e5f7f33c86ab87c0b0932de13bbd2be921f5010fc91ce7\": rpc error: code = NotFound desc = could not find container \"f8d9e2a50be1c90147e5f7f33c86ab87c0b0932de13bbd2be921f5010fc91ce7\": container with ID starting with f8d9e2a50be1c90147e5f7f33c86ab87c0b0932de13bbd2be921f5010fc91ce7 not found: ID does not exist" Jan 30 17:08:45 crc kubenswrapper[4712]: I0130 17:08:45.557176 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf5d562a-7404-4053-85dc-05429d82026c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bf5d562a-7404-4053-85dc-05429d82026c" (UID: "bf5d562a-7404-4053-85dc-05429d82026c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:08:45 crc kubenswrapper[4712]: I0130 17:08:45.615548 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bf5d562a-7404-4053-85dc-05429d82026c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:08:45 crc kubenswrapper[4712]: I0130 17:08:45.644851 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c6vhb"] Jan 30 17:08:45 crc kubenswrapper[4712]: I0130 17:08:45.650436 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-c6vhb"] Jan 30 17:08:45 crc kubenswrapper[4712]: I0130 17:08:45.807000 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf5d562a-7404-4053-85dc-05429d82026c" path="/var/lib/kubelet/pods/bf5d562a-7404-4053-85dc-05429d82026c/volumes" Jan 30 17:08:48 crc kubenswrapper[4712]: I0130 17:08:48.890828 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-b5cxm"] Jan 30 17:08:48 crc kubenswrapper[4712]: E0130 17:08:48.891511 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf5d562a-7404-4053-85dc-05429d82026c" containerName="registry-server" Jan 30 17:08:48 crc kubenswrapper[4712]: I0130 17:08:48.891527 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf5d562a-7404-4053-85dc-05429d82026c" containerName="registry-server" Jan 30 17:08:48 crc kubenswrapper[4712]: E0130 17:08:48.891545 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf5d562a-7404-4053-85dc-05429d82026c" containerName="extract-utilities" Jan 30 17:08:48 crc kubenswrapper[4712]: I0130 17:08:48.891553 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf5d562a-7404-4053-85dc-05429d82026c" containerName="extract-utilities" Jan 30 17:08:48 crc kubenswrapper[4712]: E0130 17:08:48.891564 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf5d562a-7404-4053-85dc-05429d82026c" containerName="extract-content" Jan 30 17:08:48 crc kubenswrapper[4712]: I0130 17:08:48.891572 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf5d562a-7404-4053-85dc-05429d82026c" containerName="extract-content" Jan 30 17:08:48 crc kubenswrapper[4712]: I0130 17:08:48.891695 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf5d562a-7404-4053-85dc-05429d82026c" containerName="registry-server" Jan 30 17:08:48 crc kubenswrapper[4712]: I0130 17:08:48.892453 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-b5cxm" Jan 30 17:08:48 crc kubenswrapper[4712]: I0130 17:08:48.898109 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-7bvm8" Jan 30 17:08:48 crc kubenswrapper[4712]: I0130 17:08:48.908640 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-b5cxm"] Jan 30 17:08:48 crc kubenswrapper[4712]: I0130 17:08:48.926892 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-wg6ft"] Jan 30 17:08:48 crc kubenswrapper[4712]: I0130 17:08:48.927535 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wg6ft" Jan 30 17:08:48 crc kubenswrapper[4712]: I0130 17:08:48.930713 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 30 17:08:48 crc kubenswrapper[4712]: I0130 17:08:48.942594 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-b97c2"] Jan 30 17:08:48 crc kubenswrapper[4712]: I0130 17:08:48.943251 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-b97c2" Jan 30 17:08:48 crc kubenswrapper[4712]: I0130 17:08:48.979693 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-wg6ft"] Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.052832 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqjb9\" (UniqueName: \"kubernetes.io/projected/e2f3dc74-f154-42cb-83fa-1aa631aac288-kube-api-access-hqjb9\") pod \"nmstate-handler-b97c2\" (UID: \"e2f3dc74-f154-42cb-83fa-1aa631aac288\") " pod="openshift-nmstate/nmstate-handler-b97c2" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.053376 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/e2f3dc74-f154-42cb-83fa-1aa631aac288-ovs-socket\") pod \"nmstate-handler-b97c2\" (UID: \"e2f3dc74-f154-42cb-83fa-1aa631aac288\") " pod="openshift-nmstate/nmstate-handler-b97c2" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.053440 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/32b6f6bb-fadc-43d5-9046-f2ee1a93d325-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-wg6ft\" (UID: \"32b6f6bb-fadc-43d5-9046-f2ee1a93d325\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wg6ft" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.053469 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/e2f3dc74-f154-42cb-83fa-1aa631aac288-nmstate-lock\") pod \"nmstate-handler-b97c2\" (UID: \"e2f3dc74-f154-42cb-83fa-1aa631aac288\") " pod="openshift-nmstate/nmstate-handler-b97c2" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.053496 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-772wp\" (UniqueName: \"kubernetes.io/projected/32b6f6bb-fadc-43d5-9046-f2ee1a93d325-kube-api-access-772wp\") pod \"nmstate-webhook-8474b5b9d8-wg6ft\" (UID: \"32b6f6bb-fadc-43d5-9046-f2ee1a93d325\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wg6ft" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.053530 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/e2f3dc74-f154-42cb-83fa-1aa631aac288-dbus-socket\") pod \"nmstate-handler-b97c2\" (UID: \"e2f3dc74-f154-42cb-83fa-1aa631aac288\") " pod="openshift-nmstate/nmstate-handler-b97c2" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.053558 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv96j\" (UniqueName: \"kubernetes.io/projected/bb0a5cdb-d0e2-446f-b242-d63cfa7fb783-kube-api-access-jv96j\") pod 
\"nmstate-metrics-54757c584b-b5cxm\" (UID: \"bb0a5cdb-d0e2-446f-b242-d63cfa7fb783\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-b5cxm" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.070808 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-scx5w"] Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.071648 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-scx5w" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.074047 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.074501 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.074592 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-rz2lr" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.084665 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-scx5w"] Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.154442 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/32b6f6bb-fadc-43d5-9046-f2ee1a93d325-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-wg6ft\" (UID: \"32b6f6bb-fadc-43d5-9046-f2ee1a93d325\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wg6ft" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.154483 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/e2f3dc74-f154-42cb-83fa-1aa631aac288-nmstate-lock\") pod \"nmstate-handler-b97c2\" (UID: \"e2f3dc74-f154-42cb-83fa-1aa631aac288\") " pod="openshift-nmstate/nmstate-handler-b97c2" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.154516 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v7q4\" (UniqueName: \"kubernetes.io/projected/6fff5133-d95e-4817-b21a-0163f1a96240-kube-api-access-2v7q4\") pod \"nmstate-console-plugin-7754f76f8b-scx5w\" (UID: \"6fff5133-d95e-4817-b21a-0163f1a96240\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-scx5w" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.154536 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-772wp\" (UniqueName: \"kubernetes.io/projected/32b6f6bb-fadc-43d5-9046-f2ee1a93d325-kube-api-access-772wp\") pod \"nmstate-webhook-8474b5b9d8-wg6ft\" (UID: \"32b6f6bb-fadc-43d5-9046-f2ee1a93d325\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wg6ft" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.154589 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/e2f3dc74-f154-42cb-83fa-1aa631aac288-nmstate-lock\") pod \"nmstate-handler-b97c2\" (UID: \"e2f3dc74-f154-42cb-83fa-1aa631aac288\") " pod="openshift-nmstate/nmstate-handler-b97c2" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.154597 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/e2f3dc74-f154-42cb-83fa-1aa631aac288-dbus-socket\") pod \"nmstate-handler-b97c2\" (UID: 
\"e2f3dc74-f154-42cb-83fa-1aa631aac288\") " pod="openshift-nmstate/nmstate-handler-b97c2" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.154637 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jv96j\" (UniqueName: \"kubernetes.io/projected/bb0a5cdb-d0e2-446f-b242-d63cfa7fb783-kube-api-access-jv96j\") pod \"nmstate-metrics-54757c584b-b5cxm\" (UID: \"bb0a5cdb-d0e2-446f-b242-d63cfa7fb783\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-b5cxm" Jan 30 17:08:49 crc kubenswrapper[4712]: E0130 17:08:49.154658 4712 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 30 17:08:49 crc kubenswrapper[4712]: E0130 17:08:49.154770 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/32b6f6bb-fadc-43d5-9046-f2ee1a93d325-tls-key-pair podName:32b6f6bb-fadc-43d5-9046-f2ee1a93d325 nodeName:}" failed. No retries permitted until 2026-01-30 17:08:49.654748062 +0000 UTC m=+866.561757581 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/32b6f6bb-fadc-43d5-9046-f2ee1a93d325-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-wg6ft" (UID: "32b6f6bb-fadc-43d5-9046-f2ee1a93d325") : secret "openshift-nmstate-webhook" not found Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.154667 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/6fff5133-d95e-4817-b21a-0163f1a96240-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-scx5w\" (UID: \"6fff5133-d95e-4817-b21a-0163f1a96240\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-scx5w" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.154835 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6fff5133-d95e-4817-b21a-0163f1a96240-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-scx5w\" (UID: \"6fff5133-d95e-4817-b21a-0163f1a96240\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-scx5w" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.154857 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hqjb9\" (UniqueName: \"kubernetes.io/projected/e2f3dc74-f154-42cb-83fa-1aa631aac288-kube-api-access-hqjb9\") pod \"nmstate-handler-b97c2\" (UID: \"e2f3dc74-f154-42cb-83fa-1aa631aac288\") " pod="openshift-nmstate/nmstate-handler-b97c2" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.154875 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/e2f3dc74-f154-42cb-83fa-1aa631aac288-ovs-socket\") pod \"nmstate-handler-b97c2\" (UID: \"e2f3dc74-f154-42cb-83fa-1aa631aac288\") " pod="openshift-nmstate/nmstate-handler-b97c2" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.154929 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/e2f3dc74-f154-42cb-83fa-1aa631aac288-ovs-socket\") pod \"nmstate-handler-b97c2\" (UID: \"e2f3dc74-f154-42cb-83fa-1aa631aac288\") " pod="openshift-nmstate/nmstate-handler-b97c2" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.155038 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: 
\"kubernetes.io/host-path/e2f3dc74-f154-42cb-83fa-1aa631aac288-dbus-socket\") pod \"nmstate-handler-b97c2\" (UID: \"e2f3dc74-f154-42cb-83fa-1aa631aac288\") " pod="openshift-nmstate/nmstate-handler-b97c2" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.179941 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv96j\" (UniqueName: \"kubernetes.io/projected/bb0a5cdb-d0e2-446f-b242-d63cfa7fb783-kube-api-access-jv96j\") pod \"nmstate-metrics-54757c584b-b5cxm\" (UID: \"bb0a5cdb-d0e2-446f-b242-d63cfa7fb783\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-b5cxm" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.190530 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-772wp\" (UniqueName: \"kubernetes.io/projected/32b6f6bb-fadc-43d5-9046-f2ee1a93d325-kube-api-access-772wp\") pod \"nmstate-webhook-8474b5b9d8-wg6ft\" (UID: \"32b6f6bb-fadc-43d5-9046-f2ee1a93d325\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wg6ft" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.193724 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hqjb9\" (UniqueName: \"kubernetes.io/projected/e2f3dc74-f154-42cb-83fa-1aa631aac288-kube-api-access-hqjb9\") pod \"nmstate-handler-b97c2\" (UID: \"e2f3dc74-f154-42cb-83fa-1aa631aac288\") " pod="openshift-nmstate/nmstate-handler-b97c2" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.210093 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-b5cxm" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.255866 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-b97c2" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.256241 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2v7q4\" (UniqueName: \"kubernetes.io/projected/6fff5133-d95e-4817-b21a-0163f1a96240-kube-api-access-2v7q4\") pod \"nmstate-console-plugin-7754f76f8b-scx5w\" (UID: \"6fff5133-d95e-4817-b21a-0163f1a96240\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-scx5w" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.256315 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/6fff5133-d95e-4817-b21a-0163f1a96240-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-scx5w\" (UID: \"6fff5133-d95e-4817-b21a-0163f1a96240\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-scx5w" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.256348 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6fff5133-d95e-4817-b21a-0163f1a96240-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-scx5w\" (UID: \"6fff5133-d95e-4817-b21a-0163f1a96240\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-scx5w" Jan 30 17:08:49 crc kubenswrapper[4712]: E0130 17:08:49.256698 4712 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 30 17:08:49 crc kubenswrapper[4712]: E0130 17:08:49.256756 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6fff5133-d95e-4817-b21a-0163f1a96240-plugin-serving-cert podName:6fff5133-d95e-4817-b21a-0163f1a96240 nodeName:}" failed. 
No retries permitted until 2026-01-30 17:08:49.756738744 +0000 UTC m=+866.663748213 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/6fff5133-d95e-4817-b21a-0163f1a96240-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-scx5w" (UID: "6fff5133-d95e-4817-b21a-0163f1a96240") : secret "plugin-serving-cert" not found Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.257548 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6fff5133-d95e-4817-b21a-0163f1a96240-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-scx5w\" (UID: \"6fff5133-d95e-4817-b21a-0163f1a96240\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-scx5w" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.297344 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v7q4\" (UniqueName: \"kubernetes.io/projected/6fff5133-d95e-4817-b21a-0163f1a96240-kube-api-access-2v7q4\") pod \"nmstate-console-plugin-7754f76f8b-scx5w\" (UID: \"6fff5133-d95e-4817-b21a-0163f1a96240\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-scx5w" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.353766 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-b97c2" event={"ID":"e2f3dc74-f154-42cb-83fa-1aa631aac288","Type":"ContainerStarted","Data":"c35b908c44be07fdefb599c4ea07745c73bb6a9796f993bdd83c96ccebc2cf05"} Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.380427 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-69cbc76644-s6m92"] Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.381189 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-69cbc76644-s6m92" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.393046 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-69cbc76644-s6m92"] Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.461233 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/21b48c74-811e-46ec-a7f4-dbc7702008bf-service-ca\") pod \"console-69cbc76644-s6m92\" (UID: \"21b48c74-811e-46ec-a7f4-dbc7702008bf\") " pod="openshift-console/console-69cbc76644-s6m92" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.461274 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/21b48c74-811e-46ec-a7f4-dbc7702008bf-console-oauth-config\") pod \"console-69cbc76644-s6m92\" (UID: \"21b48c74-811e-46ec-a7f4-dbc7702008bf\") " pod="openshift-console/console-69cbc76644-s6m92" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.461330 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg84v\" (UniqueName: \"kubernetes.io/projected/21b48c74-811e-46ec-a7f4-dbc7702008bf-kube-api-access-lg84v\") pod \"console-69cbc76644-s6m92\" (UID: \"21b48c74-811e-46ec-a7f4-dbc7702008bf\") " pod="openshift-console/console-69cbc76644-s6m92" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.461356 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/21b48c74-811e-46ec-a7f4-dbc7702008bf-oauth-serving-cert\") pod \"console-69cbc76644-s6m92\" (UID: \"21b48c74-811e-46ec-a7f4-dbc7702008bf\") " pod="openshift-console/console-69cbc76644-s6m92" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.461379 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/21b48c74-811e-46ec-a7f4-dbc7702008bf-console-serving-cert\") pod \"console-69cbc76644-s6m92\" (UID: \"21b48c74-811e-46ec-a7f4-dbc7702008bf\") " pod="openshift-console/console-69cbc76644-s6m92" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.461400 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/21b48c74-811e-46ec-a7f4-dbc7702008bf-console-config\") pod \"console-69cbc76644-s6m92\" (UID: \"21b48c74-811e-46ec-a7f4-dbc7702008bf\") " pod="openshift-console/console-69cbc76644-s6m92" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.461416 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21b48c74-811e-46ec-a7f4-dbc7702008bf-trusted-ca-bundle\") pod \"console-69cbc76644-s6m92\" (UID: \"21b48c74-811e-46ec-a7f4-dbc7702008bf\") " pod="openshift-console/console-69cbc76644-s6m92" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.533258 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-b5cxm"] Jan 30 17:08:49 crc kubenswrapper[4712]: W0130 17:08:49.538870 4712 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbb0a5cdb_d0e2_446f_b242_d63cfa7fb783.slice/crio-5f3fa66582489d799e224cbca897b754612d908ae6fb10f7a7599a4dce0808b8 WatchSource:0}: Error finding container 5f3fa66582489d799e224cbca897b754612d908ae6fb10f7a7599a4dce0808b8: Status 404 returned error can't find the container with id 5f3fa66582489d799e224cbca897b754612d908ae6fb10f7a7599a4dce0808b8 Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.562462 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/21b48c74-811e-46ec-a7f4-dbc7702008bf-service-ca\") pod \"console-69cbc76644-s6m92\" (UID: \"21b48c74-811e-46ec-a7f4-dbc7702008bf\") " pod="openshift-console/console-69cbc76644-s6m92" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.562501 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/21b48c74-811e-46ec-a7f4-dbc7702008bf-console-oauth-config\") pod \"console-69cbc76644-s6m92\" (UID: \"21b48c74-811e-46ec-a7f4-dbc7702008bf\") " pod="openshift-console/console-69cbc76644-s6m92" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.562550 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lg84v\" (UniqueName: \"kubernetes.io/projected/21b48c74-811e-46ec-a7f4-dbc7702008bf-kube-api-access-lg84v\") pod \"console-69cbc76644-s6m92\" (UID: \"21b48c74-811e-46ec-a7f4-dbc7702008bf\") " pod="openshift-console/console-69cbc76644-s6m92" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.562571 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/21b48c74-811e-46ec-a7f4-dbc7702008bf-oauth-serving-cert\") pod \"console-69cbc76644-s6m92\" (UID: \"21b48c74-811e-46ec-a7f4-dbc7702008bf\") " pod="openshift-console/console-69cbc76644-s6m92" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.562599 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/21b48c74-811e-46ec-a7f4-dbc7702008bf-console-serving-cert\") pod \"console-69cbc76644-s6m92\" (UID: \"21b48c74-811e-46ec-a7f4-dbc7702008bf\") " pod="openshift-console/console-69cbc76644-s6m92" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.562622 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/21b48c74-811e-46ec-a7f4-dbc7702008bf-console-config\") pod \"console-69cbc76644-s6m92\" (UID: \"21b48c74-811e-46ec-a7f4-dbc7702008bf\") " pod="openshift-console/console-69cbc76644-s6m92" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.562638 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21b48c74-811e-46ec-a7f4-dbc7702008bf-trusted-ca-bundle\") pod \"console-69cbc76644-s6m92\" (UID: \"21b48c74-811e-46ec-a7f4-dbc7702008bf\") " pod="openshift-console/console-69cbc76644-s6m92" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.563732 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/21b48c74-811e-46ec-a7f4-dbc7702008bf-service-ca\") pod \"console-69cbc76644-s6m92\" (UID: \"21b48c74-811e-46ec-a7f4-dbc7702008bf\") " pod="openshift-console/console-69cbc76644-s6m92" Jan 30 
17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.564237 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21b48c74-811e-46ec-a7f4-dbc7702008bf-trusted-ca-bundle\") pod \"console-69cbc76644-s6m92\" (UID: \"21b48c74-811e-46ec-a7f4-dbc7702008bf\") " pod="openshift-console/console-69cbc76644-s6m92" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.564278 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/21b48c74-811e-46ec-a7f4-dbc7702008bf-console-config\") pod \"console-69cbc76644-s6m92\" (UID: \"21b48c74-811e-46ec-a7f4-dbc7702008bf\") " pod="openshift-console/console-69cbc76644-s6m92" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.564826 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/21b48c74-811e-46ec-a7f4-dbc7702008bf-oauth-serving-cert\") pod \"console-69cbc76644-s6m92\" (UID: \"21b48c74-811e-46ec-a7f4-dbc7702008bf\") " pod="openshift-console/console-69cbc76644-s6m92" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.570446 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/21b48c74-811e-46ec-a7f4-dbc7702008bf-console-serving-cert\") pod \"console-69cbc76644-s6m92\" (UID: \"21b48c74-811e-46ec-a7f4-dbc7702008bf\") " pod="openshift-console/console-69cbc76644-s6m92" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.573365 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/21b48c74-811e-46ec-a7f4-dbc7702008bf-console-oauth-config\") pod \"console-69cbc76644-s6m92\" (UID: \"21b48c74-811e-46ec-a7f4-dbc7702008bf\") " pod="openshift-console/console-69cbc76644-s6m92" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.579692 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lg84v\" (UniqueName: \"kubernetes.io/projected/21b48c74-811e-46ec-a7f4-dbc7702008bf-kube-api-access-lg84v\") pod \"console-69cbc76644-s6m92\" (UID: \"21b48c74-811e-46ec-a7f4-dbc7702008bf\") " pod="openshift-console/console-69cbc76644-s6m92" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.663724 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/32b6f6bb-fadc-43d5-9046-f2ee1a93d325-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-wg6ft\" (UID: \"32b6f6bb-fadc-43d5-9046-f2ee1a93d325\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wg6ft" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.667409 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/32b6f6bb-fadc-43d5-9046-f2ee1a93d325-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-wg6ft\" (UID: \"32b6f6bb-fadc-43d5-9046-f2ee1a93d325\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wg6ft" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.721699 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-69cbc76644-s6m92" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.765302 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/6fff5133-d95e-4817-b21a-0163f1a96240-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-scx5w\" (UID: \"6fff5133-d95e-4817-b21a-0163f1a96240\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-scx5w" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.768942 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/6fff5133-d95e-4817-b21a-0163f1a96240-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-scx5w\" (UID: \"6fff5133-d95e-4817-b21a-0163f1a96240\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-scx5w" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.842071 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wg6ft" Jan 30 17:08:49 crc kubenswrapper[4712]: I0130 17:08:49.986237 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-scx5w" Jan 30 17:08:50 crc kubenswrapper[4712]: I0130 17:08:50.161092 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-69cbc76644-s6m92"] Jan 30 17:08:50 crc kubenswrapper[4712]: W0130 17:08:50.166604 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21b48c74_811e_46ec_a7f4_dbc7702008bf.slice/crio-b8735b285dfeac7a816872af05b4b12720e046555c56d2ca669af7780d885803 WatchSource:0}: Error finding container b8735b285dfeac7a816872af05b4b12720e046555c56d2ca669af7780d885803: Status 404 returned error can't find the container with id b8735b285dfeac7a816872af05b4b12720e046555c56d2ca669af7780d885803 Jan 30 17:08:50 crc kubenswrapper[4712]: I0130 17:08:50.190066 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-scx5w"] Jan 30 17:08:50 crc kubenswrapper[4712]: I0130 17:08:50.288158 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-wg6ft"] Jan 30 17:08:50 crc kubenswrapper[4712]: W0130 17:08:50.305737 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32b6f6bb_fadc_43d5_9046_f2ee1a93d325.slice/crio-24b8ddc5b912abfe1cdd927ec7816a65bea28b542861ec57b1bf0f8930db3f0e WatchSource:0}: Error finding container 24b8ddc5b912abfe1cdd927ec7816a65bea28b542861ec57b1bf0f8930db3f0e: Status 404 returned error can't find the container with id 24b8ddc5b912abfe1cdd927ec7816a65bea28b542861ec57b1bf0f8930db3f0e Jan 30 17:08:50 crc kubenswrapper[4712]: I0130 17:08:50.361120 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-b5cxm" event={"ID":"bb0a5cdb-d0e2-446f-b242-d63cfa7fb783","Type":"ContainerStarted","Data":"5f3fa66582489d799e224cbca897b754612d908ae6fb10f7a7599a4dce0808b8"} Jan 30 17:08:50 crc kubenswrapper[4712]: I0130 17:08:50.362481 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-scx5w" 
event={"ID":"6fff5133-d95e-4817-b21a-0163f1a96240","Type":"ContainerStarted","Data":"a5dd9110bff5b436c2e482b9749d466fab117f7ee9c84214a50789d992cefb8d"} Jan 30 17:08:50 crc kubenswrapper[4712]: I0130 17:08:50.364059 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wg6ft" event={"ID":"32b6f6bb-fadc-43d5-9046-f2ee1a93d325","Type":"ContainerStarted","Data":"24b8ddc5b912abfe1cdd927ec7816a65bea28b542861ec57b1bf0f8930db3f0e"} Jan 30 17:08:50 crc kubenswrapper[4712]: I0130 17:08:50.365698 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69cbc76644-s6m92" event={"ID":"21b48c74-811e-46ec-a7f4-dbc7702008bf","Type":"ContainerStarted","Data":"f826e2876310a212da1eca7d48123268c39a32d361bd7ab8487b65c9feaf6e49"} Jan 30 17:08:50 crc kubenswrapper[4712]: I0130 17:08:50.365726 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-69cbc76644-s6m92" event={"ID":"21b48c74-811e-46ec-a7f4-dbc7702008bf","Type":"ContainerStarted","Data":"b8735b285dfeac7a816872af05b4b12720e046555c56d2ca669af7780d885803"} Jan 30 17:08:50 crc kubenswrapper[4712]: I0130 17:08:50.386538 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-69cbc76644-s6m92" podStartSLOduration=1.3865188960000001 podStartE2EDuration="1.386518896s" podCreationTimestamp="2026-01-30 17:08:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:08:50.383457308 +0000 UTC m=+867.290466777" watchObservedRunningTime="2026-01-30 17:08:50.386518896 +0000 UTC m=+867.293528385" Jan 30 17:08:52 crc kubenswrapper[4712]: I0130 17:08:52.410297 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-b5cxm" event={"ID":"bb0a5cdb-d0e2-446f-b242-d63cfa7fb783","Type":"ContainerStarted","Data":"c75ad4717ef398f68fc511a2b2f9db48c0e78c6c49f9f891845c3ddce8dfd6fb"} Jan 30 17:08:52 crc kubenswrapper[4712]: I0130 17:08:52.414451 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wg6ft" event={"ID":"32b6f6bb-fadc-43d5-9046-f2ee1a93d325","Type":"ContainerStarted","Data":"c1208195111c14b0cb088ebc297e45847fc2606ab0f40fce1a4f1f5155804fc2"} Jan 30 17:08:52 crc kubenswrapper[4712]: I0130 17:08:52.415521 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wg6ft" Jan 30 17:08:52 crc kubenswrapper[4712]: I0130 17:08:52.434151 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wg6ft" podStartSLOduration=2.56247327 podStartE2EDuration="4.434133762s" podCreationTimestamp="2026-01-30 17:08:48 +0000 UTC" firstStartedPulling="2026-01-30 17:08:50.310273557 +0000 UTC m=+867.217283026" lastFinishedPulling="2026-01-30 17:08:52.181934059 +0000 UTC m=+869.088943518" observedRunningTime="2026-01-30 17:08:52.432000257 +0000 UTC m=+869.339009746" watchObservedRunningTime="2026-01-30 17:08:52.434133762 +0000 UTC m=+869.341143231" Jan 30 17:08:53 crc kubenswrapper[4712]: I0130 17:08:53.421179 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-scx5w" event={"ID":"6fff5133-d95e-4817-b21a-0163f1a96240","Type":"ContainerStarted","Data":"a336e1accbf3077a54b29e9b572c15556a61e26c52a985d255f6691b07cfe866"} Jan 30 17:08:53 crc kubenswrapper[4712]: I0130 
17:08:53.425889 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-b97c2" event={"ID":"e2f3dc74-f154-42cb-83fa-1aa631aac288","Type":"ContainerStarted","Data":"c46dc2accdc3d155020b3693c64470f1d0d8bdfe673c1f3402905b77965e4158"} Jan 30 17:08:53 crc kubenswrapper[4712]: I0130 17:08:53.426225 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-b97c2" Jan 30 17:08:53 crc kubenswrapper[4712]: I0130 17:08:53.494214 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-scx5w" podStartSLOduration=1.5339243059999998 podStartE2EDuration="4.494194272s" podCreationTimestamp="2026-01-30 17:08:49 +0000 UTC" firstStartedPulling="2026-01-30 17:08:50.267334643 +0000 UTC m=+867.174344112" lastFinishedPulling="2026-01-30 17:08:53.227604609 +0000 UTC m=+870.134614078" observedRunningTime="2026-01-30 17:08:53.438209943 +0000 UTC m=+870.345219422" watchObservedRunningTime="2026-01-30 17:08:53.494194272 +0000 UTC m=+870.401203741" Jan 30 17:08:53 crc kubenswrapper[4712]: I0130 17:08:53.826458 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-b97c2" podStartSLOduration=2.987254609 podStartE2EDuration="5.826438683s" podCreationTimestamp="2026-01-30 17:08:48 +0000 UTC" firstStartedPulling="2026-01-30 17:08:49.342647412 +0000 UTC m=+866.249656881" lastFinishedPulling="2026-01-30 17:08:52.181831476 +0000 UTC m=+869.088840955" observedRunningTime="2026-01-30 17:08:53.498139753 +0000 UTC m=+870.405149222" watchObservedRunningTime="2026-01-30 17:08:53.826438683 +0000 UTC m=+870.733448152" Jan 30 17:08:55 crc kubenswrapper[4712]: I0130 17:08:55.441472 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-b5cxm" event={"ID":"bb0a5cdb-d0e2-446f-b242-d63cfa7fb783","Type":"ContainerStarted","Data":"9db024e2af199ee6c0d166a347edcaa2315156cb63b16c02ad9e688ed5f9c060"} Jan 30 17:08:55 crc kubenswrapper[4712]: I0130 17:08:55.467196 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-b5cxm" podStartSLOduration=2.265036413 podStartE2EDuration="7.467167069s" podCreationTimestamp="2026-01-30 17:08:48 +0000 UTC" firstStartedPulling="2026-01-30 17:08:49.541005112 +0000 UTC m=+866.448014581" lastFinishedPulling="2026-01-30 17:08:54.743135768 +0000 UTC m=+871.650145237" observedRunningTime="2026-01-30 17:08:55.463134746 +0000 UTC m=+872.370144205" watchObservedRunningTime="2026-01-30 17:08:55.467167069 +0000 UTC m=+872.374176558" Jan 30 17:08:59 crc kubenswrapper[4712]: I0130 17:08:59.278763 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-b97c2" Jan 30 17:08:59 crc kubenswrapper[4712]: I0130 17:08:59.722262 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-69cbc76644-s6m92" Jan 30 17:08:59 crc kubenswrapper[4712]: I0130 17:08:59.723006 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-69cbc76644-s6m92" Jan 30 17:08:59 crc kubenswrapper[4712]: I0130 17:08:59.727242 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-69cbc76644-s6m92" Jan 30 17:09:00 crc kubenswrapper[4712]: I0130 17:09:00.470841 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-console/console-69cbc76644-s6m92" Jan 30 17:09:00 crc kubenswrapper[4712]: I0130 17:09:00.521615 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-jx2s9"] Jan 30 17:09:09 crc kubenswrapper[4712]: I0130 17:09:09.848310 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wg6ft" Jan 30 17:09:25 crc kubenswrapper[4712]: I0130 17:09:25.577085 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-jx2s9" podUID="43a0a350-8151-4bcd-8d1e-1c534e291152" containerName="console" containerID="cri-o://514051fc967f6510aab225a39620ee09075374976dab5efe5c13ecdd3cd0bef3" gracePeriod=15 Jan 30 17:09:26 crc kubenswrapper[4712]: I0130 17:09:26.624687 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-jx2s9_43a0a350-8151-4bcd-8d1e-1c534e291152/console/0.log" Jan 30 17:09:26 crc kubenswrapper[4712]: I0130 17:09:26.625130 4712 generic.go:334] "Generic (PLEG): container finished" podID="43a0a350-8151-4bcd-8d1e-1c534e291152" containerID="514051fc967f6510aab225a39620ee09075374976dab5efe5c13ecdd3cd0bef3" exitCode=2 Jan 30 17:09:26 crc kubenswrapper[4712]: I0130 17:09:26.625176 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-jx2s9" event={"ID":"43a0a350-8151-4bcd-8d1e-1c534e291152","Type":"ContainerDied","Data":"514051fc967f6510aab225a39620ee09075374976dab5efe5c13ecdd3cd0bef3"} Jan 30 17:09:27 crc kubenswrapper[4712]: I0130 17:09:27.532414 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-jx2s9_43a0a350-8151-4bcd-8d1e-1c534e291152/console/0.log" Jan 30 17:09:27 crc kubenswrapper[4712]: I0130 17:09:27.532813 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-jx2s9" Jan 30 17:09:27 crc kubenswrapper[4712]: I0130 17:09:27.632305 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-jx2s9_43a0a350-8151-4bcd-8d1e-1c534e291152/console/0.log" Jan 30 17:09:27 crc kubenswrapper[4712]: I0130 17:09:27.632641 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-jx2s9" event={"ID":"43a0a350-8151-4bcd-8d1e-1c534e291152","Type":"ContainerDied","Data":"b5e4e87cf26098e3d641adf816bd952f97471b61ba83775597821928050a9200"} Jan 30 17:09:27 crc kubenswrapper[4712]: I0130 17:09:27.632693 4712 scope.go:117] "RemoveContainer" containerID="514051fc967f6510aab225a39620ee09075374976dab5efe5c13ecdd3cd0bef3" Jan 30 17:09:27 crc kubenswrapper[4712]: I0130 17:09:27.632711 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-jx2s9" Jan 30 17:09:27 crc kubenswrapper[4712]: I0130 17:09:27.675906 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43a0a350-8151-4bcd-8d1e-1c534e291152-console-config\") pod \"43a0a350-8151-4bcd-8d1e-1c534e291152\" (UID: \"43a0a350-8151-4bcd-8d1e-1c534e291152\") " Jan 30 17:09:27 crc kubenswrapper[4712]: I0130 17:09:27.675985 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43a0a350-8151-4bcd-8d1e-1c534e291152-trusted-ca-bundle\") pod \"43a0a350-8151-4bcd-8d1e-1c534e291152\" (UID: \"43a0a350-8151-4bcd-8d1e-1c534e291152\") " Jan 30 17:09:27 crc kubenswrapper[4712]: I0130 17:09:27.676012 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkknm\" (UniqueName: \"kubernetes.io/projected/43a0a350-8151-4bcd-8d1e-1c534e291152-kube-api-access-hkknm\") pod \"43a0a350-8151-4bcd-8d1e-1c534e291152\" (UID: \"43a0a350-8151-4bcd-8d1e-1c534e291152\") " Jan 30 17:09:27 crc kubenswrapper[4712]: I0130 17:09:27.676076 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43a0a350-8151-4bcd-8d1e-1c534e291152-service-ca\") pod \"43a0a350-8151-4bcd-8d1e-1c534e291152\" (UID: \"43a0a350-8151-4bcd-8d1e-1c534e291152\") " Jan 30 17:09:27 crc kubenswrapper[4712]: I0130 17:09:27.676106 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43a0a350-8151-4bcd-8d1e-1c534e291152-oauth-serving-cert\") pod \"43a0a350-8151-4bcd-8d1e-1c534e291152\" (UID: \"43a0a350-8151-4bcd-8d1e-1c534e291152\") " Jan 30 17:09:27 crc kubenswrapper[4712]: I0130 17:09:27.676140 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43a0a350-8151-4bcd-8d1e-1c534e291152-console-oauth-config\") pod \"43a0a350-8151-4bcd-8d1e-1c534e291152\" (UID: \"43a0a350-8151-4bcd-8d1e-1c534e291152\") " Jan 30 17:09:27 crc kubenswrapper[4712]: I0130 17:09:27.676175 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43a0a350-8151-4bcd-8d1e-1c534e291152-console-serving-cert\") pod \"43a0a350-8151-4bcd-8d1e-1c534e291152\" (UID: \"43a0a350-8151-4bcd-8d1e-1c534e291152\") " Jan 30 17:09:27 crc kubenswrapper[4712]: I0130 17:09:27.679339 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43a0a350-8151-4bcd-8d1e-1c534e291152-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43a0a350-8151-4bcd-8d1e-1c534e291152" (UID: "43a0a350-8151-4bcd-8d1e-1c534e291152"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:09:27 crc kubenswrapper[4712]: I0130 17:09:27.679352 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43a0a350-8151-4bcd-8d1e-1c534e291152-service-ca" (OuterVolumeSpecName: "service-ca") pod "43a0a350-8151-4bcd-8d1e-1c534e291152" (UID: "43a0a350-8151-4bcd-8d1e-1c534e291152"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:09:27 crc kubenswrapper[4712]: I0130 17:09:27.679582 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43a0a350-8151-4bcd-8d1e-1c534e291152-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43a0a350-8151-4bcd-8d1e-1c534e291152" (UID: "43a0a350-8151-4bcd-8d1e-1c534e291152"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:09:27 crc kubenswrapper[4712]: I0130 17:09:27.679924 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43a0a350-8151-4bcd-8d1e-1c534e291152-console-config" (OuterVolumeSpecName: "console-config") pod "43a0a350-8151-4bcd-8d1e-1c534e291152" (UID: "43a0a350-8151-4bcd-8d1e-1c534e291152"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:09:27 crc kubenswrapper[4712]: I0130 17:09:27.695067 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43a0a350-8151-4bcd-8d1e-1c534e291152-kube-api-access-hkknm" (OuterVolumeSpecName: "kube-api-access-hkknm") pod "43a0a350-8151-4bcd-8d1e-1c534e291152" (UID: "43a0a350-8151-4bcd-8d1e-1c534e291152"). InnerVolumeSpecName "kube-api-access-hkknm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:09:27 crc kubenswrapper[4712]: I0130 17:09:27.695890 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43a0a350-8151-4bcd-8d1e-1c534e291152-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43a0a350-8151-4bcd-8d1e-1c534e291152" (UID: "43a0a350-8151-4bcd-8d1e-1c534e291152"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:09:27 crc kubenswrapper[4712]: I0130 17:09:27.696229 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43a0a350-8151-4bcd-8d1e-1c534e291152-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43a0a350-8151-4bcd-8d1e-1c534e291152" (UID: "43a0a350-8151-4bcd-8d1e-1c534e291152"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:09:27 crc kubenswrapper[4712]: I0130 17:09:27.778075 4712 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43a0a350-8151-4bcd-8d1e-1c534e291152-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:09:27 crc kubenswrapper[4712]: I0130 17:09:27.778111 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkknm\" (UniqueName: \"kubernetes.io/projected/43a0a350-8151-4bcd-8d1e-1c534e291152-kube-api-access-hkknm\") on node \"crc\" DevicePath \"\"" Jan 30 17:09:27 crc kubenswrapper[4712]: I0130 17:09:27.778124 4712 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43a0a350-8151-4bcd-8d1e-1c534e291152-service-ca\") on node \"crc\" DevicePath \"\"" Jan 30 17:09:27 crc kubenswrapper[4712]: I0130 17:09:27.778135 4712 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43a0a350-8151-4bcd-8d1e-1c534e291152-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 17:09:27 crc kubenswrapper[4712]: I0130 17:09:27.778146 4712 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43a0a350-8151-4bcd-8d1e-1c534e291152-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:09:27 crc kubenswrapper[4712]: I0130 17:09:27.778156 4712 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43a0a350-8151-4bcd-8d1e-1c534e291152-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 30 17:09:27 crc kubenswrapper[4712]: I0130 17:09:27.778166 4712 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43a0a350-8151-4bcd-8d1e-1c534e291152-console-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:09:27 crc kubenswrapper[4712]: I0130 17:09:27.952507 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-jx2s9"] Jan 30 17:09:27 crc kubenswrapper[4712]: I0130 17:09:27.957336 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-jx2s9"] Jan 30 17:09:29 crc kubenswrapper[4712]: I0130 17:09:29.811125 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43a0a350-8151-4bcd-8d1e-1c534e291152" path="/var/lib/kubelet/pods/43a0a350-8151-4bcd-8d1e-1c534e291152/volumes" Jan 30 17:09:30 crc kubenswrapper[4712]: I0130 17:09:30.282496 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc424zh"] Jan 30 17:09:30 crc kubenswrapper[4712]: E0130 17:09:30.282988 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43a0a350-8151-4bcd-8d1e-1c534e291152" containerName="console" Jan 30 17:09:30 crc kubenswrapper[4712]: I0130 17:09:30.282999 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="43a0a350-8151-4bcd-8d1e-1c534e291152" containerName="console" Jan 30 17:09:30 crc kubenswrapper[4712]: I0130 17:09:30.283126 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="43a0a350-8151-4bcd-8d1e-1c534e291152" containerName="console" Jan 30 17:09:30 crc kubenswrapper[4712]: I0130 17:09:30.283892 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc424zh" Jan 30 17:09:30 crc kubenswrapper[4712]: I0130 17:09:30.286448 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 30 17:09:30 crc kubenswrapper[4712]: I0130 17:09:30.291123 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc424zh"] Jan 30 17:09:30 crc kubenswrapper[4712]: I0130 17:09:30.411384 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vpr2\" (UniqueName: \"kubernetes.io/projected/648c4614-a929-4395-b743-253cde42a583-kube-api-access-4vpr2\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc424zh\" (UID: \"648c4614-a929-4395-b743-253cde42a583\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc424zh" Jan 30 17:09:30 crc kubenswrapper[4712]: I0130 17:09:30.411422 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/648c4614-a929-4395-b743-253cde42a583-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc424zh\" (UID: \"648c4614-a929-4395-b743-253cde42a583\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc424zh" Jan 30 17:09:30 crc kubenswrapper[4712]: I0130 17:09:30.411470 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/648c4614-a929-4395-b743-253cde42a583-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc424zh\" (UID: \"648c4614-a929-4395-b743-253cde42a583\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc424zh" Jan 30 17:09:30 crc kubenswrapper[4712]: I0130 17:09:30.513087 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/648c4614-a929-4395-b743-253cde42a583-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc424zh\" (UID: \"648c4614-a929-4395-b743-253cde42a583\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc424zh" Jan 30 17:09:30 crc kubenswrapper[4712]: I0130 17:09:30.513168 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/648c4614-a929-4395-b743-253cde42a583-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc424zh\" (UID: \"648c4614-a929-4395-b743-253cde42a583\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc424zh" Jan 30 17:09:30 crc kubenswrapper[4712]: I0130 17:09:30.513239 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vpr2\" (UniqueName: \"kubernetes.io/projected/648c4614-a929-4395-b743-253cde42a583-kube-api-access-4vpr2\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc424zh\" (UID: \"648c4614-a929-4395-b743-253cde42a583\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc424zh" Jan 30 17:09:30 crc kubenswrapper[4712]: I0130 17:09:30.514034 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/648c4614-a929-4395-b743-253cde42a583-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc424zh\" (UID: \"648c4614-a929-4395-b743-253cde42a583\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc424zh" Jan 30 17:09:30 crc kubenswrapper[4712]: I0130 17:09:30.514049 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/648c4614-a929-4395-b743-253cde42a583-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc424zh\" (UID: \"648c4614-a929-4395-b743-253cde42a583\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc424zh" Jan 30 17:09:30 crc kubenswrapper[4712]: I0130 17:09:30.542985 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vpr2\" (UniqueName: \"kubernetes.io/projected/648c4614-a929-4395-b743-253cde42a583-kube-api-access-4vpr2\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc424zh\" (UID: \"648c4614-a929-4395-b743-253cde42a583\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc424zh" Jan 30 17:09:30 crc kubenswrapper[4712]: I0130 17:09:30.598790 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc424zh" Jan 30 17:09:30 crc kubenswrapper[4712]: I0130 17:09:30.999053 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc424zh"] Jan 30 17:09:31 crc kubenswrapper[4712]: I0130 17:09:31.661119 4712 generic.go:334] "Generic (PLEG): container finished" podID="648c4614-a929-4395-b743-253cde42a583" containerID="a883f9b673e3e58820d7c302a5c711260ef502da935e9488ac63ade274adb7f4" exitCode=0 Jan 30 17:09:31 crc kubenswrapper[4712]: I0130 17:09:31.661168 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc424zh" event={"ID":"648c4614-a929-4395-b743-253cde42a583","Type":"ContainerDied","Data":"a883f9b673e3e58820d7c302a5c711260ef502da935e9488ac63ade274adb7f4"} Jan 30 17:09:31 crc kubenswrapper[4712]: I0130 17:09:31.661200 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc424zh" event={"ID":"648c4614-a929-4395-b743-253cde42a583","Type":"ContainerStarted","Data":"c3b9f9234863ced16105eb6897417d55fa5f9512bd0f016c00a46dd609316b04"} Jan 30 17:09:33 crc kubenswrapper[4712]: I0130 17:09:33.678041 4712 generic.go:334] "Generic (PLEG): container finished" podID="648c4614-a929-4395-b743-253cde42a583" containerID="fe9f904f46eb82139a80bf6460327f70dd73d8ad1f18157a234b16997ac74d1d" exitCode=0 Jan 30 17:09:33 crc kubenswrapper[4712]: I0130 17:09:33.678086 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc424zh" event={"ID":"648c4614-a929-4395-b743-253cde42a583","Type":"ContainerDied","Data":"fe9f904f46eb82139a80bf6460327f70dd73d8ad1f18157a234b16997ac74d1d"} Jan 30 17:09:34 crc kubenswrapper[4712]: I0130 17:09:34.688844 4712 generic.go:334] "Generic (PLEG): container finished" podID="648c4614-a929-4395-b743-253cde42a583" containerID="7186c9d452d4803b6568c5cc810ebbb3f1bb7d20c57a17251fe46d7c7f967d0c" exitCode=0 Jan 30 17:09:34 crc kubenswrapper[4712]: I0130 
17:09:34.689165 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc424zh" event={"ID":"648c4614-a929-4395-b743-253cde42a583","Type":"ContainerDied","Data":"7186c9d452d4803b6568c5cc810ebbb3f1bb7d20c57a17251fe46d7c7f967d0c"} Jan 30 17:09:35 crc kubenswrapper[4712]: I0130 17:09:35.880704 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc424zh" Jan 30 17:09:35 crc kubenswrapper[4712]: I0130 17:09:35.987917 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/648c4614-a929-4395-b743-253cde42a583-bundle\") pod \"648c4614-a929-4395-b743-253cde42a583\" (UID: \"648c4614-a929-4395-b743-253cde42a583\") " Jan 30 17:09:35 crc kubenswrapper[4712]: I0130 17:09:35.988010 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/648c4614-a929-4395-b743-253cde42a583-util\") pod \"648c4614-a929-4395-b743-253cde42a583\" (UID: \"648c4614-a929-4395-b743-253cde42a583\") " Jan 30 17:09:35 crc kubenswrapper[4712]: I0130 17:09:35.988053 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vpr2\" (UniqueName: \"kubernetes.io/projected/648c4614-a929-4395-b743-253cde42a583-kube-api-access-4vpr2\") pod \"648c4614-a929-4395-b743-253cde42a583\" (UID: \"648c4614-a929-4395-b743-253cde42a583\") " Jan 30 17:09:35 crc kubenswrapper[4712]: I0130 17:09:35.989212 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/648c4614-a929-4395-b743-253cde42a583-bundle" (OuterVolumeSpecName: "bundle") pod "648c4614-a929-4395-b743-253cde42a583" (UID: "648c4614-a929-4395-b743-253cde42a583"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:09:35 crc kubenswrapper[4712]: I0130 17:09:35.996684 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/648c4614-a929-4395-b743-253cde42a583-kube-api-access-4vpr2" (OuterVolumeSpecName: "kube-api-access-4vpr2") pod "648c4614-a929-4395-b743-253cde42a583" (UID: "648c4614-a929-4395-b743-253cde42a583"). InnerVolumeSpecName "kube-api-access-4vpr2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:09:36 crc kubenswrapper[4712]: I0130 17:09:36.006289 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/648c4614-a929-4395-b743-253cde42a583-util" (OuterVolumeSpecName: "util") pod "648c4614-a929-4395-b743-253cde42a583" (UID: "648c4614-a929-4395-b743-253cde42a583"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:09:36 crc kubenswrapper[4712]: I0130 17:09:36.090150 4712 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/648c4614-a929-4395-b743-253cde42a583-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:09:36 crc kubenswrapper[4712]: I0130 17:09:36.090858 4712 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/648c4614-a929-4395-b743-253cde42a583-util\") on node \"crc\" DevicePath \"\"" Jan 30 17:09:36 crc kubenswrapper[4712]: I0130 17:09:36.090873 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vpr2\" (UniqueName: \"kubernetes.io/projected/648c4614-a929-4395-b743-253cde42a583-kube-api-access-4vpr2\") on node \"crc\" DevicePath \"\"" Jan 30 17:09:36 crc kubenswrapper[4712]: I0130 17:09:36.704186 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc424zh" event={"ID":"648c4614-a929-4395-b743-253cde42a583","Type":"ContainerDied","Data":"c3b9f9234863ced16105eb6897417d55fa5f9512bd0f016c00a46dd609316b04"} Jan 30 17:09:36 crc kubenswrapper[4712]: I0130 17:09:36.704235 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3b9f9234863ced16105eb6897417d55fa5f9512bd0f016c00a46dd609316b04" Jan 30 17:09:36 crc kubenswrapper[4712]: I0130 17:09:36.704286 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc424zh" Jan 30 17:09:45 crc kubenswrapper[4712]: I0130 17:09:45.639073 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-d574845cc-9l79n"] Jan 30 17:09:45 crc kubenswrapper[4712]: E0130 17:09:45.639766 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="648c4614-a929-4395-b743-253cde42a583" containerName="extract" Jan 30 17:09:45 crc kubenswrapper[4712]: I0130 17:09:45.639778 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="648c4614-a929-4395-b743-253cde42a583" containerName="extract" Jan 30 17:09:45 crc kubenswrapper[4712]: E0130 17:09:45.639795 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="648c4614-a929-4395-b743-253cde42a583" containerName="pull" Jan 30 17:09:45 crc kubenswrapper[4712]: I0130 17:09:45.639819 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="648c4614-a929-4395-b743-253cde42a583" containerName="pull" Jan 30 17:09:45 crc kubenswrapper[4712]: E0130 17:09:45.639828 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="648c4614-a929-4395-b743-253cde42a583" containerName="util" Jan 30 17:09:45 crc kubenswrapper[4712]: I0130 17:09:45.639833 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="648c4614-a929-4395-b743-253cde42a583" containerName="util" Jan 30 17:09:45 crc kubenswrapper[4712]: I0130 17:09:45.639927 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="648c4614-a929-4395-b743-253cde42a583" containerName="extract" Jan 30 17:09:45 crc kubenswrapper[4712]: I0130 17:09:45.640301 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-d574845cc-9l79n" Jan 30 17:09:45 crc kubenswrapper[4712]: I0130 17:09:45.647230 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 30 17:09:45 crc kubenswrapper[4712]: I0130 17:09:45.647562 4712 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-skpxv" Jan 30 17:09:45 crc kubenswrapper[4712]: I0130 17:09:45.647690 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 30 17:09:45 crc kubenswrapper[4712]: I0130 17:09:45.655081 4712 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 30 17:09:45 crc kubenswrapper[4712]: I0130 17:09:45.656200 4712 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 30 17:09:45 crc kubenswrapper[4712]: I0130 17:09:45.677383 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-d574845cc-9l79n"] Jan 30 17:09:45 crc kubenswrapper[4712]: I0130 17:09:45.812909 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85gb2\" (UniqueName: \"kubernetes.io/projected/5ad57c84-b9da-4613-92e6-0bfe23a14d69-kube-api-access-85gb2\") pod \"metallb-operator-controller-manager-d574845cc-9l79n\" (UID: \"5ad57c84-b9da-4613-92e6-0bfe23a14d69\") " pod="metallb-system/metallb-operator-controller-manager-d574845cc-9l79n" Jan 30 17:09:45 crc kubenswrapper[4712]: I0130 17:09:45.812966 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5ad57c84-b9da-4613-92e6-0bfe23a14d69-webhook-cert\") pod \"metallb-operator-controller-manager-d574845cc-9l79n\" (UID: \"5ad57c84-b9da-4613-92e6-0bfe23a14d69\") " pod="metallb-system/metallb-operator-controller-manager-d574845cc-9l79n" Jan 30 17:09:45 crc kubenswrapper[4712]: I0130 17:09:45.812993 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5ad57c84-b9da-4613-92e6-0bfe23a14d69-apiservice-cert\") pod \"metallb-operator-controller-manager-d574845cc-9l79n\" (UID: \"5ad57c84-b9da-4613-92e6-0bfe23a14d69\") " pod="metallb-system/metallb-operator-controller-manager-d574845cc-9l79n" Jan 30 17:09:45 crc kubenswrapper[4712]: I0130 17:09:45.914107 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85gb2\" (UniqueName: \"kubernetes.io/projected/5ad57c84-b9da-4613-92e6-0bfe23a14d69-kube-api-access-85gb2\") pod \"metallb-operator-controller-manager-d574845cc-9l79n\" (UID: \"5ad57c84-b9da-4613-92e6-0bfe23a14d69\") " pod="metallb-system/metallb-operator-controller-manager-d574845cc-9l79n" Jan 30 17:09:45 crc kubenswrapper[4712]: I0130 17:09:45.914172 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5ad57c84-b9da-4613-92e6-0bfe23a14d69-webhook-cert\") pod \"metallb-operator-controller-manager-d574845cc-9l79n\" (UID: \"5ad57c84-b9da-4613-92e6-0bfe23a14d69\") " pod="metallb-system/metallb-operator-controller-manager-d574845cc-9l79n" Jan 30 17:09:45 crc kubenswrapper[4712]: I0130 17:09:45.914196 4712 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5ad57c84-b9da-4613-92e6-0bfe23a14d69-apiservice-cert\") pod \"metallb-operator-controller-manager-d574845cc-9l79n\" (UID: \"5ad57c84-b9da-4613-92e6-0bfe23a14d69\") " pod="metallb-system/metallb-operator-controller-manager-d574845cc-9l79n" Jan 30 17:09:45 crc kubenswrapper[4712]: I0130 17:09:45.919912 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5ad57c84-b9da-4613-92e6-0bfe23a14d69-webhook-cert\") pod \"metallb-operator-controller-manager-d574845cc-9l79n\" (UID: \"5ad57c84-b9da-4613-92e6-0bfe23a14d69\") " pod="metallb-system/metallb-operator-controller-manager-d574845cc-9l79n" Jan 30 17:09:45 crc kubenswrapper[4712]: I0130 17:09:45.927430 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5ad57c84-b9da-4613-92e6-0bfe23a14d69-apiservice-cert\") pod \"metallb-operator-controller-manager-d574845cc-9l79n\" (UID: \"5ad57c84-b9da-4613-92e6-0bfe23a14d69\") " pod="metallb-system/metallb-operator-controller-manager-d574845cc-9l79n" Jan 30 17:09:45 crc kubenswrapper[4712]: I0130 17:09:45.939272 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85gb2\" (UniqueName: \"kubernetes.io/projected/5ad57c84-b9da-4613-92e6-0bfe23a14d69-kube-api-access-85gb2\") pod \"metallb-operator-controller-manager-d574845cc-9l79n\" (UID: \"5ad57c84-b9da-4613-92e6-0bfe23a14d69\") " pod="metallb-system/metallb-operator-controller-manager-d574845cc-9l79n" Jan 30 17:09:45 crc kubenswrapper[4712]: I0130 17:09:45.956441 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-d574845cc-9l79n" Jan 30 17:09:46 crc kubenswrapper[4712]: I0130 17:09:46.000461 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-58dccfbb96-pxb54"] Jan 30 17:09:46 crc kubenswrapper[4712]: I0130 17:09:46.001260 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-58dccfbb96-pxb54" Jan 30 17:09:46 crc kubenswrapper[4712]: I0130 17:09:46.011512 4712 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 30 17:09:46 crc kubenswrapper[4712]: I0130 17:09:46.027146 4712 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 30 17:09:46 crc kubenswrapper[4712]: I0130 17:09:46.043602 4712 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-w5x8j" Jan 30 17:09:46 crc kubenswrapper[4712]: I0130 17:09:46.058568 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-58dccfbb96-pxb54"] Jan 30 17:09:46 crc kubenswrapper[4712]: I0130 17:09:46.126668 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnz2d\" (UniqueName: \"kubernetes.io/projected/5fe7be15-f524-46c1-ba58-e2d8ccd001c0-kube-api-access-hnz2d\") pod \"metallb-operator-webhook-server-58dccfbb96-pxb54\" (UID: \"5fe7be15-f524-46c1-ba58-e2d8ccd001c0\") " pod="metallb-system/metallb-operator-webhook-server-58dccfbb96-pxb54" Jan 30 17:09:46 crc kubenswrapper[4712]: I0130 17:09:46.126810 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5fe7be15-f524-46c1-ba58-e2d8ccd001c0-apiservice-cert\") pod \"metallb-operator-webhook-server-58dccfbb96-pxb54\" (UID: \"5fe7be15-f524-46c1-ba58-e2d8ccd001c0\") " pod="metallb-system/metallb-operator-webhook-server-58dccfbb96-pxb54" Jan 30 17:09:46 crc kubenswrapper[4712]: I0130 17:09:46.126852 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5fe7be15-f524-46c1-ba58-e2d8ccd001c0-webhook-cert\") pod \"metallb-operator-webhook-server-58dccfbb96-pxb54\" (UID: \"5fe7be15-f524-46c1-ba58-e2d8ccd001c0\") " pod="metallb-system/metallb-operator-webhook-server-58dccfbb96-pxb54" Jan 30 17:09:46 crc kubenswrapper[4712]: I0130 17:09:46.227474 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnz2d\" (UniqueName: \"kubernetes.io/projected/5fe7be15-f524-46c1-ba58-e2d8ccd001c0-kube-api-access-hnz2d\") pod \"metallb-operator-webhook-server-58dccfbb96-pxb54\" (UID: \"5fe7be15-f524-46c1-ba58-e2d8ccd001c0\") " pod="metallb-system/metallb-operator-webhook-server-58dccfbb96-pxb54" Jan 30 17:09:46 crc kubenswrapper[4712]: I0130 17:09:46.227552 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5fe7be15-f524-46c1-ba58-e2d8ccd001c0-apiservice-cert\") pod \"metallb-operator-webhook-server-58dccfbb96-pxb54\" (UID: \"5fe7be15-f524-46c1-ba58-e2d8ccd001c0\") " pod="metallb-system/metallb-operator-webhook-server-58dccfbb96-pxb54" Jan 30 17:09:46 crc kubenswrapper[4712]: I0130 17:09:46.227582 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5fe7be15-f524-46c1-ba58-e2d8ccd001c0-webhook-cert\") pod \"metallb-operator-webhook-server-58dccfbb96-pxb54\" (UID: \"5fe7be15-f524-46c1-ba58-e2d8ccd001c0\") " pod="metallb-system/metallb-operator-webhook-server-58dccfbb96-pxb54" Jan 30 17:09:46 crc kubenswrapper[4712]: I0130 
17:09:46.236717 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5fe7be15-f524-46c1-ba58-e2d8ccd001c0-webhook-cert\") pod \"metallb-operator-webhook-server-58dccfbb96-pxb54\" (UID: \"5fe7be15-f524-46c1-ba58-e2d8ccd001c0\") " pod="metallb-system/metallb-operator-webhook-server-58dccfbb96-pxb54" Jan 30 17:09:46 crc kubenswrapper[4712]: I0130 17:09:46.244572 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5fe7be15-f524-46c1-ba58-e2d8ccd001c0-apiservice-cert\") pod \"metallb-operator-webhook-server-58dccfbb96-pxb54\" (UID: \"5fe7be15-f524-46c1-ba58-e2d8ccd001c0\") " pod="metallb-system/metallb-operator-webhook-server-58dccfbb96-pxb54" Jan 30 17:09:46 crc kubenswrapper[4712]: I0130 17:09:46.280565 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnz2d\" (UniqueName: \"kubernetes.io/projected/5fe7be15-f524-46c1-ba58-e2d8ccd001c0-kube-api-access-hnz2d\") pod \"metallb-operator-webhook-server-58dccfbb96-pxb54\" (UID: \"5fe7be15-f524-46c1-ba58-e2d8ccd001c0\") " pod="metallb-system/metallb-operator-webhook-server-58dccfbb96-pxb54" Jan 30 17:09:46 crc kubenswrapper[4712]: I0130 17:09:46.347086 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-58dccfbb96-pxb54" Jan 30 17:09:46 crc kubenswrapper[4712]: I0130 17:09:46.392281 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-d574845cc-9l79n"] Jan 30 17:09:46 crc kubenswrapper[4712]: W0130 17:09:46.428469 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5ad57c84_b9da_4613_92e6_0bfe23a14d69.slice/crio-65f0a8b309bacab6e50f3f769633d0368986f385eceb11a94f57488fc1282103 WatchSource:0}: Error finding container 65f0a8b309bacab6e50f3f769633d0368986f385eceb11a94f57488fc1282103: Status 404 returned error can't find the container with id 65f0a8b309bacab6e50f3f769633d0368986f385eceb11a94f57488fc1282103 Jan 30 17:09:46 crc kubenswrapper[4712]: I0130 17:09:46.673111 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-58dccfbb96-pxb54"] Jan 30 17:09:46 crc kubenswrapper[4712]: I0130 17:09:46.761023 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-58dccfbb96-pxb54" event={"ID":"5fe7be15-f524-46c1-ba58-e2d8ccd001c0","Type":"ContainerStarted","Data":"d18f0301ec3bef2731a470444a3fd6ed218ac57eda9e3aebfd1dbc4c7b33f49f"} Jan 30 17:09:46 crc kubenswrapper[4712]: I0130 17:09:46.762914 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-d574845cc-9l79n" event={"ID":"5ad57c84-b9da-4613-92e6-0bfe23a14d69","Type":"ContainerStarted","Data":"65f0a8b309bacab6e50f3f769633d0368986f385eceb11a94f57488fc1282103"} Jan 30 17:09:50 crc kubenswrapper[4712]: I0130 17:09:50.790164 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-d574845cc-9l79n" event={"ID":"5ad57c84-b9da-4613-92e6-0bfe23a14d69","Type":"ContainerStarted","Data":"1de9dfde392e97ff66538fcf3c4426f8b9fe11eacae202adfec491606691230f"} Jan 30 17:09:50 crc kubenswrapper[4712]: I0130 17:09:50.790740 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-controller-manager-d574845cc-9l79n" Jan 30 17:09:50 crc kubenswrapper[4712]: I0130 17:09:50.814495 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-d574845cc-9l79n" podStartSLOduration=2.383959306 podStartE2EDuration="5.814473733s" podCreationTimestamp="2026-01-30 17:09:45 +0000 UTC" firstStartedPulling="2026-01-30 17:09:46.434301519 +0000 UTC m=+923.341310988" lastFinishedPulling="2026-01-30 17:09:49.864815946 +0000 UTC m=+926.771825415" observedRunningTime="2026-01-30 17:09:50.812090706 +0000 UTC m=+927.719100185" watchObservedRunningTime="2026-01-30 17:09:50.814473733 +0000 UTC m=+927.721483212" Jan 30 17:09:54 crc kubenswrapper[4712]: I0130 17:09:54.814405 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-58dccfbb96-pxb54" event={"ID":"5fe7be15-f524-46c1-ba58-e2d8ccd001c0","Type":"ContainerStarted","Data":"d2f648970ab6b5218373eaa4eeaf1c04e0ee91c91fa9f9c7540682dfaaaa8a13"} Jan 30 17:09:54 crc kubenswrapper[4712]: I0130 17:09:54.815093 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-58dccfbb96-pxb54" Jan 30 17:09:54 crc kubenswrapper[4712]: I0130 17:09:54.835883 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-58dccfbb96-pxb54" podStartSLOduration=2.7084996329999997 podStartE2EDuration="9.835865242s" podCreationTimestamp="2026-01-30 17:09:45 +0000 UTC" firstStartedPulling="2026-01-30 17:09:46.689724972 +0000 UTC m=+923.596734441" lastFinishedPulling="2026-01-30 17:09:53.817090591 +0000 UTC m=+930.724100050" observedRunningTime="2026-01-30 17:09:54.831426885 +0000 UTC m=+931.738436374" watchObservedRunningTime="2026-01-30 17:09:54.835865242 +0000 UTC m=+931.742874711" Jan 30 17:10:06 crc kubenswrapper[4712]: I0130 17:10:06.352227 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-58dccfbb96-pxb54" Jan 30 17:10:09 crc kubenswrapper[4712]: I0130 17:10:09.327188 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pw626"] Jan 30 17:10:09 crc kubenswrapper[4712]: I0130 17:10:09.328628 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pw626" Jan 30 17:10:09 crc kubenswrapper[4712]: I0130 17:10:09.340110 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pw626"] Jan 30 17:10:09 crc kubenswrapper[4712]: I0130 17:10:09.436281 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7488365-0d1b-48d7-94a2-ab2610cf5bbf-utilities\") pod \"community-operators-pw626\" (UID: \"c7488365-0d1b-48d7-94a2-ab2610cf5bbf\") " pod="openshift-marketplace/community-operators-pw626" Jan 30 17:10:09 crc kubenswrapper[4712]: I0130 17:10:09.436365 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7488365-0d1b-48d7-94a2-ab2610cf5bbf-catalog-content\") pod \"community-operators-pw626\" (UID: \"c7488365-0d1b-48d7-94a2-ab2610cf5bbf\") " pod="openshift-marketplace/community-operators-pw626" Jan 30 17:10:09 crc kubenswrapper[4712]: I0130 17:10:09.436393 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kmw6\" (UniqueName: \"kubernetes.io/projected/c7488365-0d1b-48d7-94a2-ab2610cf5bbf-kube-api-access-2kmw6\") pod \"community-operators-pw626\" (UID: \"c7488365-0d1b-48d7-94a2-ab2610cf5bbf\") " pod="openshift-marketplace/community-operators-pw626" Jan 30 17:10:09 crc kubenswrapper[4712]: I0130 17:10:09.537556 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7488365-0d1b-48d7-94a2-ab2610cf5bbf-catalog-content\") pod \"community-operators-pw626\" (UID: \"c7488365-0d1b-48d7-94a2-ab2610cf5bbf\") " pod="openshift-marketplace/community-operators-pw626" Jan 30 17:10:09 crc kubenswrapper[4712]: I0130 17:10:09.537610 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kmw6\" (UniqueName: \"kubernetes.io/projected/c7488365-0d1b-48d7-94a2-ab2610cf5bbf-kube-api-access-2kmw6\") pod \"community-operators-pw626\" (UID: \"c7488365-0d1b-48d7-94a2-ab2610cf5bbf\") " pod="openshift-marketplace/community-operators-pw626" Jan 30 17:10:09 crc kubenswrapper[4712]: I0130 17:10:09.537658 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7488365-0d1b-48d7-94a2-ab2610cf5bbf-utilities\") pod \"community-operators-pw626\" (UID: \"c7488365-0d1b-48d7-94a2-ab2610cf5bbf\") " pod="openshift-marketplace/community-operators-pw626" Jan 30 17:10:09 crc kubenswrapper[4712]: I0130 17:10:09.538119 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7488365-0d1b-48d7-94a2-ab2610cf5bbf-catalog-content\") pod \"community-operators-pw626\" (UID: \"c7488365-0d1b-48d7-94a2-ab2610cf5bbf\") " pod="openshift-marketplace/community-operators-pw626" Jan 30 17:10:09 crc kubenswrapper[4712]: I0130 17:10:09.538143 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7488365-0d1b-48d7-94a2-ab2610cf5bbf-utilities\") pod \"community-operators-pw626\" (UID: \"c7488365-0d1b-48d7-94a2-ab2610cf5bbf\") " pod="openshift-marketplace/community-operators-pw626" Jan 30 17:10:09 crc kubenswrapper[4712]: I0130 17:10:09.559486 4712 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2kmw6\" (UniqueName: \"kubernetes.io/projected/c7488365-0d1b-48d7-94a2-ab2610cf5bbf-kube-api-access-2kmw6\") pod \"community-operators-pw626\" (UID: \"c7488365-0d1b-48d7-94a2-ab2610cf5bbf\") " pod="openshift-marketplace/community-operators-pw626" Jan 30 17:10:09 crc kubenswrapper[4712]: I0130 17:10:09.654767 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pw626" Jan 30 17:10:09 crc kubenswrapper[4712]: I0130 17:10:09.976113 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pw626"] Jan 30 17:10:10 crc kubenswrapper[4712]: I0130 17:10:10.904830 4712 generic.go:334] "Generic (PLEG): container finished" podID="c7488365-0d1b-48d7-94a2-ab2610cf5bbf" containerID="d3b999f6ea05640d8d44ad6d913707177b24670b13b608e340e441785f943894" exitCode=0 Jan 30 17:10:10 crc kubenswrapper[4712]: I0130 17:10:10.904882 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pw626" event={"ID":"c7488365-0d1b-48d7-94a2-ab2610cf5bbf","Type":"ContainerDied","Data":"d3b999f6ea05640d8d44ad6d913707177b24670b13b608e340e441785f943894"} Jan 30 17:10:10 crc kubenswrapper[4712]: I0130 17:10:10.905107 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pw626" event={"ID":"c7488365-0d1b-48d7-94a2-ab2610cf5bbf","Type":"ContainerStarted","Data":"204e26bbfc68e4f6f1108b326e3cb96999eff7fcf1393c60be11793b7d3dd569"} Jan 30 17:10:12 crc kubenswrapper[4712]: I0130 17:10:12.917842 4712 generic.go:334] "Generic (PLEG): container finished" podID="c7488365-0d1b-48d7-94a2-ab2610cf5bbf" containerID="7229504682b3682fb86a02499a9a7e365319f75e189215141d8e5f722211d5cd" exitCode=0 Jan 30 17:10:12 crc kubenswrapper[4712]: I0130 17:10:12.917928 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pw626" event={"ID":"c7488365-0d1b-48d7-94a2-ab2610cf5bbf","Type":"ContainerDied","Data":"7229504682b3682fb86a02499a9a7e365319f75e189215141d8e5f722211d5cd"} Jan 30 17:10:13 crc kubenswrapper[4712]: I0130 17:10:13.926934 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pw626" event={"ID":"c7488365-0d1b-48d7-94a2-ab2610cf5bbf","Type":"ContainerStarted","Data":"1182d85ab175df9c16c52e1220ef992e543ea2ab540470ed0629709d02e581d5"} Jan 30 17:10:13 crc kubenswrapper[4712]: I0130 17:10:13.943963 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pw626" podStartSLOduration=2.460183098 podStartE2EDuration="4.943943048s" podCreationTimestamp="2026-01-30 17:10:09 +0000 UTC" firstStartedPulling="2026-01-30 17:10:10.906140378 +0000 UTC m=+947.813149837" lastFinishedPulling="2026-01-30 17:10:13.389900318 +0000 UTC m=+950.296909787" observedRunningTime="2026-01-30 17:10:13.941273084 +0000 UTC m=+950.848282553" watchObservedRunningTime="2026-01-30 17:10:13.943943048 +0000 UTC m=+950.850952517" Jan 30 17:10:19 crc kubenswrapper[4712]: I0130 17:10:19.655685 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pw626" Jan 30 17:10:19 crc kubenswrapper[4712]: I0130 17:10:19.656121 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pw626" Jan 30 17:10:19 crc kubenswrapper[4712]: I0130 17:10:19.693448 4712 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pw626" Jan 30 17:10:20 crc kubenswrapper[4712]: I0130 17:10:20.006600 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pw626" Jan 30 17:10:20 crc kubenswrapper[4712]: I0130 17:10:20.055300 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pw626"] Jan 30 17:10:21 crc kubenswrapper[4712]: I0130 17:10:21.969255 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pw626" podUID="c7488365-0d1b-48d7-94a2-ab2610cf5bbf" containerName="registry-server" containerID="cri-o://1182d85ab175df9c16c52e1220ef992e543ea2ab540470ed0629709d02e581d5" gracePeriod=2 Jan 30 17:10:22 crc kubenswrapper[4712]: I0130 17:10:22.478422 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pw626" Jan 30 17:10:22 crc kubenswrapper[4712]: I0130 17:10:22.595968 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7488365-0d1b-48d7-94a2-ab2610cf5bbf-catalog-content\") pod \"c7488365-0d1b-48d7-94a2-ab2610cf5bbf\" (UID: \"c7488365-0d1b-48d7-94a2-ab2610cf5bbf\") " Jan 30 17:10:22 crc kubenswrapper[4712]: I0130 17:10:22.596029 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kmw6\" (UniqueName: \"kubernetes.io/projected/c7488365-0d1b-48d7-94a2-ab2610cf5bbf-kube-api-access-2kmw6\") pod \"c7488365-0d1b-48d7-94a2-ab2610cf5bbf\" (UID: \"c7488365-0d1b-48d7-94a2-ab2610cf5bbf\") " Jan 30 17:10:22 crc kubenswrapper[4712]: I0130 17:10:22.596126 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7488365-0d1b-48d7-94a2-ab2610cf5bbf-utilities\") pod \"c7488365-0d1b-48d7-94a2-ab2610cf5bbf\" (UID: \"c7488365-0d1b-48d7-94a2-ab2610cf5bbf\") " Jan 30 17:10:22 crc kubenswrapper[4712]: I0130 17:10:22.596883 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7488365-0d1b-48d7-94a2-ab2610cf5bbf-utilities" (OuterVolumeSpecName: "utilities") pod "c7488365-0d1b-48d7-94a2-ab2610cf5bbf" (UID: "c7488365-0d1b-48d7-94a2-ab2610cf5bbf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:10:22 crc kubenswrapper[4712]: I0130 17:10:22.601578 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7488365-0d1b-48d7-94a2-ab2610cf5bbf-kube-api-access-2kmw6" (OuterVolumeSpecName: "kube-api-access-2kmw6") pod "c7488365-0d1b-48d7-94a2-ab2610cf5bbf" (UID: "c7488365-0d1b-48d7-94a2-ab2610cf5bbf"). InnerVolumeSpecName "kube-api-access-2kmw6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:10:22 crc kubenswrapper[4712]: I0130 17:10:22.646733 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7488365-0d1b-48d7-94a2-ab2610cf5bbf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c7488365-0d1b-48d7-94a2-ab2610cf5bbf" (UID: "c7488365-0d1b-48d7-94a2-ab2610cf5bbf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:10:22 crc kubenswrapper[4712]: I0130 17:10:22.697285 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kmw6\" (UniqueName: \"kubernetes.io/projected/c7488365-0d1b-48d7-94a2-ab2610cf5bbf-kube-api-access-2kmw6\") on node \"crc\" DevicePath \"\"" Jan 30 17:10:22 crc kubenswrapper[4712]: I0130 17:10:22.697320 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7488365-0d1b-48d7-94a2-ab2610cf5bbf-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:10:22 crc kubenswrapper[4712]: I0130 17:10:22.697331 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7488365-0d1b-48d7-94a2-ab2610cf5bbf-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:10:22 crc kubenswrapper[4712]: I0130 17:10:22.976596 4712 generic.go:334] "Generic (PLEG): container finished" podID="c7488365-0d1b-48d7-94a2-ab2610cf5bbf" containerID="1182d85ab175df9c16c52e1220ef992e543ea2ab540470ed0629709d02e581d5" exitCode=0 Jan 30 17:10:22 crc kubenswrapper[4712]: I0130 17:10:22.976636 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pw626" event={"ID":"c7488365-0d1b-48d7-94a2-ab2610cf5bbf","Type":"ContainerDied","Data":"1182d85ab175df9c16c52e1220ef992e543ea2ab540470ed0629709d02e581d5"} Jan 30 17:10:22 crc kubenswrapper[4712]: I0130 17:10:22.976651 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pw626" Jan 30 17:10:22 crc kubenswrapper[4712]: I0130 17:10:22.976675 4712 scope.go:117] "RemoveContainer" containerID="1182d85ab175df9c16c52e1220ef992e543ea2ab540470ed0629709d02e581d5" Jan 30 17:10:22 crc kubenswrapper[4712]: I0130 17:10:22.976661 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pw626" event={"ID":"c7488365-0d1b-48d7-94a2-ab2610cf5bbf","Type":"ContainerDied","Data":"204e26bbfc68e4f6f1108b326e3cb96999eff7fcf1393c60be11793b7d3dd569"} Jan 30 17:10:22 crc kubenswrapper[4712]: I0130 17:10:22.996897 4712 scope.go:117] "RemoveContainer" containerID="7229504682b3682fb86a02499a9a7e365319f75e189215141d8e5f722211d5cd" Jan 30 17:10:23 crc kubenswrapper[4712]: I0130 17:10:23.008251 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pw626"] Jan 30 17:10:23 crc kubenswrapper[4712]: I0130 17:10:23.011157 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pw626"] Jan 30 17:10:23 crc kubenswrapper[4712]: I0130 17:10:23.026381 4712 scope.go:117] "RemoveContainer" containerID="d3b999f6ea05640d8d44ad6d913707177b24670b13b608e340e441785f943894" Jan 30 17:10:23 crc kubenswrapper[4712]: I0130 17:10:23.041593 4712 scope.go:117] "RemoveContainer" containerID="1182d85ab175df9c16c52e1220ef992e543ea2ab540470ed0629709d02e581d5" Jan 30 17:10:23 crc kubenswrapper[4712]: E0130 17:10:23.042032 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1182d85ab175df9c16c52e1220ef992e543ea2ab540470ed0629709d02e581d5\": container with ID starting with 1182d85ab175df9c16c52e1220ef992e543ea2ab540470ed0629709d02e581d5 not found: ID does not exist" containerID="1182d85ab175df9c16c52e1220ef992e543ea2ab540470ed0629709d02e581d5" Jan 30 17:10:23 crc kubenswrapper[4712]: I0130 17:10:23.042063 
4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1182d85ab175df9c16c52e1220ef992e543ea2ab540470ed0629709d02e581d5"} err="failed to get container status \"1182d85ab175df9c16c52e1220ef992e543ea2ab540470ed0629709d02e581d5\": rpc error: code = NotFound desc = could not find container \"1182d85ab175df9c16c52e1220ef992e543ea2ab540470ed0629709d02e581d5\": container with ID starting with 1182d85ab175df9c16c52e1220ef992e543ea2ab540470ed0629709d02e581d5 not found: ID does not exist" Jan 30 17:10:23 crc kubenswrapper[4712]: I0130 17:10:23.042084 4712 scope.go:117] "RemoveContainer" containerID="7229504682b3682fb86a02499a9a7e365319f75e189215141d8e5f722211d5cd" Jan 30 17:10:23 crc kubenswrapper[4712]: E0130 17:10:23.042376 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7229504682b3682fb86a02499a9a7e365319f75e189215141d8e5f722211d5cd\": container with ID starting with 7229504682b3682fb86a02499a9a7e365319f75e189215141d8e5f722211d5cd not found: ID does not exist" containerID="7229504682b3682fb86a02499a9a7e365319f75e189215141d8e5f722211d5cd" Jan 30 17:10:23 crc kubenswrapper[4712]: I0130 17:10:23.042410 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7229504682b3682fb86a02499a9a7e365319f75e189215141d8e5f722211d5cd"} err="failed to get container status \"7229504682b3682fb86a02499a9a7e365319f75e189215141d8e5f722211d5cd\": rpc error: code = NotFound desc = could not find container \"7229504682b3682fb86a02499a9a7e365319f75e189215141d8e5f722211d5cd\": container with ID starting with 7229504682b3682fb86a02499a9a7e365319f75e189215141d8e5f722211d5cd not found: ID does not exist" Jan 30 17:10:23 crc kubenswrapper[4712]: I0130 17:10:23.042434 4712 scope.go:117] "RemoveContainer" containerID="d3b999f6ea05640d8d44ad6d913707177b24670b13b608e340e441785f943894" Jan 30 17:10:23 crc kubenswrapper[4712]: E0130 17:10:23.042731 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3b999f6ea05640d8d44ad6d913707177b24670b13b608e340e441785f943894\": container with ID starting with d3b999f6ea05640d8d44ad6d913707177b24670b13b608e340e441785f943894 not found: ID does not exist" containerID="d3b999f6ea05640d8d44ad6d913707177b24670b13b608e340e441785f943894" Jan 30 17:10:23 crc kubenswrapper[4712]: I0130 17:10:23.042769 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3b999f6ea05640d8d44ad6d913707177b24670b13b608e340e441785f943894"} err="failed to get container status \"d3b999f6ea05640d8d44ad6d913707177b24670b13b608e340e441785f943894\": rpc error: code = NotFound desc = could not find container \"d3b999f6ea05640d8d44ad6d913707177b24670b13b608e340e441785f943894\": container with ID starting with d3b999f6ea05640d8d44ad6d913707177b24670b13b608e340e441785f943894 not found: ID does not exist" Jan 30 17:10:23 crc kubenswrapper[4712]: I0130 17:10:23.808512 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7488365-0d1b-48d7-94a2-ab2610cf5bbf" path="/var/lib/kubelet/pods/c7488365-0d1b-48d7-94a2-ab2610cf5bbf/volumes" Jan 30 17:10:25 crc kubenswrapper[4712]: I0130 17:10:25.961542 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-d574845cc-9l79n" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.660258 4712 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-vkxrq"] Jan 30 17:10:26 crc kubenswrapper[4712]: E0130 17:10:26.660842 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7488365-0d1b-48d7-94a2-ab2610cf5bbf" containerName="registry-server" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.660884 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7488365-0d1b-48d7-94a2-ab2610cf5bbf" containerName="registry-server" Jan 30 17:10:26 crc kubenswrapper[4712]: E0130 17:10:26.660901 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7488365-0d1b-48d7-94a2-ab2610cf5bbf" containerName="extract-content" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.660909 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7488365-0d1b-48d7-94a2-ab2610cf5bbf" containerName="extract-content" Jan 30 17:10:26 crc kubenswrapper[4712]: E0130 17:10:26.660926 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7488365-0d1b-48d7-94a2-ab2610cf5bbf" containerName="extract-utilities" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.660935 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7488365-0d1b-48d7-94a2-ab2610cf5bbf" containerName="extract-utilities" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.661053 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7488365-0d1b-48d7-94a2-ab2610cf5bbf" containerName="registry-server" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.661423 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vkxrq" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.665643 4712 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.668678 4712 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-lt9vj" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.677043 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-j9bpz"] Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.679093 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-j9bpz" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.683486 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.683881 4712 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.690849 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-vkxrq"] Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.783022 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-gmjr9"] Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.783952 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-gmjr9" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.788158 4712 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-stfq8" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.788236 4712 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.788355 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.788387 4712 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.800219 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-kr8vp"] Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.801092 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-kr8vp" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.802640 4712 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.812545 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-kr8vp"] Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.847664 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/7d1e2433-a99b-4b29-8f58-e21a7745d1d9-frr-startup\") pod \"frr-k8s-j9bpz\" (UID: \"7d1e2433-a99b-4b29-8f58-e21a7745d1d9\") " pod="metallb-system/frr-k8s-j9bpz" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.847710 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnp5l\" (UniqueName: \"kubernetes.io/projected/055ca335-cbe6-4ef8-af90-fb2d995a3187-kube-api-access-bnp5l\") pod \"frr-k8s-webhook-server-7df86c4f6c-vkxrq\" (UID: \"055ca335-cbe6-4ef8-af90-fb2d995a3187\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vkxrq" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.847738 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/055ca335-cbe6-4ef8-af90-fb2d995a3187-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-vkxrq\" (UID: \"055ca335-cbe6-4ef8-af90-fb2d995a3187\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vkxrq" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.847763 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d1e2433-a99b-4b29-8f58-e21a7745d1d9-metrics-certs\") pod \"frr-k8s-j9bpz\" (UID: \"7d1e2433-a99b-4b29-8f58-e21a7745d1d9\") " pod="metallb-system/frr-k8s-j9bpz" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.847783 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/7d1e2433-a99b-4b29-8f58-e21a7745d1d9-metrics\") pod \"frr-k8s-j9bpz\" (UID: \"7d1e2433-a99b-4b29-8f58-e21a7745d1d9\") " pod="metallb-system/frr-k8s-j9bpz" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.847891 4712 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pmcr\" (UniqueName: \"kubernetes.io/projected/7d1e2433-a99b-4b29-8f58-e21a7745d1d9-kube-api-access-8pmcr\") pod \"frr-k8s-j9bpz\" (UID: \"7d1e2433-a99b-4b29-8f58-e21a7745d1d9\") " pod="metallb-system/frr-k8s-j9bpz" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.847921 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/7d1e2433-a99b-4b29-8f58-e21a7745d1d9-frr-conf\") pod \"frr-k8s-j9bpz\" (UID: \"7d1e2433-a99b-4b29-8f58-e21a7745d1d9\") " pod="metallb-system/frr-k8s-j9bpz" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.847948 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/7d1e2433-a99b-4b29-8f58-e21a7745d1d9-reloader\") pod \"frr-k8s-j9bpz\" (UID: \"7d1e2433-a99b-4b29-8f58-e21a7745d1d9\") " pod="metallb-system/frr-k8s-j9bpz" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.847978 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/7d1e2433-a99b-4b29-8f58-e21a7745d1d9-frr-sockets\") pod \"frr-k8s-j9bpz\" (UID: \"7d1e2433-a99b-4b29-8f58-e21a7745d1d9\") " pod="metallb-system/frr-k8s-j9bpz" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.948725 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f5e77c2d-c85b-44c7-ae02-074b491daf83-metallb-excludel2\") pod \"speaker-gmjr9\" (UID: \"f5e77c2d-c85b-44c7-ae02-074b491daf83\") " pod="metallb-system/speaker-gmjr9" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.948785 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/7d1e2433-a99b-4b29-8f58-e21a7745d1d9-reloader\") pod \"frr-k8s-j9bpz\" (UID: \"7d1e2433-a99b-4b29-8f58-e21a7745d1d9\") " pod="metallb-system/frr-k8s-j9bpz" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.948849 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lc5zl\" (UniqueName: \"kubernetes.io/projected/f5e77c2d-c85b-44c7-ae02-074b491daf83-kube-api-access-lc5zl\") pod \"speaker-gmjr9\" (UID: \"f5e77c2d-c85b-44c7-ae02-074b491daf83\") " pod="metallb-system/speaker-gmjr9" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.948868 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/7d1e2433-a99b-4b29-8f58-e21a7745d1d9-frr-sockets\") pod \"frr-k8s-j9bpz\" (UID: \"7d1e2433-a99b-4b29-8f58-e21a7745d1d9\") " pod="metallb-system/frr-k8s-j9bpz" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.948891 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/923ca268-753b-4b59-8c12-9517f5708f65-cert\") pod \"controller-6968d8fdc4-kr8vp\" (UID: \"923ca268-753b-4b59-8c12-9517f5708f65\") " pod="metallb-system/controller-6968d8fdc4-kr8vp" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.948914 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/923ca268-753b-4b59-8c12-9517f5708f65-metrics-certs\") pod \"controller-6968d8fdc4-kr8vp\" (UID: \"923ca268-753b-4b59-8c12-9517f5708f65\") " pod="metallb-system/controller-6968d8fdc4-kr8vp" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.948930 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f5e77c2d-c85b-44c7-ae02-074b491daf83-memberlist\") pod \"speaker-gmjr9\" (UID: \"f5e77c2d-c85b-44c7-ae02-074b491daf83\") " pod="metallb-system/speaker-gmjr9" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.948957 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/7d1e2433-a99b-4b29-8f58-e21a7745d1d9-frr-startup\") pod \"frr-k8s-j9bpz\" (UID: \"7d1e2433-a99b-4b29-8f58-e21a7745d1d9\") " pod="metallb-system/frr-k8s-j9bpz" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.948978 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bnp5l\" (UniqueName: \"kubernetes.io/projected/055ca335-cbe6-4ef8-af90-fb2d995a3187-kube-api-access-bnp5l\") pod \"frr-k8s-webhook-server-7df86c4f6c-vkxrq\" (UID: \"055ca335-cbe6-4ef8-af90-fb2d995a3187\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vkxrq" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.948998 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jpkt\" (UniqueName: \"kubernetes.io/projected/923ca268-753b-4b59-8c12-9517f5708f65-kube-api-access-8jpkt\") pod \"controller-6968d8fdc4-kr8vp\" (UID: \"923ca268-753b-4b59-8c12-9517f5708f65\") " pod="metallb-system/controller-6968d8fdc4-kr8vp" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.949015 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/055ca335-cbe6-4ef8-af90-fb2d995a3187-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-vkxrq\" (UID: \"055ca335-cbe6-4ef8-af90-fb2d995a3187\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vkxrq" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.949037 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d1e2433-a99b-4b29-8f58-e21a7745d1d9-metrics-certs\") pod \"frr-k8s-j9bpz\" (UID: \"7d1e2433-a99b-4b29-8f58-e21a7745d1d9\") " pod="metallb-system/frr-k8s-j9bpz" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.949056 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/7d1e2433-a99b-4b29-8f58-e21a7745d1d9-metrics\") pod \"frr-k8s-j9bpz\" (UID: \"7d1e2433-a99b-4b29-8f58-e21a7745d1d9\") " pod="metallb-system/frr-k8s-j9bpz" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.949071 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pmcr\" (UniqueName: \"kubernetes.io/projected/7d1e2433-a99b-4b29-8f58-e21a7745d1d9-kube-api-access-8pmcr\") pod \"frr-k8s-j9bpz\" (UID: \"7d1e2433-a99b-4b29-8f58-e21a7745d1d9\") " pod="metallb-system/frr-k8s-j9bpz" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.949092 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f5e77c2d-c85b-44c7-ae02-074b491daf83-metrics-certs\") pod 
\"speaker-gmjr9\" (UID: \"f5e77c2d-c85b-44c7-ae02-074b491daf83\") " pod="metallb-system/speaker-gmjr9" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.949112 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/7d1e2433-a99b-4b29-8f58-e21a7745d1d9-frr-conf\") pod \"frr-k8s-j9bpz\" (UID: \"7d1e2433-a99b-4b29-8f58-e21a7745d1d9\") " pod="metallb-system/frr-k8s-j9bpz" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.949366 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/7d1e2433-a99b-4b29-8f58-e21a7745d1d9-frr-sockets\") pod \"frr-k8s-j9bpz\" (UID: \"7d1e2433-a99b-4b29-8f58-e21a7745d1d9\") " pod="metallb-system/frr-k8s-j9bpz" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.949379 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/7d1e2433-a99b-4b29-8f58-e21a7745d1d9-reloader\") pod \"frr-k8s-j9bpz\" (UID: \"7d1e2433-a99b-4b29-8f58-e21a7745d1d9\") " pod="metallb-system/frr-k8s-j9bpz" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.949506 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/7d1e2433-a99b-4b29-8f58-e21a7745d1d9-frr-conf\") pod \"frr-k8s-j9bpz\" (UID: \"7d1e2433-a99b-4b29-8f58-e21a7745d1d9\") " pod="metallb-system/frr-k8s-j9bpz" Jan 30 17:10:26 crc kubenswrapper[4712]: E0130 17:10:26.949610 4712 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Jan 30 17:10:26 crc kubenswrapper[4712]: E0130 17:10:26.949665 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/055ca335-cbe6-4ef8-af90-fb2d995a3187-cert podName:055ca335-cbe6-4ef8-af90-fb2d995a3187 nodeName:}" failed. No retries permitted until 2026-01-30 17:10:27.449648363 +0000 UTC m=+964.356657832 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/055ca335-cbe6-4ef8-af90-fb2d995a3187-cert") pod "frr-k8s-webhook-server-7df86c4f6c-vkxrq" (UID: "055ca335-cbe6-4ef8-af90-fb2d995a3187") : secret "frr-k8s-webhook-server-cert" not found Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.949899 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/7d1e2433-a99b-4b29-8f58-e21a7745d1d9-metrics\") pod \"frr-k8s-j9bpz\" (UID: \"7d1e2433-a99b-4b29-8f58-e21a7745d1d9\") " pod="metallb-system/frr-k8s-j9bpz" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.950106 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/7d1e2433-a99b-4b29-8f58-e21a7745d1d9-frr-startup\") pod \"frr-k8s-j9bpz\" (UID: \"7d1e2433-a99b-4b29-8f58-e21a7745d1d9\") " pod="metallb-system/frr-k8s-j9bpz" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.963992 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7d1e2433-a99b-4b29-8f58-e21a7745d1d9-metrics-certs\") pod \"frr-k8s-j9bpz\" (UID: \"7d1e2433-a99b-4b29-8f58-e21a7745d1d9\") " pod="metallb-system/frr-k8s-j9bpz" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.967846 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnp5l\" (UniqueName: \"kubernetes.io/projected/055ca335-cbe6-4ef8-af90-fb2d995a3187-kube-api-access-bnp5l\") pod \"frr-k8s-webhook-server-7df86c4f6c-vkxrq\" (UID: \"055ca335-cbe6-4ef8-af90-fb2d995a3187\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vkxrq" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.972227 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pmcr\" (UniqueName: \"kubernetes.io/projected/7d1e2433-a99b-4b29-8f58-e21a7745d1d9-kube-api-access-8pmcr\") pod \"frr-k8s-j9bpz\" (UID: \"7d1e2433-a99b-4b29-8f58-e21a7745d1d9\") " pod="metallb-system/frr-k8s-j9bpz" Jan 30 17:10:26 crc kubenswrapper[4712]: I0130 17:10:26.991930 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-j9bpz" Jan 30 17:10:27 crc kubenswrapper[4712]: I0130 17:10:27.050183 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f5e77c2d-c85b-44c7-ae02-074b491daf83-metallb-excludel2\") pod \"speaker-gmjr9\" (UID: \"f5e77c2d-c85b-44c7-ae02-074b491daf83\") " pod="metallb-system/speaker-gmjr9" Jan 30 17:10:27 crc kubenswrapper[4712]: I0130 17:10:27.050244 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lc5zl\" (UniqueName: \"kubernetes.io/projected/f5e77c2d-c85b-44c7-ae02-074b491daf83-kube-api-access-lc5zl\") pod \"speaker-gmjr9\" (UID: \"f5e77c2d-c85b-44c7-ae02-074b491daf83\") " pod="metallb-system/speaker-gmjr9" Jan 30 17:10:27 crc kubenswrapper[4712]: I0130 17:10:27.050268 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/923ca268-753b-4b59-8c12-9517f5708f65-cert\") pod \"controller-6968d8fdc4-kr8vp\" (UID: \"923ca268-753b-4b59-8c12-9517f5708f65\") " pod="metallb-system/controller-6968d8fdc4-kr8vp" Jan 30 17:10:27 crc kubenswrapper[4712]: I0130 17:10:27.050292 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/923ca268-753b-4b59-8c12-9517f5708f65-metrics-certs\") pod \"controller-6968d8fdc4-kr8vp\" (UID: \"923ca268-753b-4b59-8c12-9517f5708f65\") " pod="metallb-system/controller-6968d8fdc4-kr8vp" Jan 30 17:10:27 crc kubenswrapper[4712]: I0130 17:10:27.050309 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f5e77c2d-c85b-44c7-ae02-074b491daf83-memberlist\") pod \"speaker-gmjr9\" (UID: \"f5e77c2d-c85b-44c7-ae02-074b491daf83\") " pod="metallb-system/speaker-gmjr9" Jan 30 17:10:27 crc kubenswrapper[4712]: I0130 17:10:27.050340 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jpkt\" (UniqueName: \"kubernetes.io/projected/923ca268-753b-4b59-8c12-9517f5708f65-kube-api-access-8jpkt\") pod \"controller-6968d8fdc4-kr8vp\" (UID: \"923ca268-753b-4b59-8c12-9517f5708f65\") " pod="metallb-system/controller-6968d8fdc4-kr8vp" Jan 30 17:10:27 crc kubenswrapper[4712]: I0130 17:10:27.050378 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f5e77c2d-c85b-44c7-ae02-074b491daf83-metrics-certs\") pod \"speaker-gmjr9\" (UID: \"f5e77c2d-c85b-44c7-ae02-074b491daf83\") " pod="metallb-system/speaker-gmjr9" Jan 30 17:10:27 crc kubenswrapper[4712]: E0130 17:10:27.050937 4712 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 30 17:10:27 crc kubenswrapper[4712]: E0130 17:10:27.051014 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f5e77c2d-c85b-44c7-ae02-074b491daf83-memberlist podName:f5e77c2d-c85b-44c7-ae02-074b491daf83 nodeName:}" failed. No retries permitted until 2026-01-30 17:10:27.550986208 +0000 UTC m=+964.457995677 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/f5e77c2d-c85b-44c7-ae02-074b491daf83-memberlist") pod "speaker-gmjr9" (UID: "f5e77c2d-c85b-44c7-ae02-074b491daf83") : secret "metallb-memberlist" not found Jan 30 17:10:27 crc kubenswrapper[4712]: I0130 17:10:27.052219 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/f5e77c2d-c85b-44c7-ae02-074b491daf83-metallb-excludel2\") pod \"speaker-gmjr9\" (UID: \"f5e77c2d-c85b-44c7-ae02-074b491daf83\") " pod="metallb-system/speaker-gmjr9" Jan 30 17:10:27 crc kubenswrapper[4712]: I0130 17:10:27.054336 4712 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 30 17:10:27 crc kubenswrapper[4712]: I0130 17:10:27.054883 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f5e77c2d-c85b-44c7-ae02-074b491daf83-metrics-certs\") pod \"speaker-gmjr9\" (UID: \"f5e77c2d-c85b-44c7-ae02-074b491daf83\") " pod="metallb-system/speaker-gmjr9" Jan 30 17:10:27 crc kubenswrapper[4712]: I0130 17:10:27.059967 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/923ca268-753b-4b59-8c12-9517f5708f65-metrics-certs\") pod \"controller-6968d8fdc4-kr8vp\" (UID: \"923ca268-753b-4b59-8c12-9517f5708f65\") " pod="metallb-system/controller-6968d8fdc4-kr8vp" Jan 30 17:10:27 crc kubenswrapper[4712]: I0130 17:10:27.069449 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/923ca268-753b-4b59-8c12-9517f5708f65-cert\") pod \"controller-6968d8fdc4-kr8vp\" (UID: \"923ca268-753b-4b59-8c12-9517f5708f65\") " pod="metallb-system/controller-6968d8fdc4-kr8vp" Jan 30 17:10:27 crc kubenswrapper[4712]: I0130 17:10:27.072329 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lc5zl\" (UniqueName: \"kubernetes.io/projected/f5e77c2d-c85b-44c7-ae02-074b491daf83-kube-api-access-lc5zl\") pod \"speaker-gmjr9\" (UID: \"f5e77c2d-c85b-44c7-ae02-074b491daf83\") " pod="metallb-system/speaker-gmjr9" Jan 30 17:10:27 crc kubenswrapper[4712]: I0130 17:10:27.073748 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jpkt\" (UniqueName: \"kubernetes.io/projected/923ca268-753b-4b59-8c12-9517f5708f65-kube-api-access-8jpkt\") pod \"controller-6968d8fdc4-kr8vp\" (UID: \"923ca268-753b-4b59-8c12-9517f5708f65\") " pod="metallb-system/controller-6968d8fdc4-kr8vp" Jan 30 17:10:27 crc kubenswrapper[4712]: I0130 17:10:27.112915 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-kr8vp" Jan 30 17:10:27 crc kubenswrapper[4712]: I0130 17:10:27.354828 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-kr8vp"] Jan 30 17:10:27 crc kubenswrapper[4712]: W0130 17:10:27.359919 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod923ca268_753b_4b59_8c12_9517f5708f65.slice/crio-059342a63a8d8769ba7c18ef3c05303139f772ccc31f219f514d2d27ef707334 WatchSource:0}: Error finding container 059342a63a8d8769ba7c18ef3c05303139f772ccc31f219f514d2d27ef707334: Status 404 returned error can't find the container with id 059342a63a8d8769ba7c18ef3c05303139f772ccc31f219f514d2d27ef707334 Jan 30 17:10:27 crc kubenswrapper[4712]: I0130 17:10:27.456615 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/055ca335-cbe6-4ef8-af90-fb2d995a3187-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-vkxrq\" (UID: \"055ca335-cbe6-4ef8-af90-fb2d995a3187\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vkxrq" Jan 30 17:10:27 crc kubenswrapper[4712]: I0130 17:10:27.466276 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/055ca335-cbe6-4ef8-af90-fb2d995a3187-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-vkxrq\" (UID: \"055ca335-cbe6-4ef8-af90-fb2d995a3187\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vkxrq" Jan 30 17:10:27 crc kubenswrapper[4712]: I0130 17:10:27.558533 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f5e77c2d-c85b-44c7-ae02-074b491daf83-memberlist\") pod \"speaker-gmjr9\" (UID: \"f5e77c2d-c85b-44c7-ae02-074b491daf83\") " pod="metallb-system/speaker-gmjr9" Jan 30 17:10:27 crc kubenswrapper[4712]: I0130 17:10:27.564576 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/f5e77c2d-c85b-44c7-ae02-074b491daf83-memberlist\") pod \"speaker-gmjr9\" (UID: \"f5e77c2d-c85b-44c7-ae02-074b491daf83\") " pod="metallb-system/speaker-gmjr9" Jan 30 17:10:27 crc kubenswrapper[4712]: I0130 17:10:27.579963 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vkxrq" Jan 30 17:10:27 crc kubenswrapper[4712]: I0130 17:10:27.697002 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-gmjr9" Jan 30 17:10:27 crc kubenswrapper[4712]: I0130 17:10:27.990379 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-vkxrq"] Jan 30 17:10:28 crc kubenswrapper[4712]: I0130 17:10:28.009012 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-gmjr9" event={"ID":"f5e77c2d-c85b-44c7-ae02-074b491daf83","Type":"ContainerStarted","Data":"e4f2d6a91651a64eefde71f273f23597b9a885ebba80a8da3dac3369bc9717a3"} Jan 30 17:10:28 crc kubenswrapper[4712]: I0130 17:10:28.009059 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-gmjr9" event={"ID":"f5e77c2d-c85b-44c7-ae02-074b491daf83","Type":"ContainerStarted","Data":"deeb546f1b97625b0d7183f42454b5f632fe13d0c596db6f2dc87e9f56ace0e8"} Jan 30 17:10:28 crc kubenswrapper[4712]: I0130 17:10:28.014514 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j9bpz" event={"ID":"7d1e2433-a99b-4b29-8f58-e21a7745d1d9","Type":"ContainerStarted","Data":"32f4e82f16a5d957b9076b6207d62e308154aeeb30eeb85bbccb71dc26b600b0"} Jan 30 17:10:28 crc kubenswrapper[4712]: I0130 17:10:28.019540 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-kr8vp" event={"ID":"923ca268-753b-4b59-8c12-9517f5708f65","Type":"ContainerStarted","Data":"9c36b4d6ba2c404cfbd0ca7302d61d299ae55f90f16b6626b63e137f1be27b24"} Jan 30 17:10:28 crc kubenswrapper[4712]: I0130 17:10:28.019574 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-kr8vp" event={"ID":"923ca268-753b-4b59-8c12-9517f5708f65","Type":"ContainerStarted","Data":"d3aaea57060c45da68699aefb444e757635a015149b4e78534430e291caba14c"} Jan 30 17:10:28 crc kubenswrapper[4712]: I0130 17:10:28.019583 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-kr8vp" event={"ID":"923ca268-753b-4b59-8c12-9517f5708f65","Type":"ContainerStarted","Data":"059342a63a8d8769ba7c18ef3c05303139f772ccc31f219f514d2d27ef707334"} Jan 30 17:10:28 crc kubenswrapper[4712]: I0130 17:10:28.020418 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-kr8vp" Jan 30 17:10:28 crc kubenswrapper[4712]: I0130 17:10:28.039614 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-kr8vp" podStartSLOduration=2.039593987 podStartE2EDuration="2.039593987s" podCreationTimestamp="2026-01-30 17:10:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:10:28.035473098 +0000 UTC m=+964.942482567" watchObservedRunningTime="2026-01-30 17:10:28.039593987 +0000 UTC m=+964.946603446" Jan 30 17:10:29 crc kubenswrapper[4712]: I0130 17:10:29.026849 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-gmjr9" event={"ID":"f5e77c2d-c85b-44c7-ae02-074b491daf83","Type":"ContainerStarted","Data":"06d304862a29ec1937eb1cc59d17a47f54ee0b157d508d2d93c1b2f8dd5b9bd6"} Jan 30 17:10:29 crc kubenswrapper[4712]: I0130 17:10:29.027156 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-gmjr9" Jan 30 17:10:29 crc kubenswrapper[4712]: I0130 17:10:29.029605 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vkxrq" 
event={"ID":"055ca335-cbe6-4ef8-af90-fb2d995a3187","Type":"ContainerStarted","Data":"e881ae5aee73787ed3c566f4eeb7309c3ff53468a702dcbb3807ed65e294d0e8"} Jan 30 17:10:29 crc kubenswrapper[4712]: I0130 17:10:29.051664 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-gmjr9" podStartSLOduration=3.051646107 podStartE2EDuration="3.051646107s" podCreationTimestamp="2026-01-30 17:10:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:10:29.04925212 +0000 UTC m=+965.956261589" watchObservedRunningTime="2026-01-30 17:10:29.051646107 +0000 UTC m=+965.958655576" Jan 30 17:10:36 crc kubenswrapper[4712]: I0130 17:10:36.270883 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:10:36 crc kubenswrapper[4712]: I0130 17:10:36.271302 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:10:37 crc kubenswrapper[4712]: I0130 17:10:37.099455 4712 generic.go:334] "Generic (PLEG): container finished" podID="7d1e2433-a99b-4b29-8f58-e21a7745d1d9" containerID="35332f04836d6b157d491751f74fb8f8252583e0fc613e79f39eb19e6a308bba" exitCode=0 Jan 30 17:10:37 crc kubenswrapper[4712]: I0130 17:10:37.099531 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j9bpz" event={"ID":"7d1e2433-a99b-4b29-8f58-e21a7745d1d9","Type":"ContainerDied","Data":"35332f04836d6b157d491751f74fb8f8252583e0fc613e79f39eb19e6a308bba"} Jan 30 17:10:37 crc kubenswrapper[4712]: I0130 17:10:37.101870 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vkxrq" event={"ID":"055ca335-cbe6-4ef8-af90-fb2d995a3187","Type":"ContainerStarted","Data":"1790ed85ceaea6974b39b1c59486129cda644c782f22e7ebc43e4e7461fb9961"} Jan 30 17:10:37 crc kubenswrapper[4712]: I0130 17:10:37.102103 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vkxrq" Jan 30 17:10:37 crc kubenswrapper[4712]: I0130 17:10:37.118074 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-kr8vp" Jan 30 17:10:37 crc kubenswrapper[4712]: I0130 17:10:37.181534 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vkxrq" podStartSLOduration=3.135857701 podStartE2EDuration="11.181510557s" podCreationTimestamp="2026-01-30 17:10:26 +0000 UTC" firstStartedPulling="2026-01-30 17:10:28.007447298 +0000 UTC m=+964.914456767" lastFinishedPulling="2026-01-30 17:10:36.053100154 +0000 UTC m=+972.960109623" observedRunningTime="2026-01-30 17:10:37.164258525 +0000 UTC m=+974.071267994" watchObservedRunningTime="2026-01-30 17:10:37.181510557 +0000 UTC m=+974.088520036" Jan 30 17:10:37 crc kubenswrapper[4712]: I0130 17:10:37.700565 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-gmjr9" Jan 30 17:10:38 crc 
kubenswrapper[4712]: I0130 17:10:38.108090 4712 generic.go:334] "Generic (PLEG): container finished" podID="7d1e2433-a99b-4b29-8f58-e21a7745d1d9" containerID="d52e19ab042dd228bad58e1f8e9d9957ce1e14fcd2f8f31e34deaeb92c8a2686" exitCode=0 Jan 30 17:10:38 crc kubenswrapper[4712]: I0130 17:10:38.109283 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j9bpz" event={"ID":"7d1e2433-a99b-4b29-8f58-e21a7745d1d9","Type":"ContainerDied","Data":"d52e19ab042dd228bad58e1f8e9d9957ce1e14fcd2f8f31e34deaeb92c8a2686"} Jan 30 17:10:39 crc kubenswrapper[4712]: I0130 17:10:39.115564 4712 generic.go:334] "Generic (PLEG): container finished" podID="7d1e2433-a99b-4b29-8f58-e21a7745d1d9" containerID="4d08a3dc8085d256181ac9400d8de70a39e6313134baf9ffdca5340ebe4039f2" exitCode=0 Jan 30 17:10:39 crc kubenswrapper[4712]: I0130 17:10:39.115608 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j9bpz" event={"ID":"7d1e2433-a99b-4b29-8f58-e21a7745d1d9","Type":"ContainerDied","Data":"4d08a3dc8085d256181ac9400d8de70a39e6313134baf9ffdca5340ebe4039f2"} Jan 30 17:10:40 crc kubenswrapper[4712]: I0130 17:10:40.126481 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j9bpz" event={"ID":"7d1e2433-a99b-4b29-8f58-e21a7745d1d9","Type":"ContainerStarted","Data":"7f1f3fb9126e67dd599c33d10a9708a016c140fc8729ee2b40384da19c19ecf5"} Jan 30 17:10:40 crc kubenswrapper[4712]: I0130 17:10:40.126892 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j9bpz" event={"ID":"7d1e2433-a99b-4b29-8f58-e21a7745d1d9","Type":"ContainerStarted","Data":"6be33609f8e8ec7a5896d0a7defbd7680935427d1544f6bd7000f359641cb3c4"} Jan 30 17:10:40 crc kubenswrapper[4712]: I0130 17:10:40.126913 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j9bpz" event={"ID":"7d1e2433-a99b-4b29-8f58-e21a7745d1d9","Type":"ContainerStarted","Data":"b0a357bdb5618102c61d86540e5aa4d38e5f998ae01e8ba51e4b9415e0897e68"} Jan 30 17:10:40 crc kubenswrapper[4712]: I0130 17:10:40.503030 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-28hp4"] Jan 30 17:10:40 crc kubenswrapper[4712]: I0130 17:10:40.504320 4712 util.go:30] "No sandbox for pod can be found. 
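
The three `ContainerDied` events with `exitCode=0` above (35332f04…, d52e19ab…, 4d08a3dc…, one per second from 17:10:37 to 17:10:39) are consistent with the frr-k8s pod's init containers completing in sequence: the kubelet runs init containers one at a time, requires each to exit zero, and only then starts the pod's long-running containers, which is why the `ContainerStarted` events for frr-k8s-j9bpz arrive in a burst at 17:10:40-17:10:41. A non-zero exit from any of them would instead have held the pod in its Init phase, restarting that container per the pod's restartPolicy.
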
Need to start a new one" pod="openstack-operators/openstack-operator-index-28hp4" Jan 30 17:10:40 crc kubenswrapper[4712]: I0130 17:10:40.508713 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 30 17:10:40 crc kubenswrapper[4712]: I0130 17:10:40.510047 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 30 17:10:40 crc kubenswrapper[4712]: I0130 17:10:40.510761 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-8f88s" Jan 30 17:10:40 crc kubenswrapper[4712]: I0130 17:10:40.525691 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-28hp4"] Jan 30 17:10:40 crc kubenswrapper[4712]: I0130 17:10:40.587993 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkgcz\" (UniqueName: \"kubernetes.io/projected/4a0e0ba4-1179-4619-8c9b-7606e7e98fdb-kube-api-access-bkgcz\") pod \"openstack-operator-index-28hp4\" (UID: \"4a0e0ba4-1179-4619-8c9b-7606e7e98fdb\") " pod="openstack-operators/openstack-operator-index-28hp4" Jan 30 17:10:40 crc kubenswrapper[4712]: I0130 17:10:40.688572 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkgcz\" (UniqueName: \"kubernetes.io/projected/4a0e0ba4-1179-4619-8c9b-7606e7e98fdb-kube-api-access-bkgcz\") pod \"openstack-operator-index-28hp4\" (UID: \"4a0e0ba4-1179-4619-8c9b-7606e7e98fdb\") " pod="openstack-operators/openstack-operator-index-28hp4" Jan 30 17:10:40 crc kubenswrapper[4712]: I0130 17:10:40.721012 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkgcz\" (UniqueName: \"kubernetes.io/projected/4a0e0ba4-1179-4619-8c9b-7606e7e98fdb-kube-api-access-bkgcz\") pod \"openstack-operator-index-28hp4\" (UID: \"4a0e0ba4-1179-4619-8c9b-7606e7e98fdb\") " pod="openstack-operators/openstack-operator-index-28hp4" Jan 30 17:10:40 crc kubenswrapper[4712]: I0130 17:10:40.819325 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-28hp4" Jan 30 17:10:41 crc kubenswrapper[4712]: I0130 17:10:41.076038 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-28hp4"] Jan 30 17:10:41 crc kubenswrapper[4712]: I0130 17:10:41.139035 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j9bpz" event={"ID":"7d1e2433-a99b-4b29-8f58-e21a7745d1d9","Type":"ContainerStarted","Data":"dee12f683b1c55181704c20ab2ff63f37a631abd08860ea0757b6c17b30f659c"} Jan 30 17:10:41 crc kubenswrapper[4712]: I0130 17:10:41.139534 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-j9bpz" Jan 30 17:10:41 crc kubenswrapper[4712]: I0130 17:10:41.139556 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j9bpz" event={"ID":"7d1e2433-a99b-4b29-8f58-e21a7745d1d9","Type":"ContainerStarted","Data":"39067408d9de701bcca5cbe285140ff94fe3134d33e0fd4a3a975915843a9997"} Jan 30 17:10:41 crc kubenswrapper[4712]: I0130 17:10:41.139594 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j9bpz" event={"ID":"7d1e2433-a99b-4b29-8f58-e21a7745d1d9","Type":"ContainerStarted","Data":"8606ea842b038057a30409c219e9a20b2b60a32215b97ac2ba58d5e25ea900e5"} Jan 30 17:10:41 crc kubenswrapper[4712]: I0130 17:10:41.142337 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-28hp4" event={"ID":"4a0e0ba4-1179-4619-8c9b-7606e7e98fdb","Type":"ContainerStarted","Data":"e2ec8cb1ae73c122206e27995f696181b38f1d51e969989a06838741b2233129"} Jan 30 17:10:41 crc kubenswrapper[4712]: I0130 17:10:41.160421 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-j9bpz" podStartSLOduration=6.35530355 podStartE2EDuration="15.160405169s" podCreationTimestamp="2026-01-30 17:10:26 +0000 UTC" firstStartedPulling="2026-01-30 17:10:27.195775864 +0000 UTC m=+964.102785333" lastFinishedPulling="2026-01-30 17:10:36.000877483 +0000 UTC m=+972.907886952" observedRunningTime="2026-01-30 17:10:41.159075777 +0000 UTC m=+978.066085236" watchObservedRunningTime="2026-01-30 17:10:41.160405169 +0000 UTC m=+978.067414638" Jan 30 17:10:41 crc kubenswrapper[4712]: I0130 17:10:41.992788 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-j9bpz" Jan 30 17:10:42 crc kubenswrapper[4712]: I0130 17:10:42.029697 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-j9bpz" Jan 30 17:10:44 crc kubenswrapper[4712]: I0130 17:10:44.667567 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-28hp4"] Jan 30 17:10:45 crc kubenswrapper[4712]: I0130 17:10:45.476590 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-x5k4p"] Jan 30 17:10:45 crc kubenswrapper[4712]: I0130 17:10:45.478199 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-x5k4p" Jan 30 17:10:45 crc kubenswrapper[4712]: I0130 17:10:45.487229 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-x5k4p"] Jan 30 17:10:45 crc kubenswrapper[4712]: I0130 17:10:45.589156 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mmp4\" (UniqueName: \"kubernetes.io/projected/8610a2e0-98ae-41e2-80a0-c66d693024a0-kube-api-access-6mmp4\") pod \"openstack-operator-index-x5k4p\" (UID: \"8610a2e0-98ae-41e2-80a0-c66d693024a0\") " pod="openstack-operators/openstack-operator-index-x5k4p" Jan 30 17:10:45 crc kubenswrapper[4712]: I0130 17:10:45.690500 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mmp4\" (UniqueName: \"kubernetes.io/projected/8610a2e0-98ae-41e2-80a0-c66d693024a0-kube-api-access-6mmp4\") pod \"openstack-operator-index-x5k4p\" (UID: \"8610a2e0-98ae-41e2-80a0-c66d693024a0\") " pod="openstack-operators/openstack-operator-index-x5k4p" Jan 30 17:10:45 crc kubenswrapper[4712]: I0130 17:10:45.708035 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mmp4\" (UniqueName: \"kubernetes.io/projected/8610a2e0-98ae-41e2-80a0-c66d693024a0-kube-api-access-6mmp4\") pod \"openstack-operator-index-x5k4p\" (UID: \"8610a2e0-98ae-41e2-80a0-c66d693024a0\") " pod="openstack-operators/openstack-operator-index-x5k4p" Jan 30 17:10:45 crc kubenswrapper[4712]: I0130 17:10:45.797576 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-x5k4p" Jan 30 17:10:45 crc kubenswrapper[4712]: I0130 17:10:45.885547 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pgkqf"] Jan 30 17:10:45 crc kubenswrapper[4712]: I0130 17:10:45.888577 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pgkqf" Jan 30 17:10:45 crc kubenswrapper[4712]: I0130 17:10:45.904253 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pgkqf"] Jan 30 17:10:45 crc kubenswrapper[4712]: I0130 17:10:45.998608 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49a81ce4-838e-43ed-9ab5-fe801fc2cc33-utilities\") pod \"redhat-marketplace-pgkqf\" (UID: \"49a81ce4-838e-43ed-9ab5-fe801fc2cc33\") " pod="openshift-marketplace/redhat-marketplace-pgkqf" Jan 30 17:10:45 crc kubenswrapper[4712]: I0130 17:10:45.998666 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgclv\" (UniqueName: \"kubernetes.io/projected/49a81ce4-838e-43ed-9ab5-fe801fc2cc33-kube-api-access-lgclv\") pod \"redhat-marketplace-pgkqf\" (UID: \"49a81ce4-838e-43ed-9ab5-fe801fc2cc33\") " pod="openshift-marketplace/redhat-marketplace-pgkqf" Jan 30 17:10:45 crc kubenswrapper[4712]: I0130 17:10:45.998720 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49a81ce4-838e-43ed-9ab5-fe801fc2cc33-catalog-content\") pod \"redhat-marketplace-pgkqf\" (UID: \"49a81ce4-838e-43ed-9ab5-fe801fc2cc33\") " pod="openshift-marketplace/redhat-marketplace-pgkqf" Jan 30 17:10:46 crc kubenswrapper[4712]: I0130 17:10:46.099488 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgclv\" (UniqueName: \"kubernetes.io/projected/49a81ce4-838e-43ed-9ab5-fe801fc2cc33-kube-api-access-lgclv\") pod \"redhat-marketplace-pgkqf\" (UID: \"49a81ce4-838e-43ed-9ab5-fe801fc2cc33\") " pod="openshift-marketplace/redhat-marketplace-pgkqf" Jan 30 17:10:46 crc kubenswrapper[4712]: I0130 17:10:46.099570 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49a81ce4-838e-43ed-9ab5-fe801fc2cc33-catalog-content\") pod \"redhat-marketplace-pgkqf\" (UID: \"49a81ce4-838e-43ed-9ab5-fe801fc2cc33\") " pod="openshift-marketplace/redhat-marketplace-pgkqf" Jan 30 17:10:46 crc kubenswrapper[4712]: I0130 17:10:46.099647 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49a81ce4-838e-43ed-9ab5-fe801fc2cc33-utilities\") pod \"redhat-marketplace-pgkqf\" (UID: \"49a81ce4-838e-43ed-9ab5-fe801fc2cc33\") " pod="openshift-marketplace/redhat-marketplace-pgkqf" Jan 30 17:10:46 crc kubenswrapper[4712]: I0130 17:10:46.100435 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49a81ce4-838e-43ed-9ab5-fe801fc2cc33-catalog-content\") pod \"redhat-marketplace-pgkqf\" (UID: \"49a81ce4-838e-43ed-9ab5-fe801fc2cc33\") " pod="openshift-marketplace/redhat-marketplace-pgkqf" Jan 30 17:10:46 crc kubenswrapper[4712]: I0130 17:10:46.100493 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49a81ce4-838e-43ed-9ab5-fe801fc2cc33-utilities\") pod \"redhat-marketplace-pgkqf\" (UID: \"49a81ce4-838e-43ed-9ab5-fe801fc2cc33\") " pod="openshift-marketplace/redhat-marketplace-pgkqf" Jan 30 17:10:46 crc kubenswrapper[4712]: I0130 17:10:46.118547 4712 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-lgclv\" (UniqueName: \"kubernetes.io/projected/49a81ce4-838e-43ed-9ab5-fe801fc2cc33-kube-api-access-lgclv\") pod \"redhat-marketplace-pgkqf\" (UID: \"49a81ce4-838e-43ed-9ab5-fe801fc2cc33\") " pod="openshift-marketplace/redhat-marketplace-pgkqf" Jan 30 17:10:46 crc kubenswrapper[4712]: I0130 17:10:46.241743 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pgkqf" Jan 30 17:10:46 crc kubenswrapper[4712]: I0130 17:10:46.251369 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-x5k4p"] Jan 30 17:10:46 crc kubenswrapper[4712]: I0130 17:10:46.469950 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pgkqf"] Jan 30 17:10:46 crc kubenswrapper[4712]: W0130 17:10:46.475979 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49a81ce4_838e_43ed_9ab5_fe801fc2cc33.slice/crio-71db57566f731d91a1b96162a341c6ee01588aa17a417df5c448736bd0157af1 WatchSource:0}: Error finding container 71db57566f731d91a1b96162a341c6ee01588aa17a417df5c448736bd0157af1: Status 404 returned error can't find the container with id 71db57566f731d91a1b96162a341c6ee01588aa17a417df5c448736bd0157af1 Jan 30 17:10:47 crc kubenswrapper[4712]: I0130 17:10:47.177617 4712 generic.go:334] "Generic (PLEG): container finished" podID="49a81ce4-838e-43ed-9ab5-fe801fc2cc33" containerID="e1b002720ec777eb81737621681f66e63056fb08fd9b68a64e7be7918b8c11a6" exitCode=0 Jan 30 17:10:47 crc kubenswrapper[4712]: I0130 17:10:47.177921 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pgkqf" event={"ID":"49a81ce4-838e-43ed-9ab5-fe801fc2cc33","Type":"ContainerDied","Data":"e1b002720ec777eb81737621681f66e63056fb08fd9b68a64e7be7918b8c11a6"} Jan 30 17:10:47 crc kubenswrapper[4712]: I0130 17:10:47.177963 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pgkqf" event={"ID":"49a81ce4-838e-43ed-9ab5-fe801fc2cc33","Type":"ContainerStarted","Data":"71db57566f731d91a1b96162a341c6ee01588aa17a417df5c448736bd0157af1"} Jan 30 17:10:47 crc kubenswrapper[4712]: I0130 17:10:47.180129 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x5k4p" event={"ID":"8610a2e0-98ae-41e2-80a0-c66d693024a0","Type":"ContainerStarted","Data":"74ba0431e3f8f2bf4deac440ba3f9c5f6e47f7de40ee20b0927ca394925451c1"} Jan 30 17:10:47 crc kubenswrapper[4712]: I0130 17:10:47.584197 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vkxrq" Jan 30 17:10:51 crc kubenswrapper[4712]: I0130 17:10:51.214932 4712 generic.go:334] "Generic (PLEG): container finished" podID="49a81ce4-838e-43ed-9ab5-fe801fc2cc33" containerID="42be3885624a3dc8515032495274d49022d303e98be1f9ffabf220836447ab23" exitCode=0 Jan 30 17:10:51 crc kubenswrapper[4712]: I0130 17:10:51.215053 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pgkqf" event={"ID":"49a81ce4-838e-43ed-9ab5-fe801fc2cc33","Type":"ContainerDied","Data":"42be3885624a3dc8515032495274d49022d303e98be1f9ffabf220836447ab23"} Jan 30 17:10:51 crc kubenswrapper[4712]: I0130 17:10:51.217831 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-28hp4" 
event={"ID":"4a0e0ba4-1179-4619-8c9b-7606e7e98fdb","Type":"ContainerStarted","Data":"4c26ce9cd8c41171aec01f1d5c7fbd69f6388b56514e24d6f3a3494860ad46cb"} Jan 30 17:10:51 crc kubenswrapper[4712]: I0130 17:10:51.217883 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-28hp4" podUID="4a0e0ba4-1179-4619-8c9b-7606e7e98fdb" containerName="registry-server" containerID="cri-o://4c26ce9cd8c41171aec01f1d5c7fbd69f6388b56514e24d6f3a3494860ad46cb" gracePeriod=2 Jan 30 17:10:51 crc kubenswrapper[4712]: I0130 17:10:51.219075 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x5k4p" event={"ID":"8610a2e0-98ae-41e2-80a0-c66d693024a0","Type":"ContainerStarted","Data":"ccdfd7238be80e868d33acaacf3ac1488f312ac3c32c73ccd616c1e6060ec781"} Jan 30 17:10:51 crc kubenswrapper[4712]: I0130 17:10:51.262915 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-28hp4" podStartSLOduration=1.779573817 podStartE2EDuration="11.262893618s" podCreationTimestamp="2026-01-30 17:10:40 +0000 UTC" firstStartedPulling="2026-01-30 17:10:41.086651454 +0000 UTC m=+977.993660923" lastFinishedPulling="2026-01-30 17:10:50.569971255 +0000 UTC m=+987.476980724" observedRunningTime="2026-01-30 17:10:51.262247022 +0000 UTC m=+988.169256491" watchObservedRunningTime="2026-01-30 17:10:51.262893618 +0000 UTC m=+988.169903107" Jan 30 17:10:51 crc kubenswrapper[4712]: I0130 17:10:51.287389 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-x5k4p" podStartSLOduration=1.9488479459999999 podStartE2EDuration="6.287368553s" podCreationTimestamp="2026-01-30 17:10:45 +0000 UTC" firstStartedPulling="2026-01-30 17:10:46.257359938 +0000 UTC m=+983.164369407" lastFinishedPulling="2026-01-30 17:10:50.595880535 +0000 UTC m=+987.502890014" observedRunningTime="2026-01-30 17:10:51.283412899 +0000 UTC m=+988.190422378" watchObservedRunningTime="2026-01-30 17:10:51.287368553 +0000 UTC m=+988.194378042" Jan 30 17:10:51 crc kubenswrapper[4712]: I0130 17:10:51.767028 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-28hp4" Jan 30 17:10:51 crc kubenswrapper[4712]: I0130 17:10:51.887204 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkgcz\" (UniqueName: \"kubernetes.io/projected/4a0e0ba4-1179-4619-8c9b-7606e7e98fdb-kube-api-access-bkgcz\") pod \"4a0e0ba4-1179-4619-8c9b-7606e7e98fdb\" (UID: \"4a0e0ba4-1179-4619-8c9b-7606e7e98fdb\") " Jan 30 17:10:51 crc kubenswrapper[4712]: I0130 17:10:51.893617 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a0e0ba4-1179-4619-8c9b-7606e7e98fdb-kube-api-access-bkgcz" (OuterVolumeSpecName: "kube-api-access-bkgcz") pod "4a0e0ba4-1179-4619-8c9b-7606e7e98fdb" (UID: "4a0e0ba4-1179-4619-8c9b-7606e7e98fdb"). InnerVolumeSpecName "kube-api-access-bkgcz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:10:51 crc kubenswrapper[4712]: I0130 17:10:51.988752 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkgcz\" (UniqueName: \"kubernetes.io/projected/4a0e0ba4-1179-4619-8c9b-7606e7e98fdb-kube-api-access-bkgcz\") on node \"crc\" DevicePath \"\"" Jan 30 17:10:52 crc kubenswrapper[4712]: I0130 17:10:52.230925 4712 generic.go:334] "Generic (PLEG): container finished" podID="4a0e0ba4-1179-4619-8c9b-7606e7e98fdb" containerID="4c26ce9cd8c41171aec01f1d5c7fbd69f6388b56514e24d6f3a3494860ad46cb" exitCode=0 Jan 30 17:10:52 crc kubenswrapper[4712]: I0130 17:10:52.230988 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-28hp4" event={"ID":"4a0e0ba4-1179-4619-8c9b-7606e7e98fdb","Type":"ContainerDied","Data":"4c26ce9cd8c41171aec01f1d5c7fbd69f6388b56514e24d6f3a3494860ad46cb"} Jan 30 17:10:52 crc kubenswrapper[4712]: I0130 17:10:52.231366 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-28hp4" event={"ID":"4a0e0ba4-1179-4619-8c9b-7606e7e98fdb","Type":"ContainerDied","Data":"e2ec8cb1ae73c122206e27995f696181b38f1d51e969989a06838741b2233129"} Jan 30 17:10:52 crc kubenswrapper[4712]: I0130 17:10:52.231395 4712 scope.go:117] "RemoveContainer" containerID="4c26ce9cd8c41171aec01f1d5c7fbd69f6388b56514e24d6f3a3494860ad46cb" Jan 30 17:10:52 crc kubenswrapper[4712]: I0130 17:10:52.230998 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-28hp4" Jan 30 17:10:52 crc kubenswrapper[4712]: I0130 17:10:52.263037 4712 scope.go:117] "RemoveContainer" containerID="4c26ce9cd8c41171aec01f1d5c7fbd69f6388b56514e24d6f3a3494860ad46cb" Jan 30 17:10:52 crc kubenswrapper[4712]: E0130 17:10:52.264310 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c26ce9cd8c41171aec01f1d5c7fbd69f6388b56514e24d6f3a3494860ad46cb\": container with ID starting with 4c26ce9cd8c41171aec01f1d5c7fbd69f6388b56514e24d6f3a3494860ad46cb not found: ID does not exist" containerID="4c26ce9cd8c41171aec01f1d5c7fbd69f6388b56514e24d6f3a3494860ad46cb" Jan 30 17:10:52 crc kubenswrapper[4712]: I0130 17:10:52.264417 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c26ce9cd8c41171aec01f1d5c7fbd69f6388b56514e24d6f3a3494860ad46cb"} err="failed to get container status \"4c26ce9cd8c41171aec01f1d5c7fbd69f6388b56514e24d6f3a3494860ad46cb\": rpc error: code = NotFound desc = could not find container \"4c26ce9cd8c41171aec01f1d5c7fbd69f6388b56514e24d6f3a3494860ad46cb\": container with ID starting with 4c26ce9cd8c41171aec01f1d5c7fbd69f6388b56514e24d6f3a3494860ad46cb not found: ID does not exist" Jan 30 17:10:52 crc kubenswrapper[4712]: I0130 17:10:52.269704 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-28hp4"] Jan 30 17:10:52 crc kubenswrapper[4712]: I0130 17:10:52.274987 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-28hp4"] Jan 30 17:10:53 crc kubenswrapper[4712]: I0130 17:10:53.241540 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pgkqf" event={"ID":"49a81ce4-838e-43ed-9ab5-fe801fc2cc33","Type":"ContainerStarted","Data":"d77badfea90cb8ae60c89825630c16ffc961ecf3dd72011fdb399adc8f27a86f"} Jan 30 17:10:53 crc 
kubenswrapper[4712]: I0130 17:10:53.262838 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pgkqf" podStartSLOduration=3.321514436 podStartE2EDuration="8.262821279s" podCreationTimestamp="2026-01-30 17:10:45 +0000 UTC" firstStartedPulling="2026-01-30 17:10:47.179215889 +0000 UTC m=+984.086225358" lastFinishedPulling="2026-01-30 17:10:52.120522732 +0000 UTC m=+989.027532201" observedRunningTime="2026-01-30 17:10:53.258555177 +0000 UTC m=+990.165564646" watchObservedRunningTime="2026-01-30 17:10:53.262821279 +0000 UTC m=+990.169830738" Jan 30 17:10:53 crc kubenswrapper[4712]: I0130 17:10:53.808280 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a0e0ba4-1179-4619-8c9b-7606e7e98fdb" path="/var/lib/kubelet/pods/4a0e0ba4-1179-4619-8c9b-7606e7e98fdb/volumes" Jan 30 17:10:55 crc kubenswrapper[4712]: I0130 17:10:55.798107 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-x5k4p" Jan 30 17:10:55 crc kubenswrapper[4712]: I0130 17:10:55.798554 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-x5k4p" Jan 30 17:10:55 crc kubenswrapper[4712]: I0130 17:10:55.833545 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-x5k4p" Jan 30 17:10:56 crc kubenswrapper[4712]: I0130 17:10:56.242015 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pgkqf" Jan 30 17:10:56 crc kubenswrapper[4712]: I0130 17:10:56.242071 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pgkqf" Jan 30 17:10:56 crc kubenswrapper[4712]: I0130 17:10:56.289604 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-x5k4p" Jan 30 17:10:56 crc kubenswrapper[4712]: I0130 17:10:56.299831 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pgkqf" Jan 30 17:10:56 crc kubenswrapper[4712]: I0130 17:10:56.996607 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-j9bpz" Jan 30 17:10:57 crc kubenswrapper[4712]: I0130 17:10:57.521754 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b726fq6n"] Jan 30 17:10:57 crc kubenswrapper[4712]: E0130 17:10:57.522012 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a0e0ba4-1179-4619-8c9b-7606e7e98fdb" containerName="registry-server" Jan 30 17:10:57 crc kubenswrapper[4712]: I0130 17:10:57.522025 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a0e0ba4-1179-4619-8c9b-7606e7e98fdb" containerName="registry-server" Jan 30 17:10:57 crc kubenswrapper[4712]: I0130 17:10:57.522146 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a0e0ba4-1179-4619-8c9b-7606e7e98fdb" containerName="registry-server" Jan 30 17:10:57 crc kubenswrapper[4712]: I0130 17:10:57.522924 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b726fq6n" Jan 30 17:10:57 crc kubenswrapper[4712]: I0130 17:10:57.525205 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-qnfm2" Jan 30 17:10:57 crc kubenswrapper[4712]: I0130 17:10:57.538645 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b726fq6n"] Jan 30 17:10:57 crc kubenswrapper[4712]: I0130 17:10:57.582463 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d6f30a7d-fc2e-4274-a5d9-8ff44755d83d-util\") pod \"488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b726fq6n\" (UID: \"d6f30a7d-fc2e-4274-a5d9-8ff44755d83d\") " pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b726fq6n" Jan 30 17:10:57 crc kubenswrapper[4712]: I0130 17:10:57.582710 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d6f30a7d-fc2e-4274-a5d9-8ff44755d83d-bundle\") pod \"488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b726fq6n\" (UID: \"d6f30a7d-fc2e-4274-a5d9-8ff44755d83d\") " pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b726fq6n" Jan 30 17:10:57 crc kubenswrapper[4712]: I0130 17:10:57.582932 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjl87\" (UniqueName: \"kubernetes.io/projected/d6f30a7d-fc2e-4274-a5d9-8ff44755d83d-kube-api-access-jjl87\") pod \"488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b726fq6n\" (UID: \"d6f30a7d-fc2e-4274-a5d9-8ff44755d83d\") " pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b726fq6n" Jan 30 17:10:57 crc kubenswrapper[4712]: I0130 17:10:57.683679 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d6f30a7d-fc2e-4274-a5d9-8ff44755d83d-util\") pod \"488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b726fq6n\" (UID: \"d6f30a7d-fc2e-4274-a5d9-8ff44755d83d\") " pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b726fq6n" Jan 30 17:10:57 crc kubenswrapper[4712]: I0130 17:10:57.683756 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d6f30a7d-fc2e-4274-a5d9-8ff44755d83d-bundle\") pod \"488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b726fq6n\" (UID: \"d6f30a7d-fc2e-4274-a5d9-8ff44755d83d\") " pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b726fq6n" Jan 30 17:10:57 crc kubenswrapper[4712]: I0130 17:10:57.683824 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjl87\" (UniqueName: \"kubernetes.io/projected/d6f30a7d-fc2e-4274-a5d9-8ff44755d83d-kube-api-access-jjl87\") pod \"488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b726fq6n\" (UID: \"d6f30a7d-fc2e-4274-a5d9-8ff44755d83d\") " pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b726fq6n" Jan 30 17:10:57 crc kubenswrapper[4712]: I0130 17:10:57.684486 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/d6f30a7d-fc2e-4274-a5d9-8ff44755d83d-util\") pod \"488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b726fq6n\" (UID: \"d6f30a7d-fc2e-4274-a5d9-8ff44755d83d\") " pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b726fq6n" Jan 30 17:10:57 crc kubenswrapper[4712]: I0130 17:10:57.684617 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d6f30a7d-fc2e-4274-a5d9-8ff44755d83d-bundle\") pod \"488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b726fq6n\" (UID: \"d6f30a7d-fc2e-4274-a5d9-8ff44755d83d\") " pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b726fq6n" Jan 30 17:10:57 crc kubenswrapper[4712]: I0130 17:10:57.701958 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjl87\" (UniqueName: \"kubernetes.io/projected/d6f30a7d-fc2e-4274-a5d9-8ff44755d83d-kube-api-access-jjl87\") pod \"488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b726fq6n\" (UID: \"d6f30a7d-fc2e-4274-a5d9-8ff44755d83d\") " pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b726fq6n" Jan 30 17:10:57 crc kubenswrapper[4712]: I0130 17:10:57.884706 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b726fq6n" Jan 30 17:10:58 crc kubenswrapper[4712]: I0130 17:10:58.288461 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b726fq6n"] Jan 30 17:10:58 crc kubenswrapper[4712]: W0130 17:10:58.318609 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd6f30a7d_fc2e_4274_a5d9_8ff44755d83d.slice/crio-22bd649e71df3897b8766876a21d05c5a8576a1a66202de49d6df885b60134ab WatchSource:0}: Error finding container 22bd649e71df3897b8766876a21d05c5a8576a1a66202de49d6df885b60134ab: Status 404 returned error can't find the container with id 22bd649e71df3897b8766876a21d05c5a8576a1a66202de49d6df885b60134ab Jan 30 17:10:59 crc kubenswrapper[4712]: I0130 17:10:59.291971 4712 generic.go:334] "Generic (PLEG): container finished" podID="d6f30a7d-fc2e-4274-a5d9-8ff44755d83d" containerID="3f7fea8a719b4da6fefaed4b51233e8a7a1b3fef736b5c3e7c4c6b730cf5831e" exitCode=0 Jan 30 17:10:59 crc kubenswrapper[4712]: I0130 17:10:59.292065 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b726fq6n" event={"ID":"d6f30a7d-fc2e-4274-a5d9-8ff44755d83d","Type":"ContainerDied","Data":"3f7fea8a719b4da6fefaed4b51233e8a7a1b3fef736b5c3e7c4c6b730cf5831e"} Jan 30 17:10:59 crc kubenswrapper[4712]: I0130 17:10:59.292654 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b726fq6n" event={"ID":"d6f30a7d-fc2e-4274-a5d9-8ff44755d83d","Type":"ContainerStarted","Data":"22bd649e71df3897b8766876a21d05c5a8576a1a66202de49d6df885b60134ab"} Jan 30 17:11:00 crc kubenswrapper[4712]: I0130 17:11:00.300347 4712 generic.go:334] "Generic (PLEG): container finished" podID="d6f30a7d-fc2e-4274-a5d9-8ff44755d83d" containerID="4c69b460fbd4a009be453fbf1585672c63b9d5f3d41d5b38b498fcee01303ecd" exitCode=0 Jan 30 17:11:00 crc kubenswrapper[4712]: I0130 17:11:00.300418 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b726fq6n" event={"ID":"d6f30a7d-fc2e-4274-a5d9-8ff44755d83d","Type":"ContainerDied","Data":"4c69b460fbd4a009be453fbf1585672c63b9d5f3d41d5b38b498fcee01303ecd"} Jan 30 17:11:01 crc kubenswrapper[4712]: I0130 17:11:01.308382 4712 generic.go:334] "Generic (PLEG): container finished" podID="d6f30a7d-fc2e-4274-a5d9-8ff44755d83d" containerID="269ee18b3ce6e2eb70e6affdb2acc16f33df3a9d8f14dc9237110d86a27fdba0" exitCode=0 Jan 30 17:11:01 crc kubenswrapper[4712]: I0130 17:11:01.308425 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b726fq6n" event={"ID":"d6f30a7d-fc2e-4274-a5d9-8ff44755d83d","Type":"ContainerDied","Data":"269ee18b3ce6e2eb70e6affdb2acc16f33df3a9d8f14dc9237110d86a27fdba0"} Jan 30 17:11:02 crc kubenswrapper[4712]: I0130 17:11:02.541908 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b726fq6n" Jan 30 17:11:02 crc kubenswrapper[4712]: I0130 17:11:02.653534 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjl87\" (UniqueName: \"kubernetes.io/projected/d6f30a7d-fc2e-4274-a5d9-8ff44755d83d-kube-api-access-jjl87\") pod \"d6f30a7d-fc2e-4274-a5d9-8ff44755d83d\" (UID: \"d6f30a7d-fc2e-4274-a5d9-8ff44755d83d\") " Jan 30 17:11:02 crc kubenswrapper[4712]: I0130 17:11:02.653640 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d6f30a7d-fc2e-4274-a5d9-8ff44755d83d-util\") pod \"d6f30a7d-fc2e-4274-a5d9-8ff44755d83d\" (UID: \"d6f30a7d-fc2e-4274-a5d9-8ff44755d83d\") " Jan 30 17:11:02 crc kubenswrapper[4712]: I0130 17:11:02.653686 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d6f30a7d-fc2e-4274-a5d9-8ff44755d83d-bundle\") pod \"d6f30a7d-fc2e-4274-a5d9-8ff44755d83d\" (UID: \"d6f30a7d-fc2e-4274-a5d9-8ff44755d83d\") " Jan 30 17:11:02 crc kubenswrapper[4712]: I0130 17:11:02.654643 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6f30a7d-fc2e-4274-a5d9-8ff44755d83d-bundle" (OuterVolumeSpecName: "bundle") pod "d6f30a7d-fc2e-4274-a5d9-8ff44755d83d" (UID: "d6f30a7d-fc2e-4274-a5d9-8ff44755d83d"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:11:02 crc kubenswrapper[4712]: I0130 17:11:02.668066 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6f30a7d-fc2e-4274-a5d9-8ff44755d83d-kube-api-access-jjl87" (OuterVolumeSpecName: "kube-api-access-jjl87") pod "d6f30a7d-fc2e-4274-a5d9-8ff44755d83d" (UID: "d6f30a7d-fc2e-4274-a5d9-8ff44755d83d"). InnerVolumeSpecName "kube-api-access-jjl87". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:11:02 crc kubenswrapper[4712]: I0130 17:11:02.670082 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6f30a7d-fc2e-4274-a5d9-8ff44755d83d-util" (OuterVolumeSpecName: "util") pod "d6f30a7d-fc2e-4274-a5d9-8ff44755d83d" (UID: "d6f30a7d-fc2e-4274-a5d9-8ff44755d83d"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:11:02 crc kubenswrapper[4712]: I0130 17:11:02.755502 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjl87\" (UniqueName: \"kubernetes.io/projected/d6f30a7d-fc2e-4274-a5d9-8ff44755d83d-kube-api-access-jjl87\") on node \"crc\" DevicePath \"\"" Jan 30 17:11:02 crc kubenswrapper[4712]: I0130 17:11:02.755586 4712 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/d6f30a7d-fc2e-4274-a5d9-8ff44755d83d-util\") on node \"crc\" DevicePath \"\"" Jan 30 17:11:02 crc kubenswrapper[4712]: I0130 17:11:02.755605 4712 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/d6f30a7d-fc2e-4274-a5d9-8ff44755d83d-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:11:03 crc kubenswrapper[4712]: I0130 17:11:03.327434 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b726fq6n" event={"ID":"d6f30a7d-fc2e-4274-a5d9-8ff44755d83d","Type":"ContainerDied","Data":"22bd649e71df3897b8766876a21d05c5a8576a1a66202de49d6df885b60134ab"} Jan 30 17:11:03 crc kubenswrapper[4712]: I0130 17:11:03.327475 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22bd649e71df3897b8766876a21d05c5a8576a1a66202de49d6df885b60134ab" Jan 30 17:11:03 crc kubenswrapper[4712]: I0130 17:11:03.327505 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b726fq6n" Jan 30 17:11:05 crc kubenswrapper[4712]: I0130 17:11:05.300344 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-5884d87984-t6bbn"] Jan 30 17:11:05 crc kubenswrapper[4712]: E0130 17:11:05.300628 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6f30a7d-fc2e-4274-a5d9-8ff44755d83d" containerName="extract" Jan 30 17:11:05 crc kubenswrapper[4712]: I0130 17:11:05.300643 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6f30a7d-fc2e-4274-a5d9-8ff44755d83d" containerName="extract" Jan 30 17:11:05 crc kubenswrapper[4712]: E0130 17:11:05.300669 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6f30a7d-fc2e-4274-a5d9-8ff44755d83d" containerName="util" Jan 30 17:11:05 crc kubenswrapper[4712]: I0130 17:11:05.300678 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6f30a7d-fc2e-4274-a5d9-8ff44755d83d" containerName="util" Jan 30 17:11:05 crc kubenswrapper[4712]: E0130 17:11:05.300690 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6f30a7d-fc2e-4274-a5d9-8ff44755d83d" containerName="pull" Jan 30 17:11:05 crc kubenswrapper[4712]: I0130 17:11:05.300698 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6f30a7d-fc2e-4274-a5d9-8ff44755d83d" containerName="pull" Jan 30 17:11:05 crc kubenswrapper[4712]: I0130 17:11:05.300910 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6f30a7d-fc2e-4274-a5d9-8ff44755d83d" containerName="extract" Jan 30 17:11:05 crc kubenswrapper[4712]: I0130 17:11:05.301396 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5884d87984-t6bbn" Jan 30 17:11:05 crc kubenswrapper[4712]: I0130 17:11:05.303672 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-td7zp" Jan 30 17:11:05 crc kubenswrapper[4712]: I0130 17:11:05.326593 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5884d87984-t6bbn"] Jan 30 17:11:05 crc kubenswrapper[4712]: I0130 17:11:05.394092 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h249g\" (UniqueName: \"kubernetes.io/projected/16cf8838-73f4-4b47-a0a5-0258974c49db-kube-api-access-h249g\") pod \"openstack-operator-controller-init-5884d87984-t6bbn\" (UID: \"16cf8838-73f4-4b47-a0a5-0258974c49db\") " pod="openstack-operators/openstack-operator-controller-init-5884d87984-t6bbn" Jan 30 17:11:05 crc kubenswrapper[4712]: I0130 17:11:05.494828 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h249g\" (UniqueName: \"kubernetes.io/projected/16cf8838-73f4-4b47-a0a5-0258974c49db-kube-api-access-h249g\") pod \"openstack-operator-controller-init-5884d87984-t6bbn\" (UID: \"16cf8838-73f4-4b47-a0a5-0258974c49db\") " pod="openstack-operators/openstack-operator-controller-init-5884d87984-t6bbn" Jan 30 17:11:05 crc kubenswrapper[4712]: I0130 17:11:05.517193 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h249g\" (UniqueName: \"kubernetes.io/projected/16cf8838-73f4-4b47-a0a5-0258974c49db-kube-api-access-h249g\") pod \"openstack-operator-controller-init-5884d87984-t6bbn\" (UID: \"16cf8838-73f4-4b47-a0a5-0258974c49db\") " pod="openstack-operators/openstack-operator-controller-init-5884d87984-t6bbn" Jan 30 17:11:05 crc kubenswrapper[4712]: I0130 17:11:05.619451 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-5884d87984-t6bbn" Jan 30 17:11:05 crc kubenswrapper[4712]: I0130 17:11:05.928478 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-5884d87984-t6bbn"] Jan 30 17:11:06 crc kubenswrapper[4712]: I0130 17:11:06.270649 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:11:06 crc kubenswrapper[4712]: I0130 17:11:06.270704 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:11:06 crc kubenswrapper[4712]: I0130 17:11:06.293106 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pgkqf" Jan 30 17:11:06 crc kubenswrapper[4712]: I0130 17:11:06.348758 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5884d87984-t6bbn" event={"ID":"16cf8838-73f4-4b47-a0a5-0258974c49db","Type":"ContainerStarted","Data":"72ccca0779b8be8fcbe8a36c63b47cac2d15fbd4899f193caf8ab0a0fe013d59"} Jan 30 17:11:07 crc kubenswrapper[4712]: I0130 17:11:07.668159 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pgkqf"] Jan 30 17:11:07 crc kubenswrapper[4712]: I0130 17:11:07.668680 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pgkqf" podUID="49a81ce4-838e-43ed-9ab5-fe801fc2cc33" containerName="registry-server" containerID="cri-o://d77badfea90cb8ae60c89825630c16ffc961ecf3dd72011fdb399adc8f27a86f" gracePeriod=2 Jan 30 17:11:08 crc kubenswrapper[4712]: I0130 17:11:08.364605 4712 generic.go:334] "Generic (PLEG): container finished" podID="49a81ce4-838e-43ed-9ab5-fe801fc2cc33" containerID="d77badfea90cb8ae60c89825630c16ffc961ecf3dd72011fdb399adc8f27a86f" exitCode=0 Jan 30 17:11:08 crc kubenswrapper[4712]: I0130 17:11:08.364670 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pgkqf" event={"ID":"49a81ce4-838e-43ed-9ab5-fe801fc2cc33","Type":"ContainerDied","Data":"d77badfea90cb8ae60c89825630c16ffc961ecf3dd72011fdb399adc8f27a86f"} Jan 30 17:11:14 crc kubenswrapper[4712]: I0130 17:11:14.761141 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pgkqf" Jan 30 17:11:14 crc kubenswrapper[4712]: I0130 17:11:14.832008 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgclv\" (UniqueName: \"kubernetes.io/projected/49a81ce4-838e-43ed-9ab5-fe801fc2cc33-kube-api-access-lgclv\") pod \"49a81ce4-838e-43ed-9ab5-fe801fc2cc33\" (UID: \"49a81ce4-838e-43ed-9ab5-fe801fc2cc33\") " Jan 30 17:11:14 crc kubenswrapper[4712]: I0130 17:11:14.832138 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49a81ce4-838e-43ed-9ab5-fe801fc2cc33-utilities\") pod \"49a81ce4-838e-43ed-9ab5-fe801fc2cc33\" (UID: \"49a81ce4-838e-43ed-9ab5-fe801fc2cc33\") " Jan 30 17:11:14 crc kubenswrapper[4712]: I0130 17:11:14.832156 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49a81ce4-838e-43ed-9ab5-fe801fc2cc33-catalog-content\") pod \"49a81ce4-838e-43ed-9ab5-fe801fc2cc33\" (UID: \"49a81ce4-838e-43ed-9ab5-fe801fc2cc33\") " Jan 30 17:11:14 crc kubenswrapper[4712]: I0130 17:11:14.833462 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49a81ce4-838e-43ed-9ab5-fe801fc2cc33-utilities" (OuterVolumeSpecName: "utilities") pod "49a81ce4-838e-43ed-9ab5-fe801fc2cc33" (UID: "49a81ce4-838e-43ed-9ab5-fe801fc2cc33"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:11:14 crc kubenswrapper[4712]: I0130 17:11:14.846672 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49a81ce4-838e-43ed-9ab5-fe801fc2cc33-kube-api-access-lgclv" (OuterVolumeSpecName: "kube-api-access-lgclv") pod "49a81ce4-838e-43ed-9ab5-fe801fc2cc33" (UID: "49a81ce4-838e-43ed-9ab5-fe801fc2cc33"). InnerVolumeSpecName "kube-api-access-lgclv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:11:14 crc kubenswrapper[4712]: I0130 17:11:14.875181 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49a81ce4-838e-43ed-9ab5-fe801fc2cc33-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "49a81ce4-838e-43ed-9ab5-fe801fc2cc33" (UID: "49a81ce4-838e-43ed-9ab5-fe801fc2cc33"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:11:14 crc kubenswrapper[4712]: I0130 17:11:14.932841 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49a81ce4-838e-43ed-9ab5-fe801fc2cc33-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:11:14 crc kubenswrapper[4712]: I0130 17:11:14.932875 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49a81ce4-838e-43ed-9ab5-fe801fc2cc33-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:11:14 crc kubenswrapper[4712]: I0130 17:11:14.932888 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgclv\" (UniqueName: \"kubernetes.io/projected/49a81ce4-838e-43ed-9ab5-fe801fc2cc33-kube-api-access-lgclv\") on node \"crc\" DevicePath \"\"" Jan 30 17:11:15 crc kubenswrapper[4712]: I0130 17:11:15.409821 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-5884d87984-t6bbn" event={"ID":"16cf8838-73f4-4b47-a0a5-0258974c49db","Type":"ContainerStarted","Data":"6d2ae1909024e77cca2de2c63b233d5404dba1deb9767653fdc283ef1c2ba7b7"} Jan 30 17:11:15 crc kubenswrapper[4712]: I0130 17:11:15.410685 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5884d87984-t6bbn" Jan 30 17:11:15 crc kubenswrapper[4712]: I0130 17:11:15.412406 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pgkqf" event={"ID":"49a81ce4-838e-43ed-9ab5-fe801fc2cc33","Type":"ContainerDied","Data":"71db57566f731d91a1b96162a341c6ee01588aa17a417df5c448736bd0157af1"} Jan 30 17:11:15 crc kubenswrapper[4712]: I0130 17:11:15.412443 4712 scope.go:117] "RemoveContainer" containerID="d77badfea90cb8ae60c89825630c16ffc961ecf3dd72011fdb399adc8f27a86f" Jan 30 17:11:15 crc kubenswrapper[4712]: I0130 17:11:15.412565 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pgkqf" Jan 30 17:11:15 crc kubenswrapper[4712]: I0130 17:11:15.434201 4712 scope.go:117] "RemoveContainer" containerID="42be3885624a3dc8515032495274d49022d303e98be1f9ffabf220836447ab23" Jan 30 17:11:15 crc kubenswrapper[4712]: I0130 17:11:15.456473 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-5884d87984-t6bbn" podStartSLOduration=1.564295406 podStartE2EDuration="10.456451048s" podCreationTimestamp="2026-01-30 17:11:05 +0000 UTC" firstStartedPulling="2026-01-30 17:11:05.939695277 +0000 UTC m=+1002.846704746" lastFinishedPulling="2026-01-30 17:11:14.831850909 +0000 UTC m=+1011.738860388" observedRunningTime="2026-01-30 17:11:15.445859494 +0000 UTC m=+1012.352868983" watchObservedRunningTime="2026-01-30 17:11:15.456451048 +0000 UTC m=+1012.363460517" Jan 30 17:11:15 crc kubenswrapper[4712]: I0130 17:11:15.468257 4712 scope.go:117] "RemoveContainer" containerID="e1b002720ec777eb81737621681f66e63056fb08fd9b68a64e7be7918b8c11a6" Jan 30 17:11:15 crc kubenswrapper[4712]: I0130 17:11:15.477887 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pgkqf"] Jan 30 17:11:15 crc kubenswrapper[4712]: I0130 17:11:15.482300 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pgkqf"] Jan 30 17:11:15 crc kubenswrapper[4712]: I0130 17:11:15.807101 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49a81ce4-838e-43ed-9ab5-fe801fc2cc33" path="/var/lib/kubelet/pods/49a81ce4-838e-43ed-9ab5-fe801fc2cc33/volumes" Jan 30 17:11:19 crc kubenswrapper[4712]: I0130 17:11:19.498879 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-j5n6z"] Jan 30 17:11:19 crc kubenswrapper[4712]: E0130 17:11:19.499497 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49a81ce4-838e-43ed-9ab5-fe801fc2cc33" containerName="extract-content" Jan 30 17:11:19 crc kubenswrapper[4712]: I0130 17:11:19.499515 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="49a81ce4-838e-43ed-9ab5-fe801fc2cc33" containerName="extract-content" Jan 30 17:11:19 crc kubenswrapper[4712]: E0130 17:11:19.499544 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49a81ce4-838e-43ed-9ab5-fe801fc2cc33" containerName="extract-utilities" Jan 30 17:11:19 crc kubenswrapper[4712]: I0130 17:11:19.499570 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="49a81ce4-838e-43ed-9ab5-fe801fc2cc33" containerName="extract-utilities" Jan 30 17:11:19 crc kubenswrapper[4712]: E0130 17:11:19.499580 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49a81ce4-838e-43ed-9ab5-fe801fc2cc33" containerName="registry-server" Jan 30 17:11:19 crc kubenswrapper[4712]: I0130 17:11:19.499587 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="49a81ce4-838e-43ed-9ab5-fe801fc2cc33" containerName="registry-server" Jan 30 17:11:19 crc kubenswrapper[4712]: I0130 17:11:19.499736 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="49a81ce4-838e-43ed-9ab5-fe801fc2cc33" containerName="registry-server" Jan 30 17:11:19 crc kubenswrapper[4712]: I0130 17:11:19.500708 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j5n6z" Jan 30 17:11:19 crc kubenswrapper[4712]: I0130 17:11:19.511210 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j5n6z"] Jan 30 17:11:19 crc kubenswrapper[4712]: I0130 17:11:19.635254 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9wwm\" (UniqueName: \"kubernetes.io/projected/3beadd4c-3e60-44ab-8af3-53d4625bfd50-kube-api-access-h9wwm\") pod \"certified-operators-j5n6z\" (UID: \"3beadd4c-3e60-44ab-8af3-53d4625bfd50\") " pod="openshift-marketplace/certified-operators-j5n6z" Jan 30 17:11:19 crc kubenswrapper[4712]: I0130 17:11:19.635328 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3beadd4c-3e60-44ab-8af3-53d4625bfd50-catalog-content\") pod \"certified-operators-j5n6z\" (UID: \"3beadd4c-3e60-44ab-8af3-53d4625bfd50\") " pod="openshift-marketplace/certified-operators-j5n6z" Jan 30 17:11:19 crc kubenswrapper[4712]: I0130 17:11:19.635401 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3beadd4c-3e60-44ab-8af3-53d4625bfd50-utilities\") pod \"certified-operators-j5n6z\" (UID: \"3beadd4c-3e60-44ab-8af3-53d4625bfd50\") " pod="openshift-marketplace/certified-operators-j5n6z" Jan 30 17:11:19 crc kubenswrapper[4712]: I0130 17:11:19.736210 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9wwm\" (UniqueName: \"kubernetes.io/projected/3beadd4c-3e60-44ab-8af3-53d4625bfd50-kube-api-access-h9wwm\") pod \"certified-operators-j5n6z\" (UID: \"3beadd4c-3e60-44ab-8af3-53d4625bfd50\") " pod="openshift-marketplace/certified-operators-j5n6z" Jan 30 17:11:19 crc kubenswrapper[4712]: I0130 17:11:19.736634 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3beadd4c-3e60-44ab-8af3-53d4625bfd50-catalog-content\") pod \"certified-operators-j5n6z\" (UID: \"3beadd4c-3e60-44ab-8af3-53d4625bfd50\") " pod="openshift-marketplace/certified-operators-j5n6z" Jan 30 17:11:19 crc kubenswrapper[4712]: I0130 17:11:19.736682 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3beadd4c-3e60-44ab-8af3-53d4625bfd50-utilities\") pod \"certified-operators-j5n6z\" (UID: \"3beadd4c-3e60-44ab-8af3-53d4625bfd50\") " pod="openshift-marketplace/certified-operators-j5n6z" Jan 30 17:11:19 crc kubenswrapper[4712]: I0130 17:11:19.737174 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3beadd4c-3e60-44ab-8af3-53d4625bfd50-utilities\") pod \"certified-operators-j5n6z\" (UID: \"3beadd4c-3e60-44ab-8af3-53d4625bfd50\") " pod="openshift-marketplace/certified-operators-j5n6z" Jan 30 17:11:19 crc kubenswrapper[4712]: I0130 17:11:19.737382 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3beadd4c-3e60-44ab-8af3-53d4625bfd50-catalog-content\") pod \"certified-operators-j5n6z\" (UID: \"3beadd4c-3e60-44ab-8af3-53d4625bfd50\") " pod="openshift-marketplace/certified-operators-j5n6z" Jan 30 17:11:19 crc kubenswrapper[4712]: I0130 17:11:19.771190 4712 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-h9wwm\" (UniqueName: \"kubernetes.io/projected/3beadd4c-3e60-44ab-8af3-53d4625bfd50-kube-api-access-h9wwm\") pod \"certified-operators-j5n6z\" (UID: \"3beadd4c-3e60-44ab-8af3-53d4625bfd50\") " pod="openshift-marketplace/certified-operators-j5n6z" Jan 30 17:11:19 crc kubenswrapper[4712]: I0130 17:11:19.833895 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j5n6z" Jan 30 17:11:20 crc kubenswrapper[4712]: I0130 17:11:20.203109 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-j5n6z"] Jan 30 17:11:20 crc kubenswrapper[4712]: W0130 17:11:20.211954 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3beadd4c_3e60_44ab_8af3_53d4625bfd50.slice/crio-719233f4a419724a15ced49e66e7ffa128dd073bebcbdfc9393796ff2963708c WatchSource:0}: Error finding container 719233f4a419724a15ced49e66e7ffa128dd073bebcbdfc9393796ff2963708c: Status 404 returned error can't find the container with id 719233f4a419724a15ced49e66e7ffa128dd073bebcbdfc9393796ff2963708c Jan 30 17:11:20 crc kubenswrapper[4712]: I0130 17:11:20.455577 4712 generic.go:334] "Generic (PLEG): container finished" podID="3beadd4c-3e60-44ab-8af3-53d4625bfd50" containerID="28f1250c2620e9dbe61a244f2bd87849480e198405cafcc0739988a904255ed1" exitCode=0 Jan 30 17:11:20 crc kubenswrapper[4712]: I0130 17:11:20.455634 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j5n6z" event={"ID":"3beadd4c-3e60-44ab-8af3-53d4625bfd50","Type":"ContainerDied","Data":"28f1250c2620e9dbe61a244f2bd87849480e198405cafcc0739988a904255ed1"} Jan 30 17:11:20 crc kubenswrapper[4712]: I0130 17:11:20.455927 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j5n6z" event={"ID":"3beadd4c-3e60-44ab-8af3-53d4625bfd50","Type":"ContainerStarted","Data":"719233f4a419724a15ced49e66e7ffa128dd073bebcbdfc9393796ff2963708c"} Jan 30 17:11:21 crc kubenswrapper[4712]: I0130 17:11:21.463276 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j5n6z" event={"ID":"3beadd4c-3e60-44ab-8af3-53d4625bfd50","Type":"ContainerStarted","Data":"9ef79a37465b94f0af825e8f5ec034bb3b5ce13633724b058e9c36f2aa574304"} Jan 30 17:11:22 crc kubenswrapper[4712]: I0130 17:11:22.473602 4712 generic.go:334] "Generic (PLEG): container finished" podID="3beadd4c-3e60-44ab-8af3-53d4625bfd50" containerID="9ef79a37465b94f0af825e8f5ec034bb3b5ce13633724b058e9c36f2aa574304" exitCode=0 Jan 30 17:11:22 crc kubenswrapper[4712]: I0130 17:11:22.473691 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j5n6z" event={"ID":"3beadd4c-3e60-44ab-8af3-53d4625bfd50","Type":"ContainerDied","Data":"9ef79a37465b94f0af825e8f5ec034bb3b5ce13633724b058e9c36f2aa574304"} Jan 30 17:11:23 crc kubenswrapper[4712]: I0130 17:11:23.484248 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j5n6z" event={"ID":"3beadd4c-3e60-44ab-8af3-53d4625bfd50","Type":"ContainerStarted","Data":"0829c7809999597fe227c7350c58fa1c55599921e4428a4eedc497aaa515db26"} Jan 30 17:11:23 crc kubenswrapper[4712]: I0130 17:11:23.528821 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-j5n6z" 
podStartSLOduration=2.13276104 podStartE2EDuration="4.528783271s" podCreationTimestamp="2026-01-30 17:11:19 +0000 UTC" firstStartedPulling="2026-01-30 17:11:20.457223924 +0000 UTC m=+1017.364233393" lastFinishedPulling="2026-01-30 17:11:22.853246155 +0000 UTC m=+1019.760255624" observedRunningTime="2026-01-30 17:11:23.516301522 +0000 UTC m=+1020.423310991" watchObservedRunningTime="2026-01-30 17:11:23.528783271 +0000 UTC m=+1020.435792740" Jan 30 17:11:25 crc kubenswrapper[4712]: I0130 17:11:25.622585 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5884d87984-t6bbn" Jan 30 17:11:29 crc kubenswrapper[4712]: I0130 17:11:29.834003 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-j5n6z" Jan 30 17:11:29 crc kubenswrapper[4712]: I0130 17:11:29.834709 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-j5n6z" Jan 30 17:11:29 crc kubenswrapper[4712]: I0130 17:11:29.871704 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-j5n6z" Jan 30 17:11:30 crc kubenswrapper[4712]: I0130 17:11:30.581957 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-j5n6z" Jan 30 17:11:30 crc kubenswrapper[4712]: I0130 17:11:30.645394 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j5n6z"] Jan 30 17:11:32 crc kubenswrapper[4712]: I0130 17:11:32.539142 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-j5n6z" podUID="3beadd4c-3e60-44ab-8af3-53d4625bfd50" containerName="registry-server" containerID="cri-o://0829c7809999597fe227c7350c58fa1c55599921e4428a4eedc497aaa515db26" gracePeriod=2 Jan 30 17:11:33 crc kubenswrapper[4712]: I0130 17:11:33.449779 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-j5n6z" Jan 30 17:11:33 crc kubenswrapper[4712]: I0130 17:11:33.546413 4712 generic.go:334] "Generic (PLEG): container finished" podID="3beadd4c-3e60-44ab-8af3-53d4625bfd50" containerID="0829c7809999597fe227c7350c58fa1c55599921e4428a4eedc497aaa515db26" exitCode=0 Jan 30 17:11:33 crc kubenswrapper[4712]: I0130 17:11:33.546450 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j5n6z" event={"ID":"3beadd4c-3e60-44ab-8af3-53d4625bfd50","Type":"ContainerDied","Data":"0829c7809999597fe227c7350c58fa1c55599921e4428a4eedc497aaa515db26"} Jan 30 17:11:33 crc kubenswrapper[4712]: I0130 17:11:33.546478 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-j5n6z" event={"ID":"3beadd4c-3e60-44ab-8af3-53d4625bfd50","Type":"ContainerDied","Data":"719233f4a419724a15ced49e66e7ffa128dd073bebcbdfc9393796ff2963708c"} Jan 30 17:11:33 crc kubenswrapper[4712]: I0130 17:11:33.546488 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-j5n6z" Jan 30 17:11:33 crc kubenswrapper[4712]: I0130 17:11:33.546513 4712 scope.go:117] "RemoveContainer" containerID="0829c7809999597fe227c7350c58fa1c55599921e4428a4eedc497aaa515db26" Jan 30 17:11:33 crc kubenswrapper[4712]: I0130 17:11:33.565147 4712 scope.go:117] "RemoveContainer" containerID="9ef79a37465b94f0af825e8f5ec034bb3b5ce13633724b058e9c36f2aa574304" Jan 30 17:11:33 crc kubenswrapper[4712]: I0130 17:11:33.600834 4712 scope.go:117] "RemoveContainer" containerID="28f1250c2620e9dbe61a244f2bd87849480e198405cafcc0739988a904255ed1" Jan 30 17:11:33 crc kubenswrapper[4712]: I0130 17:11:33.618168 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3beadd4c-3e60-44ab-8af3-53d4625bfd50-catalog-content\") pod \"3beadd4c-3e60-44ab-8af3-53d4625bfd50\" (UID: \"3beadd4c-3e60-44ab-8af3-53d4625bfd50\") " Jan 30 17:11:33 crc kubenswrapper[4712]: I0130 17:11:33.618295 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9wwm\" (UniqueName: \"kubernetes.io/projected/3beadd4c-3e60-44ab-8af3-53d4625bfd50-kube-api-access-h9wwm\") pod \"3beadd4c-3e60-44ab-8af3-53d4625bfd50\" (UID: \"3beadd4c-3e60-44ab-8af3-53d4625bfd50\") " Jan 30 17:11:33 crc kubenswrapper[4712]: I0130 17:11:33.618367 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3beadd4c-3e60-44ab-8af3-53d4625bfd50-utilities\") pod \"3beadd4c-3e60-44ab-8af3-53d4625bfd50\" (UID: \"3beadd4c-3e60-44ab-8af3-53d4625bfd50\") " Jan 30 17:11:33 crc kubenswrapper[4712]: I0130 17:11:33.619084 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3beadd4c-3e60-44ab-8af3-53d4625bfd50-utilities" (OuterVolumeSpecName: "utilities") pod "3beadd4c-3e60-44ab-8af3-53d4625bfd50" (UID: "3beadd4c-3e60-44ab-8af3-53d4625bfd50"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:11:33 crc kubenswrapper[4712]: I0130 17:11:33.638348 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3beadd4c-3e60-44ab-8af3-53d4625bfd50-kube-api-access-h9wwm" (OuterVolumeSpecName: "kube-api-access-h9wwm") pod "3beadd4c-3e60-44ab-8af3-53d4625bfd50" (UID: "3beadd4c-3e60-44ab-8af3-53d4625bfd50"). InnerVolumeSpecName "kube-api-access-h9wwm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:11:33 crc kubenswrapper[4712]: I0130 17:11:33.650736 4712 scope.go:117] "RemoveContainer" containerID="0829c7809999597fe227c7350c58fa1c55599921e4428a4eedc497aaa515db26" Jan 30 17:11:33 crc kubenswrapper[4712]: E0130 17:11:33.655896 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0829c7809999597fe227c7350c58fa1c55599921e4428a4eedc497aaa515db26\": container with ID starting with 0829c7809999597fe227c7350c58fa1c55599921e4428a4eedc497aaa515db26 not found: ID does not exist" containerID="0829c7809999597fe227c7350c58fa1c55599921e4428a4eedc497aaa515db26" Jan 30 17:11:33 crc kubenswrapper[4712]: I0130 17:11:33.656039 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0829c7809999597fe227c7350c58fa1c55599921e4428a4eedc497aaa515db26"} err="failed to get container status \"0829c7809999597fe227c7350c58fa1c55599921e4428a4eedc497aaa515db26\": rpc error: code = NotFound desc = could not find container \"0829c7809999597fe227c7350c58fa1c55599921e4428a4eedc497aaa515db26\": container with ID starting with 0829c7809999597fe227c7350c58fa1c55599921e4428a4eedc497aaa515db26 not found: ID does not exist" Jan 30 17:11:33 crc kubenswrapper[4712]: I0130 17:11:33.656138 4712 scope.go:117] "RemoveContainer" containerID="9ef79a37465b94f0af825e8f5ec034bb3b5ce13633724b058e9c36f2aa574304" Jan 30 17:11:33 crc kubenswrapper[4712]: E0130 17:11:33.660167 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ef79a37465b94f0af825e8f5ec034bb3b5ce13633724b058e9c36f2aa574304\": container with ID starting with 9ef79a37465b94f0af825e8f5ec034bb3b5ce13633724b058e9c36f2aa574304 not found: ID does not exist" containerID="9ef79a37465b94f0af825e8f5ec034bb3b5ce13633724b058e9c36f2aa574304" Jan 30 17:11:33 crc kubenswrapper[4712]: I0130 17:11:33.660304 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ef79a37465b94f0af825e8f5ec034bb3b5ce13633724b058e9c36f2aa574304"} err="failed to get container status \"9ef79a37465b94f0af825e8f5ec034bb3b5ce13633724b058e9c36f2aa574304\": rpc error: code = NotFound desc = could not find container \"9ef79a37465b94f0af825e8f5ec034bb3b5ce13633724b058e9c36f2aa574304\": container with ID starting with 9ef79a37465b94f0af825e8f5ec034bb3b5ce13633724b058e9c36f2aa574304 not found: ID does not exist" Jan 30 17:11:33 crc kubenswrapper[4712]: I0130 17:11:33.660379 4712 scope.go:117] "RemoveContainer" containerID="28f1250c2620e9dbe61a244f2bd87849480e198405cafcc0739988a904255ed1" Jan 30 17:11:33 crc kubenswrapper[4712]: E0130 17:11:33.664713 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"28f1250c2620e9dbe61a244f2bd87849480e198405cafcc0739988a904255ed1\": container with ID starting with 28f1250c2620e9dbe61a244f2bd87849480e198405cafcc0739988a904255ed1 not found: ID does not exist" containerID="28f1250c2620e9dbe61a244f2bd87849480e198405cafcc0739988a904255ed1" Jan 30 17:11:33 crc kubenswrapper[4712]: I0130 17:11:33.664757 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"28f1250c2620e9dbe61a244f2bd87849480e198405cafcc0739988a904255ed1"} err="failed to get container status \"28f1250c2620e9dbe61a244f2bd87849480e198405cafcc0739988a904255ed1\": rpc error: code = NotFound desc = could not 
find container \"28f1250c2620e9dbe61a244f2bd87849480e198405cafcc0739988a904255ed1\": container with ID starting with 28f1250c2620e9dbe61a244f2bd87849480e198405cafcc0739988a904255ed1 not found: ID does not exist" Jan 30 17:11:33 crc kubenswrapper[4712]: I0130 17:11:33.696933 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3beadd4c-3e60-44ab-8af3-53d4625bfd50-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3beadd4c-3e60-44ab-8af3-53d4625bfd50" (UID: "3beadd4c-3e60-44ab-8af3-53d4625bfd50"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:11:33 crc kubenswrapper[4712]: I0130 17:11:33.719877 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3beadd4c-3e60-44ab-8af3-53d4625bfd50-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:11:33 crc kubenswrapper[4712]: I0130 17:11:33.720089 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3beadd4c-3e60-44ab-8af3-53d4625bfd50-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:11:33 crc kubenswrapper[4712]: I0130 17:11:33.720151 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h9wwm\" (UniqueName: \"kubernetes.io/projected/3beadd4c-3e60-44ab-8af3-53d4625bfd50-kube-api-access-h9wwm\") on node \"crc\" DevicePath \"\"" Jan 30 17:11:33 crc kubenswrapper[4712]: I0130 17:11:33.862353 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-j5n6z"] Jan 30 17:11:33 crc kubenswrapper[4712]: I0130 17:11:33.867577 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-j5n6z"] Jan 30 17:11:35 crc kubenswrapper[4712]: I0130 17:11:35.813367 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3beadd4c-3e60-44ab-8af3-53d4625bfd50" path="/var/lib/kubelet/pods/3beadd4c-3e60-44ab-8af3-53d4625bfd50/volumes" Jan 30 17:11:36 crc kubenswrapper[4712]: I0130 17:11:36.271106 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:11:36 crc kubenswrapper[4712]: I0130 17:11:36.271170 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:11:36 crc kubenswrapper[4712]: I0130 17:11:36.271214 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" Jan 30 17:11:36 crc kubenswrapper[4712]: I0130 17:11:36.271777 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1f74eb8e5d1037eaec314ae58dc333985d1e77823d3293834609e8af2e98478d"} pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 17:11:36 crc kubenswrapper[4712]: I0130 17:11:36.271867 4712 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" containerID="cri-o://1f74eb8e5d1037eaec314ae58dc333985d1e77823d3293834609e8af2e98478d" gracePeriod=600 Jan 30 17:11:36 crc kubenswrapper[4712]: I0130 17:11:36.568708 4712 generic.go:334] "Generic (PLEG): container finished" podID="75ff6334-72a0-4748-bba6-0efb493c8033" containerID="1f74eb8e5d1037eaec314ae58dc333985d1e77823d3293834609e8af2e98478d" exitCode=0 Jan 30 17:11:36 crc kubenswrapper[4712]: I0130 17:11:36.568752 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerDied","Data":"1f74eb8e5d1037eaec314ae58dc333985d1e77823d3293834609e8af2e98478d"} Jan 30 17:11:36 crc kubenswrapper[4712]: I0130 17:11:36.568787 4712 scope.go:117] "RemoveContainer" containerID="2ccb7e72de28daa8c77382ed0d9f3fcdc643489cf9e4bd09a65cf85b38be2156" Jan 30 17:11:37 crc kubenswrapper[4712]: I0130 17:11:37.576658 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerStarted","Data":"65dbc6a56b610e6c479fb5dd8ad2aa9258f4202d2a0ef57103525088af93b4a2"} Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.762613 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-tfxdt"] Jan 30 17:11:44 crc kubenswrapper[4712]: E0130 17:11:44.763367 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3beadd4c-3e60-44ab-8af3-53d4625bfd50" containerName="registry-server" Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.763380 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="3beadd4c-3e60-44ab-8af3-53d4625bfd50" containerName="registry-server" Jan 30 17:11:44 crc kubenswrapper[4712]: E0130 17:11:44.763390 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3beadd4c-3e60-44ab-8af3-53d4625bfd50" containerName="extract-content" Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.763395 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="3beadd4c-3e60-44ab-8af3-53d4625bfd50" containerName="extract-content" Jan 30 17:11:44 crc kubenswrapper[4712]: E0130 17:11:44.763406 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3beadd4c-3e60-44ab-8af3-53d4625bfd50" containerName="extract-utilities" Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.763411 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="3beadd4c-3e60-44ab-8af3-53d4625bfd50" containerName="extract-utilities" Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.763516 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="3beadd4c-3e60-44ab-8af3-53d4625bfd50" containerName="registry-server" Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.763954 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-tfxdt" Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.773150 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-xfmvz"] Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.773741 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-cxkmb" Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.773975 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-xfmvz" Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.775597 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-qllhh" Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.779599 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-tfxdt"] Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.795051 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-xfmvz"] Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.828781 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-2h4zg"] Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.829510 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2h4zg" Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.833191 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-tknc6" Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.841763 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-lqxpc"] Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.842702 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lqxpc" Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.848515 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-whw5b" Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.862320 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-jkjdt"] Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.863116 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-jkjdt" Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.877022 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-87zb8" Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.878056 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-984h6\" (UniqueName: \"kubernetes.io/projected/6e263552-c0f6-4f24-879f-79895cdbc953-kube-api-access-984h6\") pod \"heat-operator-controller-manager-69d6db494d-lqxpc\" (UID: \"6e263552-c0f6-4f24-879f-79895cdbc953\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lqxpc" Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.878114 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f6tv\" (UniqueName: \"kubernetes.io/projected/2bc54d51-4f21-479f-a89e-1c60a757433f-kube-api-access-2f6tv\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-tfxdt\" (UID: \"2bc54d51-4f21-479f-a89e-1c60a757433f\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-tfxdt" Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.878168 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfsmx\" (UniqueName: \"kubernetes.io/projected/e1a1d497-2276-4248-9bca-1c7038430933-kube-api-access-hfsmx\") pod \"cinder-operator-controller-manager-8d874c8fc-xfmvz\" (UID: \"e1a1d497-2276-4248-9bca-1c7038430933\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-xfmvz" Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.878199 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jjk6\" (UniqueName: \"kubernetes.io/projected/aa03f8a3-9bea-4b56-92ce-27d1fe53840a-kube-api-access-6jjk6\") pod \"glance-operator-controller-manager-8886f4c47-2h4zg\" (UID: \"aa03f8a3-9bea-4b56-92ce-27d1fe53840a\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2h4zg" Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.891029 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-lqxpc"] Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.901108 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-2h4zg"] Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.919045 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-jkjdt"] Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.922577 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-xbk9b"] Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.923521 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-xbk9b" Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.928763 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-d4rv4" Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.946411 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-xbk9b"] Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.968484 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-lwlhf"] Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.969341 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-lwlhf" Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.974457 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-l62x6"] Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.975077 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-l62x6" Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.978131 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-lrwhn" Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.979566 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f6tv\" (UniqueName: \"kubernetes.io/projected/2bc54d51-4f21-479f-a89e-1c60a757433f-kube-api-access-2f6tv\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-tfxdt\" (UID: \"2bc54d51-4f21-479f-a89e-1c60a757433f\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-tfxdt" Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.979614 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8czx7\" (UniqueName: \"kubernetes.io/projected/cc62b7c7-5521-41df-bf10-d9cc287fbf7f-kube-api-access-8czx7\") pod \"designate-operator-controller-manager-6d9697b7f4-jkjdt\" (UID: \"cc62b7c7-5521-41df-bf10-d9cc287fbf7f\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-jkjdt" Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.979633 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96jkk\" (UniqueName: \"kubernetes.io/projected/5ccbb7b6-e489-4676-8faa-8a0306776a54-kube-api-access-96jkk\") pod \"horizon-operator-controller-manager-5fb775575f-xbk9b\" (UID: \"5ccbb7b6-e489-4676-8faa-8a0306776a54\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-xbk9b" Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.979665 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfsmx\" (UniqueName: \"kubernetes.io/projected/e1a1d497-2276-4248-9bca-1c7038430933-kube-api-access-hfsmx\") pod \"cinder-operator-controller-manager-8d874c8fc-xfmvz\" (UID: \"e1a1d497-2276-4248-9bca-1c7038430933\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-xfmvz" Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.979696 4712 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-6jjk6\" (UniqueName: \"kubernetes.io/projected/aa03f8a3-9bea-4b56-92ce-27d1fe53840a-kube-api-access-6jjk6\") pod \"glance-operator-controller-manager-8886f4c47-2h4zg\" (UID: \"aa03f8a3-9bea-4b56-92ce-27d1fe53840a\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2h4zg" Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.979721 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-984h6\" (UniqueName: \"kubernetes.io/projected/6e263552-c0f6-4f24-879f-79895cdbc953-kube-api-access-984h6\") pod \"heat-operator-controller-manager-69d6db494d-lqxpc\" (UID: \"6e263552-c0f6-4f24-879f-79895cdbc953\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lqxpc" Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.985048 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-tqg67" Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.985191 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 30 17:11:44 crc kubenswrapper[4712]: I0130 17:11:44.989121 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-lwlhf"] Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:44.996223 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-z9d9r"] Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.016280 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-l62x6"] Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.016311 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-z9d9r"] Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.016379 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-z9d9r" Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.022104 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-2n8cf"] Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.022871 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2n8cf" Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.025321 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jjk6\" (UniqueName: \"kubernetes.io/projected/aa03f8a3-9bea-4b56-92ce-27d1fe53840a-kube-api-access-6jjk6\") pod \"glance-operator-controller-manager-8886f4c47-2h4zg\" (UID: \"aa03f8a3-9bea-4b56-92ce-27d1fe53840a\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2h4zg" Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.025461 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f6tv\" (UniqueName: \"kubernetes.io/projected/2bc54d51-4f21-479f-a89e-1c60a757433f-kube-api-access-2f6tv\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-tfxdt\" (UID: \"2bc54d51-4f21-479f-a89e-1c60a757433f\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-tfxdt" Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.050752 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-984h6\" (UniqueName: \"kubernetes.io/projected/6e263552-c0f6-4f24-879f-79895cdbc953-kube-api-access-984h6\") pod \"heat-operator-controller-manager-69d6db494d-lqxpc\" (UID: \"6e263552-c0f6-4f24-879f-79895cdbc953\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lqxpc" Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.051204 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-wp89m"] Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.051977 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-wp89m" Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.054960 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-2n8cf"] Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.055042 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-gb6kp" Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.055202 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-5zk87" Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.060848 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-hmgxs" Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.062428 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfsmx\" (UniqueName: \"kubernetes.io/projected/e1a1d497-2276-4248-9bca-1c7038430933-kube-api-access-hfsmx\") pod \"cinder-operator-controller-manager-8d874c8fc-xfmvz\" (UID: \"e1a1d497-2276-4248-9bca-1c7038430933\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-xfmvz" Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.097908 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-tfxdt"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.105295 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7r824\" (UniqueName: \"kubernetes.io/projected/d3b1d20e-d20c-40f9-9c2b-314aee2fe51e-kube-api-access-7r824\") pod \"keystone-operator-controller-manager-84f48565d4-l62x6\" (UID: \"d3b1d20e-d20c-40f9-9c2b-314aee2fe51e\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-l62x6"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.105463 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9j92\" (UniqueName: \"kubernetes.io/projected/7b99459b-9311-4260-be34-3de859c1e0b0-kube-api-access-g9j92\") pod \"infra-operator-controller-manager-79955696d6-lwlhf\" (UID: \"7b99459b-9311-4260-be34-3de859c1e0b0\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-lwlhf"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.121340 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8czx7\" (UniqueName: \"kubernetes.io/projected/cc62b7c7-5521-41df-bf10-d9cc287fbf7f-kube-api-access-8czx7\") pod \"designate-operator-controller-manager-6d9697b7f4-jkjdt\" (UID: \"cc62b7c7-5521-41df-bf10-d9cc287fbf7f\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-jkjdt"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.121567 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96jkk\" (UniqueName: \"kubernetes.io/projected/5ccbb7b6-e489-4676-8faa-8a0306776a54-kube-api-access-96jkk\") pod \"horizon-operator-controller-manager-5fb775575f-xbk9b\" (UID: \"5ccbb7b6-e489-4676-8faa-8a0306776a54\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-xbk9b"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.121740 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfngp\" (UniqueName: \"kubernetes.io/projected/c8354464-6e92-4961-833a-414efe43db13-kube-api-access-rfngp\") pod \"mariadb-operator-controller-manager-67bf948998-wp89m\" (UID: \"c8354464-6e92-4961-833a-414efe43db13\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-wp89m"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.121963 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ffvg\" (UniqueName: \"kubernetes.io/projected/3bfc9890-11b6-4fcf-9458-08dce816b4b9-kube-api-access-6ffvg\") pod \"ironic-operator-controller-manager-5f4b8bd54d-z9d9r\" (UID: \"3bfc9890-11b6-4fcf-9458-08dce816b4b9\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-z9d9r"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.122127 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2ncg\" (UniqueName: \"kubernetes.io/projected/957cefd9-5116-40c3-aaf4-67ba58319ca1-kube-api-access-z2ncg\") pod \"manila-operator-controller-manager-7dd968899f-2n8cf\" (UID: \"957cefd9-5116-40c3-aaf4-67ba58319ca1\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2n8cf"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.122368 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b99459b-9311-4260-be34-3de859c1e0b0-cert\") pod \"infra-operator-controller-manager-79955696d6-lwlhf\" (UID: \"7b99459b-9311-4260-be34-3de859c1e0b0\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-lwlhf"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.164961 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-7pr55"]
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.190693 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96jkk\" (UniqueName: \"kubernetes.io/projected/5ccbb7b6-e489-4676-8faa-8a0306776a54-kube-api-access-96jkk\") pod \"horizon-operator-controller-manager-5fb775575f-xbk9b\" (UID: \"5ccbb7b6-e489-4676-8faa-8a0306776a54\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-xbk9b"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.194832 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-xfmvz"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.195097 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2h4zg"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.195693 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lqxpc"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.201349 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-7pr55"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.205753 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-4vn8p"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.209698 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8czx7\" (UniqueName: \"kubernetes.io/projected/cc62b7c7-5521-41df-bf10-d9cc287fbf7f-kube-api-access-8czx7\") pod \"designate-operator-controller-manager-6d9697b7f4-jkjdt\" (UID: \"cc62b7c7-5521-41df-bf10-d9cc287fbf7f\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-jkjdt"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.221986 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-wp89m"]
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.228602 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfngp\" (UniqueName: \"kubernetes.io/projected/c8354464-6e92-4961-833a-414efe43db13-kube-api-access-rfngp\") pod \"mariadb-operator-controller-manager-67bf948998-wp89m\" (UID: \"c8354464-6e92-4961-833a-414efe43db13\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-wp89m"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.228663 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6ffvg\" (UniqueName: \"kubernetes.io/projected/3bfc9890-11b6-4fcf-9458-08dce816b4b9-kube-api-access-6ffvg\") pod \"ironic-operator-controller-manager-5f4b8bd54d-z9d9r\" (UID: \"3bfc9890-11b6-4fcf-9458-08dce816b4b9\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-z9d9r"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.228696 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2ncg\" (UniqueName: \"kubernetes.io/projected/957cefd9-5116-40c3-aaf4-67ba58319ca1-kube-api-access-z2ncg\") pod \"manila-operator-controller-manager-7dd968899f-2n8cf\" (UID: \"957cefd9-5116-40c3-aaf4-67ba58319ca1\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2n8cf"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.228743 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rfgj\" (UniqueName: \"kubernetes.io/projected/b3222b74-686d-4b44-b521-33fb24c0b403-kube-api-access-2rfgj\") pod \"neutron-operator-controller-manager-585dbc889-7pr55\" (UID: \"b3222b74-686d-4b44-b521-33fb24c0b403\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-7pr55"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.228816 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b99459b-9311-4260-be34-3de859c1e0b0-cert\") pod \"infra-operator-controller-manager-79955696d6-lwlhf\" (UID: \"7b99459b-9311-4260-be34-3de859c1e0b0\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-lwlhf"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.228859 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7r824\" (UniqueName: \"kubernetes.io/projected/d3b1d20e-d20c-40f9-9c2b-314aee2fe51e-kube-api-access-7r824\") pod \"keystone-operator-controller-manager-84f48565d4-l62x6\" (UID: \"d3b1d20e-d20c-40f9-9c2b-314aee2fe51e\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-l62x6"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.228890 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9j92\" (UniqueName: \"kubernetes.io/projected/7b99459b-9311-4260-be34-3de859c1e0b0-kube-api-access-g9j92\") pod \"infra-operator-controller-manager-79955696d6-lwlhf\" (UID: \"7b99459b-9311-4260-be34-3de859c1e0b0\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-lwlhf"
Jan 30 17:11:45 crc kubenswrapper[4712]: E0130 17:11:45.229596 4712 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 30 17:11:45 crc kubenswrapper[4712]: E0130 17:11:45.229649 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b99459b-9311-4260-be34-3de859c1e0b0-cert podName:7b99459b-9311-4260-be34-3de859c1e0b0 nodeName:}" failed. No retries permitted until 2026-01-30 17:11:45.729630557 +0000 UTC m=+1042.636640036 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7b99459b-9311-4260-be34-3de859c1e0b0-cert") pod "infra-operator-controller-manager-79955696d6-lwlhf" (UID: "7b99459b-9311-4260-be34-3de859c1e0b0") : secret "infra-operator-webhook-server-cert" not found
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.229821 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-7pr55"]
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.244073 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-xbk9b"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.247191 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-kj9k8"]
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.248983 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-kj9k8"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.253294 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9j92\" (UniqueName: \"kubernetes.io/projected/7b99459b-9311-4260-be34-3de859c1e0b0-kube-api-access-g9j92\") pod \"infra-operator-controller-manager-79955696d6-lwlhf\" (UID: \"7b99459b-9311-4260-be34-3de859c1e0b0\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-lwlhf"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.255304 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-48pdj"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.269307 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7r824\" (UniqueName: \"kubernetes.io/projected/d3b1d20e-d20c-40f9-9c2b-314aee2fe51e-kube-api-access-7r824\") pod \"keystone-operator-controller-manager-84f48565d4-l62x6\" (UID: \"d3b1d20e-d20c-40f9-9c2b-314aee2fe51e\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-l62x6"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.276358 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2ncg\" (UniqueName: \"kubernetes.io/projected/957cefd9-5116-40c3-aaf4-67ba58319ca1-kube-api-access-z2ncg\") pod \"manila-operator-controller-manager-7dd968899f-2n8cf\" (UID: \"957cefd9-5116-40c3-aaf4-67ba58319ca1\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2n8cf"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.277877 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ffvg\" (UniqueName: \"kubernetes.io/projected/3bfc9890-11b6-4fcf-9458-08dce816b4b9-kube-api-access-6ffvg\") pod \"ironic-operator-controller-manager-5f4b8bd54d-z9d9r\" (UID: \"3bfc9890-11b6-4fcf-9458-08dce816b4b9\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-z9d9r"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.282742 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-kj9k8"]
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.288531 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfngp\" (UniqueName: \"kubernetes.io/projected/c8354464-6e92-4961-833a-414efe43db13-kube-api-access-rfngp\") pod \"mariadb-operator-controller-manager-67bf948998-wp89m\" (UID: \"c8354464-6e92-4961-833a-414efe43db13\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-wp89m"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.296870 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-jjb4n"]
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.297873 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-jjb4n"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.306219 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-vbz26"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.317873 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-jjb4n"]
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.330812 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rfgj\" (UniqueName: \"kubernetes.io/projected/b3222b74-686d-4b44-b521-33fb24c0b403-kube-api-access-2rfgj\") pod \"neutron-operator-controller-manager-585dbc889-7pr55\" (UID: \"b3222b74-686d-4b44-b521-33fb24c0b403\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-7pr55"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.330851 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkzh9\" (UniqueName: \"kubernetes.io/projected/1abbe42a-dbb1-4ec5-8318-451adc608b2b-kube-api-access-zkzh9\") pod \"nova-operator-controller-manager-55bff696bd-kj9k8\" (UID: \"1abbe42a-dbb1-4ec5-8318-451adc608b2b\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-kj9k8"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.330941 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gslh9\" (UniqueName: \"kubernetes.io/projected/70ad565b-dc4e-4f67-863a-fd29c88ad39d-kube-api-access-gslh9\") pod \"octavia-operator-controller-manager-6687f8d877-jjb4n\" (UID: \"70ad565b-dc4e-4f67-863a-fd29c88ad39d\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-jjb4n"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.349537 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drtdz2"]
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.350580 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drtdz2"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.360083 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.360292 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-nxvh6"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.362154 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-l62x6"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.376837 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rfgj\" (UniqueName: \"kubernetes.io/projected/b3222b74-686d-4b44-b521-33fb24c0b403-kube-api-access-2rfgj\") pod \"neutron-operator-controller-manager-585dbc889-7pr55\" (UID: \"b3222b74-686d-4b44-b521-33fb24c0b403\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-7pr55"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.394452 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drtdz2"]
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.413874 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-smj59"]
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.414647 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-smj59"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.425477 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-z9d9r"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.435494 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gslh9\" (UniqueName: \"kubernetes.io/projected/70ad565b-dc4e-4f67-863a-fd29c88ad39d-kube-api-access-gslh9\") pod \"octavia-operator-controller-manager-6687f8d877-jjb4n\" (UID: \"70ad565b-dc4e-4f67-863a-fd29c88ad39d\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-jjb4n"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.435543 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6b4q\" (UniqueName: \"kubernetes.io/projected/d4821c16-36e6-43c6-91f1-5fdf29b5b88a-kube-api-access-t6b4q\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4drtdz2\" (UID: \"d4821c16-36e6-43c6-91f1-5fdf29b5b88a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drtdz2"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.435579 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d4821c16-36e6-43c6-91f1-5fdf29b5b88a-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4drtdz2\" (UID: \"d4821c16-36e6-43c6-91f1-5fdf29b5b88a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drtdz2"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.435600 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkzh9\" (UniqueName: \"kubernetes.io/projected/1abbe42a-dbb1-4ec5-8318-451adc608b2b-kube-api-access-zkzh9\") pod \"nova-operator-controller-manager-55bff696bd-kj9k8\" (UID: \"1abbe42a-dbb1-4ec5-8318-451adc608b2b\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-kj9k8"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.463672 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-vltdb"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.466422 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkzh9\" (UniqueName: \"kubernetes.io/projected/1abbe42a-dbb1-4ec5-8318-451adc608b2b-kube-api-access-zkzh9\") pod \"nova-operator-controller-manager-55bff696bd-kj9k8\" (UID: \"1abbe42a-dbb1-4ec5-8318-451adc608b2b\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-kj9k8"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.466487 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-4l4j7"]
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.467947 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-4l4j7"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.468784 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gslh9\" (UniqueName: \"kubernetes.io/projected/70ad565b-dc4e-4f67-863a-fd29c88ad39d-kube-api-access-gslh9\") pod \"octavia-operator-controller-manager-6687f8d877-jjb4n\" (UID: \"70ad565b-dc4e-4f67-863a-fd29c88ad39d\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-jjb4n"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.480988 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-smj59"]
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.492185 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-ddpv5"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.492375 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-4l4j7"]
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.492629 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-jkjdt"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.492642 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2n8cf"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.509177 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-rfmgz"]
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.512458 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rfmgz"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.514147 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2x2xt"]
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.514938 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2x2xt"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.518227 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-rfmgz"]
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.529045 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-wp89m"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.529162 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-pqltf"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.529473 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-gpz9f"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.532020 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2x2xt"]
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.535212 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-7pr55"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.536728 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d4821c16-36e6-43c6-91f1-5fdf29b5b88a-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4drtdz2\" (UID: \"d4821c16-36e6-43c6-91f1-5fdf29b5b88a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drtdz2"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.536816 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78g2h\" (UniqueName: \"kubernetes.io/projected/19489158-a72e-4e6d-981a-879b596fb9b8-kube-api-access-78g2h\") pod \"ovn-operator-controller-manager-788c46999f-smj59\" (UID: \"19489158-a72e-4e6d-981a-879b596fb9b8\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-smj59"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.536907 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fnll\" (UniqueName: \"kubernetes.io/projected/adbd0e89-e0e3-46eb-b2c5-4482cc71deae-kube-api-access-6fnll\") pod \"placement-operator-controller-manager-5b964cf4cd-4l4j7\" (UID: \"adbd0e89-e0e3-46eb-b2c5-4482cc71deae\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-4l4j7"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.536945 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6b4q\" (UniqueName: \"kubernetes.io/projected/d4821c16-36e6-43c6-91f1-5fdf29b5b88a-kube-api-access-t6b4q\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4drtdz2\" (UID: \"d4821c16-36e6-43c6-91f1-5fdf29b5b88a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drtdz2"
Jan 30 17:11:45 crc kubenswrapper[4712]: E0130 17:11:45.536955 4712 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 30 17:11:45 crc kubenswrapper[4712]: E0130 17:11:45.537086 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4821c16-36e6-43c6-91f1-5fdf29b5b88a-cert podName:d4821c16-36e6-43c6-91f1-5fdf29b5b88a nodeName:}" failed. No retries permitted until 2026-01-30 17:11:46.037071125 +0000 UTC m=+1042.944080594 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d4821c16-36e6-43c6-91f1-5fdf29b5b88a-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4drtdz2" (UID: "d4821c16-36e6-43c6-91f1-5fdf29b5b88a") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.584129 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-kj9k8"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.586255 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-78v95"]
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.587089 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-78v95"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.597003 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-75nt6"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.597657 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6b4q\" (UniqueName: \"kubernetes.io/projected/d4821c16-36e6-43c6-91f1-5fdf29b5b88a-kube-api-access-t6b4q\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4drtdz2\" (UID: \"d4821c16-36e6-43c6-91f1-5fdf29b5b88a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drtdz2"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.608914 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-78v95"]
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.623297 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-jjb4n"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.637788 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k65tj\" (UniqueName: \"kubernetes.io/projected/d37f95a0-af87-4727-83a4-aa6334b0759e-kube-api-access-k65tj\") pod \"telemetry-operator-controller-manager-64b5b76f97-2x2xt\" (UID: \"d37f95a0-af87-4727-83a4-aa6334b0759e\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2x2xt"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.638148 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78g2h\" (UniqueName: \"kubernetes.io/projected/19489158-a72e-4e6d-981a-879b596fb9b8-kube-api-access-78g2h\") pod \"ovn-operator-controller-manager-788c46999f-smj59\" (UID: \"19489158-a72e-4e6d-981a-879b596fb9b8\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-smj59"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.638182 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd4dc\" (UniqueName: \"kubernetes.io/projected/6c041737-6e32-468d-aba7-469207eab526-kube-api-access-gd4dc\") pod \"swift-operator-controller-manager-68fc8c869-rfmgz\" (UID: \"6c041737-6e32-468d-aba7-469207eab526\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rfmgz"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.638226 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcpcf\" (UniqueName: \"kubernetes.io/projected/a1f37d35-d806-4c98-bdc5-85163d1b180c-kube-api-access-mcpcf\") pod \"test-operator-controller-manager-56f8bfcd9f-78v95\" (UID: \"a1f37d35-d806-4c98-bdc5-85163d1b180c\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-78v95"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.638250 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6fnll\" (UniqueName: \"kubernetes.io/projected/adbd0e89-e0e3-46eb-b2c5-4482cc71deae-kube-api-access-6fnll\") pod \"placement-operator-controller-manager-5b964cf4cd-4l4j7\" (UID: \"adbd0e89-e0e3-46eb-b2c5-4482cc71deae\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-4l4j7"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.653023 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-f4h96"]
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.654270 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-f4h96"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.655457 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-f4h96"]
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.655881 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-65b9v"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.666878 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fnll\" (UniqueName: \"kubernetes.io/projected/adbd0e89-e0e3-46eb-b2c5-4482cc71deae-kube-api-access-6fnll\") pod \"placement-operator-controller-manager-5b964cf4cd-4l4j7\" (UID: \"adbd0e89-e0e3-46eb-b2c5-4482cc71deae\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-4l4j7"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.687508 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78g2h\" (UniqueName: \"kubernetes.io/projected/19489158-a72e-4e6d-981a-879b596fb9b8-kube-api-access-78g2h\") pod \"ovn-operator-controller-manager-788c46999f-smj59\" (UID: \"19489158-a72e-4e6d-981a-879b596fb9b8\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-smj59"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.744665 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k65tj\" (UniqueName: \"kubernetes.io/projected/d37f95a0-af87-4727-83a4-aa6334b0759e-kube-api-access-k65tj\") pod \"telemetry-operator-controller-manager-64b5b76f97-2x2xt\" (UID: \"d37f95a0-af87-4727-83a4-aa6334b0759e\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2x2xt"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.744784 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gd4dc\" (UniqueName: \"kubernetes.io/projected/6c041737-6e32-468d-aba7-469207eab526-kube-api-access-gd4dc\") pod \"swift-operator-controller-manager-68fc8c869-rfmgz\" (UID: \"6c041737-6e32-468d-aba7-469207eab526\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rfmgz"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.744825 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b99459b-9311-4260-be34-3de859c1e0b0-cert\") pod \"infra-operator-controller-manager-79955696d6-lwlhf\" (UID: \"7b99459b-9311-4260-be34-3de859c1e0b0\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-lwlhf"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.744846 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvk4z\" (UniqueName: \"kubernetes.io/projected/f0e6edc2-9ad5-44a9-8737-78cfd077f9b1-kube-api-access-dvk4z\") pod \"watcher-operator-controller-manager-564965969-f4h96\" (UID: \"f0e6edc2-9ad5-44a9-8737-78cfd077f9b1\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-f4h96"
Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.744899 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcpcf\" (UniqueName: \"kubernetes.io/projected/a1f37d35-d806-4c98-bdc5-85163d1b180c-kube-api-access-mcpcf\") pod \"test-operator-controller-manager-56f8bfcd9f-78v95\" (UID: \"a1f37d35-d806-4c98-bdc5-85163d1b180c\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-78v95"
(UID: \"a1f37d35-d806-4c98-bdc5-85163d1b180c\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-78v95" Jan 30 17:11:45 crc kubenswrapper[4712]: E0130 17:11:45.746635 4712 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 17:11:45 crc kubenswrapper[4712]: E0130 17:11:45.751025 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b99459b-9311-4260-be34-3de859c1e0b0-cert podName:7b99459b-9311-4260-be34-3de859c1e0b0 nodeName:}" failed. No retries permitted until 2026-01-30 17:11:46.750998035 +0000 UTC m=+1043.658007504 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7b99459b-9311-4260-be34-3de859c1e0b0-cert") pod "infra-operator-controller-manager-79955696d6-lwlhf" (UID: "7b99459b-9311-4260-be34-3de859c1e0b0") : secret "infra-operator-webhook-server-cert" not found Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.770315 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-659668d854-w9hqw"] Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.772811 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-659668d854-w9hqw" Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.784439 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.784640 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-b56mf" Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.784776 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.826765 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd4dc\" (UniqueName: \"kubernetes.io/projected/6c041737-6e32-468d-aba7-469207eab526-kube-api-access-gd4dc\") pod \"swift-operator-controller-manager-68fc8c869-rfmgz\" (UID: \"6c041737-6e32-468d-aba7-469207eab526\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rfmgz" Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.827302 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcpcf\" (UniqueName: \"kubernetes.io/projected/a1f37d35-d806-4c98-bdc5-85163d1b180c-kube-api-access-mcpcf\") pod \"test-operator-controller-manager-56f8bfcd9f-78v95\" (UID: \"a1f37d35-d806-4c98-bdc5-85163d1b180c\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-78v95" Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.839994 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k65tj\" (UniqueName: \"kubernetes.io/projected/d37f95a0-af87-4727-83a4-aa6334b0759e-kube-api-access-k65tj\") pod \"telemetry-operator-controller-manager-64b5b76f97-2x2xt\" (UID: \"d37f95a0-af87-4727-83a4-aa6334b0759e\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2x2xt" Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.846687 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dvk4z\" (UniqueName: 
\"kubernetes.io/projected/f0e6edc2-9ad5-44a9-8737-78cfd077f9b1-kube-api-access-dvk4z\") pod \"watcher-operator-controller-manager-564965969-f4h96\" (UID: \"f0e6edc2-9ad5-44a9-8737-78cfd077f9b1\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-f4h96" Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.846780 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2d2l\" (UniqueName: \"kubernetes.io/projected/15028a9a-8618-4d65-89ff-d8b06f63821f-kube-api-access-k2d2l\") pod \"openstack-operator-controller-manager-659668d854-w9hqw\" (UID: \"15028a9a-8618-4d65-89ff-d8b06f63821f\") " pod="openstack-operators/openstack-operator-controller-manager-659668d854-w9hqw" Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.846864 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/15028a9a-8618-4d65-89ff-d8b06f63821f-webhook-certs\") pod \"openstack-operator-controller-manager-659668d854-w9hqw\" (UID: \"15028a9a-8618-4d65-89ff-d8b06f63821f\") " pod="openstack-operators/openstack-operator-controller-manager-659668d854-w9hqw" Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.846900 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/15028a9a-8618-4d65-89ff-d8b06f63821f-metrics-certs\") pod \"openstack-operator-controller-manager-659668d854-w9hqw\" (UID: \"15028a9a-8618-4d65-89ff-d8b06f63821f\") " pod="openstack-operators/openstack-operator-controller-manager-659668d854-w9hqw" Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.869373 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dvk4z\" (UniqueName: \"kubernetes.io/projected/f0e6edc2-9ad5-44a9-8737-78cfd077f9b1-kube-api-access-dvk4z\") pod \"watcher-operator-controller-manager-564965969-f4h96\" (UID: \"f0e6edc2-9ad5-44a9-8737-78cfd077f9b1\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-f4h96" Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.906578 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2x2xt" Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.909687 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-78v95" Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.920551 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-f4h96" Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.920765 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-659668d854-w9hqw"] Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.920817 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7xzbw"] Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.921462 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7xzbw"] Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.921536 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7xzbw" Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.926189 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-smj59" Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.932280 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-bjbxc" Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.949004 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/15028a9a-8618-4d65-89ff-d8b06f63821f-metrics-certs\") pod \"openstack-operator-controller-manager-659668d854-w9hqw\" (UID: \"15028a9a-8618-4d65-89ff-d8b06f63821f\") " pod="openstack-operators/openstack-operator-controller-manager-659668d854-w9hqw" Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.949069 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnfxv\" (UniqueName: \"kubernetes.io/projected/3602a87a-8a49-427b-baf0-a534b10e2d5b-kube-api-access-lnfxv\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7xzbw\" (UID: \"3602a87a-8a49-427b-baf0-a534b10e2d5b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7xzbw" Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.949161 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2d2l\" (UniqueName: \"kubernetes.io/projected/15028a9a-8618-4d65-89ff-d8b06f63821f-kube-api-access-k2d2l\") pod \"openstack-operator-controller-manager-659668d854-w9hqw\" (UID: \"15028a9a-8618-4d65-89ff-d8b06f63821f\") " pod="openstack-operators/openstack-operator-controller-manager-659668d854-w9hqw" Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.949209 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/15028a9a-8618-4d65-89ff-d8b06f63821f-webhook-certs\") pod \"openstack-operator-controller-manager-659668d854-w9hqw\" (UID: \"15028a9a-8618-4d65-89ff-d8b06f63821f\") " pod="openstack-operators/openstack-operator-controller-manager-659668d854-w9hqw" Jan 30 17:11:45 crc kubenswrapper[4712]: E0130 17:11:45.949314 4712 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 17:11:45 crc kubenswrapper[4712]: E0130 17:11:45.949358 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15028a9a-8618-4d65-89ff-d8b06f63821f-webhook-certs podName:15028a9a-8618-4d65-89ff-d8b06f63821f nodeName:}" failed. No retries permitted until 2026-01-30 17:11:46.449344131 +0000 UTC m=+1043.356353600 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/15028a9a-8618-4d65-89ff-d8b06f63821f-webhook-certs") pod "openstack-operator-controller-manager-659668d854-w9hqw" (UID: "15028a9a-8618-4d65-89ff-d8b06f63821f") : secret "webhook-server-cert" not found Jan 30 17:11:45 crc kubenswrapper[4712]: E0130 17:11:45.950254 4712 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 17:11:45 crc kubenswrapper[4712]: E0130 17:11:45.950280 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15028a9a-8618-4d65-89ff-d8b06f63821f-metrics-certs podName:15028a9a-8618-4d65-89ff-d8b06f63821f nodeName:}" failed. No retries permitted until 2026-01-30 17:11:46.450272313 +0000 UTC m=+1043.357281782 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/15028a9a-8618-4d65-89ff-d8b06f63821f-metrics-certs") pod "openstack-operator-controller-manager-659668d854-w9hqw" (UID: "15028a9a-8618-4d65-89ff-d8b06f63821f") : secret "metrics-server-cert" not found Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.955845 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-4l4j7" Jan 30 17:11:45 crc kubenswrapper[4712]: I0130 17:11:45.987951 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2d2l\" (UniqueName: \"kubernetes.io/projected/15028a9a-8618-4d65-89ff-d8b06f63821f-kube-api-access-k2d2l\") pod \"openstack-operator-controller-manager-659668d854-w9hqw\" (UID: \"15028a9a-8618-4d65-89ff-d8b06f63821f\") " pod="openstack-operators/openstack-operator-controller-manager-659668d854-w9hqw" Jan 30 17:11:46 crc kubenswrapper[4712]: I0130 17:11:46.013086 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rfmgz" Jan 30 17:11:46 crc kubenswrapper[4712]: I0130 17:11:46.054501 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d4821c16-36e6-43c6-91f1-5fdf29b5b88a-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4drtdz2\" (UID: \"d4821c16-36e6-43c6-91f1-5fdf29b5b88a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drtdz2" Jan 30 17:11:46 crc kubenswrapper[4712]: I0130 17:11:46.054561 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnfxv\" (UniqueName: \"kubernetes.io/projected/3602a87a-8a49-427b-baf0-a534b10e2d5b-kube-api-access-lnfxv\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7xzbw\" (UID: \"3602a87a-8a49-427b-baf0-a534b10e2d5b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7xzbw" Jan 30 17:11:46 crc kubenswrapper[4712]: E0130 17:11:46.055362 4712 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 17:11:46 crc kubenswrapper[4712]: E0130 17:11:46.055398 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4821c16-36e6-43c6-91f1-5fdf29b5b88a-cert podName:d4821c16-36e6-43c6-91f1-5fdf29b5b88a nodeName:}" failed. No retries permitted until 2026-01-30 17:11:47.055385329 +0000 UTC m=+1043.962394798 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d4821c16-36e6-43c6-91f1-5fdf29b5b88a-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4drtdz2" (UID: "d4821c16-36e6-43c6-91f1-5fdf29b5b88a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 17:11:46 crc kubenswrapper[4712]: I0130 17:11:46.152900 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnfxv\" (UniqueName: \"kubernetes.io/projected/3602a87a-8a49-427b-baf0-a534b10e2d5b-kube-api-access-lnfxv\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7xzbw\" (UID: \"3602a87a-8a49-427b-baf0-a534b10e2d5b\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7xzbw" Jan 30 17:11:46 crc kubenswrapper[4712]: I0130 17:11:46.276669 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7xzbw" Jan 30 17:11:46 crc kubenswrapper[4712]: I0130 17:11:46.366708 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-tfxdt"] Jan 30 17:11:46 crc kubenswrapper[4712]: W0130 17:11:46.441887 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2bc54d51_4f21_479f_a89e_1c60a757433f.slice/crio-28b8c85ec76f2f5963168b53e4920dd2a60be249cb503555cb10e51690689fa7 WatchSource:0}: Error finding container 28b8c85ec76f2f5963168b53e4920dd2a60be249cb503555cb10e51690689fa7: Status 404 returned error can't find the container with id 28b8c85ec76f2f5963168b53e4920dd2a60be249cb503555cb10e51690689fa7 Jan 30 17:11:46 crc kubenswrapper[4712]: I0130 17:11:46.460553 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/15028a9a-8618-4d65-89ff-d8b06f63821f-webhook-certs\") pod \"openstack-operator-controller-manager-659668d854-w9hqw\" (UID: \"15028a9a-8618-4d65-89ff-d8b06f63821f\") " pod="openstack-operators/openstack-operator-controller-manager-659668d854-w9hqw" Jan 30 17:11:46 crc kubenswrapper[4712]: I0130 17:11:46.460610 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/15028a9a-8618-4d65-89ff-d8b06f63821f-metrics-certs\") pod \"openstack-operator-controller-manager-659668d854-w9hqw\" (UID: \"15028a9a-8618-4d65-89ff-d8b06f63821f\") " pod="openstack-operators/openstack-operator-controller-manager-659668d854-w9hqw" Jan 30 17:11:46 crc kubenswrapper[4712]: E0130 17:11:46.460789 4712 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 17:11:46 crc kubenswrapper[4712]: E0130 17:11:46.460855 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15028a9a-8618-4d65-89ff-d8b06f63821f-metrics-certs podName:15028a9a-8618-4d65-89ff-d8b06f63821f nodeName:}" failed. No retries permitted until 2026-01-30 17:11:47.460840912 +0000 UTC m=+1044.367850381 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/15028a9a-8618-4d65-89ff-d8b06f63821f-metrics-certs") pod "openstack-operator-controller-manager-659668d854-w9hqw" (UID: "15028a9a-8618-4d65-89ff-d8b06f63821f") : secret "metrics-server-cert" not found Jan 30 17:11:46 crc kubenswrapper[4712]: E0130 17:11:46.462703 4712 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 17:11:46 crc kubenswrapper[4712]: E0130 17:11:46.462773 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15028a9a-8618-4d65-89ff-d8b06f63821f-webhook-certs podName:15028a9a-8618-4d65-89ff-d8b06f63821f nodeName:}" failed. No retries permitted until 2026-01-30 17:11:47.462755028 +0000 UTC m=+1044.369764497 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/15028a9a-8618-4d65-89ff-d8b06f63821f-webhook-certs") pod "openstack-operator-controller-manager-659668d854-w9hqw" (UID: "15028a9a-8618-4d65-89ff-d8b06f63821f") : secret "webhook-server-cert" not found Jan 30 17:11:46 crc kubenswrapper[4712]: I0130 17:11:46.566639 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-2h4zg"] Jan 30 17:11:46 crc kubenswrapper[4712]: I0130 17:11:46.592017 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-xfmvz"] Jan 30 17:11:46 crc kubenswrapper[4712]: I0130 17:11:46.599950 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-lqxpc"] Jan 30 17:11:46 crc kubenswrapper[4712]: W0130 17:11:46.667770 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1a1d497_2276_4248_9bca_1c7038430933.slice/crio-01ea6508b51592872af0e31960310de72c28994bff8138b83da680cc4b1331b8 WatchSource:0}: Error finding container 01ea6508b51592872af0e31960310de72c28994bff8138b83da680cc4b1331b8: Status 404 returned error can't find the container with id 01ea6508b51592872af0e31960310de72c28994bff8138b83da680cc4b1331b8 Jan 30 17:11:46 crc kubenswrapper[4712]: I0130 17:11:46.702159 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-tfxdt" event={"ID":"2bc54d51-4f21-479f-a89e-1c60a757433f","Type":"ContainerStarted","Data":"28b8c85ec76f2f5963168b53e4920dd2a60be249cb503555cb10e51690689fa7"} Jan 30 17:11:46 crc kubenswrapper[4712]: I0130 17:11:46.766462 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b99459b-9311-4260-be34-3de859c1e0b0-cert\") pod \"infra-operator-controller-manager-79955696d6-lwlhf\" (UID: \"7b99459b-9311-4260-be34-3de859c1e0b0\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-lwlhf" Jan 30 17:11:46 crc kubenswrapper[4712]: E0130 17:11:46.766649 4712 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 17:11:46 crc kubenswrapper[4712]: E0130 17:11:46.766700 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b99459b-9311-4260-be34-3de859c1e0b0-cert podName:7b99459b-9311-4260-be34-3de859c1e0b0 nodeName:}" failed. 
No retries permitted until 2026-01-30 17:11:48.766683071 +0000 UTC m=+1045.673692540 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7b99459b-9311-4260-be34-3de859c1e0b0-cert") pod "infra-operator-controller-manager-79955696d6-lwlhf" (UID: "7b99459b-9311-4260-be34-3de859c1e0b0") : secret "infra-operator-webhook-server-cert" not found Jan 30 17:11:47 crc kubenswrapper[4712]: I0130 17:11:47.071982 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d4821c16-36e6-43c6-91f1-5fdf29b5b88a-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4drtdz2\" (UID: \"d4821c16-36e6-43c6-91f1-5fdf29b5b88a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drtdz2" Jan 30 17:11:47 crc kubenswrapper[4712]: E0130 17:11:47.072127 4712 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 17:11:47 crc kubenswrapper[4712]: E0130 17:11:47.072172 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4821c16-36e6-43c6-91f1-5fdf29b5b88a-cert podName:d4821c16-36e6-43c6-91f1-5fdf29b5b88a nodeName:}" failed. No retries permitted until 2026-01-30 17:11:49.072159352 +0000 UTC m=+1045.979168821 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d4821c16-36e6-43c6-91f1-5fdf29b5b88a-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4drtdz2" (UID: "d4821c16-36e6-43c6-91f1-5fdf29b5b88a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 17:11:47 crc kubenswrapper[4712]: I0130 17:11:47.165852 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-7pr55"] Jan 30 17:11:47 crc kubenswrapper[4712]: W0130 17:11:47.170095 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3222b74_686d_4b44_b521_33fb24c0b403.slice/crio-04d51bbcf357186c002f4bae45553deb4366c1bb91feccc1080a771e7086c327 WatchSource:0}: Error finding container 04d51bbcf357186c002f4bae45553deb4366c1bb91feccc1080a771e7086c327: Status 404 returned error can't find the container with id 04d51bbcf357186c002f4bae45553deb4366c1bb91feccc1080a771e7086c327 Jan 30 17:11:47 crc kubenswrapper[4712]: I0130 17:11:47.256063 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-2n8cf"] Jan 30 17:11:47 crc kubenswrapper[4712]: I0130 17:11:47.268194 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-xbk9b"] Jan 30 17:11:47 crc kubenswrapper[4712]: W0130 17:11:47.281440 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod957cefd9_5116_40c3_aaf4_67ba58319ca1.slice/crio-ae222a32767a36167560dda8f9a22ae6eda12ddfae7d36bbd96e71b9ff659194 WatchSource:0}: Error finding container ae222a32767a36167560dda8f9a22ae6eda12ddfae7d36bbd96e71b9ff659194: Status 404 returned error can't find the container with id ae222a32767a36167560dda8f9a22ae6eda12ddfae7d36bbd96e71b9ff659194 Jan 30 17:11:47 crc kubenswrapper[4712]: I0130 17:11:47.321875 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-kj9k8"] Jan 30 17:11:47 crc kubenswrapper[4712]: I0130 17:11:47.339970 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-jjb4n"] Jan 30 17:11:47 crc kubenswrapper[4712]: I0130 17:11:47.363540 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-z9d9r"] Jan 30 17:11:47 crc kubenswrapper[4712]: W0130 17:11:47.384903 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3bfc9890_11b6_4fcf_9458_08dce816b4b9.slice/crio-eb4b4b2f91a27f3a3200d39b55b9c2fd5e0b461c25821c1069d68099cc422f75 WatchSource:0}: Error finding container eb4b4b2f91a27f3a3200d39b55b9c2fd5e0b461c25821c1069d68099cc422f75: Status 404 returned error can't find the container with id eb4b4b2f91a27f3a3200d39b55b9c2fd5e0b461c25821c1069d68099cc422f75 Jan 30 17:11:47 crc kubenswrapper[4712]: I0130 17:11:47.388938 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-wp89m"] Jan 30 17:11:47 crc kubenswrapper[4712]: W0130 17:11:47.398388 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc8354464_6e92_4961_833a_414efe43db13.slice/crio-cb9276931d1dc9e18c37b2fcb540591d6e9e204b08185c4c9470962cd160ca61 WatchSource:0}: Error finding container cb9276931d1dc9e18c37b2fcb540591d6e9e204b08185c4c9470962cd160ca61: Status 404 returned error can't find the container with id cb9276931d1dc9e18c37b2fcb540591d6e9e204b08185c4c9470962cd160ca61 Jan 30 17:11:47 crc kubenswrapper[4712]: I0130 17:11:47.408454 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-l62x6"] Jan 30 17:11:47 crc kubenswrapper[4712]: W0130 17:11:47.410050 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3b1d20e_d20c_40f9_9c2b_314aee2fe51e.slice/crio-4a852e491750bbd192827aa604c6df976829e339b43350286474db723d149b65 WatchSource:0}: Error finding container 4a852e491750bbd192827aa604c6df976829e339b43350286474db723d149b65: Status 404 returned error can't find the container with id 4a852e491750bbd192827aa604c6df976829e339b43350286474db723d149b65 Jan 30 17:11:47 crc kubenswrapper[4712]: I0130 17:11:47.417385 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2x2xt"] Jan 30 17:11:47 crc kubenswrapper[4712]: I0130 17:11:47.428207 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-jkjdt"] Jan 30 17:11:47 crc kubenswrapper[4712]: W0130 17:11:47.434696 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcc62b7c7_5521_41df_bf10_d9cc287fbf7f.slice/crio-d3593ba4d7b49855e5733954ba2fe1ea93008a5b3dc2309f61cb13be6e3ad8c1 WatchSource:0}: Error finding container d3593ba4d7b49855e5733954ba2fe1ea93008a5b3dc2309f61cb13be6e3ad8c1: Status 404 returned error can't find the container with id d3593ba4d7b49855e5733954ba2fe1ea93008a5b3dc2309f61cb13be6e3ad8c1 Jan 30 17:11:47 crc kubenswrapper[4712]: I0130 17:11:47.478625 4712 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/15028a9a-8618-4d65-89ff-d8b06f63821f-webhook-certs\") pod \"openstack-operator-controller-manager-659668d854-w9hqw\" (UID: \"15028a9a-8618-4d65-89ff-d8b06f63821f\") " pod="openstack-operators/openstack-operator-controller-manager-659668d854-w9hqw" Jan 30 17:11:47 crc kubenswrapper[4712]: I0130 17:11:47.478944 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/15028a9a-8618-4d65-89ff-d8b06f63821f-metrics-certs\") pod \"openstack-operator-controller-manager-659668d854-w9hqw\" (UID: \"15028a9a-8618-4d65-89ff-d8b06f63821f\") " pod="openstack-operators/openstack-operator-controller-manager-659668d854-w9hqw" Jan 30 17:11:47 crc kubenswrapper[4712]: E0130 17:11:47.479122 4712 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 17:11:47 crc kubenswrapper[4712]: E0130 17:11:47.479173 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15028a9a-8618-4d65-89ff-d8b06f63821f-metrics-certs podName:15028a9a-8618-4d65-89ff-d8b06f63821f nodeName:}" failed. No retries permitted until 2026-01-30 17:11:49.479156981 +0000 UTC m=+1046.386166450 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/15028a9a-8618-4d65-89ff-d8b06f63821f-metrics-certs") pod "openstack-operator-controller-manager-659668d854-w9hqw" (UID: "15028a9a-8618-4d65-89ff-d8b06f63821f") : secret "metrics-server-cert" not found Jan 30 17:11:47 crc kubenswrapper[4712]: E0130 17:11:47.479460 4712 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 17:11:47 crc kubenswrapper[4712]: E0130 17:11:47.479487 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15028a9a-8618-4d65-89ff-d8b06f63821f-webhook-certs podName:15028a9a-8618-4d65-89ff-d8b06f63821f nodeName:}" failed. No retries permitted until 2026-01-30 17:11:49.479479589 +0000 UTC m=+1046.386489048 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/15028a9a-8618-4d65-89ff-d8b06f63821f-webhook-certs") pod "openstack-operator-controller-manager-659668d854-w9hqw" (UID: "15028a9a-8618-4d65-89ff-d8b06f63821f") : secret "webhook-server-cert" not found Jan 30 17:11:47 crc kubenswrapper[4712]: I0130 17:11:47.598523 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-4l4j7"] Jan 30 17:11:47 crc kubenswrapper[4712]: W0130 17:11:47.613007 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podadbd0e89_e0e3_46eb_b2c5_4482cc71deae.slice/crio-e337bee6ea00662cee7dc39c5bad4d1dd81a45b3354ceb25c7a5520a28466bce WatchSource:0}: Error finding container e337bee6ea00662cee7dc39c5bad4d1dd81a45b3354ceb25c7a5520a28466bce: Status 404 returned error can't find the container with id e337bee6ea00662cee7dc39c5bad4d1dd81a45b3354ceb25c7a5520a28466bce Jan 30 17:11:47 crc kubenswrapper[4712]: I0130 17:11:47.616243 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-smj59"] Jan 30 17:11:47 crc kubenswrapper[4712]: I0130 17:11:47.626376 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-f4h96"] Jan 30 17:11:47 crc kubenswrapper[4712]: I0130 17:11:47.634270 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7xzbw"] Jan 30 17:11:47 crc kubenswrapper[4712]: I0130 17:11:47.638762 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-rfmgz"] Jan 30 17:11:47 crc kubenswrapper[4712]: I0130 17:11:47.643929 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-78v95"] Jan 30 17:11:47 crc kubenswrapper[4712]: W0130 17:11:47.655827 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1f37d35_d806_4c98_bdc5_85163d1b180c.slice/crio-ced59ebfa0b066a07b04ba2081b842fef981fb1d4ff38d3bebbb20d1e62b3500 WatchSource:0}: Error finding container ced59ebfa0b066a07b04ba2081b842fef981fb1d4ff38d3bebbb20d1e62b3500: Status 404 returned error can't find the container with id ced59ebfa0b066a07b04ba2081b842fef981fb1d4ff38d3bebbb20d1e62b3500 Jan 30 17:11:47 crc kubenswrapper[4712]: E0130 17:11:47.659572 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-78g2h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-smj59_openstack-operators(19489158-a72e-4e6d-981a-879b596fb9b8): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 17:11:47 crc kubenswrapper[4712]: E0130 17:11:47.659575 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gd4dc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-rfmgz_openstack-operators(6c041737-6e32-468d-aba7-469207eab526): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 17:11:47 crc kubenswrapper[4712]: E0130 17:11:47.660885 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rfmgz" podUID="6c041737-6e32-468d-aba7-469207eab526" Jan 30 17:11:47 crc kubenswrapper[4712]: E0130 17:11:47.660954 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-smj59" podUID="19489158-a72e-4e6d-981a-879b596fb9b8" Jan 30 17:11:47 crc kubenswrapper[4712]: W0130 17:11:47.661825 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0e6edc2_9ad5_44a9_8737_78cfd077f9b1.slice/crio-e071deb687e7c30d034d7bd4cb9191297a050dcd42cf2e8aeca197f37a6e4044 WatchSource:0}: Error finding container e071deb687e7c30d034d7bd4cb9191297a050dcd42cf2e8aeca197f37a6e4044: Status 404 returned error can't find the container with id e071deb687e7c30d034d7bd4cb9191297a050dcd42cf2e8aeca197f37a6e4044 Jan 30 17:11:47 crc kubenswrapper[4712]: E0130 17:11:47.664759 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mcpcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-78v95_openstack-operators(a1f37d35-d806-4c98-bdc5-85163d1b180c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 17:11:47 crc kubenswrapper[4712]: E0130 17:11:47.665370 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dvk4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-f4h96_openstack-operators(f0e6edc2-9ad5-44a9-8737-78cfd077f9b1): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 17:11:47 crc kubenswrapper[4712]: E0130 17:11:47.666940 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-78v95" podUID="a1f37d35-d806-4c98-bdc5-85163d1b180c" Jan 30 17:11:47 crc kubenswrapper[4712]: E0130 17:11:47.670356 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-f4h96" podUID="f0e6edc2-9ad5-44a9-8737-78cfd077f9b1" Jan 30 17:11:47 crc kubenswrapper[4712]: W0130 17:11:47.671004 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3602a87a_8a49_427b_baf0_a534b10e2d5b.slice/crio-eaa10e597796cf0c6d17b685dbffbc4419014f2dcdd2e797dcdca669b5412c4b WatchSource:0}: Error finding container eaa10e597796cf0c6d17b685dbffbc4419014f2dcdd2e797dcdca669b5412c4b: Status 404 returned error can't find the container with id eaa10e597796cf0c6d17b685dbffbc4419014f2dcdd2e797dcdca669b5412c4b Jan 30 17:11:47 crc kubenswrapper[4712]: E0130 17:11:47.674195 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lnfxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-7xzbw_openstack-operators(3602a87a-8a49-427b-baf0-a534b10e2d5b): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 30 17:11:47 crc kubenswrapper[4712]: E0130 17:11:47.676457 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7xzbw" podUID="3602a87a-8a49-427b-baf0-a534b10e2d5b" Jan 30 17:11:47 crc kubenswrapper[4712]: I0130 17:11:47.713464 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-xbk9b" event={"ID":"5ccbb7b6-e489-4676-8faa-8a0306776a54","Type":"ContainerStarted","Data":"7088a2fe3e6154668b4cecd5f03e54a742f821a162a00fbc6cab0aed2f6361ec"} Jan 30 17:11:47 crc kubenswrapper[4712]: I0130 17:11:47.718065 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-f4h96" event={"ID":"f0e6edc2-9ad5-44a9-8737-78cfd077f9b1","Type":"ContainerStarted","Data":"e071deb687e7c30d034d7bd4cb9191297a050dcd42cf2e8aeca197f37a6e4044"} Jan 30 17:11:47 crc kubenswrapper[4712]: E0130 17:11:47.721598 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-f4h96" podUID="f0e6edc2-9ad5-44a9-8737-78cfd077f9b1" Jan 30 17:11:47 crc kubenswrapper[4712]: I0130 17:11:47.724003 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2x2xt" event={"ID":"d37f95a0-af87-4727-83a4-aa6334b0759e","Type":"ContainerStarted","Data":"11c82dcc71d7084f2cac615e4e1fdfdab7c390c765f23df52225ec6a8bf4a6e0"} Jan 30 17:11:47 crc kubenswrapper[4712]: I0130 17:11:47.725178 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-xfmvz" event={"ID":"e1a1d497-2276-4248-9bca-1c7038430933","Type":"ContainerStarted","Data":"01ea6508b51592872af0e31960310de72c28994bff8138b83da680cc4b1331b8"} Jan 30 17:11:47 crc kubenswrapper[4712]: I0130 17:11:47.726839 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-7pr55" 
event={"ID":"b3222b74-686d-4b44-b521-33fb24c0b403","Type":"ContainerStarted","Data":"04d51bbcf357186c002f4bae45553deb4366c1bb91feccc1080a771e7086c327"} Jan 30 17:11:47 crc kubenswrapper[4712]: I0130 17:11:47.728096 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-kj9k8" event={"ID":"1abbe42a-dbb1-4ec5-8318-451adc608b2b","Type":"ContainerStarted","Data":"4ed0818c0e76cd38d46a7e00a09ea53765f089c5c2a38db8950d3c2bfc645120"} Jan 30 17:11:47 crc kubenswrapper[4712]: I0130 17:11:47.729324 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-smj59" event={"ID":"19489158-a72e-4e6d-981a-879b596fb9b8","Type":"ContainerStarted","Data":"84cff1acd08431038445e7c5fa46314941b99b5292bc7ca81d0444d41ced213d"} Jan 30 17:11:47 crc kubenswrapper[4712]: E0130 17:11:47.731014 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-smj59" podUID="19489158-a72e-4e6d-981a-879b596fb9b8" Jan 30 17:11:47 crc kubenswrapper[4712]: I0130 17:11:47.731912 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rfmgz" event={"ID":"6c041737-6e32-468d-aba7-469207eab526","Type":"ContainerStarted","Data":"2a2ce983cd466f34ba858ca12f25fc622d09d3ec5376ba836de931c3805ca6a3"} Jan 30 17:11:47 crc kubenswrapper[4712]: E0130 17:11:47.741095 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rfmgz" podUID="6c041737-6e32-468d-aba7-469207eab526" Jan 30 17:11:47 crc kubenswrapper[4712]: I0130 17:11:47.747303 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7xzbw" event={"ID":"3602a87a-8a49-427b-baf0-a534b10e2d5b","Type":"ContainerStarted","Data":"eaa10e597796cf0c6d17b685dbffbc4419014f2dcdd2e797dcdca669b5412c4b"} Jan 30 17:11:47 crc kubenswrapper[4712]: E0130 17:11:47.749431 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7xzbw" podUID="3602a87a-8a49-427b-baf0-a534b10e2d5b" Jan 30 17:11:47 crc kubenswrapper[4712]: I0130 17:11:47.758251 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2n8cf" event={"ID":"957cefd9-5116-40c3-aaf4-67ba58319ca1","Type":"ContainerStarted","Data":"ae222a32767a36167560dda8f9a22ae6eda12ddfae7d36bbd96e71b9ff659194"} Jan 30 17:11:47 crc kubenswrapper[4712]: I0130 17:11:47.765087 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-l62x6" 
event={"ID":"d3b1d20e-d20c-40f9-9c2b-314aee2fe51e","Type":"ContainerStarted","Data":"4a852e491750bbd192827aa604c6df976829e339b43350286474db723d149b65"} Jan 30 17:11:47 crc kubenswrapper[4712]: I0130 17:11:47.771557 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lqxpc" event={"ID":"6e263552-c0f6-4f24-879f-79895cdbc953","Type":"ContainerStarted","Data":"b6044be246c097d710fa4f15d5d279af8fddd1bb5fc600b9dbf670783c325aa4"} Jan 30 17:11:47 crc kubenswrapper[4712]: I0130 17:11:47.773222 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-z9d9r" event={"ID":"3bfc9890-11b6-4fcf-9458-08dce816b4b9","Type":"ContainerStarted","Data":"eb4b4b2f91a27f3a3200d39b55b9c2fd5e0b461c25821c1069d68099cc422f75"} Jan 30 17:11:47 crc kubenswrapper[4712]: I0130 17:11:47.777596 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-wp89m" event={"ID":"c8354464-6e92-4961-833a-414efe43db13","Type":"ContainerStarted","Data":"cb9276931d1dc9e18c37b2fcb540591d6e9e204b08185c4c9470962cd160ca61"} Jan 30 17:11:47 crc kubenswrapper[4712]: I0130 17:11:47.782562 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2h4zg" event={"ID":"aa03f8a3-9bea-4b56-92ce-27d1fe53840a","Type":"ContainerStarted","Data":"87b20023f1ce84d3d5eb7663dd98a8320add5863042f117b832a1f174ea31ad0"} Jan 30 17:11:47 crc kubenswrapper[4712]: I0130 17:11:47.786530 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-4l4j7" event={"ID":"adbd0e89-e0e3-46eb-b2c5-4482cc71deae","Type":"ContainerStarted","Data":"e337bee6ea00662cee7dc39c5bad4d1dd81a45b3354ceb25c7a5520a28466bce"} Jan 30 17:11:47 crc kubenswrapper[4712]: I0130 17:11:47.788879 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-jkjdt" event={"ID":"cc62b7c7-5521-41df-bf10-d9cc287fbf7f","Type":"ContainerStarted","Data":"d3593ba4d7b49855e5733954ba2fe1ea93008a5b3dc2309f61cb13be6e3ad8c1"} Jan 30 17:11:47 crc kubenswrapper[4712]: I0130 17:11:47.796317 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-jjb4n" event={"ID":"70ad565b-dc4e-4f67-863a-fd29c88ad39d","Type":"ContainerStarted","Data":"7e8ac0d3f75dbd6fa5ba440fe71e91512a02331d4f7548e7c045b3d054b547f1"} Jan 30 17:11:47 crc kubenswrapper[4712]: E0130 17:11:47.807427 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-78v95" podUID="a1f37d35-d806-4c98-bdc5-85163d1b180c" Jan 30 17:11:47 crc kubenswrapper[4712]: I0130 17:11:47.823419 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-78v95" event={"ID":"a1f37d35-d806-4c98-bdc5-85163d1b180c","Type":"ContainerStarted","Data":"ced59ebfa0b066a07b04ba2081b842fef981fb1d4ff38d3bebbb20d1e62b3500"} Jan 30 17:11:48 crc kubenswrapper[4712]: I0130 17:11:48.802329 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/7b99459b-9311-4260-be34-3de859c1e0b0-cert\") pod \"infra-operator-controller-manager-79955696d6-lwlhf\" (UID: \"7b99459b-9311-4260-be34-3de859c1e0b0\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-lwlhf" Jan 30 17:11:48 crc kubenswrapper[4712]: E0130 17:11:48.802762 4712 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 17:11:48 crc kubenswrapper[4712]: E0130 17:11:48.802882 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b99459b-9311-4260-be34-3de859c1e0b0-cert podName:7b99459b-9311-4260-be34-3de859c1e0b0 nodeName:}" failed. No retries permitted until 2026-01-30 17:11:52.802866549 +0000 UTC m=+1049.709876018 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7b99459b-9311-4260-be34-3de859c1e0b0-cert") pod "infra-operator-controller-manager-79955696d6-lwlhf" (UID: "7b99459b-9311-4260-be34-3de859c1e0b0") : secret "infra-operator-webhook-server-cert" not found Jan 30 17:11:48 crc kubenswrapper[4712]: E0130 17:11:48.815284 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7xzbw" podUID="3602a87a-8a49-427b-baf0-a534b10e2d5b" Jan 30 17:11:48 crc kubenswrapper[4712]: E0130 17:11:48.815606 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rfmgz" podUID="6c041737-6e32-468d-aba7-469207eab526" Jan 30 17:11:48 crc kubenswrapper[4712]: E0130 17:11:48.815650 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-smj59" podUID="19489158-a72e-4e6d-981a-879b596fb9b8" Jan 30 17:11:48 crc kubenswrapper[4712]: E0130 17:11:48.815696 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-f4h96" podUID="f0e6edc2-9ad5-44a9-8737-78cfd077f9b1" Jan 30 17:11:48 crc kubenswrapper[4712]: E0130 17:11:48.815736 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-78v95" podUID="a1f37d35-d806-4c98-bdc5-85163d1b180c" Jan 30 17:11:49 crc kubenswrapper[4712]: I0130 
17:11:49.107344 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d4821c16-36e6-43c6-91f1-5fdf29b5b88a-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4drtdz2\" (UID: \"d4821c16-36e6-43c6-91f1-5fdf29b5b88a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drtdz2" Jan 30 17:11:49 crc kubenswrapper[4712]: E0130 17:11:49.107716 4712 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 17:11:49 crc kubenswrapper[4712]: E0130 17:11:49.107788 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4821c16-36e6-43c6-91f1-5fdf29b5b88a-cert podName:d4821c16-36e6-43c6-91f1-5fdf29b5b88a nodeName:}" failed. No retries permitted until 2026-01-30 17:11:53.107771076 +0000 UTC m=+1050.014780545 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d4821c16-36e6-43c6-91f1-5fdf29b5b88a-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4drtdz2" (UID: "d4821c16-36e6-43c6-91f1-5fdf29b5b88a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 17:11:49 crc kubenswrapper[4712]: I0130 17:11:49.519905 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/15028a9a-8618-4d65-89ff-d8b06f63821f-webhook-certs\") pod \"openstack-operator-controller-manager-659668d854-w9hqw\" (UID: \"15028a9a-8618-4d65-89ff-d8b06f63821f\") " pod="openstack-operators/openstack-operator-controller-manager-659668d854-w9hqw" Jan 30 17:11:49 crc kubenswrapper[4712]: I0130 17:11:49.519958 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/15028a9a-8618-4d65-89ff-d8b06f63821f-metrics-certs\") pod \"openstack-operator-controller-manager-659668d854-w9hqw\" (UID: \"15028a9a-8618-4d65-89ff-d8b06f63821f\") " pod="openstack-operators/openstack-operator-controller-manager-659668d854-w9hqw" Jan 30 17:11:49 crc kubenswrapper[4712]: E0130 17:11:49.520086 4712 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 17:11:49 crc kubenswrapper[4712]: E0130 17:11:49.520129 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15028a9a-8618-4d65-89ff-d8b06f63821f-metrics-certs podName:15028a9a-8618-4d65-89ff-d8b06f63821f nodeName:}" failed. No retries permitted until 2026-01-30 17:11:53.520116425 +0000 UTC m=+1050.427125894 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/15028a9a-8618-4d65-89ff-d8b06f63821f-metrics-certs") pod "openstack-operator-controller-manager-659668d854-w9hqw" (UID: "15028a9a-8618-4d65-89ff-d8b06f63821f") : secret "metrics-server-cert" not found Jan 30 17:11:49 crc kubenswrapper[4712]: E0130 17:11:49.520407 4712 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 17:11:49 crc kubenswrapper[4712]: E0130 17:11:49.520494 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15028a9a-8618-4d65-89ff-d8b06f63821f-webhook-certs podName:15028a9a-8618-4d65-89ff-d8b06f63821f nodeName:}" failed. 
No retries permitted until 2026-01-30 17:11:53.520472723 +0000 UTC m=+1050.427482212 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/15028a9a-8618-4d65-89ff-d8b06f63821f-webhook-certs") pod "openstack-operator-controller-manager-659668d854-w9hqw" (UID: "15028a9a-8618-4d65-89ff-d8b06f63821f") : secret "webhook-server-cert" not found Jan 30 17:11:52 crc kubenswrapper[4712]: I0130 17:11:52.882128 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b99459b-9311-4260-be34-3de859c1e0b0-cert\") pod \"infra-operator-controller-manager-79955696d6-lwlhf\" (UID: \"7b99459b-9311-4260-be34-3de859c1e0b0\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-lwlhf" Jan 30 17:11:52 crc kubenswrapper[4712]: E0130 17:11:52.882315 4712 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 30 17:11:52 crc kubenswrapper[4712]: E0130 17:11:52.882619 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7b99459b-9311-4260-be34-3de859c1e0b0-cert podName:7b99459b-9311-4260-be34-3de859c1e0b0 nodeName:}" failed. No retries permitted until 2026-01-30 17:12:00.882595523 +0000 UTC m=+1057.789604992 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/7b99459b-9311-4260-be34-3de859c1e0b0-cert") pod "infra-operator-controller-manager-79955696d6-lwlhf" (UID: "7b99459b-9311-4260-be34-3de859c1e0b0") : secret "infra-operator-webhook-server-cert" not found Jan 30 17:11:53 crc kubenswrapper[4712]: I0130 17:11:53.186159 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d4821c16-36e6-43c6-91f1-5fdf29b5b88a-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4drtdz2\" (UID: \"d4821c16-36e6-43c6-91f1-5fdf29b5b88a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drtdz2" Jan 30 17:11:53 crc kubenswrapper[4712]: E0130 17:11:53.186314 4712 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 17:11:53 crc kubenswrapper[4712]: E0130 17:11:53.186359 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d4821c16-36e6-43c6-91f1-5fdf29b5b88a-cert podName:d4821c16-36e6-43c6-91f1-5fdf29b5b88a nodeName:}" failed. No retries permitted until 2026-01-30 17:12:01.186345613 +0000 UTC m=+1058.093355072 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/d4821c16-36e6-43c6-91f1-5fdf29b5b88a-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4drtdz2" (UID: "d4821c16-36e6-43c6-91f1-5fdf29b5b88a") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 30 17:11:53 crc kubenswrapper[4712]: I0130 17:11:53.592681 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/15028a9a-8618-4d65-89ff-d8b06f63821f-webhook-certs\") pod \"openstack-operator-controller-manager-659668d854-w9hqw\" (UID: \"15028a9a-8618-4d65-89ff-d8b06f63821f\") " pod="openstack-operators/openstack-operator-controller-manager-659668d854-w9hqw" Jan 30 17:11:53 crc kubenswrapper[4712]: I0130 17:11:53.592756 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/15028a9a-8618-4d65-89ff-d8b06f63821f-metrics-certs\") pod \"openstack-operator-controller-manager-659668d854-w9hqw\" (UID: \"15028a9a-8618-4d65-89ff-d8b06f63821f\") " pod="openstack-operators/openstack-operator-controller-manager-659668d854-w9hqw" Jan 30 17:11:53 crc kubenswrapper[4712]: E0130 17:11:53.592873 4712 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 30 17:11:53 crc kubenswrapper[4712]: E0130 17:11:53.592964 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15028a9a-8618-4d65-89ff-d8b06f63821f-webhook-certs podName:15028a9a-8618-4d65-89ff-d8b06f63821f nodeName:}" failed. No retries permitted until 2026-01-30 17:12:01.592943604 +0000 UTC m=+1058.499953083 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/15028a9a-8618-4d65-89ff-d8b06f63821f-webhook-certs") pod "openstack-operator-controller-manager-659668d854-w9hqw" (UID: "15028a9a-8618-4d65-89ff-d8b06f63821f") : secret "webhook-server-cert" not found Jan 30 17:11:53 crc kubenswrapper[4712]: E0130 17:11:53.592899 4712 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 30 17:11:53 crc kubenswrapper[4712]: E0130 17:11:53.593035 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/15028a9a-8618-4d65-89ff-d8b06f63821f-metrics-certs podName:15028a9a-8618-4d65-89ff-d8b06f63821f nodeName:}" failed. No retries permitted until 2026-01-30 17:12:01.593019885 +0000 UTC m=+1058.500029354 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/15028a9a-8618-4d65-89ff-d8b06f63821f-metrics-certs") pod "openstack-operator-controller-manager-659668d854-w9hqw" (UID: "15028a9a-8618-4d65-89ff-d8b06f63821f") : secret "metrics-server-cert" not found Jan 30 17:12:00 crc kubenswrapper[4712]: I0130 17:12:00.898176 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b99459b-9311-4260-be34-3de859c1e0b0-cert\") pod \"infra-operator-controller-manager-79955696d6-lwlhf\" (UID: \"7b99459b-9311-4260-be34-3de859c1e0b0\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-lwlhf" Jan 30 17:12:00 crc kubenswrapper[4712]: I0130 17:12:00.904487 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/7b99459b-9311-4260-be34-3de859c1e0b0-cert\") pod \"infra-operator-controller-manager-79955696d6-lwlhf\" (UID: \"7b99459b-9311-4260-be34-3de859c1e0b0\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-lwlhf" Jan 30 17:12:00 crc kubenswrapper[4712]: I0130 17:12:00.923446 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-lwlhf" Jan 30 17:12:01 crc kubenswrapper[4712]: I0130 17:12:01.219333 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d4821c16-36e6-43c6-91f1-5fdf29b5b88a-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4drtdz2\" (UID: \"d4821c16-36e6-43c6-91f1-5fdf29b5b88a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drtdz2" Jan 30 17:12:01 crc kubenswrapper[4712]: I0130 17:12:01.223012 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/d4821c16-36e6-43c6-91f1-5fdf29b5b88a-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4drtdz2\" (UID: \"d4821c16-36e6-43c6-91f1-5fdf29b5b88a\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drtdz2" Jan 30 17:12:01 crc kubenswrapper[4712]: I0130 17:12:01.281310 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drtdz2" Jan 30 17:12:01 crc kubenswrapper[4712]: I0130 17:12:01.623314 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/15028a9a-8618-4d65-89ff-d8b06f63821f-webhook-certs\") pod \"openstack-operator-controller-manager-659668d854-w9hqw\" (UID: \"15028a9a-8618-4d65-89ff-d8b06f63821f\") " pod="openstack-operators/openstack-operator-controller-manager-659668d854-w9hqw" Jan 30 17:12:01 crc kubenswrapper[4712]: I0130 17:12:01.623363 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/15028a9a-8618-4d65-89ff-d8b06f63821f-metrics-certs\") pod \"openstack-operator-controller-manager-659668d854-w9hqw\" (UID: \"15028a9a-8618-4d65-89ff-d8b06f63821f\") " pod="openstack-operators/openstack-operator-controller-manager-659668d854-w9hqw" Jan 30 17:12:01 crc kubenswrapper[4712]: I0130 17:12:01.630934 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/15028a9a-8618-4d65-89ff-d8b06f63821f-metrics-certs\") pod \"openstack-operator-controller-manager-659668d854-w9hqw\" (UID: \"15028a9a-8618-4d65-89ff-d8b06f63821f\") " pod="openstack-operators/openstack-operator-controller-manager-659668d854-w9hqw" Jan 30 17:12:01 crc kubenswrapper[4712]: I0130 17:12:01.636441 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/15028a9a-8618-4d65-89ff-d8b06f63821f-webhook-certs\") pod \"openstack-operator-controller-manager-659668d854-w9hqw\" (UID: \"15028a9a-8618-4d65-89ff-d8b06f63821f\") " pod="openstack-operators/openstack-operator-controller-manager-659668d854-w9hqw" Jan 30 17:12:01 crc kubenswrapper[4712]: I0130 17:12:01.867863 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-659668d854-w9hqw" Jan 30 17:12:03 crc kubenswrapper[4712]: E0130 17:12:03.893679 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be" Jan 30 17:12:03 crc kubenswrapper[4712]: E0130 17:12:03.894171 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gslh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-6687f8d877-jjb4n_openstack-operators(70ad565b-dc4e-4f67-863a-fd29c88ad39d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 17:12:03 crc kubenswrapper[4712]: E0130 17:12:03.895466 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-jjb4n" podUID="70ad565b-dc4e-4f67-863a-fd29c88ad39d" Jan 30 17:12:03 crc kubenswrapper[4712]: E0130 17:12:03.910816 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:e6f2f361f1dcbb321407a5884951e16ff96e7b88942b10b548f27ad4de14a0be\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-jjb4n" podUID="70ad565b-dc4e-4f67-863a-fd29c88ad39d" Jan 30 17:12:04 crc kubenswrapper[4712]: E0130 17:12:04.602946 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488" Jan 30 17:12:04 crc kubenswrapper[4712]: E0130 17:12:04.603148 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6fnll,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-5b964cf4cd-4l4j7_openstack-operators(adbd0e89-e0e3-46eb-b2c5-4482cc71deae): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 17:12:04 crc kubenswrapper[4712]: E0130 17:12:04.605918 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-4l4j7" podUID="adbd0e89-e0e3-46eb-b2c5-4482cc71deae" Jan 30 17:12:04 crc kubenswrapper[4712]: E0130 17:12:04.915869 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:e0824d5d461ada59715eb3048ed9394c80abba09c45503f8f90ee3b34e525488\\\"\"" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-4l4j7" podUID="adbd0e89-e0e3-46eb-b2c5-4482cc71deae" Jan 30 17:12:05 crc kubenswrapper[4712]: E0130 17:12:05.463753 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4" Jan 30 17:12:05 crc kubenswrapper[4712]: E0130 17:12:05.463983 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6jjk6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod glance-operator-controller-manager-8886f4c47-2h4zg_openstack-operators(aa03f8a3-9bea-4b56-92ce-27d1fe53840a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 17:12:05 crc 
kubenswrapper[4712]: E0130 17:12:05.465862 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2h4zg" podUID="aa03f8a3-9bea-4b56-92ce-27d1fe53840a" Jan 30 17:12:05 crc kubenswrapper[4712]: E0130 17:12:05.934694 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/glance-operator@sha256:1f593e8d49d02b6484c89632192ae54771675c54fbd8426e3675b8e20ecfd7c4\\\"\"" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2h4zg" podUID="aa03f8a3-9bea-4b56-92ce-27d1fe53840a" Jan 30 17:12:09 crc kubenswrapper[4712]: E0130 17:12:09.082248 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a" Jan 30 17:12:09 crc kubenswrapper[4712]: E0130 17:12:09.083095 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k65tj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
telemetry-operator-controller-manager-64b5b76f97-2x2xt_openstack-operators(d37f95a0-af87-4727-83a4-aa6334b0759e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 17:12:09 crc kubenswrapper[4712]: E0130 17:12:09.086542 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2x2xt" podUID="d37f95a0-af87-4727-83a4-aa6334b0759e" Jan 30 17:12:09 crc kubenswrapper[4712]: E0130 17:12:09.954761 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:f9bf288cd0c13912404027a58ea3b90d4092b641e8265adc5c88644ea7fe901a\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2x2xt" podUID="d37f95a0-af87-4727-83a4-aa6334b0759e" Jan 30 17:12:14 crc kubenswrapper[4712]: E0130 17:12:14.695474 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8" Jan 30 17:12:14 crc kubenswrapper[4712]: E0130 17:12:14.696974 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-96jkk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-5fb775575f-xbk9b_openstack-operators(5ccbb7b6-e489-4676-8faa-8a0306776a54): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 17:12:14 crc kubenswrapper[4712]: E0130 17:12:14.699752 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-xbk9b" podUID="5ccbb7b6-e489-4676-8faa-8a0306776a54" Jan 30 17:12:14 crc kubenswrapper[4712]: E0130 17:12:14.984529 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:027cd7ab61ef5071d9ad6b729c95a98e51cd254642f01dc019d44cc98a9232f8\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-xbk9b" podUID="5ccbb7b6-e489-4676-8faa-8a0306776a54" Jan 30 17:12:15 crc kubenswrapper[4712]: E0130 17:12:15.217719 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10" Jan 30 17:12:15 crc kubenswrapper[4712]: E0130 17:12:15.217923 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-984h6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-69d6db494d-lqxpc_openstack-operators(6e263552-c0f6-4f24-879f-79895cdbc953): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 17:12:15 crc kubenswrapper[4712]: E0130 17:12:15.219897 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lqxpc" podUID="6e263552-c0f6-4f24-879f-79895cdbc953" Jan 30 17:12:15 crc kubenswrapper[4712]: E0130 17:12:15.991032 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:27d83ada27cf70cda0c5738f97551d81f1ea4068e83a090f3312e22172d72e10\\\"\"" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lqxpc" podUID="6e263552-c0f6-4f24-879f-79895cdbc953" Jan 30 17:12:16 crc kubenswrapper[4712]: E0130 17:12:16.230428 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566" Jan 30 17:12:16 crc kubenswrapper[4712]: E0130 17:12:16.230779 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z2ncg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-7dd968899f-2n8cf_openstack-operators(957cefd9-5116-40c3-aaf4-67ba58319ca1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 17:12:16 crc kubenswrapper[4712]: E0130 17:12:16.232000 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2n8cf" podUID="957cefd9-5116-40c3-aaf4-67ba58319ca1" Jan 30 17:12:16 crc kubenswrapper[4712]: E0130 17:12:16.715391 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521" Jan 30 17:12:16 crc kubenswrapper[4712]: E0130 17:12:16.715627 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6ffvg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ironic-operator-controller-manager-5f4b8bd54d-z9d9r_openstack-operators(3bfc9890-11b6-4fcf-9458-08dce816b4b9): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 17:12:16 crc kubenswrapper[4712]: E0130 17:12:16.717116 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-z9d9r" podUID="3bfc9890-11b6-4fcf-9458-08dce816b4b9" Jan 30 17:12:16 crc kubenswrapper[4712]: E0130 17:12:16.999298 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:cd911e8d7a7a1104d77691dbaaf54370015cbb82859337746db5a9186d5dc566\\\"\"" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2n8cf" podUID="957cefd9-5116-40c3-aaf4-67ba58319ca1" Jan 30 17:12:16 crc kubenswrapper[4712]: E0130 17:12:16.999696 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ironic-operator@sha256:bead175f27e5f074f723694f3b66e5aa7238411bf8a27a267b9a2936e4465521\\\"\"" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-z9d9r" podUID="3bfc9890-11b6-4fcf-9458-08dce816b4b9" Jan 30 17:12:17 crc kubenswrapper[4712]: E0130 17:12:17.225941 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf" Jan 30 17:12:17 crc kubenswrapper[4712]: E0130 17:12:17.226172 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rfngp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-67bf948998-wp89m_openstack-operators(c8354464-6e92-4961-833a-414efe43db13): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 17:12:17 crc kubenswrapper[4712]: E0130 17:12:17.227406 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-wp89m" podUID="c8354464-6e92-4961-833a-414efe43db13" Jan 30 17:12:17 crc kubenswrapper[4712]: E0130 17:12:17.681609 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6" Jan 30 17:12:17 crc kubenswrapper[4712]: E0130 17:12:17.681775 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2rfgj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-585dbc889-7pr55_openstack-operators(b3222b74-686d-4b44-b521-33fb24c0b403): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 17:12:17 crc kubenswrapper[4712]: E0130 17:12:17.683640 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-7pr55" podUID="b3222b74-686d-4b44-b521-33fb24c0b403" Jan 30 17:12:18 crc kubenswrapper[4712]: E0130 17:12:18.007076 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:2d493137559b74e23edb4788b7fbdb38b3e239df0f2d7e6e540e50b2355fc3cf\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-wp89m" podUID="c8354464-6e92-4961-833a-414efe43db13" Jan 30 17:12:18 crc kubenswrapper[4712]: E0130 17:12:18.011619 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-7pr55" 
podUID="b3222b74-686d-4b44-b521-33fb24c0b403" Jan 30 17:12:22 crc kubenswrapper[4712]: E0130 17:12:22.304052 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e" Jan 30 17:12:22 crc kubenswrapper[4712]: E0130 17:12:22.304833 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zkzh9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-55bff696bd-kj9k8_openstack-operators(1abbe42a-dbb1-4ec5-8318-451adc608b2b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 17:12:22 crc kubenswrapper[4712]: E0130 17:12:22.305975 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-kj9k8" podUID="1abbe42a-dbb1-4ec5-8318-451adc608b2b" Jan 30 17:12:23 crc kubenswrapper[4712]: E0130 17:12:23.035553 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-kj9k8" podUID="1abbe42a-dbb1-4ec5-8318-451adc608b2b" Jan 30 17:12:23 crc kubenswrapper[4712]: E0130 17:12:23.926578 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b" Jan 30 17:12:23 crc kubenswrapper[4712]: E0130 17:12:23.927992 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-dvk4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-f4h96_openstack-operators(f0e6edc2-9ad5-44a9-8737-78cfd077f9b1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 17:12:23 crc kubenswrapper[4712]: E0130 17:12:23.929270 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-f4h96" podUID="f0e6edc2-9ad5-44a9-8737-78cfd077f9b1" Jan 
30 17:12:24 crc kubenswrapper[4712]: E0130 17:12:24.527410 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17" Jan 30 17:12:24 crc kubenswrapper[4712]: E0130 17:12:24.527613 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7r824,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-84f48565d4-l62x6_openstack-operators(d3b1d20e-d20c-40f9-9c2b-314aee2fe51e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 17:12:24 crc kubenswrapper[4712]: E0130 17:12:24.528791 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-l62x6" podUID="d3b1d20e-d20c-40f9-9c2b-314aee2fe51e" Jan 30 17:12:25 crc kubenswrapper[4712]: E0130 17:12:25.005268 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382" Jan 30 17:12:25 crc kubenswrapper[4712]: E0130 17:12:25.005411 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gd4dc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-rfmgz_openstack-operators(6c041737-6e32-468d-aba7-469207eab526): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 17:12:25 crc kubenswrapper[4712]: E0130 17:12:25.007632 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rfmgz" podUID="6c041737-6e32-468d-aba7-469207eab526" Jan 30 17:12:25 crc kubenswrapper[4712]: E0130 17:12:25.045885 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:319c969e88f109b26487a9f5a67203682803d7386424703ab7ca0340be99ae17\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-l62x6" 
podUID="d3b1d20e-d20c-40f9-9c2b-314aee2fe51e" Jan 30 17:12:27 crc kubenswrapper[4712]: E0130 17:12:27.812203 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241" Jan 30 17:12:27 crc kubenswrapper[4712]: E0130 17:12:27.812641 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mcpcf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-78v95_openstack-operators(a1f37d35-d806-4c98-bdc5-85163d1b180c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 17:12:27 crc kubenswrapper[4712]: E0130 17:12:27.813977 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-78v95" podUID="a1f37d35-d806-4c98-bdc5-85163d1b180c" Jan 30 17:12:30 crc kubenswrapper[4712]: E0130 17:12:30.794138 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 30 17:12:30 crc kubenswrapper[4712]: E0130 17:12:30.794722 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lnfxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-7xzbw_openstack-operators(3602a87a-8a49-427b-baf0-a534b10e2d5b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 17:12:30 crc kubenswrapper[4712]: E0130 17:12:30.796215 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7xzbw" podUID="3602a87a-8a49-427b-baf0-a534b10e2d5b" Jan 30 17:12:31 crc kubenswrapper[4712]: I0130 17:12:31.335322 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drtdz2"] Jan 30 17:12:31 crc kubenswrapper[4712]: W0130 17:12:31.356429 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4821c16_36e6_43c6_91f1_5fdf29b5b88a.slice/crio-260cdb1246676f00db73a3565d30ac498f0dedfb966985110009dea2e431e840 WatchSource:0}: Error finding container 260cdb1246676f00db73a3565d30ac498f0dedfb966985110009dea2e431e840: Status 404 returned error can't find the container with id 260cdb1246676f00db73a3565d30ac498f0dedfb966985110009dea2e431e840 Jan 30 17:12:31 crc kubenswrapper[4712]: I0130 
17:12:31.416311 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-lwlhf"] Jan 30 17:12:31 crc kubenswrapper[4712]: W0130 17:12:31.461930 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7b99459b_9311_4260_be34_3de859c1e0b0.slice/crio-d1daafd55175405472608d792aae4026346e43fea0895c293d79c271eae977b8 WatchSource:0}: Error finding container d1daafd55175405472608d792aae4026346e43fea0895c293d79c271eae977b8: Status 404 returned error can't find the container with id d1daafd55175405472608d792aae4026346e43fea0895c293d79c271eae977b8 Jan 30 17:12:31 crc kubenswrapper[4712]: I0130 17:12:31.578812 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-659668d854-w9hqw"] Jan 30 17:12:31 crc kubenswrapper[4712]: W0130 17:12:31.625927 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15028a9a_8618_4d65_89ff_d8b06f63821f.slice/crio-35fdee30e5bfc79e9100c44d1ecaea06c4533a0d2a156e3251d3955152d5bc18 WatchSource:0}: Error finding container 35fdee30e5bfc79e9100c44d1ecaea06c4533a0d2a156e3251d3955152d5bc18: Status 404 returned error can't find the container with id 35fdee30e5bfc79e9100c44d1ecaea06c4533a0d2a156e3251d3955152d5bc18 Jan 30 17:12:32 crc kubenswrapper[4712]: I0130 17:12:32.091436 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-smj59" event={"ID":"19489158-a72e-4e6d-981a-879b596fb9b8","Type":"ContainerStarted","Data":"034e90c3ff30d4107cf717ef927fc8f735e4ff11c4a1eb6f287a8711c464cc9f"} Jan 30 17:12:32 crc kubenswrapper[4712]: I0130 17:12:32.092659 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-smj59" Jan 30 17:12:32 crc kubenswrapper[4712]: I0130 17:12:32.095072 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-jjb4n" event={"ID":"70ad565b-dc4e-4f67-863a-fd29c88ad39d","Type":"ContainerStarted","Data":"171404c59c6351bc6a8c3f706193ba075661e69af149946dfda018e5ae5601cd"} Jan 30 17:12:32 crc kubenswrapper[4712]: I0130 17:12:32.095446 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-jjb4n" Jan 30 17:12:32 crc kubenswrapper[4712]: I0130 17:12:32.098403 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-jkjdt" event={"ID":"cc62b7c7-5521-41df-bf10-d9cc287fbf7f","Type":"ContainerStarted","Data":"23a6bb0d1e18e605691a1101f1c93ced34c3b9ae5953132ba90372ab1c61cd35"} Jan 30 17:12:32 crc kubenswrapper[4712]: I0130 17:12:32.098540 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-jkjdt" Jan 30 17:12:32 crc kubenswrapper[4712]: I0130 17:12:32.100826 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drtdz2" event={"ID":"d4821c16-36e6-43c6-91f1-5fdf29b5b88a","Type":"ContainerStarted","Data":"260cdb1246676f00db73a3565d30ac498f0dedfb966985110009dea2e431e840"} Jan 30 17:12:32 crc kubenswrapper[4712]: I0130 17:12:32.102547 4712 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-4l4j7" event={"ID":"adbd0e89-e0e3-46eb-b2c5-4482cc71deae","Type":"ContainerStarted","Data":"c6d13b02ecc9383dc0b23a2fde1a6bc8da3cbd70d5d0676e13f9303871894af7"} Jan 30 17:12:32 crc kubenswrapper[4712]: I0130 17:12:32.103175 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-4l4j7" Jan 30 17:12:32 crc kubenswrapper[4712]: I0130 17:12:32.104450 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-xbk9b" event={"ID":"5ccbb7b6-e489-4676-8faa-8a0306776a54","Type":"ContainerStarted","Data":"967831ab70816dce3798e6a954984f5bc2330cb6081b6ead9093334b7ef492b2"} Jan 30 17:12:32 crc kubenswrapper[4712]: I0130 17:12:32.104784 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-xbk9b" Jan 30 17:12:32 crc kubenswrapper[4712]: I0130 17:12:32.107712 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2n8cf" event={"ID":"957cefd9-5116-40c3-aaf4-67ba58319ca1","Type":"ContainerStarted","Data":"9e7deded23dc1d839b5a701f3abcc50b25d9fb2f05a3b8d5c4e967b6ac2d9aee"} Jan 30 17:12:32 crc kubenswrapper[4712]: I0130 17:12:32.107998 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2n8cf" Jan 30 17:12:32 crc kubenswrapper[4712]: I0130 17:12:32.109165 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lqxpc" event={"ID":"6e263552-c0f6-4f24-879f-79895cdbc953","Type":"ContainerStarted","Data":"c98751179904f52a702abcc444081c9118c40e6caa45358aa27ff3f4c1b67d8f"} Jan 30 17:12:32 crc kubenswrapper[4712]: I0130 17:12:32.109448 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lqxpc" Jan 30 17:12:32 crc kubenswrapper[4712]: I0130 17:12:32.115230 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-smj59" podStartSLOduration=3.812689885 podStartE2EDuration="47.115215002s" podCreationTimestamp="2026-01-30 17:11:45 +0000 UTC" firstStartedPulling="2026-01-30 17:11:47.659409595 +0000 UTC m=+1044.566419064" lastFinishedPulling="2026-01-30 17:12:30.961934712 +0000 UTC m=+1087.868944181" observedRunningTime="2026-01-30 17:12:32.114869154 +0000 UTC m=+1089.021878633" watchObservedRunningTime="2026-01-30 17:12:32.115215002 +0000 UTC m=+1089.022224471" Jan 30 17:12:32 crc kubenswrapper[4712]: I0130 17:12:32.115582 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-z9d9r" event={"ID":"3bfc9890-11b6-4fcf-9458-08dce816b4b9","Type":"ContainerStarted","Data":"4773c1b2a3a839e03169687b56c6c54a2e3d59c4b03e956662a45e6f098b3491"} Jan 30 17:12:32 crc kubenswrapper[4712]: I0130 17:12:32.116179 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-z9d9r" Jan 30 17:12:32 crc kubenswrapper[4712]: I0130 17:12:32.117897 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-xfmvz" event={"ID":"e1a1d497-2276-4248-9bca-1c7038430933","Type":"ContainerStarted","Data":"34a72b2f4ec29fae3458f05589323593a23284cee7cf685c6622f2c43f295065"} Jan 30 17:12:32 crc kubenswrapper[4712]: I0130 17:12:32.118340 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-xfmvz" Jan 30 17:12:32 crc kubenswrapper[4712]: I0130 17:12:32.119984 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2h4zg" event={"ID":"aa03f8a3-9bea-4b56-92ce-27d1fe53840a","Type":"ContainerStarted","Data":"dea1fecdb66e8df6fca0231076091e1b5b2cfc89df3626297b8ff9197bee9e9a"} Jan 30 17:12:32 crc kubenswrapper[4712]: I0130 17:12:32.120320 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2h4zg" Jan 30 17:12:32 crc kubenswrapper[4712]: I0130 17:12:32.124708 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-lwlhf" event={"ID":"7b99459b-9311-4260-be34-3de859c1e0b0","Type":"ContainerStarted","Data":"d1daafd55175405472608d792aae4026346e43fea0895c293d79c271eae977b8"} Jan 30 17:12:32 crc kubenswrapper[4712]: I0130 17:12:32.133239 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2x2xt" event={"ID":"d37f95a0-af87-4727-83a4-aa6334b0759e","Type":"ContainerStarted","Data":"67e7928d9a2c87e3ecde651f4637e809fbae0de60f858391ff18ba6ea744ceb3"} Jan 30 17:12:32 crc kubenswrapper[4712]: I0130 17:12:32.133533 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2x2xt" Jan 30 17:12:32 crc kubenswrapper[4712]: I0130 17:12:32.151205 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-tfxdt" event={"ID":"2bc54d51-4f21-479f-a89e-1c60a757433f","Type":"ContainerStarted","Data":"dd4dca41151af969921728b145b6e5a93950d0746bbb61f4a370a82bbfd63709"} Jan 30 17:12:32 crc kubenswrapper[4712]: I0130 17:12:32.151920 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-tfxdt" Jan 30 17:12:32 crc kubenswrapper[4712]: I0130 17:12:32.166692 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-659668d854-w9hqw" event={"ID":"15028a9a-8618-4d65-89ff-d8b06f63821f","Type":"ContainerStarted","Data":"a2489e7e0700d10680b9012b98e6a4bf8ffe449d8ee8aebdded1c793f59a2c4d"} Jan 30 17:12:32 crc kubenswrapper[4712]: I0130 17:12:32.166748 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-659668d854-w9hqw" event={"ID":"15028a9a-8618-4d65-89ff-d8b06f63821f","Type":"ContainerStarted","Data":"35fdee30e5bfc79e9100c44d1ecaea06c4533a0d2a156e3251d3955152d5bc18"} Jan 30 17:12:32 crc kubenswrapper[4712]: I0130 17:12:32.167504 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-659668d854-w9hqw" Jan 30 17:12:32 crc kubenswrapper[4712]: I0130 17:12:32.204421 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-jkjdt" podStartSLOduration=17.11882755 podStartE2EDuration="48.204400476s" podCreationTimestamp="2026-01-30 17:11:44 +0000 UTC" firstStartedPulling="2026-01-30 17:11:47.448688922 +0000 UTC m=+1044.355698391" lastFinishedPulling="2026-01-30 17:12:18.534261848 +0000 UTC m=+1075.441271317" observedRunningTime="2026-01-30 17:12:32.146097781 +0000 UTC m=+1089.053107250" watchObservedRunningTime="2026-01-30 17:12:32.204400476 +0000 UTC m=+1089.111409945" Jan 30 17:12:32 crc kubenswrapper[4712]: I0130 17:12:32.230042 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-xbk9b" podStartSLOduration=4.37327489 podStartE2EDuration="48.23001936s" podCreationTimestamp="2026-01-30 17:11:44 +0000 UTC" firstStartedPulling="2026-01-30 17:11:47.310177877 +0000 UTC m=+1044.217187346" lastFinishedPulling="2026-01-30 17:12:31.166922347 +0000 UTC m=+1088.073931816" observedRunningTime="2026-01-30 17:12:32.201708442 +0000 UTC m=+1089.108717931" watchObservedRunningTime="2026-01-30 17:12:32.23001936 +0000 UTC m=+1089.137028829" Jan 30 17:12:32 crc kubenswrapper[4712]: I0130 17:12:32.233837 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lqxpc" podStartSLOduration=3.649735444 podStartE2EDuration="48.23382023s" podCreationTimestamp="2026-01-30 17:11:44 +0000 UTC" firstStartedPulling="2026-01-30 17:11:46.729079401 +0000 UTC m=+1043.636088880" lastFinishedPulling="2026-01-30 17:12:31.313164197 +0000 UTC m=+1088.220173666" observedRunningTime="2026-01-30 17:12:32.225287826 +0000 UTC m=+1089.132297315" watchObservedRunningTime="2026-01-30 17:12:32.23382023 +0000 UTC m=+1089.140829699" Jan 30 17:12:32 crc kubenswrapper[4712]: I0130 17:12:32.252832 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2n8cf" podStartSLOduration=4.338021077 podStartE2EDuration="48.252813345s" podCreationTimestamp="2026-01-30 17:11:44 +0000 UTC" firstStartedPulling="2026-01-30 17:11:47.285459307 +0000 UTC m=+1044.192468776" lastFinishedPulling="2026-01-30 17:12:31.200251575 +0000 UTC m=+1088.107261044" observedRunningTime="2026-01-30 17:12:32.249761822 +0000 UTC m=+1089.156771291" watchObservedRunningTime="2026-01-30 17:12:32.252813345 +0000 UTC m=+1089.159822814" Jan 30 17:12:32 crc kubenswrapper[4712]: I0130 17:12:32.305086 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-4l4j7" podStartSLOduration=3.973037303 podStartE2EDuration="47.305069755s" podCreationTimestamp="2026-01-30 17:11:45 +0000 UTC" firstStartedPulling="2026-01-30 17:11:47.630107084 +0000 UTC m=+1044.537116553" lastFinishedPulling="2026-01-30 17:12:30.962139536 +0000 UTC m=+1087.869149005" observedRunningTime="2026-01-30 17:12:32.300215519 +0000 UTC m=+1089.207224988" watchObservedRunningTime="2026-01-30 17:12:32.305069755 +0000 UTC m=+1089.212079224" Jan 30 17:12:32 crc kubenswrapper[4712]: I0130 17:12:32.460561 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-jjb4n" podStartSLOduration=3.631789176 podStartE2EDuration="47.460541946s" podCreationTimestamp="2026-01-30 17:11:45 +0000 UTC" firstStartedPulling="2026-01-30 17:11:47.385974202 
+0000 UTC m=+1044.292983671" lastFinishedPulling="2026-01-30 17:12:31.214726972 +0000 UTC m=+1088.121736441" observedRunningTime="2026-01-30 17:12:32.398149243 +0000 UTC m=+1089.305158712" watchObservedRunningTime="2026-01-30 17:12:32.460541946 +0000 UTC m=+1089.367551415" Jan 30 17:12:32 crc kubenswrapper[4712]: I0130 17:12:32.461135 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2h4zg" podStartSLOduration=4.023429548 podStartE2EDuration="48.46112925s" podCreationTimestamp="2026-01-30 17:11:44 +0000 UTC" firstStartedPulling="2026-01-30 17:11:46.729140913 +0000 UTC m=+1043.636150382" lastFinishedPulling="2026-01-30 17:12:31.166840615 +0000 UTC m=+1088.073850084" observedRunningTime="2026-01-30 17:12:32.45987364 +0000 UTC m=+1089.366883109" watchObservedRunningTime="2026-01-30 17:12:32.46112925 +0000 UTC m=+1089.368138719" Jan 30 17:12:32 crc kubenswrapper[4712]: I0130 17:12:32.547870 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-659668d854-w9hqw" podStartSLOduration=47.547850156 podStartE2EDuration="47.547850156s" podCreationTimestamp="2026-01-30 17:11:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:12:32.540573912 +0000 UTC m=+1089.447583381" watchObservedRunningTime="2026-01-30 17:12:32.547850156 +0000 UTC m=+1089.454859625" Jan 30 17:12:32 crc kubenswrapper[4712]: I0130 17:12:32.579113 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-z9d9r" podStartSLOduration=4.655128874 podStartE2EDuration="48.579096633s" podCreationTimestamp="2026-01-30 17:11:44 +0000 UTC" firstStartedPulling="2026-01-30 17:11:47.391060443 +0000 UTC m=+1044.298069912" lastFinishedPulling="2026-01-30 17:12:31.315028202 +0000 UTC m=+1088.222037671" observedRunningTime="2026-01-30 17:12:32.576708946 +0000 UTC m=+1089.483718415" watchObservedRunningTime="2026-01-30 17:12:32.579096633 +0000 UTC m=+1089.486106092" Jan 30 17:12:32 crc kubenswrapper[4712]: I0130 17:12:32.608761 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-tfxdt" podStartSLOduration=17.379904149 podStartE2EDuration="48.608738472s" podCreationTimestamp="2026-01-30 17:11:44 +0000 UTC" firstStartedPulling="2026-01-30 17:11:46.451811106 +0000 UTC m=+1043.358820565" lastFinishedPulling="2026-01-30 17:12:17.680645419 +0000 UTC m=+1074.587654888" observedRunningTime="2026-01-30 17:12:32.604050641 +0000 UTC m=+1089.511060110" watchObservedRunningTime="2026-01-30 17:12:32.608738472 +0000 UTC m=+1089.515747941" Jan 30 17:12:32 crc kubenswrapper[4712]: I0130 17:12:32.666220 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-xfmvz" podStartSLOduration=17.676719773 podStartE2EDuration="48.666204018s" podCreationTimestamp="2026-01-30 17:11:44 +0000 UTC" firstStartedPulling="2026-01-30 17:11:46.691160294 +0000 UTC m=+1043.598169763" lastFinishedPulling="2026-01-30 17:12:17.680644539 +0000 UTC m=+1074.587654008" observedRunningTime="2026-01-30 17:12:32.626130409 +0000 UTC m=+1089.533139878" watchObservedRunningTime="2026-01-30 17:12:32.666204018 +0000 UTC m=+1089.573213487" Jan 30 17:12:32 crc 
kubenswrapper[4712]: I0130 17:12:32.667613 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2x2xt" podStartSLOduration=3.872099106 podStartE2EDuration="47.667607421s" podCreationTimestamp="2026-01-30 17:11:45 +0000 UTC" firstStartedPulling="2026-01-30 17:11:47.401753839 +0000 UTC m=+1044.308763308" lastFinishedPulling="2026-01-30 17:12:31.197262154 +0000 UTC m=+1088.104271623" observedRunningTime="2026-01-30 17:12:32.665576303 +0000 UTC m=+1089.572585782" watchObservedRunningTime="2026-01-30 17:12:32.667607421 +0000 UTC m=+1089.574616890" Jan 30 17:12:34 crc kubenswrapper[4712]: I0130 17:12:34.182249 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-7pr55" event={"ID":"b3222b74-686d-4b44-b521-33fb24c0b403","Type":"ContainerStarted","Data":"d8f0e9151ec4703f658ddf10a03713d02ed3ebfe4a8f79c437ec16a535e96b0e"} Jan 30 17:12:34 crc kubenswrapper[4712]: I0130 17:12:34.183682 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-7pr55" Jan 30 17:12:34 crc kubenswrapper[4712]: I0130 17:12:34.213203 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-7pr55" podStartSLOduration=5.11670275 podStartE2EDuration="50.213189479s" podCreationTimestamp="2026-01-30 17:11:44 +0000 UTC" firstStartedPulling="2026-01-30 17:11:47.171590691 +0000 UTC m=+1044.078600160" lastFinishedPulling="2026-01-30 17:12:32.26807742 +0000 UTC m=+1089.175086889" observedRunningTime="2026-01-30 17:12:34.209670096 +0000 UTC m=+1091.116679575" watchObservedRunningTime="2026-01-30 17:12:34.213189479 +0000 UTC m=+1091.120198948" Jan 30 17:12:35 crc kubenswrapper[4712]: I0130 17:12:35.198245 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-wp89m" event={"ID":"c8354464-6e92-4961-833a-414efe43db13","Type":"ContainerStarted","Data":"14a8307711d5531071a0ec56a55db6b76896d69ff1cf8272d8c0499b0e565947"} Jan 30 17:12:35 crc kubenswrapper[4712]: I0130 17:12:35.198898 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-wp89m" Jan 30 17:12:35 crc kubenswrapper[4712]: I0130 17:12:35.823292 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-wp89m" podStartSLOduration=4.69135498 podStartE2EDuration="51.823269851s" podCreationTimestamp="2026-01-30 17:11:44 +0000 UTC" firstStartedPulling="2026-01-30 17:11:47.415371204 +0000 UTC m=+1044.322380673" lastFinishedPulling="2026-01-30 17:12:34.547286075 +0000 UTC m=+1091.454295544" observedRunningTime="2026-01-30 17:12:35.215184589 +0000 UTC m=+1092.122194058" watchObservedRunningTime="2026-01-30 17:12:35.823269851 +0000 UTC m=+1092.730279320" Jan 30 17:12:37 crc kubenswrapper[4712]: E0130 17:12:37.801388 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-f4h96" podUID="f0e6edc2-9ad5-44a9-8737-78cfd077f9b1" 
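
The pod_startup_latency_tracker entries above are internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that end-to-end figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling), which matches the Kubernetes pod-startup SLI convention of excluding pull time. A minimal Go sketch (not kubelet source; timestamps copied from the designate-operator entry above) that checks the arithmetic:

// Minimal sketch, not kubelet code: reproduces the arithmetic behind the
// pod_startup_latency_tracker lines above using the designate-operator values.
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05 -0700 MST" // matches the timestamps as logged

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-30 17:11:44 +0000 UTC")             // podCreationTimestamp
	firstPull := mustParse("2026-01-30 17:11:47.448688922 +0000 UTC") // firstStartedPulling
	lastPull := mustParse("2026-01-30 17:12:18.534261848 +0000 UTC")  // lastFinishedPulling
	observed := mustParse("2026-01-30 17:12:32.204400476 +0000 UTC")  // watchObservedRunningTime

	e2e := observed.Sub(created)         // podStartE2EDuration: creation -> observed running
	slo := e2e - lastPull.Sub(firstPull) // podStartSLOduration: E2E minus image-pull window

	fmt.Println(e2e) // 48.204400476s
	fmt.Println(slo) // 17.11882755s
}

Both printed values match the logged podStartE2EDuration="48.204400476s" and podStartSLOduration=17.11882755 exactly. Entries whose pull timestamps are the zero value "0001-01-01 00:00:00 +0000 UTC" (e.g. openstack-operator-controller-manager-659668d854-w9hqw above) report SLO equal to E2E, since no image pull was recorded for them.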
Jan 30 17:12:37 crc kubenswrapper[4712]: E0130 17:12:37.801785 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rfmgz" podUID="6c041737-6e32-468d-aba7-469207eab526" Jan 30 17:12:38 crc kubenswrapper[4712]: I0130 17:12:38.227843 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-lwlhf" event={"ID":"7b99459b-9311-4260-be34-3de859c1e0b0","Type":"ContainerStarted","Data":"4bc73148b0ae031a40f0225989ad64943edc469e8ab2ed0161f0bfd8d056cdc4"} Jan 30 17:12:38 crc kubenswrapper[4712]: I0130 17:12:38.227974 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-lwlhf" Jan 30 17:12:38 crc kubenswrapper[4712]: I0130 17:12:38.228966 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-kj9k8" event={"ID":"1abbe42a-dbb1-4ec5-8318-451adc608b2b","Type":"ContainerStarted","Data":"600fec3f58d6d695d0f1c198b520d9f80a9fe27bacd37156825e2f979df597cc"} Jan 30 17:12:38 crc kubenswrapper[4712]: I0130 17:12:38.229183 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-kj9k8" Jan 30 17:12:38 crc kubenswrapper[4712]: I0130 17:12:38.231530 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drtdz2" event={"ID":"d4821c16-36e6-43c6-91f1-5fdf29b5b88a","Type":"ContainerStarted","Data":"3012fa578cee4cd64c7dcdde44e9811e0b1680c50bed727e9b799803fd0620de"} Jan 30 17:12:38 crc kubenswrapper[4712]: I0130 17:12:38.244572 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-lwlhf" podStartSLOduration=48.607127534 podStartE2EDuration="54.244558146s" podCreationTimestamp="2026-01-30 17:11:44 +0000 UTC" firstStartedPulling="2026-01-30 17:12:31.470940224 +0000 UTC m=+1088.377949693" lastFinishedPulling="2026-01-30 17:12:37.108370846 +0000 UTC m=+1094.015380305" observedRunningTime="2026-01-30 17:12:38.241842611 +0000 UTC m=+1095.148852080" watchObservedRunningTime="2026-01-30 17:12:38.244558146 +0000 UTC m=+1095.151567615" Jan 30 17:12:38 crc kubenswrapper[4712]: I0130 17:12:38.296215 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drtdz2" podStartSLOduration=47.558756015 podStartE2EDuration="53.296198152s" podCreationTimestamp="2026-01-30 17:11:45 +0000 UTC" firstStartedPulling="2026-01-30 17:12:31.372384384 +0000 UTC m=+1088.279393853" lastFinishedPulling="2026-01-30 17:12:37.109826521 +0000 UTC m=+1094.016835990" observedRunningTime="2026-01-30 17:12:38.276075291 +0000 UTC m=+1095.183084760" watchObservedRunningTime="2026-01-30 17:12:38.296198152 +0000 UTC m=+1095.203207621" Jan 30 17:12:38 crc kubenswrapper[4712]: E0130 17:12:38.800892 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-78v95" podUID="a1f37d35-d806-4c98-bdc5-85163d1b180c" Jan 30 17:12:38 crc kubenswrapper[4712]: I0130 17:12:38.817435 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-kj9k8" podStartSLOduration=4.073962678 podStartE2EDuration="53.817417007s" podCreationTimestamp="2026-01-30 17:11:45 +0000 UTC" firstStartedPulling="2026-01-30 17:11:47.365762358 +0000 UTC m=+1044.272771827" lastFinishedPulling="2026-01-30 17:12:37.109216697 +0000 UTC m=+1094.016226156" observedRunningTime="2026-01-30 17:12:38.298570769 +0000 UTC m=+1095.205580238" watchObservedRunningTime="2026-01-30 17:12:38.817417007 +0000 UTC m=+1095.724426476" Jan 30 17:12:39 crc kubenswrapper[4712]: I0130 17:12:39.240544 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-l62x6" event={"ID":"d3b1d20e-d20c-40f9-9c2b-314aee2fe51e","Type":"ContainerStarted","Data":"02be382aaf55f95e905ce631549a75b85ec947b0dc2f6807535ea294a6d2aae8"} Jan 30 17:12:39 crc kubenswrapper[4712]: I0130 17:12:39.240881 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drtdz2" Jan 30 17:12:39 crc kubenswrapper[4712]: I0130 17:12:39.240983 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-l62x6" Jan 30 17:12:39 crc kubenswrapper[4712]: I0130 17:12:39.261705 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-l62x6" podStartSLOduration=4.390768897 podStartE2EDuration="55.261685018s" podCreationTimestamp="2026-01-30 17:11:44 +0000 UTC" firstStartedPulling="2026-01-30 17:11:47.429326508 +0000 UTC m=+1044.336335977" lastFinishedPulling="2026-01-30 17:12:38.300242629 +0000 UTC m=+1095.207252098" observedRunningTime="2026-01-30 17:12:39.257237432 +0000 UTC m=+1096.164246911" watchObservedRunningTime="2026-01-30 17:12:39.261685018 +0000 UTC m=+1096.168694487" Jan 30 17:12:41 crc kubenswrapper[4712]: I0130 17:12:41.875047 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-659668d854-w9hqw" Jan 30 17:12:44 crc kubenswrapper[4712]: E0130 17:12:44.802043 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7xzbw" podUID="3602a87a-8a49-427b-baf0-a534b10e2d5b" Jan 30 17:12:45 crc kubenswrapper[4712]: I0130 17:12:45.103738 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-tfxdt" Jan 30 17:12:45 crc kubenswrapper[4712]: I0130 17:12:45.199699 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lqxpc" Jan 30 17:12:45 crc kubenswrapper[4712]: I0130 17:12:45.200199 4712 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-2h4zg" Jan 30 17:12:45 crc kubenswrapper[4712]: I0130 17:12:45.200443 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-xfmvz" Jan 30 17:12:45 crc kubenswrapper[4712]: I0130 17:12:45.249970 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-xbk9b" Jan 30 17:12:45 crc kubenswrapper[4712]: I0130 17:12:45.367496 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-l62x6" Jan 30 17:12:45 crc kubenswrapper[4712]: I0130 17:12:45.428246 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-z9d9r" Jan 30 17:12:45 crc kubenswrapper[4712]: I0130 17:12:45.495709 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-jkjdt" Jan 30 17:12:45 crc kubenswrapper[4712]: I0130 17:12:45.495816 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2n8cf" Jan 30 17:12:45 crc kubenswrapper[4712]: I0130 17:12:45.535647 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-wp89m" Jan 30 17:12:45 crc kubenswrapper[4712]: I0130 17:12:45.542515 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-7pr55" Jan 30 17:12:45 crc kubenswrapper[4712]: I0130 17:12:45.586712 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-kj9k8" Jan 30 17:12:45 crc kubenswrapper[4712]: I0130 17:12:45.625638 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-jjb4n" Jan 30 17:12:45 crc kubenswrapper[4712]: I0130 17:12:45.909639 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2x2xt" Jan 30 17:12:45 crc kubenswrapper[4712]: I0130 17:12:45.929812 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-smj59" Jan 30 17:12:45 crc kubenswrapper[4712]: I0130 17:12:45.959488 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-4l4j7" Jan 30 17:12:50 crc kubenswrapper[4712]: I0130 17:12:50.802763 4712 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 17:12:50 crc kubenswrapper[4712]: I0130 17:12:50.932222 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-lwlhf" Jan 30 17:12:51 crc kubenswrapper[4712]: I0130 17:12:51.287833 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drtdz2" 
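
The burst of SyncLoop (probe) entries with status="ready" above is the kubelet's readiness prober flipping each operator pod to Ready once its manager container passes its probe; those transitions are published to the API server as each pod's PodReady condition. A hedged client-go sketch (assumes a reachable kubeconfig at the default path; not part of this job's tooling) that reads the same conditions back:

// Hedged sketch: list pods in the namespace seen in the log and print the
// PodReady condition that the kubelet's probe results feed into.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pods, err := cs.CoreV1().Pods("openstack-operators").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				fmt.Printf("%s Ready=%s\n", p.Name, c.Status)
			}
		}
	}
}

Each Ready=True printed by a sketch like this would correspond to one of the status="ready" probe lines in the log above.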
Jan 30 17:12:52 crc kubenswrapper[4712]: I0130 17:12:52.347555 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-f4h96" event={"ID":"f0e6edc2-9ad5-44a9-8737-78cfd077f9b1","Type":"ContainerStarted","Data":"dd0b8e5bfdf1b2a65b6f53c44ed011598adc6a03db36a0c80ac243726495efe9"} Jan 30 17:12:52 crc kubenswrapper[4712]: I0130 17:12:52.348086 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-f4h96" Jan 30 17:12:52 crc kubenswrapper[4712]: I0130 17:12:52.364737 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-f4h96" podStartSLOduration=3.369957299 podStartE2EDuration="1m7.364720553s" podCreationTimestamp="2026-01-30 17:11:45 +0000 UTC" firstStartedPulling="2026-01-30 17:11:47.665261585 +0000 UTC m=+1044.572271054" lastFinishedPulling="2026-01-30 17:12:51.660024839 +0000 UTC m=+1108.567034308" observedRunningTime="2026-01-30 17:12:52.362236163 +0000 UTC m=+1109.269245642" watchObservedRunningTime="2026-01-30 17:12:52.364720553 +0000 UTC m=+1109.271730012" Jan 30 17:12:53 crc kubenswrapper[4712]: I0130 17:12:53.354502 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-78v95" event={"ID":"a1f37d35-d806-4c98-bdc5-85163d1b180c","Type":"ContainerStarted","Data":"9b426c6b54d2b04f61d8256499c16f93f839666df46f50e14f55849e1866b934"} Jan 30 17:12:53 crc kubenswrapper[4712]: I0130 17:12:53.355110 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-78v95" Jan 30 17:12:53 crc kubenswrapper[4712]: I0130 17:12:53.377876 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-78v95" podStartSLOduration=3.27428793 podStartE2EDuration="1m8.37785923s" podCreationTimestamp="2026-01-30 17:11:45 +0000 UTC" firstStartedPulling="2026-01-30 17:11:47.664580788 +0000 UTC m=+1044.571590257" lastFinishedPulling="2026-01-30 17:12:52.768152088 +0000 UTC m=+1109.675161557" observedRunningTime="2026-01-30 17:12:53.376341183 +0000 UTC m=+1110.283350652" watchObservedRunningTime="2026-01-30 17:12:53.37785923 +0000 UTC m=+1110.284868699" Jan 30 17:12:54 crc kubenswrapper[4712]: I0130 17:12:54.366028 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rfmgz" event={"ID":"6c041737-6e32-468d-aba7-469207eab526","Type":"ContainerStarted","Data":"08d02e4c41eae6833ef9c60d71162df9a74d8cef399f3247844635200d205a93"} Jan 30 17:12:54 crc kubenswrapper[4712]: I0130 17:12:54.366282 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rfmgz" Jan 30 17:12:54 crc kubenswrapper[4712]: I0130 17:12:54.384407 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rfmgz" podStartSLOduration=3.640354211 podStartE2EDuration="1m9.384391267s" podCreationTimestamp="2026-01-30 17:11:45 +0000 UTC" firstStartedPulling="2026-01-30 17:11:47.659436816 +0000 UTC m=+1044.566446285" lastFinishedPulling="2026-01-30 17:12:53.403473872 +0000 UTC m=+1110.310483341" observedRunningTime="2026-01-30 17:12:54.379116871 +0000 UTC 
m=+1111.286126340" watchObservedRunningTime="2026-01-30 17:12:54.384391267 +0000 UTC m=+1111.291400726" Jan 30 17:12:57 crc kubenswrapper[4712]: I0130 17:12:57.386746 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7xzbw" event={"ID":"3602a87a-8a49-427b-baf0-a534b10e2d5b","Type":"ContainerStarted","Data":"c640888edaabd3574d8c02450d6265f27d5fe3997e68bb07e0e541f1f5e22c60"} Jan 30 17:12:57 crc kubenswrapper[4712]: I0130 17:12:57.424363 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7xzbw" podStartSLOduration=3.870320764 podStartE2EDuration="1m12.424295687s" podCreationTimestamp="2026-01-30 17:11:45 +0000 UTC" firstStartedPulling="2026-01-30 17:11:47.674044535 +0000 UTC m=+1044.581054004" lastFinishedPulling="2026-01-30 17:12:56.228019438 +0000 UTC m=+1113.135028927" observedRunningTime="2026-01-30 17:12:57.404123984 +0000 UTC m=+1114.311133473" watchObservedRunningTime="2026-01-30 17:12:57.424295687 +0000 UTC m=+1114.331305166" Jan 30 17:13:05 crc kubenswrapper[4712]: I0130 17:13:05.913674 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-78v95" Jan 30 17:13:05 crc kubenswrapper[4712]: I0130 17:13:05.924277 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-f4h96" Jan 30 17:13:06 crc kubenswrapper[4712]: I0130 17:13:06.025329 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rfmgz" Jan 30 17:13:21 crc kubenswrapper[4712]: I0130 17:13:21.264607 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-gdf88"] Jan 30 17:13:21 crc kubenswrapper[4712]: I0130 17:13:21.281729 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-gdf88" Jan 30 17:13:21 crc kubenswrapper[4712]: I0130 17:13:21.286603 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-pfjhc" Jan 30 17:13:21 crc kubenswrapper[4712]: I0130 17:13:21.286928 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 30 17:13:21 crc kubenswrapper[4712]: I0130 17:13:21.287128 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 30 17:13:21 crc kubenswrapper[4712]: I0130 17:13:21.296214 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk8pr\" (UniqueName: \"kubernetes.io/projected/60a618eb-268d-4b06-bd4a-3365bffb6a69-kube-api-access-sk8pr\") pod \"dnsmasq-dns-675f4bcbfc-gdf88\" (UID: \"60a618eb-268d-4b06-bd4a-3365bffb6a69\") " pod="openstack/dnsmasq-dns-675f4bcbfc-gdf88" Jan 30 17:13:21 crc kubenswrapper[4712]: I0130 17:13:21.296499 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 30 17:13:21 crc kubenswrapper[4712]: I0130 17:13:21.296512 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60a618eb-268d-4b06-bd4a-3365bffb6a69-config\") pod \"dnsmasq-dns-675f4bcbfc-gdf88\" (UID: \"60a618eb-268d-4b06-bd4a-3365bffb6a69\") " pod="openstack/dnsmasq-dns-675f4bcbfc-gdf88" Jan 30 17:13:21 crc kubenswrapper[4712]: I0130 17:13:21.306680 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-gdf88"] Jan 30 17:13:21 crc kubenswrapper[4712]: I0130 17:13:21.394257 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-28pr4"] Jan 30 17:13:21 crc kubenswrapper[4712]: I0130 17:13:21.395537 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-28pr4" Jan 30 17:13:21 crc kubenswrapper[4712]: I0130 17:13:21.398176 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk8pr\" (UniqueName: \"kubernetes.io/projected/60a618eb-268d-4b06-bd4a-3365bffb6a69-kube-api-access-sk8pr\") pod \"dnsmasq-dns-675f4bcbfc-gdf88\" (UID: \"60a618eb-268d-4b06-bd4a-3365bffb6a69\") " pod="openstack/dnsmasq-dns-675f4bcbfc-gdf88" Jan 30 17:13:21 crc kubenswrapper[4712]: I0130 17:13:21.398252 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60a618eb-268d-4b06-bd4a-3365bffb6a69-config\") pod \"dnsmasq-dns-675f4bcbfc-gdf88\" (UID: \"60a618eb-268d-4b06-bd4a-3365bffb6a69\") " pod="openstack/dnsmasq-dns-675f4bcbfc-gdf88" Jan 30 17:13:21 crc kubenswrapper[4712]: I0130 17:13:21.399364 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60a618eb-268d-4b06-bd4a-3365bffb6a69-config\") pod \"dnsmasq-dns-675f4bcbfc-gdf88\" (UID: \"60a618eb-268d-4b06-bd4a-3365bffb6a69\") " pod="openstack/dnsmasq-dns-675f4bcbfc-gdf88" Jan 30 17:13:21 crc kubenswrapper[4712]: I0130 17:13:21.405089 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 30 17:13:21 crc kubenswrapper[4712]: I0130 17:13:21.438285 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-28pr4"] Jan 30 17:13:21 crc kubenswrapper[4712]: I0130 17:13:21.441748 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk8pr\" (UniqueName: \"kubernetes.io/projected/60a618eb-268d-4b06-bd4a-3365bffb6a69-kube-api-access-sk8pr\") pod \"dnsmasq-dns-675f4bcbfc-gdf88\" (UID: \"60a618eb-268d-4b06-bd4a-3365bffb6a69\") " pod="openstack/dnsmasq-dns-675f4bcbfc-gdf88" Jan 30 17:13:21 crc kubenswrapper[4712]: I0130 17:13:21.500027 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd9423d9-2a7c-4894-8c85-007ebf09a364-config\") pod \"dnsmasq-dns-78dd6ddcc-28pr4\" (UID: \"dd9423d9-2a7c-4894-8c85-007ebf09a364\") " pod="openstack/dnsmasq-dns-78dd6ddcc-28pr4" Jan 30 17:13:21 crc kubenswrapper[4712]: I0130 17:13:21.500067 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd9423d9-2a7c-4894-8c85-007ebf09a364-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-28pr4\" (UID: \"dd9423d9-2a7c-4894-8c85-007ebf09a364\") " pod="openstack/dnsmasq-dns-78dd6ddcc-28pr4" Jan 30 17:13:21 crc kubenswrapper[4712]: I0130 17:13:21.500114 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkdb5\" (UniqueName: \"kubernetes.io/projected/dd9423d9-2a7c-4894-8c85-007ebf09a364-kube-api-access-mkdb5\") pod \"dnsmasq-dns-78dd6ddcc-28pr4\" (UID: \"dd9423d9-2a7c-4894-8c85-007ebf09a364\") " pod="openstack/dnsmasq-dns-78dd6ddcc-28pr4" Jan 30 17:13:21 crc kubenswrapper[4712]: I0130 17:13:21.604701 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd9423d9-2a7c-4894-8c85-007ebf09a364-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-28pr4\" (UID: \"dd9423d9-2a7c-4894-8c85-007ebf09a364\") " pod="openstack/dnsmasq-dns-78dd6ddcc-28pr4" Jan 30 17:13:21 crc kubenswrapper[4712]: I0130 
17:13:21.604751 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd9423d9-2a7c-4894-8c85-007ebf09a364-config\") pod \"dnsmasq-dns-78dd6ddcc-28pr4\" (UID: \"dd9423d9-2a7c-4894-8c85-007ebf09a364\") " pod="openstack/dnsmasq-dns-78dd6ddcc-28pr4" Jan 30 17:13:21 crc kubenswrapper[4712]: I0130 17:13:21.604823 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkdb5\" (UniqueName: \"kubernetes.io/projected/dd9423d9-2a7c-4894-8c85-007ebf09a364-kube-api-access-mkdb5\") pod \"dnsmasq-dns-78dd6ddcc-28pr4\" (UID: \"dd9423d9-2a7c-4894-8c85-007ebf09a364\") " pod="openstack/dnsmasq-dns-78dd6ddcc-28pr4" Jan 30 17:13:21 crc kubenswrapper[4712]: I0130 17:13:21.606057 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd9423d9-2a7c-4894-8c85-007ebf09a364-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-28pr4\" (UID: \"dd9423d9-2a7c-4894-8c85-007ebf09a364\") " pod="openstack/dnsmasq-dns-78dd6ddcc-28pr4" Jan 30 17:13:21 crc kubenswrapper[4712]: I0130 17:13:21.606118 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd9423d9-2a7c-4894-8c85-007ebf09a364-config\") pod \"dnsmasq-dns-78dd6ddcc-28pr4\" (UID: \"dd9423d9-2a7c-4894-8c85-007ebf09a364\") " pod="openstack/dnsmasq-dns-78dd6ddcc-28pr4" Jan 30 17:13:21 crc kubenswrapper[4712]: I0130 17:13:21.615741 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-gdf88" Jan 30 17:13:21 crc kubenswrapper[4712]: I0130 17:13:21.631937 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkdb5\" (UniqueName: \"kubernetes.io/projected/dd9423d9-2a7c-4894-8c85-007ebf09a364-kube-api-access-mkdb5\") pod \"dnsmasq-dns-78dd6ddcc-28pr4\" (UID: \"dd9423d9-2a7c-4894-8c85-007ebf09a364\") " pod="openstack/dnsmasq-dns-78dd6ddcc-28pr4" Jan 30 17:13:21 crc kubenswrapper[4712]: I0130 17:13:21.725722 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-28pr4" Jan 30 17:13:22 crc kubenswrapper[4712]: I0130 17:13:22.186465 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-gdf88"] Jan 30 17:13:22 crc kubenswrapper[4712]: I0130 17:13:22.282459 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-28pr4"] Jan 30 17:13:22 crc kubenswrapper[4712]: W0130 17:13:22.284400 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddd9423d9_2a7c_4894_8c85_007ebf09a364.slice/crio-03cb48fe18594b8ab254eef081d0c7f4b6ea32d35f5b1990a5f3b28a59f31d18 WatchSource:0}: Error finding container 03cb48fe18594b8ab254eef081d0c7f4b6ea32d35f5b1990a5f3b28a59f31d18: Status 404 returned error can't find the container with id 03cb48fe18594b8ab254eef081d0c7f4b6ea32d35f5b1990a5f3b28a59f31d18 Jan 30 17:13:22 crc kubenswrapper[4712]: I0130 17:13:22.578000 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-28pr4" event={"ID":"dd9423d9-2a7c-4894-8c85-007ebf09a364","Type":"ContainerStarted","Data":"03cb48fe18594b8ab254eef081d0c7f4b6ea32d35f5b1990a5f3b28a59f31d18"} Jan 30 17:13:22 crc kubenswrapper[4712]: I0130 17:13:22.579137 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-gdf88" event={"ID":"60a618eb-268d-4b06-bd4a-3365bffb6a69","Type":"ContainerStarted","Data":"d4094b6ee93b73f1ab1624961d47aed7476d5023b6ea450be6f1021074033561"} Jan 30 17:13:24 crc kubenswrapper[4712]: I0130 17:13:24.216765 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-gdf88"] Jan 30 17:13:24 crc kubenswrapper[4712]: I0130 17:13:24.255321 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-mcl9p"] Jan 30 17:13:24 crc kubenswrapper[4712]: I0130 17:13:24.262301 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-mcl9p" Jan 30 17:13:24 crc kubenswrapper[4712]: I0130 17:13:24.285550 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-mcl9p"] Jan 30 17:13:24 crc kubenswrapper[4712]: I0130 17:13:24.361740 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcqtk\" (UniqueName: \"kubernetes.io/projected/ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758-kube-api-access-gcqtk\") pod \"dnsmasq-dns-666b6646f7-mcl9p\" (UID: \"ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758\") " pod="openstack/dnsmasq-dns-666b6646f7-mcl9p" Jan 30 17:13:24 crc kubenswrapper[4712]: I0130 17:13:24.376420 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758-config\") pod \"dnsmasq-dns-666b6646f7-mcl9p\" (UID: \"ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758\") " pod="openstack/dnsmasq-dns-666b6646f7-mcl9p" Jan 30 17:13:24 crc kubenswrapper[4712]: I0130 17:13:24.376495 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758-dns-svc\") pod \"dnsmasq-dns-666b6646f7-mcl9p\" (UID: \"ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758\") " pod="openstack/dnsmasq-dns-666b6646f7-mcl9p" Jan 30 17:13:24 crc kubenswrapper[4712]: I0130 17:13:24.478249 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcqtk\" (UniqueName: \"kubernetes.io/projected/ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758-kube-api-access-gcqtk\") pod \"dnsmasq-dns-666b6646f7-mcl9p\" (UID: \"ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758\") " pod="openstack/dnsmasq-dns-666b6646f7-mcl9p" Jan 30 17:13:24 crc kubenswrapper[4712]: I0130 17:13:24.478308 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758-config\") pod \"dnsmasq-dns-666b6646f7-mcl9p\" (UID: \"ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758\") " pod="openstack/dnsmasq-dns-666b6646f7-mcl9p" Jan 30 17:13:24 crc kubenswrapper[4712]: I0130 17:13:24.478330 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758-dns-svc\") pod \"dnsmasq-dns-666b6646f7-mcl9p\" (UID: \"ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758\") " pod="openstack/dnsmasq-dns-666b6646f7-mcl9p" Jan 30 17:13:24 crc kubenswrapper[4712]: I0130 17:13:24.479439 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758-dns-svc\") pod \"dnsmasq-dns-666b6646f7-mcl9p\" (UID: \"ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758\") " pod="openstack/dnsmasq-dns-666b6646f7-mcl9p" Jan 30 17:13:24 crc kubenswrapper[4712]: I0130 17:13:24.479720 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758-config\") pod \"dnsmasq-dns-666b6646f7-mcl9p\" (UID: \"ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758\") " pod="openstack/dnsmasq-dns-666b6646f7-mcl9p" Jan 30 17:13:24 crc kubenswrapper[4712]: I0130 17:13:24.506039 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcqtk\" (UniqueName: 
\"kubernetes.io/projected/ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758-kube-api-access-gcqtk\") pod \"dnsmasq-dns-666b6646f7-mcl9p\" (UID: \"ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758\") " pod="openstack/dnsmasq-dns-666b6646f7-mcl9p" Jan 30 17:13:24 crc kubenswrapper[4712]: I0130 17:13:24.588512 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-mcl9p" Jan 30 17:13:24 crc kubenswrapper[4712]: I0130 17:13:24.653542 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-28pr4"] Jan 30 17:13:24 crc kubenswrapper[4712]: I0130 17:13:24.712165 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-hkhst"] Jan 30 17:13:24 crc kubenswrapper[4712]: I0130 17:13:24.713402 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-hkhst" Jan 30 17:13:24 crc kubenswrapper[4712]: I0130 17:13:24.723949 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-hkhst"] Jan 30 17:13:24 crc kubenswrapper[4712]: I0130 17:13:24.797484 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpknx\" (UniqueName: \"kubernetes.io/projected/0456e317-8ed6-456a-ba12-c46dc30f11a3-kube-api-access-bpknx\") pod \"dnsmasq-dns-57d769cc4f-hkhst\" (UID: \"0456e317-8ed6-456a-ba12-c46dc30f11a3\") " pod="openstack/dnsmasq-dns-57d769cc4f-hkhst" Jan 30 17:13:24 crc kubenswrapper[4712]: I0130 17:13:24.797870 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0456e317-8ed6-456a-ba12-c46dc30f11a3-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-hkhst\" (UID: \"0456e317-8ed6-456a-ba12-c46dc30f11a3\") " pod="openstack/dnsmasq-dns-57d769cc4f-hkhst" Jan 30 17:13:24 crc kubenswrapper[4712]: I0130 17:13:24.797946 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0456e317-8ed6-456a-ba12-c46dc30f11a3-config\") pod \"dnsmasq-dns-57d769cc4f-hkhst\" (UID: \"0456e317-8ed6-456a-ba12-c46dc30f11a3\") " pod="openstack/dnsmasq-dns-57d769cc4f-hkhst" Jan 30 17:13:24 crc kubenswrapper[4712]: I0130 17:13:24.899302 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0456e317-8ed6-456a-ba12-c46dc30f11a3-config\") pod \"dnsmasq-dns-57d769cc4f-hkhst\" (UID: \"0456e317-8ed6-456a-ba12-c46dc30f11a3\") " pod="openstack/dnsmasq-dns-57d769cc4f-hkhst" Jan 30 17:13:24 crc kubenswrapper[4712]: I0130 17:13:24.899390 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpknx\" (UniqueName: \"kubernetes.io/projected/0456e317-8ed6-456a-ba12-c46dc30f11a3-kube-api-access-bpknx\") pod \"dnsmasq-dns-57d769cc4f-hkhst\" (UID: \"0456e317-8ed6-456a-ba12-c46dc30f11a3\") " pod="openstack/dnsmasq-dns-57d769cc4f-hkhst" Jan 30 17:13:24 crc kubenswrapper[4712]: I0130 17:13:24.899447 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0456e317-8ed6-456a-ba12-c46dc30f11a3-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-hkhst\" (UID: \"0456e317-8ed6-456a-ba12-c46dc30f11a3\") " pod="openstack/dnsmasq-dns-57d769cc4f-hkhst" Jan 30 17:13:24 crc kubenswrapper[4712]: I0130 17:13:24.901022 4712 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0456e317-8ed6-456a-ba12-c46dc30f11a3-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-hkhst\" (UID: \"0456e317-8ed6-456a-ba12-c46dc30f11a3\") " pod="openstack/dnsmasq-dns-57d769cc4f-hkhst" Jan 30 17:13:24 crc kubenswrapper[4712]: I0130 17:13:24.901785 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0456e317-8ed6-456a-ba12-c46dc30f11a3-config\") pod \"dnsmasq-dns-57d769cc4f-hkhst\" (UID: \"0456e317-8ed6-456a-ba12-c46dc30f11a3\") " pod="openstack/dnsmasq-dns-57d769cc4f-hkhst" Jan 30 17:13:24 crc kubenswrapper[4712]: I0130 17:13:24.947766 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpknx\" (UniqueName: \"kubernetes.io/projected/0456e317-8ed6-456a-ba12-c46dc30f11a3-kube-api-access-bpknx\") pod \"dnsmasq-dns-57d769cc4f-hkhst\" (UID: \"0456e317-8ed6-456a-ba12-c46dc30f11a3\") " pod="openstack/dnsmasq-dns-57d769cc4f-hkhst" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.044633 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-hkhst" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.255390 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-mcl9p"] Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.468689 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.470990 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.474283 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.474410 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.474340 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.474622 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.482482 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.482553 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-hdm8z" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.489367 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.491919 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.507560 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/01b5b85b-caea-4f70-a61f-875ed30f9e64-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " pod="openstack/rabbitmq-server-0" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.507628 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/01b5b85b-caea-4f70-a61f-875ed30f9e64-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " pod="openstack/rabbitmq-server-0" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.507654 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/01b5b85b-caea-4f70-a61f-875ed30f9e64-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " pod="openstack/rabbitmq-server-0" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.507672 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " pod="openstack/rabbitmq-server-0" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.507688 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/01b5b85b-caea-4f70-a61f-875ed30f9e64-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " pod="openstack/rabbitmq-server-0" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.507706 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/01b5b85b-caea-4f70-a61f-875ed30f9e64-server-conf\") pod \"rabbitmq-server-0\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " pod="openstack/rabbitmq-server-0" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.507743 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/01b5b85b-caea-4f70-a61f-875ed30f9e64-pod-info\") pod \"rabbitmq-server-0\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " pod="openstack/rabbitmq-server-0" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.507760 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwkbg\" (UniqueName: \"kubernetes.io/projected/01b5b85b-caea-4f70-a61f-875ed30f9e64-kube-api-access-kwkbg\") pod \"rabbitmq-server-0\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " pod="openstack/rabbitmq-server-0" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.507784 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/01b5b85b-caea-4f70-a61f-875ed30f9e64-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " pod="openstack/rabbitmq-server-0" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.508008 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/01b5b85b-caea-4f70-a61f-875ed30f9e64-config-data\") pod \"rabbitmq-server-0\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " pod="openstack/rabbitmq-server-0" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.508053 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/01b5b85b-caea-4f70-a61f-875ed30f9e64-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " pod="openstack/rabbitmq-server-0" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.608780 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/01b5b85b-caea-4f70-a61f-875ed30f9e64-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " pod="openstack/rabbitmq-server-0" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.608852 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/01b5b85b-caea-4f70-a61f-875ed30f9e64-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " pod="openstack/rabbitmq-server-0" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.608872 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " pod="openstack/rabbitmq-server-0" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.608890 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/01b5b85b-caea-4f70-a61f-875ed30f9e64-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " pod="openstack/rabbitmq-server-0" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.608912 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/01b5b85b-caea-4f70-a61f-875ed30f9e64-server-conf\") pod \"rabbitmq-server-0\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " pod="openstack/rabbitmq-server-0" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.608946 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/01b5b85b-caea-4f70-a61f-875ed30f9e64-pod-info\") pod \"rabbitmq-server-0\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " pod="openstack/rabbitmq-server-0" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.608967 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwkbg\" (UniqueName: \"kubernetes.io/projected/01b5b85b-caea-4f70-a61f-875ed30f9e64-kube-api-access-kwkbg\") pod \"rabbitmq-server-0\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " pod="openstack/rabbitmq-server-0" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.608993 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/01b5b85b-caea-4f70-a61f-875ed30f9e64-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " pod="openstack/rabbitmq-server-0" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.609009 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/01b5b85b-caea-4f70-a61f-875ed30f9e64-config-data\") pod \"rabbitmq-server-0\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " pod="openstack/rabbitmq-server-0" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.609057 4712 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/01b5b85b-caea-4f70-a61f-875ed30f9e64-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " pod="openstack/rabbitmq-server-0" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.609105 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/01b5b85b-caea-4f70-a61f-875ed30f9e64-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " pod="openstack/rabbitmq-server-0" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.609249 4712 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-server-0" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.609941 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/01b5b85b-caea-4f70-a61f-875ed30f9e64-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " pod="openstack/rabbitmq-server-0" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.610242 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/01b5b85b-caea-4f70-a61f-875ed30f9e64-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " pod="openstack/rabbitmq-server-0" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.610517 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/01b5b85b-caea-4f70-a61f-875ed30f9e64-server-conf\") pod \"rabbitmq-server-0\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " pod="openstack/rabbitmq-server-0" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.610746 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/01b5b85b-caea-4f70-a61f-875ed30f9e64-config-data\") pod \"rabbitmq-server-0\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " pod="openstack/rabbitmq-server-0" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.620014 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/01b5b85b-caea-4f70-a61f-875ed30f9e64-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " pod="openstack/rabbitmq-server-0" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.622650 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/01b5b85b-caea-4f70-a61f-875ed30f9e64-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " pod="openstack/rabbitmq-server-0" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.626937 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-hkhst"] Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.636379 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: 
\"kubernetes.io/downward-api/01b5b85b-caea-4f70-a61f-875ed30f9e64-pod-info\") pod \"rabbitmq-server-0\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " pod="openstack/rabbitmq-server-0" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.637652 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/01b5b85b-caea-4f70-a61f-875ed30f9e64-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " pod="openstack/rabbitmq-server-0" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.649715 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwkbg\" (UniqueName: \"kubernetes.io/projected/01b5b85b-caea-4f70-a61f-875ed30f9e64-kube-api-access-kwkbg\") pod \"rabbitmq-server-0\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " pod="openstack/rabbitmq-server-0" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.649758 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/01b5b85b-caea-4f70-a61f-875ed30f9e64-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " pod="openstack/rabbitmq-server-0" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.655698 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " pod="openstack/rabbitmq-server-0" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.661983 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-mcl9p" event={"ID":"ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758","Type":"ContainerStarted","Data":"3504f1664d2ebd1176e2e7a8e8defb8193be2fc5798f86fc3f9bb83f4f89eaf5"} Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.818727 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.842564 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.847070 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.850401 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.850581 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.850823 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.851644 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.851986 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.852136 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.852185 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 30 17:13:25 crc kubenswrapper[4712]: I0130 17:13:25.852339 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-rj892" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.015689 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d5b67399-3a53-4694-8f1c-c04592426dcd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.015748 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d5b67399-3a53-4694-8f1c-c04592426dcd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.015774 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d5b67399-3a53-4694-8f1c-c04592426dcd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.015816 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d5b67399-3a53-4694-8f1c-c04592426dcd-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.015832 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d5b67399-3a53-4694-8f1c-c04592426dcd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.015852 4712 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d5b67399-3a53-4694-8f1c-c04592426dcd-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.015872 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.015902 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d5b67399-3a53-4694-8f1c-c04592426dcd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.015962 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d5b67399-3a53-4694-8f1c-c04592426dcd-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.015990 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d5b67399-3a53-4694-8f1c-c04592426dcd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.016016 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vl69k\" (UniqueName: \"kubernetes.io/projected/d5b67399-3a53-4694-8f1c-c04592426dcd-kube-api-access-vl69k\") pod \"rabbitmq-cell1-server-0\" (UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.117636 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d5b67399-3a53-4694-8f1c-c04592426dcd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.118028 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d5b67399-3a53-4694-8f1c-c04592426dcd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.118083 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d5b67399-3a53-4694-8f1c-c04592426dcd-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.118110 4712 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d5b67399-3a53-4694-8f1c-c04592426dcd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.118142 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d5b67399-3a53-4694-8f1c-c04592426dcd-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.118172 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.119137 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d5b67399-3a53-4694-8f1c-c04592426dcd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.119191 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d5b67399-3a53-4694-8f1c-c04592426dcd-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.119244 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d5b67399-3a53-4694-8f1c-c04592426dcd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.119278 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vl69k\" (UniqueName: \"kubernetes.io/projected/d5b67399-3a53-4694-8f1c-c04592426dcd-kube-api-access-vl69k\") pod \"rabbitmq-cell1-server-0\" (UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.119330 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d5b67399-3a53-4694-8f1c-c04592426dcd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.120203 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d5b67399-3a53-4694-8f1c-c04592426dcd-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.120275 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d5b67399-3a53-4694-8f1c-c04592426dcd-config-data\") pod \"rabbitmq-cell1-server-0\" 
(UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.121406 4712 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.121607 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d5b67399-3a53-4694-8f1c-c04592426dcd-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.124283 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d5b67399-3a53-4694-8f1c-c04592426dcd-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.124546 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d5b67399-3a53-4694-8f1c-c04592426dcd-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.126905 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d5b67399-3a53-4694-8f1c-c04592426dcd-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.130768 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d5b67399-3a53-4694-8f1c-c04592426dcd-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.130859 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d5b67399-3a53-4694-8f1c-c04592426dcd-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.138060 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d5b67399-3a53-4694-8f1c-c04592426dcd-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.150014 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.174684 4712 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-vl69k\" (UniqueName: \"kubernetes.io/projected/d5b67399-3a53-4694-8f1c-c04592426dcd-kube-api-access-vl69k\") pod \"rabbitmq-cell1-server-0\" (UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.219109 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.468541 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 17:13:26 crc kubenswrapper[4712]: W0130 17:13:26.476389 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01b5b85b_caea_4f70_a61f_875ed30f9e64.slice/crio-21010972aa5303b9a366e69c6e6e1728053fded5bbf267f87481311791f0248d WatchSource:0}: Error finding container 21010972aa5303b9a366e69c6e6e1728053fded5bbf267f87481311791f0248d: Status 404 returned error can't find the container with id 21010972aa5303b9a366e69c6e6e1728053fded5bbf267f87481311791f0248d Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.703187 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-hkhst" event={"ID":"0456e317-8ed6-456a-ba12-c46dc30f11a3","Type":"ContainerStarted","Data":"f0f51bfa901a43270a0d6dc031dfd3150f47eea343d29cf45c019e9090e60a7c"} Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.705486 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"01b5b85b-caea-4f70-a61f-875ed30f9e64","Type":"ContainerStarted","Data":"21010972aa5303b9a366e69c6e6e1728053fded5bbf267f87481311791f0248d"} Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.713303 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.730421 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.738488 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.742353 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.748176 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.748462 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-mw7fl" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.751449 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.765935 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.809740 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.842276 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a12f0a95-1db0-4dd9-993c-1413c0fa10b0-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a12f0a95-1db0-4dd9-993c-1413c0fa10b0\") " pod="openstack/openstack-galera-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.842351 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a12f0a95-1db0-4dd9-993c-1413c0fa10b0-kolla-config\") pod \"openstack-galera-0\" (UID: \"a12f0a95-1db0-4dd9-993c-1413c0fa10b0\") " pod="openstack/openstack-galera-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.842413 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a12f0a95-1db0-4dd9-993c-1413c0fa10b0-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a12f0a95-1db0-4dd9-993c-1413c0fa10b0\") " pod="openstack/openstack-galera-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.842440 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a12f0a95-1db0-4dd9-993c-1413c0fa10b0-config-data-default\") pod \"openstack-galera-0\" (UID: \"a12f0a95-1db0-4dd9-993c-1413c0fa10b0\") " pod="openstack/openstack-galera-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.842463 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a12f0a95-1db0-4dd9-993c-1413c0fa10b0-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a12f0a95-1db0-4dd9-993c-1413c0fa10b0\") " pod="openstack/openstack-galera-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.842497 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"a12f0a95-1db0-4dd9-993c-1413c0fa10b0\") " pod="openstack/openstack-galera-0" Jan 30 
17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.842532 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5fnb\" (UniqueName: \"kubernetes.io/projected/a12f0a95-1db0-4dd9-993c-1413c0fa10b0-kube-api-access-l5fnb\") pod \"openstack-galera-0\" (UID: \"a12f0a95-1db0-4dd9-993c-1413c0fa10b0\") " pod="openstack/openstack-galera-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.842588 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a12f0a95-1db0-4dd9-993c-1413c0fa10b0-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a12f0a95-1db0-4dd9-993c-1413c0fa10b0\") " pod="openstack/openstack-galera-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.944132 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a12f0a95-1db0-4dd9-993c-1413c0fa10b0-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a12f0a95-1db0-4dd9-993c-1413c0fa10b0\") " pod="openstack/openstack-galera-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.944235 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a12f0a95-1db0-4dd9-993c-1413c0fa10b0-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a12f0a95-1db0-4dd9-993c-1413c0fa10b0\") " pod="openstack/openstack-galera-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.944260 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a12f0a95-1db0-4dd9-993c-1413c0fa10b0-kolla-config\") pod \"openstack-galera-0\" (UID: \"a12f0a95-1db0-4dd9-993c-1413c0fa10b0\") " pod="openstack/openstack-galera-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.944286 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a12f0a95-1db0-4dd9-993c-1413c0fa10b0-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a12f0a95-1db0-4dd9-993c-1413c0fa10b0\") " pod="openstack/openstack-galera-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.944313 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a12f0a95-1db0-4dd9-993c-1413c0fa10b0-config-data-default\") pod \"openstack-galera-0\" (UID: \"a12f0a95-1db0-4dd9-993c-1413c0fa10b0\") " pod="openstack/openstack-galera-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.944330 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a12f0a95-1db0-4dd9-993c-1413c0fa10b0-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a12f0a95-1db0-4dd9-993c-1413c0fa10b0\") " pod="openstack/openstack-galera-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.944358 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"a12f0a95-1db0-4dd9-993c-1413c0fa10b0\") " pod="openstack/openstack-galera-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.944377 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5fnb\" 
(UniqueName: \"kubernetes.io/projected/a12f0a95-1db0-4dd9-993c-1413c0fa10b0-kube-api-access-l5fnb\") pod \"openstack-galera-0\" (UID: \"a12f0a95-1db0-4dd9-993c-1413c0fa10b0\") " pod="openstack/openstack-galera-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.947126 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/a12f0a95-1db0-4dd9-993c-1413c0fa10b0-config-data-generated\") pod \"openstack-galera-0\" (UID: \"a12f0a95-1db0-4dd9-993c-1413c0fa10b0\") " pod="openstack/openstack-galera-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.948233 4712 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"a12f0a95-1db0-4dd9-993c-1413c0fa10b0\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/openstack-galera-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.951065 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/a12f0a95-1db0-4dd9-993c-1413c0fa10b0-config-data-default\") pod \"openstack-galera-0\" (UID: \"a12f0a95-1db0-4dd9-993c-1413c0fa10b0\") " pod="openstack/openstack-galera-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.952357 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/a12f0a95-1db0-4dd9-993c-1413c0fa10b0-kolla-config\") pod \"openstack-galera-0\" (UID: \"a12f0a95-1db0-4dd9-993c-1413c0fa10b0\") " pod="openstack/openstack-galera-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.963759 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a12f0a95-1db0-4dd9-993c-1413c0fa10b0-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"a12f0a95-1db0-4dd9-993c-1413c0fa10b0\") " pod="openstack/openstack-galera-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.975607 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a12f0a95-1db0-4dd9-993c-1413c0fa10b0-operator-scripts\") pod \"openstack-galera-0\" (UID: \"a12f0a95-1db0-4dd9-993c-1413c0fa10b0\") " pod="openstack/openstack-galera-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.986468 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/a12f0a95-1db0-4dd9-993c-1413c0fa10b0-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"a12f0a95-1db0-4dd9-993c-1413c0fa10b0\") " pod="openstack/openstack-galera-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.987970 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5fnb\" (UniqueName: \"kubernetes.io/projected/a12f0a95-1db0-4dd9-993c-1413c0fa10b0-kube-api-access-l5fnb\") pod \"openstack-galera-0\" (UID: \"a12f0a95-1db0-4dd9-993c-1413c0fa10b0\") " pod="openstack/openstack-galera-0" Jan 30 17:13:26 crc kubenswrapper[4712]: I0130 17:13:26.998199 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"openstack-galera-0\" (UID: \"a12f0a95-1db0-4dd9-993c-1413c0fa10b0\") " pod="openstack/openstack-galera-0" Jan 30 17:13:27 crc kubenswrapper[4712]: I0130 
17:13:27.119204 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 30 17:13:27 crc kubenswrapper[4712]: I0130 17:13:27.723615 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d5b67399-3a53-4694-8f1c-c04592426dcd","Type":"ContainerStarted","Data":"dc3b4d3cd874796ccf961be5cb1179023d612a40c798e1eea1488a66b4d39742"} Jan 30 17:13:27 crc kubenswrapper[4712]: I0130 17:13:27.834044 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 30 17:13:27 crc kubenswrapper[4712]: W0130 17:13:27.913398 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda12f0a95_1db0_4dd9_993c_1413c0fa10b0.slice/crio-1a32297947bc2232fcf0dac9b514649d8a1d3d5f6f7226243a8174251bd0f5fc WatchSource:0}: Error finding container 1a32297947bc2232fcf0dac9b514649d8a1d3d5f6f7226243a8174251bd0f5fc: Status 404 returned error can't find the container with id 1a32297947bc2232fcf0dac9b514649d8a1d3d5f6f7226243a8174251bd0f5fc Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.245186 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.246566 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.249833 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.249891 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-dhtgj" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.249839 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.253457 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.271688 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.383898 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0e4667e-8702-43ae-b7b7-1aa930f9a3c3-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e0e4667e-8702-43ae-b7b7-1aa930f9a3c3\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.387789 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e0e4667e-8702-43ae-b7b7-1aa930f9a3c3-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e0e4667e-8702-43ae-b7b7-1aa930f9a3c3\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.387835 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e0e4667e-8702-43ae-b7b7-1aa930f9a3c3-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e0e4667e-8702-43ae-b7b7-1aa930f9a3c3\") " pod="openstack/openstack-cell1-galera-0" Jan 30 
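
The galera pods being attached here combine the four volume kinds the log names: ConfigMaps (kolla-config, operator-scripts, config-data-default), a Secret (combined-ca-bundle), an EmptyDir (config-data-generated), and a PVC bound to a local PV. A sketch of the corresponding pod-spec volumes using the k8s.io/api types; the referenced object names are simplified to match the volume names, and the PVC claim name is a hypothetical example.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func galeraVolumes() []corev1.Volume {
	cm := func(name string) corev1.Volume {
		return corev1.Volume{Name: name, VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: name},
			},
		}}
	}
	return []corev1.Volume{
		cm("kolla-config"),
		cm("operator-scripts"),
		cm("config-data-default"),
		{Name: "combined-ca-bundle", VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "combined-ca-bundle"},
		}},
		{Name: "config-data-generated", VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{},
		}},
		{Name: "mysql-db", VolumeSource: corev1.VolumeSource{
			// Hypothetical claim name; binds to a local-storage PV as above.
			PersistentVolumeClaim: &corev1.PersistentVolumeClaimVolumeSource{
				ClaimName: "mysql-db-openstack-cell1-galera-0",
			},
		}},
	}
}

func main() {
	for _, v := range galeraVolumes() {
		fmt.Println(v.Name)
	}
}
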
17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.387879 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmnj5\" (UniqueName: \"kubernetes.io/projected/e0e4667e-8702-43ae-b7b7-1aa930f9a3c3-kube-api-access-rmnj5\") pod \"openstack-cell1-galera-0\" (UID: \"e0e4667e-8702-43ae-b7b7-1aa930f9a3c3\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.387906 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e0e4667e-8702-43ae-b7b7-1aa930f9a3c3-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e0e4667e-8702-43ae-b7b7-1aa930f9a3c3\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.387926 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0e4667e-8702-43ae-b7b7-1aa930f9a3c3-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e0e4667e-8702-43ae-b7b7-1aa930f9a3c3\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.388092 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e0e4667e-8702-43ae-b7b7-1aa930f9a3c3\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.388150 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0e4667e-8702-43ae-b7b7-1aa930f9a3c3-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e0e4667e-8702-43ae-b7b7-1aa930f9a3c3\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.489039 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e0e4667e-8702-43ae-b7b7-1aa930f9a3c3\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.489095 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0e4667e-8702-43ae-b7b7-1aa930f9a3c3-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e0e4667e-8702-43ae-b7b7-1aa930f9a3c3\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.489149 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0e4667e-8702-43ae-b7b7-1aa930f9a3c3-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e0e4667e-8702-43ae-b7b7-1aa930f9a3c3\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.489169 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e0e4667e-8702-43ae-b7b7-1aa930f9a3c3-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e0e4667e-8702-43ae-b7b7-1aa930f9a3c3\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 
17:13:28.489190 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e0e4667e-8702-43ae-b7b7-1aa930f9a3c3-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e0e4667e-8702-43ae-b7b7-1aa930f9a3c3\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.489218 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmnj5\" (UniqueName: \"kubernetes.io/projected/e0e4667e-8702-43ae-b7b7-1aa930f9a3c3-kube-api-access-rmnj5\") pod \"openstack-cell1-galera-0\" (UID: \"e0e4667e-8702-43ae-b7b7-1aa930f9a3c3\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.489239 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e0e4667e-8702-43ae-b7b7-1aa930f9a3c3-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e0e4667e-8702-43ae-b7b7-1aa930f9a3c3\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.489260 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0e4667e-8702-43ae-b7b7-1aa930f9a3c3-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e0e4667e-8702-43ae-b7b7-1aa930f9a3c3\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.492582 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/e0e4667e-8702-43ae-b7b7-1aa930f9a3c3-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"e0e4667e-8702-43ae-b7b7-1aa930f9a3c3\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.492850 4712 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e0e4667e-8702-43ae-b7b7-1aa930f9a3c3\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/openstack-cell1-galera-0" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.492973 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/e0e4667e-8702-43ae-b7b7-1aa930f9a3c3-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"e0e4667e-8702-43ae-b7b7-1aa930f9a3c3\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.494682 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e0e4667e-8702-43ae-b7b7-1aa930f9a3c3-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"e0e4667e-8702-43ae-b7b7-1aa930f9a3c3\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.495124 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/e0e4667e-8702-43ae-b7b7-1aa930f9a3c3-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"e0e4667e-8702-43ae-b7b7-1aa930f9a3c3\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.501442 4712 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/e0e4667e-8702-43ae-b7b7-1aa930f9a3c3-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"e0e4667e-8702-43ae-b7b7-1aa930f9a3c3\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.516629 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e0e4667e-8702-43ae-b7b7-1aa930f9a3c3-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"e0e4667e-8702-43ae-b7b7-1aa930f9a3c3\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.559620 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.561227 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.571019 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmnj5\" (UniqueName: \"kubernetes.io/projected/e0e4667e-8702-43ae-b7b7-1aa930f9a3c3-kube-api-access-rmnj5\") pod \"openstack-cell1-galera-0\" (UID: \"e0e4667e-8702-43ae-b7b7-1aa930f9a3c3\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.571416 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.571571 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-mktmt" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.574730 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.580460 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.692989 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"openstack-cell1-galera-0\" (UID: \"e0e4667e-8702-43ae-b7b7-1aa930f9a3c3\") " pod="openstack/openstack-cell1-galera-0" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.696708 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9fecd346-f2cb-45fa-be64-6be579acaf56-config-data\") pod \"memcached-0\" (UID: \"9fecd346-f2cb-45fa-be64-6be579acaf56\") " pod="openstack/memcached-0" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.698220 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fecd346-f2cb-45fa-be64-6be579acaf56-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9fecd346-f2cb-45fa-be64-6be579acaf56\") " pod="openstack/memcached-0" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.698457 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9fecd346-f2cb-45fa-be64-6be579acaf56-kolla-config\") pod \"memcached-0\" (UID: \"9fecd346-f2cb-45fa-be64-6be579acaf56\") " pod="openstack/memcached-0" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.698997 4712 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smv47\" (UniqueName: \"kubernetes.io/projected/9fecd346-f2cb-45fa-be64-6be579acaf56-kube-api-access-smv47\") pod \"memcached-0\" (UID: \"9fecd346-f2cb-45fa-be64-6be579acaf56\") " pod="openstack/memcached-0" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.699318 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fecd346-f2cb-45fa-be64-6be579acaf56-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9fecd346-f2cb-45fa-be64-6be579acaf56\") " pod="openstack/memcached-0" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.774314 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a12f0a95-1db0-4dd9-993c-1413c0fa10b0","Type":"ContainerStarted","Data":"1a32297947bc2232fcf0dac9b514649d8a1d3d5f6f7226243a8174251bd0f5fc"} Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.805896 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fecd346-f2cb-45fa-be64-6be579acaf56-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9fecd346-f2cb-45fa-be64-6be579acaf56\") " pod="openstack/memcached-0" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.805979 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9fecd346-f2cb-45fa-be64-6be579acaf56-kolla-config\") pod \"memcached-0\" (UID: \"9fecd346-f2cb-45fa-be64-6be579acaf56\") " pod="openstack/memcached-0" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.806092 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smv47\" (UniqueName: \"kubernetes.io/projected/9fecd346-f2cb-45fa-be64-6be579acaf56-kube-api-access-smv47\") pod \"memcached-0\" (UID: \"9fecd346-f2cb-45fa-be64-6be579acaf56\") " pod="openstack/memcached-0" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.806138 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fecd346-f2cb-45fa-be64-6be579acaf56-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9fecd346-f2cb-45fa-be64-6be579acaf56\") " pod="openstack/memcached-0" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.806297 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9fecd346-f2cb-45fa-be64-6be579acaf56-config-data\") pod \"memcached-0\" (UID: \"9fecd346-f2cb-45fa-be64-6be579acaf56\") " pod="openstack/memcached-0" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.808739 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9fecd346-f2cb-45fa-be64-6be579acaf56-kolla-config\") pod \"memcached-0\" (UID: \"9fecd346-f2cb-45fa-be64-6be579acaf56\") " pod="openstack/memcached-0" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.813562 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fecd346-f2cb-45fa-be64-6be579acaf56-memcached-tls-certs\") pod \"memcached-0\" (UID: \"9fecd346-f2cb-45fa-be64-6be579acaf56\") " pod="openstack/memcached-0" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.814086 
4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9fecd346-f2cb-45fa-be64-6be579acaf56-config-data\") pod \"memcached-0\" (UID: \"9fecd346-f2cb-45fa-be64-6be579acaf56\") " pod="openstack/memcached-0" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.817950 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fecd346-f2cb-45fa-be64-6be579acaf56-combined-ca-bundle\") pod \"memcached-0\" (UID: \"9fecd346-f2cb-45fa-be64-6be579acaf56\") " pod="openstack/memcached-0" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.849659 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smv47\" (UniqueName: \"kubernetes.io/projected/9fecd346-f2cb-45fa-be64-6be579acaf56-kube-api-access-smv47\") pod \"memcached-0\" (UID: \"9fecd346-f2cb-45fa-be64-6be579acaf56\") " pod="openstack/memcached-0" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.919162 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 30 17:13:28 crc kubenswrapper[4712]: I0130 17:13:28.938080 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 30 17:13:30 crc kubenswrapper[4712]: I0130 17:13:30.168819 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 30 17:13:30 crc kubenswrapper[4712]: W0130 17:13:30.221304 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9fecd346_f2cb_45fa_be64_6be579acaf56.slice/crio-452fdb9e3a908787eb6c0dc4aa57083efb127cc990fd00939d60c05a071ff405 WatchSource:0}: Error finding container 452fdb9e3a908787eb6c0dc4aa57083efb127cc990fd00939d60c05a071ff405: Status 404 returned error can't find the container with id 452fdb9e3a908787eb6c0dc4aa57083efb127cc990fd00939d60c05a071ff405 Jan 30 17:13:30 crc kubenswrapper[4712]: I0130 17:13:30.236954 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 30 17:13:30 crc kubenswrapper[4712]: W0130 17:13:30.298134 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode0e4667e_8702_43ae_b7b7_1aa930f9a3c3.slice/crio-81dcddee6c709025c2f69591c0ec555079842ff6fe535ab71e33fd47a43f230e WatchSource:0}: Error finding container 81dcddee6c709025c2f69591c0ec555079842ff6fe535ab71e33fd47a43f230e: Status 404 returned error can't find the container with id 81dcddee6c709025c2f69591c0ec555079842ff6fe535ab71e33fd47a43f230e Jan 30 17:13:30 crc kubenswrapper[4712]: I0130 17:13:30.841210 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e0e4667e-8702-43ae-b7b7-1aa930f9a3c3","Type":"ContainerStarted","Data":"81dcddee6c709025c2f69591c0ec555079842ff6fe535ab71e33fd47a43f230e"} Jan 30 17:13:30 crc kubenswrapper[4712]: I0130 17:13:30.860183 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"9fecd346-f2cb-45fa-be64-6be579acaf56","Type":"ContainerStarted","Data":"452fdb9e3a908787eb6c0dc4aa57083efb127cc990fd00939d60c05a071ff405"} Jan 30 17:13:30 crc kubenswrapper[4712]: I0130 17:13:30.864971 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 17:13:30 crc kubenswrapper[4712]: I0130 17:13:30.866878 
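
The "SyncLoop (PLEG): event for pod ... ContainerStarted" lines come from the pod lifecycle event generator: it periodically relists containers from the runtime, diffs the result against the previous snapshot, and feeds the changes into the sync loop. A toy relist diff illustrating that pattern; the types are illustrative, not kubelet's.

package main

import "fmt"

type event struct{ Pod, Type, Data string }

// relist emits one event per container that appeared or disappeared
// between two snapshots of a pod's container set.
func relist(pod string, old, cur map[string]bool, out chan<- event) {
	for id := range cur {
		if !old[id] {
			out <- event{pod, "ContainerStarted", id}
		}
	}
	for id := range old {
		if !cur[id] {
			out <- event{pod, "ContainerDied", id}
		}
	}
}

func main() {
	out := make(chan event, 4)
	relist("openstack/memcached-0",
		map[string]bool{},
		map[string]bool{"452fdb9e3a90": true}, // truncated sandbox ID from the log
		out)
	close(out)
	for e := range out {
		fmt.Printf("SyncLoop (PLEG): event for pod %s %s %s\n", e.Pod, e.Type, e.Data)
	}
}
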
4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 17:13:30 crc kubenswrapper[4712]: I0130 17:13:30.871072 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-pfpmc" Jan 30 17:13:30 crc kubenswrapper[4712]: I0130 17:13:30.888171 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 17:13:30 crc kubenswrapper[4712]: I0130 17:13:30.972164 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxzzd\" (UniqueName: \"kubernetes.io/projected/e88ea344-4eb8-4174-9ce7-855aa6afed59-kube-api-access-gxzzd\") pod \"kube-state-metrics-0\" (UID: \"e88ea344-4eb8-4174-9ce7-855aa6afed59\") " pod="openstack/kube-state-metrics-0" Jan 30 17:13:31 crc kubenswrapper[4712]: I0130 17:13:31.074100 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxzzd\" (UniqueName: \"kubernetes.io/projected/e88ea344-4eb8-4174-9ce7-855aa6afed59-kube-api-access-gxzzd\") pod \"kube-state-metrics-0\" (UID: \"e88ea344-4eb8-4174-9ce7-855aa6afed59\") " pod="openstack/kube-state-metrics-0" Jan 30 17:13:31 crc kubenswrapper[4712]: I0130 17:13:31.097340 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxzzd\" (UniqueName: \"kubernetes.io/projected/e88ea344-4eb8-4174-9ce7-855aa6afed59-kube-api-access-gxzzd\") pod \"kube-state-metrics-0\" (UID: \"e88ea344-4eb8-4174-9ce7-855aa6afed59\") " pod="openstack/kube-state-metrics-0" Jan 30 17:13:31 crc kubenswrapper[4712]: I0130 17:13:31.264595 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 17:13:32 crc kubenswrapper[4712]: I0130 17:13:32.029551 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 17:13:32 crc kubenswrapper[4712]: I0130 17:13:32.890325 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e88ea344-4eb8-4174-9ce7-855aa6afed59","Type":"ContainerStarted","Data":"7a83ab6ca44a66e42f2ead6aa92abb7b4e0ccabf0c870763bda352e356a12d03"} Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.315663 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-sr5tj"] Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.320777 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sr5tj" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.331889 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-qfgk4"] Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.333711 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-qfgk4" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.345081 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-zrfxf" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.345468 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.352404 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.367121 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sr5tj"] Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.389484 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-qfgk4"] Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.445394 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/36067e45-f8de-4952-9372-564e0e9d850e-var-log\") pod \"ovn-controller-ovs-qfgk4\" (UID: \"36067e45-f8de-4952-9372-564e0e9d850e\") " pod="openstack/ovn-controller-ovs-qfgk4" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.445434 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ce49eaf1-5cf3-4399-b2c9-c253df2440bd-var-run\") pod \"ovn-controller-sr5tj\" (UID: \"ce49eaf1-5cf3-4399-b2c9-c253df2440bd\") " pod="openstack/ovn-controller-sr5tj" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.445471 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmk5c\" (UniqueName: \"kubernetes.io/projected/ce49eaf1-5cf3-4399-b2c9-c253df2440bd-kube-api-access-fmk5c\") pod \"ovn-controller-sr5tj\" (UID: \"ce49eaf1-5cf3-4399-b2c9-c253df2440bd\") " pod="openstack/ovn-controller-sr5tj" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.445518 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce49eaf1-5cf3-4399-b2c9-c253df2440bd-ovn-controller-tls-certs\") pod \"ovn-controller-sr5tj\" (UID: \"ce49eaf1-5cf3-4399-b2c9-c253df2440bd\") " pod="openstack/ovn-controller-sr5tj" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.445547 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36067e45-f8de-4952-9372-564e0e9d850e-scripts\") pod \"ovn-controller-ovs-qfgk4\" (UID: \"36067e45-f8de-4952-9372-564e0e9d850e\") " pod="openstack/ovn-controller-ovs-qfgk4" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.445618 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/36067e45-f8de-4952-9372-564e0e9d850e-var-lib\") pod \"ovn-controller-ovs-qfgk4\" (UID: \"36067e45-f8de-4952-9372-564e0e9d850e\") " pod="openstack/ovn-controller-ovs-qfgk4" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.445640 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce49eaf1-5cf3-4399-b2c9-c253df2440bd-combined-ca-bundle\") pod 
\"ovn-controller-sr5tj\" (UID: \"ce49eaf1-5cf3-4399-b2c9-c253df2440bd\") " pod="openstack/ovn-controller-sr5tj" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.445662 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ce49eaf1-5cf3-4399-b2c9-c253df2440bd-var-run-ovn\") pod \"ovn-controller-sr5tj\" (UID: \"ce49eaf1-5cf3-4399-b2c9-c253df2440bd\") " pod="openstack/ovn-controller-sr5tj" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.445689 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ce49eaf1-5cf3-4399-b2c9-c253df2440bd-scripts\") pod \"ovn-controller-sr5tj\" (UID: \"ce49eaf1-5cf3-4399-b2c9-c253df2440bd\") " pod="openstack/ovn-controller-sr5tj" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.445719 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz8mq\" (UniqueName: \"kubernetes.io/projected/36067e45-f8de-4952-9372-564e0e9d850e-kube-api-access-xz8mq\") pod \"ovn-controller-ovs-qfgk4\" (UID: \"36067e45-f8de-4952-9372-564e0e9d850e\") " pod="openstack/ovn-controller-ovs-qfgk4" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.446308 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/36067e45-f8de-4952-9372-564e0e9d850e-etc-ovs\") pod \"ovn-controller-ovs-qfgk4\" (UID: \"36067e45-f8de-4952-9372-564e0e9d850e\") " pod="openstack/ovn-controller-ovs-qfgk4" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.446335 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ce49eaf1-5cf3-4399-b2c9-c253df2440bd-var-log-ovn\") pod \"ovn-controller-sr5tj\" (UID: \"ce49eaf1-5cf3-4399-b2c9-c253df2440bd\") " pod="openstack/ovn-controller-sr5tj" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.446396 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/36067e45-f8de-4952-9372-564e0e9d850e-var-run\") pod \"ovn-controller-ovs-qfgk4\" (UID: \"36067e45-f8de-4952-9372-564e0e9d850e\") " pod="openstack/ovn-controller-ovs-qfgk4" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.548305 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz8mq\" (UniqueName: \"kubernetes.io/projected/36067e45-f8de-4952-9372-564e0e9d850e-kube-api-access-xz8mq\") pod \"ovn-controller-ovs-qfgk4\" (UID: \"36067e45-f8de-4952-9372-564e0e9d850e\") " pod="openstack/ovn-controller-ovs-qfgk4" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.548355 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/36067e45-f8de-4952-9372-564e0e9d850e-etc-ovs\") pod \"ovn-controller-ovs-qfgk4\" (UID: \"36067e45-f8de-4952-9372-564e0e9d850e\") " pod="openstack/ovn-controller-ovs-qfgk4" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.548386 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ce49eaf1-5cf3-4399-b2c9-c253df2440bd-var-log-ovn\") pod \"ovn-controller-sr5tj\" (UID: \"ce49eaf1-5cf3-4399-b2c9-c253df2440bd\") " 
pod="openstack/ovn-controller-sr5tj" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.548453 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/36067e45-f8de-4952-9372-564e0e9d850e-var-run\") pod \"ovn-controller-ovs-qfgk4\" (UID: \"36067e45-f8de-4952-9372-564e0e9d850e\") " pod="openstack/ovn-controller-ovs-qfgk4" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.548487 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/36067e45-f8de-4952-9372-564e0e9d850e-var-log\") pod \"ovn-controller-ovs-qfgk4\" (UID: \"36067e45-f8de-4952-9372-564e0e9d850e\") " pod="openstack/ovn-controller-ovs-qfgk4" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.548504 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ce49eaf1-5cf3-4399-b2c9-c253df2440bd-var-run\") pod \"ovn-controller-sr5tj\" (UID: \"ce49eaf1-5cf3-4399-b2c9-c253df2440bd\") " pod="openstack/ovn-controller-sr5tj" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.548529 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmk5c\" (UniqueName: \"kubernetes.io/projected/ce49eaf1-5cf3-4399-b2c9-c253df2440bd-kube-api-access-fmk5c\") pod \"ovn-controller-sr5tj\" (UID: \"ce49eaf1-5cf3-4399-b2c9-c253df2440bd\") " pod="openstack/ovn-controller-sr5tj" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.548564 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce49eaf1-5cf3-4399-b2c9-c253df2440bd-ovn-controller-tls-certs\") pod \"ovn-controller-sr5tj\" (UID: \"ce49eaf1-5cf3-4399-b2c9-c253df2440bd\") " pod="openstack/ovn-controller-sr5tj" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.548583 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36067e45-f8de-4952-9372-564e0e9d850e-scripts\") pod \"ovn-controller-ovs-qfgk4\" (UID: \"36067e45-f8de-4952-9372-564e0e9d850e\") " pod="openstack/ovn-controller-ovs-qfgk4" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.548614 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/36067e45-f8de-4952-9372-564e0e9d850e-var-lib\") pod \"ovn-controller-ovs-qfgk4\" (UID: \"36067e45-f8de-4952-9372-564e0e9d850e\") " pod="openstack/ovn-controller-ovs-qfgk4" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.548633 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce49eaf1-5cf3-4399-b2c9-c253df2440bd-combined-ca-bundle\") pod \"ovn-controller-sr5tj\" (UID: \"ce49eaf1-5cf3-4399-b2c9-c253df2440bd\") " pod="openstack/ovn-controller-sr5tj" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.548661 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ce49eaf1-5cf3-4399-b2c9-c253df2440bd-var-run-ovn\") pod \"ovn-controller-sr5tj\" (UID: \"ce49eaf1-5cf3-4399-b2c9-c253df2440bd\") " pod="openstack/ovn-controller-sr5tj" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.548682 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/ce49eaf1-5cf3-4399-b2c9-c253df2440bd-scripts\") pod \"ovn-controller-sr5tj\" (UID: \"ce49eaf1-5cf3-4399-b2c9-c253df2440bd\") " pod="openstack/ovn-controller-sr5tj" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.550842 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ce49eaf1-5cf3-4399-b2c9-c253df2440bd-scripts\") pod \"ovn-controller-sr5tj\" (UID: \"ce49eaf1-5cf3-4399-b2c9-c253df2440bd\") " pod="openstack/ovn-controller-sr5tj" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.551883 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/36067e45-f8de-4952-9372-564e0e9d850e-etc-ovs\") pod \"ovn-controller-ovs-qfgk4\" (UID: \"36067e45-f8de-4952-9372-564e0e9d850e\") " pod="openstack/ovn-controller-ovs-qfgk4" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.552064 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ce49eaf1-5cf3-4399-b2c9-c253df2440bd-var-log-ovn\") pod \"ovn-controller-sr5tj\" (UID: \"ce49eaf1-5cf3-4399-b2c9-c253df2440bd\") " pod="openstack/ovn-controller-sr5tj" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.552198 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/36067e45-f8de-4952-9372-564e0e9d850e-var-run\") pod \"ovn-controller-ovs-qfgk4\" (UID: \"36067e45-f8de-4952-9372-564e0e9d850e\") " pod="openstack/ovn-controller-ovs-qfgk4" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.552298 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/36067e45-f8de-4952-9372-564e0e9d850e-var-log\") pod \"ovn-controller-ovs-qfgk4\" (UID: \"36067e45-f8de-4952-9372-564e0e9d850e\") " pod="openstack/ovn-controller-ovs-qfgk4" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.552358 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ce49eaf1-5cf3-4399-b2c9-c253df2440bd-var-run\") pod \"ovn-controller-sr5tj\" (UID: \"ce49eaf1-5cf3-4399-b2c9-c253df2440bd\") " pod="openstack/ovn-controller-sr5tj" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.554419 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/36067e45-f8de-4952-9372-564e0e9d850e-var-lib\") pod \"ovn-controller-ovs-qfgk4\" (UID: \"36067e45-f8de-4952-9372-564e0e9d850e\") " pod="openstack/ovn-controller-ovs-qfgk4" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.557075 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/36067e45-f8de-4952-9372-564e0e9d850e-scripts\") pod \"ovn-controller-ovs-qfgk4\" (UID: \"36067e45-f8de-4952-9372-564e0e9d850e\") " pod="openstack/ovn-controller-ovs-qfgk4" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.557761 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ce49eaf1-5cf3-4399-b2c9-c253df2440bd-var-run-ovn\") pod \"ovn-controller-sr5tj\" (UID: \"ce49eaf1-5cf3-4399-b2c9-c253df2440bd\") " pod="openstack/ovn-controller-sr5tj" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.567359 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/ce49eaf1-5cf3-4399-b2c9-c253df2440bd-ovn-controller-tls-certs\") pod \"ovn-controller-sr5tj\" (UID: \"ce49eaf1-5cf3-4399-b2c9-c253df2440bd\") " pod="openstack/ovn-controller-sr5tj" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.567488 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz8mq\" (UniqueName: \"kubernetes.io/projected/36067e45-f8de-4952-9372-564e0e9d850e-kube-api-access-xz8mq\") pod \"ovn-controller-ovs-qfgk4\" (UID: \"36067e45-f8de-4952-9372-564e0e9d850e\") " pod="openstack/ovn-controller-ovs-qfgk4" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.568732 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ce49eaf1-5cf3-4399-b2c9-c253df2440bd-combined-ca-bundle\") pod \"ovn-controller-sr5tj\" (UID: \"ce49eaf1-5cf3-4399-b2c9-c253df2440bd\") " pod="openstack/ovn-controller-sr5tj" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.573992 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmk5c\" (UniqueName: \"kubernetes.io/projected/ce49eaf1-5cf3-4399-b2c9-c253df2440bd-kube-api-access-fmk5c\") pod \"ovn-controller-sr5tj\" (UID: \"ce49eaf1-5cf3-4399-b2c9-c253df2440bd\") " pod="openstack/ovn-controller-sr5tj" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.697610 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sr5tj" Jan 30 17:13:34 crc kubenswrapper[4712]: I0130 17:13:34.707787 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-qfgk4" Jan 30 17:13:35 crc kubenswrapper[4712]: I0130 17:13:35.067903 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 17:13:35 crc kubenswrapper[4712]: I0130 17:13:35.072450 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 17:13:35 crc kubenswrapper[4712]: I0130 17:13:35.074466 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 30 17:13:35 crc kubenswrapper[4712]: I0130 17:13:35.074721 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 30 17:13:35 crc kubenswrapper[4712]: I0130 17:13:35.076003 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-qmfv8" Jan 30 17:13:35 crc kubenswrapper[4712]: I0130 17:13:35.076023 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 30 17:13:35 crc kubenswrapper[4712]: I0130 17:13:35.077109 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 30 17:13:35 crc kubenswrapper[4712]: I0130 17:13:35.082345 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 17:13:35 crc kubenswrapper[4712]: I0130 17:13:35.157983 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6820a928-0d59-463e-8d88-aef9b2242388-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6820a928-0d59-463e-8d88-aef9b2242388\") " pod="openstack/ovsdbserver-nb-0" Jan 30 17:13:35 crc kubenswrapper[4712]: I0130 17:13:35.158068 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6820a928-0d59-463e-8d88-aef9b2242388-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6820a928-0d59-463e-8d88-aef9b2242388\") " pod="openstack/ovsdbserver-nb-0" Jan 30 17:13:35 crc kubenswrapper[4712]: I0130 17:13:35.158343 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6820a928-0d59-463e-8d88-aef9b2242388-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6820a928-0d59-463e-8d88-aef9b2242388\") " pod="openstack/ovsdbserver-nb-0" Jan 30 17:13:35 crc kubenswrapper[4712]: I0130 17:13:35.158616 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6820a928-0d59-463e-8d88-aef9b2242388-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6820a928-0d59-463e-8d88-aef9b2242388\") " pod="openstack/ovsdbserver-nb-0" Jan 30 17:13:35 crc kubenswrapper[4712]: I0130 17:13:35.158936 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6820a928-0d59-463e-8d88-aef9b2242388-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6820a928-0d59-463e-8d88-aef9b2242388\") " pod="openstack/ovsdbserver-nb-0" Jan 30 17:13:35 crc kubenswrapper[4712]: I0130 17:13:35.159221 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqw8q\" (UniqueName: \"kubernetes.io/projected/6820a928-0d59-463e-8d88-aef9b2242388-kube-api-access-zqw8q\") pod \"ovsdbserver-nb-0\" (UID: \"6820a928-0d59-463e-8d88-aef9b2242388\") " pod="openstack/ovsdbserver-nb-0" Jan 30 17:13:35 crc kubenswrapper[4712]: I0130 17:13:35.159484 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"6820a928-0d59-463e-8d88-aef9b2242388\") " pod="openstack/ovsdbserver-nb-0" Jan 30 17:13:35 crc kubenswrapper[4712]: I0130 17:13:35.159537 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6820a928-0d59-463e-8d88-aef9b2242388-config\") pod \"ovsdbserver-nb-0\" (UID: \"6820a928-0d59-463e-8d88-aef9b2242388\") " pod="openstack/ovsdbserver-nb-0" Jan 30 17:13:35 crc kubenswrapper[4712]: I0130 17:13:35.262077 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6820a928-0d59-463e-8d88-aef9b2242388-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6820a928-0d59-463e-8d88-aef9b2242388\") " pod="openstack/ovsdbserver-nb-0" Jan 30 17:13:35 crc kubenswrapper[4712]: I0130 17:13:35.262194 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6820a928-0d59-463e-8d88-aef9b2242388-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6820a928-0d59-463e-8d88-aef9b2242388\") " pod="openstack/ovsdbserver-nb-0" Jan 30 17:13:35 crc kubenswrapper[4712]: I0130 17:13:35.262348 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6820a928-0d59-463e-8d88-aef9b2242388-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6820a928-0d59-463e-8d88-aef9b2242388\") " pod="openstack/ovsdbserver-nb-0" Jan 30 17:13:35 crc kubenswrapper[4712]: I0130 17:13:35.262533 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6820a928-0d59-463e-8d88-aef9b2242388-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6820a928-0d59-463e-8d88-aef9b2242388\") " pod="openstack/ovsdbserver-nb-0" Jan 30 17:13:35 crc kubenswrapper[4712]: I0130 17:13:35.262631 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqw8q\" (UniqueName: \"kubernetes.io/projected/6820a928-0d59-463e-8d88-aef9b2242388-kube-api-access-zqw8q\") pod \"ovsdbserver-nb-0\" (UID: \"6820a928-0d59-463e-8d88-aef9b2242388\") " pod="openstack/ovsdbserver-nb-0" Jan 30 17:13:35 crc kubenswrapper[4712]: I0130 17:13:35.262758 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"6820a928-0d59-463e-8d88-aef9b2242388\") " pod="openstack/ovsdbserver-nb-0" Jan 30 17:13:35 crc kubenswrapper[4712]: I0130 17:13:35.262857 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6820a928-0d59-463e-8d88-aef9b2242388-config\") pod \"ovsdbserver-nb-0\" (UID: \"6820a928-0d59-463e-8d88-aef9b2242388\") " pod="openstack/ovsdbserver-nb-0" Jan 30 17:13:35 crc kubenswrapper[4712]: I0130 17:13:35.262995 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6820a928-0d59-463e-8d88-aef9b2242388-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6820a928-0d59-463e-8d88-aef9b2242388\") " pod="openstack/ovsdbserver-nb-0" Jan 30 17:13:35 crc kubenswrapper[4712]: I0130 
17:13:35.263320 4712 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"6820a928-0d59-463e-8d88-aef9b2242388\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-nb-0" Jan 30 17:13:35 crc kubenswrapper[4712]: I0130 17:13:35.269507 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6820a928-0d59-463e-8d88-aef9b2242388-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6820a928-0d59-463e-8d88-aef9b2242388\") " pod="openstack/ovsdbserver-nb-0" Jan 30 17:13:35 crc kubenswrapper[4712]: I0130 17:13:35.501766 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6820a928-0d59-463e-8d88-aef9b2242388-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"6820a928-0d59-463e-8d88-aef9b2242388\") " pod="openstack/ovsdbserver-nb-0" Jan 30 17:13:35 crc kubenswrapper[4712]: I0130 17:13:35.506680 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6820a928-0d59-463e-8d88-aef9b2242388-config\") pod \"ovsdbserver-nb-0\" (UID: \"6820a928-0d59-463e-8d88-aef9b2242388\") " pod="openstack/ovsdbserver-nb-0" Jan 30 17:13:35 crc kubenswrapper[4712]: I0130 17:13:35.506999 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6820a928-0d59-463e-8d88-aef9b2242388-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"6820a928-0d59-463e-8d88-aef9b2242388\") " pod="openstack/ovsdbserver-nb-0" Jan 30 17:13:35 crc kubenswrapper[4712]: I0130 17:13:35.508065 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqw8q\" (UniqueName: \"kubernetes.io/projected/6820a928-0d59-463e-8d88-aef9b2242388-kube-api-access-zqw8q\") pod \"ovsdbserver-nb-0\" (UID: \"6820a928-0d59-463e-8d88-aef9b2242388\") " pod="openstack/ovsdbserver-nb-0" Jan 30 17:13:35 crc kubenswrapper[4712]: I0130 17:13:35.508757 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6820a928-0d59-463e-8d88-aef9b2242388-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"6820a928-0d59-463e-8d88-aef9b2242388\") " pod="openstack/ovsdbserver-nb-0" Jan 30 17:13:35 crc kubenswrapper[4712]: I0130 17:13:35.519521 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6820a928-0d59-463e-8d88-aef9b2242388-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"6820a928-0d59-463e-8d88-aef9b2242388\") " pod="openstack/ovsdbserver-nb-0" Jan 30 17:13:35 crc kubenswrapper[4712]: I0130 17:13:35.580651 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"6820a928-0d59-463e-8d88-aef9b2242388\") " pod="openstack/ovsdbserver-nb-0" Jan 30 17:13:35 crc kubenswrapper[4712]: I0130 17:13:35.704425 4712 util.go:30] "No sandbox for pod can be found. 
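For the local PV the log shows an extra step: MountVolume.MountDevice stages local-storage05-crc at the node-global path /mnt/openstack/pv05, and only then does SetUp publish it into the pod's own volume tree under /var/lib/kubelet/pods/<UID>/volumes (the tree that later "Cleaned up orphaned pod volumes dir" entries refer to). A sketch of the path derivation; only pv05 and the pod UID come from the log, and the per-pod directory layout is an assumption based on the kubelet's usual plugin-name escaping:

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

func main() {
	// Logged: MountDevice staged the volume at this node-global path.
	deviceMountPath := "/mnt/openstack/pv05"
	podUID := "6820a928-0d59-463e-8d88-aef9b2242388" // ovsdbserver-nb-0
	plugin := "kubernetes.io/local-volume"

	// Assumed layout: SetUp typically bind-mounts the staged path into the
	// pod-scoped directory, with "/" in the plugin name escaped to "~".
	podVolumePath := filepath.Join("/var/lib/kubelet/pods", podUID, "volumes",
		strings.ReplaceAll(plugin, "/", "~"), "local-storage05-crc")

	fmt.Println("stage (MountDevice):", deviceMountPath)
	fmt.Println("publish (SetUp):    ", podVolumePath)
}
```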
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 30 17:13:36 crc kubenswrapper[4712]: I0130 17:13:36.272219 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:13:36 crc kubenswrapper[4712]: I0130 17:13:36.272544 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:13:37 crc kubenswrapper[4712]: I0130 17:13:37.474509 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 17:13:37 crc kubenswrapper[4712]: I0130 17:13:37.477249 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 17:13:37 crc kubenswrapper[4712]: I0130 17:13:37.481600 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 30 17:13:37 crc kubenswrapper[4712]: I0130 17:13:37.481887 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 30 17:13:37 crc kubenswrapper[4712]: I0130 17:13:37.482143 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 30 17:13:37 crc kubenswrapper[4712]: I0130 17:13:37.483246 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 17:13:37 crc kubenswrapper[4712]: I0130 17:13:37.484228 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-rfvv5" Jan 30 17:13:37 crc kubenswrapper[4712]: I0130 17:13:37.640206 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/220f56ca-28d1-4856-98cc-e420bd3cce95-config\") pod \"ovsdbserver-sb-0\" (UID: \"220f56ca-28d1-4856-98cc-e420bd3cce95\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:13:37 crc kubenswrapper[4712]: I0130 17:13:37.640280 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tp4w\" (UniqueName: \"kubernetes.io/projected/220f56ca-28d1-4856-98cc-e420bd3cce95-kube-api-access-9tp4w\") pod \"ovsdbserver-sb-0\" (UID: \"220f56ca-28d1-4856-98cc-e420bd3cce95\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:13:37 crc kubenswrapper[4712]: I0130 17:13:37.640433 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"220f56ca-28d1-4856-98cc-e420bd3cce95\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:13:37 crc kubenswrapper[4712]: I0130 17:13:37.640492 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/220f56ca-28d1-4856-98cc-e420bd3cce95-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"220f56ca-28d1-4856-98cc-e420bd3cce95\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:13:37 crc kubenswrapper[4712]: I0130 
17:13:37.640563 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/220f56ca-28d1-4856-98cc-e420bd3cce95-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"220f56ca-28d1-4856-98cc-e420bd3cce95\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:13:37 crc kubenswrapper[4712]: I0130 17:13:37.640633 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/220f56ca-28d1-4856-98cc-e420bd3cce95-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"220f56ca-28d1-4856-98cc-e420bd3cce95\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:13:37 crc kubenswrapper[4712]: I0130 17:13:37.640666 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/220f56ca-28d1-4856-98cc-e420bd3cce95-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"220f56ca-28d1-4856-98cc-e420bd3cce95\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:13:37 crc kubenswrapper[4712]: I0130 17:13:37.640691 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/220f56ca-28d1-4856-98cc-e420bd3cce95-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"220f56ca-28d1-4856-98cc-e420bd3cce95\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:13:37 crc kubenswrapper[4712]: I0130 17:13:37.742064 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/220f56ca-28d1-4856-98cc-e420bd3cce95-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"220f56ca-28d1-4856-98cc-e420bd3cce95\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:13:37 crc kubenswrapper[4712]: I0130 17:13:37.742479 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/220f56ca-28d1-4856-98cc-e420bd3cce95-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"220f56ca-28d1-4856-98cc-e420bd3cce95\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:13:37 crc kubenswrapper[4712]: I0130 17:13:37.743576 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/220f56ca-28d1-4856-98cc-e420bd3cce95-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"220f56ca-28d1-4856-98cc-e420bd3cce95\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:13:37 crc kubenswrapper[4712]: I0130 17:13:37.744253 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/220f56ca-28d1-4856-98cc-e420bd3cce95-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"220f56ca-28d1-4856-98cc-e420bd3cce95\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:13:37 crc kubenswrapper[4712]: I0130 17:13:37.744364 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/220f56ca-28d1-4856-98cc-e420bd3cce95-config\") pod \"ovsdbserver-sb-0\" (UID: \"220f56ca-28d1-4856-98cc-e420bd3cce95\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:13:37 crc kubenswrapper[4712]: I0130 17:13:37.744496 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tp4w\" (UniqueName: 
\"kubernetes.io/projected/220f56ca-28d1-4856-98cc-e420bd3cce95-kube-api-access-9tp4w\") pod \"ovsdbserver-sb-0\" (UID: \"220f56ca-28d1-4856-98cc-e420bd3cce95\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:13:37 crc kubenswrapper[4712]: I0130 17:13:37.742582 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/220f56ca-28d1-4856-98cc-e420bd3cce95-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"220f56ca-28d1-4856-98cc-e420bd3cce95\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:13:37 crc kubenswrapper[4712]: I0130 17:13:37.743525 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/220f56ca-28d1-4856-98cc-e420bd3cce95-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"220f56ca-28d1-4856-98cc-e420bd3cce95\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:13:37 crc kubenswrapper[4712]: I0130 17:13:37.744733 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"220f56ca-28d1-4856-98cc-e420bd3cce95\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:13:37 crc kubenswrapper[4712]: I0130 17:13:37.744828 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/220f56ca-28d1-4856-98cc-e420bd3cce95-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"220f56ca-28d1-4856-98cc-e420bd3cce95\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:13:37 crc kubenswrapper[4712]: I0130 17:13:37.745043 4712 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"220f56ca-28d1-4856-98cc-e420bd3cce95\") device mount path \"/mnt/openstack/pv12\"" pod="openstack/ovsdbserver-sb-0" Jan 30 17:13:37 crc kubenswrapper[4712]: I0130 17:13:37.745090 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/220f56ca-28d1-4856-98cc-e420bd3cce95-config\") pod \"ovsdbserver-sb-0\" (UID: \"220f56ca-28d1-4856-98cc-e420bd3cce95\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:13:37 crc kubenswrapper[4712]: I0130 17:13:37.750768 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/220f56ca-28d1-4856-98cc-e420bd3cce95-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"220f56ca-28d1-4856-98cc-e420bd3cce95\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:13:37 crc kubenswrapper[4712]: I0130 17:13:37.754489 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/220f56ca-28d1-4856-98cc-e420bd3cce95-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"220f56ca-28d1-4856-98cc-e420bd3cce95\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:13:37 crc kubenswrapper[4712]: I0130 17:13:37.764559 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/220f56ca-28d1-4856-98cc-e420bd3cce95-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"220f56ca-28d1-4856-98cc-e420bd3cce95\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:13:37 crc kubenswrapper[4712]: I0130 17:13:37.786909 4712 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"ovsdbserver-sb-0\" (UID: \"220f56ca-28d1-4856-98cc-e420bd3cce95\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:13:37 crc kubenswrapper[4712]: I0130 17:13:37.788812 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tp4w\" (UniqueName: \"kubernetes.io/projected/220f56ca-28d1-4856-98cc-e420bd3cce95-kube-api-access-9tp4w\") pod \"ovsdbserver-sb-0\" (UID: \"220f56ca-28d1-4856-98cc-e420bd3cce95\") " pod="openstack/ovsdbserver-sb-0" Jan 30 17:13:37 crc kubenswrapper[4712]: I0130 17:13:37.795215 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 30 17:13:57 crc kubenswrapper[4712]: I0130 17:13:57.188923 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sr5tj"] Jan 30 17:13:57 crc kubenswrapper[4712]: I0130 17:13:57.621941 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 30 17:13:57 crc kubenswrapper[4712]: E0130 17:13:57.794845 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 30 17:13:57 crc kubenswrapper[4712]: E0130 17:13:57.795117 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
Jan 30 17:13:57 crc kubenswrapper[4712]: E0130 17:13:57.794845 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Jan 30 17:13:57 crc kubenswrapper[4712]: E0130 17:13:57.795117 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bpknx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-hkhst_openstack(0456e317-8ed6-456a-ba12-c46dc30f11a3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 17:13:57 crc kubenswrapper[4712]: E0130 17:13:57.797086 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-hkhst" podUID="0456e317-8ed6-456a-ba12-c46dc30f11a3"
Jan 30 17:13:57 crc kubenswrapper[4712]: E0130 17:13:57.809864 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Jan 30 17:13:57 crc kubenswrapper[4712]: E0130 17:13:57.810086 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mkdb5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-28pr4_openstack(dd9423d9-2a7c-4894-8c85-007ebf09a364): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 17:13:57 crc kubenswrapper[4712]: E0130 17:13:57.811501 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-28pr4" podUID="dd9423d9-2a7c-4894-8c85-007ebf09a364"
Jan 30 17:13:57 crc kubenswrapper[4712]: E0130 17:13:57.817023 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Jan 30 17:13:57 crc kubenswrapper[4712]: E0130 17:13:57.817189 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gcqtk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-mcl9p_openstack(ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 17:13:57 crc kubenswrapper[4712]: E0130 17:13:57.819285 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-mcl9p" podUID="ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758"
Jan 30 17:13:57 crc kubenswrapper[4712]: E0130 17:13:57.835289 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Jan 30 17:13:57 crc kubenswrapper[4712]: E0130 17:13:57.835506 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sk8pr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-gdf88_openstack(60a618eb-268d-4b06-bd4a-3365bffb6a69): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 17:13:57 crc kubenswrapper[4712]: E0130 17:13:57.836776 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-gdf88" podUID="60a618eb-268d-4b06-bd4a-3365bffb6a69"
Jan 30 17:13:58 crc kubenswrapper[4712]: I0130 17:13:58.091965 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6820a928-0d59-463e-8d88-aef9b2242388","Type":"ContainerStarted","Data":"8439fabe37af9d5d35a3560d44c8ae1707137e1fc100e14bc99cd23f7081c350"}
Jan 30 17:13:58 crc kubenswrapper[4712]: I0130 17:13:58.093884 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sr5tj" event={"ID":"ce49eaf1-5cf3-4399-b2c9-c253df2440bd","Type":"ContainerStarted","Data":"56e609554035756b564cf9ef24c22775151107a67d6ce3d07eb161723fca1948"}
Jan 30 17:13:58 crc kubenswrapper[4712]: E0130 17:13:58.096335 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-hkhst" podUID="0456e317-8ed6-456a-ba12-c46dc30f11a3"
Jan 30 17:13:58 crc kubenswrapper[4712]: E0130 17:13:58.096543 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-mcl9p" podUID="ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758"
Jan 30 17:13:58 crc kubenswrapper[4712]: I0130 17:13:58.460156 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
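Each of the four dnsmasq-dns replicas fails the same way: the init container's image pull is canceled mid-copy, so the container never starts. The dumped &Container{...} text is a Go struct literal from k8s.io/api/core/v1; reconstructed as source it reads roughly as below. Only fields visible in the dump are set, the per-pod CONFIG_HASH values are elided rather than guessed, and the trailing --test (which makes dnsmasq validate its configuration and exit, consistent with an init-time config check) is kept verbatim:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
)

// dnsmasqInit rebuilds the init container from the logged spec dump.
func dnsmasqInit() corev1.Container {
	runAsNonRoot := true
	return corev1.Container{
		Name:    "init",
		Image:   "quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified",
		Command: []string{"/bin/bash"},
		Args: []string{"-c", "dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d" +
			" --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug" +
			" --bind-interfaces --listen-address=$(POD_IP) --port 5353" +
			" --log-facility=- --no-hosts --domain-needed --no-resolv" +
			" --bogus-priv --log-queries --test"},
		// The dump also carries a CONFIG_HASH EnvVar (a long per-pod hash,
		// elided here) ahead of POD_IP.
		Env: []corev1.EnvVar{{
			Name: "POD_IP",
			ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{APIVersion: "v1", FieldPath: "status.podIP"},
			},
		}},
		ImagePullPolicy: corev1.PullIfNotPresent,
		SecurityContext: &corev1.SecurityContext{
			RunAsNonRoot: &runAsNonRoot, // RunAsUser and dropped capabilities omitted
		},
	}
}

func main() { _ = dnsmasqInit() }
```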
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 30 17:13:58 crc kubenswrapper[4712]: I0130 17:13:58.521259 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-gdf88" Jan 30 17:13:58 crc kubenswrapper[4712]: I0130 17:13:58.606982 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60a618eb-268d-4b06-bd4a-3365bffb6a69-config\") pod \"60a618eb-268d-4b06-bd4a-3365bffb6a69\" (UID: \"60a618eb-268d-4b06-bd4a-3365bffb6a69\") " Jan 30 17:13:58 crc kubenswrapper[4712]: I0130 17:13:58.607028 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sk8pr\" (UniqueName: \"kubernetes.io/projected/60a618eb-268d-4b06-bd4a-3365bffb6a69-kube-api-access-sk8pr\") pod \"60a618eb-268d-4b06-bd4a-3365bffb6a69\" (UID: \"60a618eb-268d-4b06-bd4a-3365bffb6a69\") " Jan 30 17:13:58 crc kubenswrapper[4712]: I0130 17:13:58.607562 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60a618eb-268d-4b06-bd4a-3365bffb6a69-config" (OuterVolumeSpecName: "config") pod "60a618eb-268d-4b06-bd4a-3365bffb6a69" (UID: "60a618eb-268d-4b06-bd4a-3365bffb6a69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:13:58 crc kubenswrapper[4712]: I0130 17:13:58.610983 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60a618eb-268d-4b06-bd4a-3365bffb6a69-kube-api-access-sk8pr" (OuterVolumeSpecName: "kube-api-access-sk8pr") pod "60a618eb-268d-4b06-bd4a-3365bffb6a69" (UID: "60a618eb-268d-4b06-bd4a-3365bffb6a69"). InnerVolumeSpecName "kube-api-access-sk8pr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:13:58 crc kubenswrapper[4712]: I0130 17:13:58.708774 4712 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60a618eb-268d-4b06-bd4a-3365bffb6a69-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:13:58 crc kubenswrapper[4712]: I0130 17:13:58.708825 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sk8pr\" (UniqueName: \"kubernetes.io/projected/60a618eb-268d-4b06-bd4a-3365bffb6a69-kube-api-access-sk8pr\") on node \"crc\" DevicePath \"\"" Jan 30 17:13:58 crc kubenswrapper[4712]: I0130 17:13:58.742160 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-qfgk4"] Jan 30 17:13:58 crc kubenswrapper[4712]: W0130 17:13:58.931667 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36067e45_f8de_4952_9372_564e0e9d850e.slice/crio-e0fc070a522f8a4ae5730a4ba753b954a4eca49eb75947fff3d92e7fdf1f76fc WatchSource:0}: Error finding container e0fc070a522f8a4ae5730a4ba753b954a4eca49eb75947fff3d92e7fdf1f76fc: Status 404 returned error can't find the container with id e0fc070a522f8a4ae5730a4ba753b954a4eca49eb75947fff3d92e7fdf1f76fc Jan 30 17:13:58 crc kubenswrapper[4712]: E0130 17:13:58.935605 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 30 17:13:58 crc kubenswrapper[4712]: E0130 17:13:58.935686 4712 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0" Jan 30 17:13:58 crc kubenswrapper[4712]: E0130 17:13:58.935972 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-state-metrics,Image:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,Command:[],Args:[--resources=pods --namespaces=openstack],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},ContainerPort{Name:telemetry,HostPort:0,ContainerPort:8081,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-gxzzd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-state-metrics-0_openstack(e88ea344-4eb8-4174-9ce7-855aa6afed59): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 17:13:58 crc kubenswrapper[4712]: E0130 17:13:58.937108 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/kube-state-metrics-0" podUID="e88ea344-4eb8-4174-9ce7-855aa6afed59" Jan 30 17:13:59 crc kubenswrapper[4712]: I0130 17:13:59.101393 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qfgk4" event={"ID":"36067e45-f8de-4952-9372-564e0e9d850e","Type":"ContainerStarted","Data":"e0fc070a522f8a4ae5730a4ba753b954a4eca49eb75947fff3d92e7fdf1f76fc"} Jan 30 17:13:59 crc kubenswrapper[4712]: I0130 17:13:59.103061 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-28pr4" event={"ID":"dd9423d9-2a7c-4894-8c85-007ebf09a364","Type":"ContainerDied","Data":"03cb48fe18594b8ab254eef081d0c7f4b6ea32d35f5b1990a5f3b28a59f31d18"} Jan 30 17:13:59 crc kubenswrapper[4712]: I0130 17:13:59.103088 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03cb48fe18594b8ab254eef081d0c7f4b6ea32d35f5b1990a5f3b28a59f31d18" Jan 30 17:13:59 crc kubenswrapper[4712]: I0130 17:13:59.104347 4712 util.go:48] "No ready sandbox for pod can be found. 
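The failure sequence for these pulls is visible across syncs: PullImage fails with "context canceled", the sync ends in ErrImagePull, and the next sync for the same pod reports ImagePullBackOff, meaning the kubelet is deliberately waiting before retrying. A generic exponential-backoff sketch of what that state implies; the constants are illustrative (Kubernetes documents image-pull back-off delays growing to a five-minute cap), not read from this log, and this is not the kubelet's actual bookkeeping:

```go
package main

import (
	"fmt"
	"time"
)

// backoffDelays returns the first n retry delays of a doubling backoff
// schedule capped at limit.
func backoffDelays(initial, limit time.Duration, n int) []time.Duration {
	out := make([]time.Duration, 0, n)
	d := initial
	for i := 0; i < n; i++ {
		out = append(out, d)
		d *= 2
		if d > limit {
			d = limit
		}
	}
	return out
}

func main() {
	fmt.Println(backoffDelays(10*time.Second, 5*time.Minute, 7))
	// [10s 20s 40s 1m20s 2m40s 5m0s 5m0s]
}
```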
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-gdf88" Jan 30 17:13:59 crc kubenswrapper[4712]: I0130 17:13:59.104365 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-gdf88" event={"ID":"60a618eb-268d-4b06-bd4a-3365bffb6a69","Type":"ContainerDied","Data":"d4094b6ee93b73f1ab1624961d47aed7476d5023b6ea450be6f1021074033561"} Jan 30 17:13:59 crc kubenswrapper[4712]: I0130 17:13:59.106889 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"220f56ca-28d1-4856-98cc-e420bd3cce95","Type":"ContainerStarted","Data":"4150f45bb67bb34a529bb7ae045f53e3819416840d66c6bc8ecd76a4ef858560"} Jan 30 17:13:59 crc kubenswrapper[4712]: E0130 17:13:59.108294 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-state-metrics\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0\\\"\"" pod="openstack/kube-state-metrics-0" podUID="e88ea344-4eb8-4174-9ce7-855aa6afed59" Jan 30 17:13:59 crc kubenswrapper[4712]: I0130 17:13:59.156480 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-28pr4" Jan 30 17:13:59 crc kubenswrapper[4712]: I0130 17:13:59.217265 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkdb5\" (UniqueName: \"kubernetes.io/projected/dd9423d9-2a7c-4894-8c85-007ebf09a364-kube-api-access-mkdb5\") pod \"dd9423d9-2a7c-4894-8c85-007ebf09a364\" (UID: \"dd9423d9-2a7c-4894-8c85-007ebf09a364\") " Jan 30 17:13:59 crc kubenswrapper[4712]: I0130 17:13:59.217437 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd9423d9-2a7c-4894-8c85-007ebf09a364-config\") pod \"dd9423d9-2a7c-4894-8c85-007ebf09a364\" (UID: \"dd9423d9-2a7c-4894-8c85-007ebf09a364\") " Jan 30 17:13:59 crc kubenswrapper[4712]: I0130 17:13:59.217481 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd9423d9-2a7c-4894-8c85-007ebf09a364-dns-svc\") pod \"dd9423d9-2a7c-4894-8c85-007ebf09a364\" (UID: \"dd9423d9-2a7c-4894-8c85-007ebf09a364\") " Jan 30 17:13:59 crc kubenswrapper[4712]: I0130 17:13:59.220494 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd9423d9-2a7c-4894-8c85-007ebf09a364-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dd9423d9-2a7c-4894-8c85-007ebf09a364" (UID: "dd9423d9-2a7c-4894-8c85-007ebf09a364"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:13:59 crc kubenswrapper[4712]: I0130 17:13:59.220762 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd9423d9-2a7c-4894-8c85-007ebf09a364-config" (OuterVolumeSpecName: "config") pod "dd9423d9-2a7c-4894-8c85-007ebf09a364" (UID: "dd9423d9-2a7c-4894-8c85-007ebf09a364"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:13:59 crc kubenswrapper[4712]: I0130 17:13:59.231972 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd9423d9-2a7c-4894-8c85-007ebf09a364-kube-api-access-mkdb5" (OuterVolumeSpecName: "kube-api-access-mkdb5") pod "dd9423d9-2a7c-4894-8c85-007ebf09a364" (UID: "dd9423d9-2a7c-4894-8c85-007ebf09a364"). InnerVolumeSpecName "kube-api-access-mkdb5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:13:59 crc kubenswrapper[4712]: I0130 17:13:59.248080 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-gdf88"] Jan 30 17:13:59 crc kubenswrapper[4712]: I0130 17:13:59.270216 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-gdf88"] Jan 30 17:13:59 crc kubenswrapper[4712]: I0130 17:13:59.320321 4712 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dd9423d9-2a7c-4894-8c85-007ebf09a364-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 17:13:59 crc kubenswrapper[4712]: I0130 17:13:59.320343 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkdb5\" (UniqueName: \"kubernetes.io/projected/dd9423d9-2a7c-4894-8c85-007ebf09a364-kube-api-access-mkdb5\") on node \"crc\" DevicePath \"\"" Jan 30 17:13:59 crc kubenswrapper[4712]: I0130 17:13:59.320353 4712 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dd9423d9-2a7c-4894-8c85-007ebf09a364-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:13:59 crc kubenswrapper[4712]: E0130 17:13:59.363540 4712 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60a618eb_268d_4b06_bd4a_3365bffb6a69.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60a618eb_268d_4b06_bd4a_3365bffb6a69.slice/crio-d4094b6ee93b73f1ab1624961d47aed7476d5023b6ea450be6f1021074033561\": RecentStats: unable to find data in memory cache]" Jan 30 17:13:59 crc kubenswrapper[4712]: I0130 17:13:59.813299 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60a618eb-268d-4b06-bd4a-3365bffb6a69" path="/var/lib/kubelet/pods/60a618eb-268d-4b06-bd4a-3365bffb6a69/volumes" Jan 30 17:14:00 crc kubenswrapper[4712]: I0130 17:14:00.113264 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e0e4667e-8702-43ae-b7b7-1aa930f9a3c3","Type":"ContainerStarted","Data":"eed58d3b48c65ec6266ae6d9cd6fee887d50f3f752e3618578448ca69a527ad8"} Jan 30 17:14:00 crc kubenswrapper[4712]: I0130 17:14:00.116921 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"9fecd346-f2cb-45fa-be64-6be579acaf56","Type":"ContainerStarted","Data":"529316bc6fa58a5da52da9c81ff823294770d719d2da8b6b3c85036b20458144"} Jan 30 17:14:00 crc kubenswrapper[4712]: I0130 17:14:00.117604 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 30 17:14:00 crc kubenswrapper[4712]: I0130 17:14:00.128520 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"01b5b85b-caea-4f70-a61f-875ed30f9e64","Type":"ContainerStarted","Data":"2c33cef250b494d1f9745250b3e4f91a559a0867e0967b581569893e497b3935"} Jan 30 17:14:00 crc kubenswrapper[4712]: I0130 17:14:00.146008 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-28pr4" Jan 30 17:14:00 crc kubenswrapper[4712]: I0130 17:14:00.147073 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a12f0a95-1db0-4dd9-993c-1413c0fa10b0","Type":"ContainerStarted","Data":"89474fb9cdcec3cdc5111db5852f223098f64f99a3a442672920a551abcf6861"} Jan 30 17:14:00 crc kubenswrapper[4712]: I0130 17:14:00.198967 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=5.7410404360000005 podStartE2EDuration="32.198944265s" podCreationTimestamp="2026-01-30 17:13:28 +0000 UTC" firstStartedPulling="2026-01-30 17:13:30.288194269 +0000 UTC m=+1147.195203738" lastFinishedPulling="2026-01-30 17:13:56.746098098 +0000 UTC m=+1173.653107567" observedRunningTime="2026-01-30 17:14:00.158723239 +0000 UTC m=+1177.065732708" watchObservedRunningTime="2026-01-30 17:14:00.198944265 +0000 UTC m=+1177.105953784" Jan 30 17:14:00 crc kubenswrapper[4712]: I0130 17:14:00.293643 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-28pr4"] Jan 30 17:14:00 crc kubenswrapper[4712]: I0130 17:14:00.304737 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-28pr4"] Jan 30 17:14:01 crc kubenswrapper[4712]: I0130 17:14:01.158443 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d5b67399-3a53-4694-8f1c-c04592426dcd","Type":"ContainerStarted","Data":"a54f2f1b1572ac7848902c6c2afb8f7c794bf2545a7e8d5ffe8bb69d2425625c"} Jan 30 17:14:01 crc kubenswrapper[4712]: I0130 17:14:01.808221 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd9423d9-2a7c-4894-8c85-007ebf09a364" path="/var/lib/kubelet/pods/dd9423d9-2a7c-4894-8c85-007ebf09a364/volumes" Jan 30 17:14:03 crc kubenswrapper[4712]: I0130 17:14:03.178586 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6820a928-0d59-463e-8d88-aef9b2242388","Type":"ContainerStarted","Data":"25cb301a305312e55edbf4b19e0f6e70624de3d8e10525dd3204488922d24703"} Jan 30 17:14:03 crc kubenswrapper[4712]: I0130 17:14:03.181846 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sr5tj" event={"ID":"ce49eaf1-5cf3-4399-b2c9-c253df2440bd","Type":"ContainerStarted","Data":"dd1dea519bf59dccf350ce6a11f55b3fffd5899f765932697bce1ee9cc328cac"} Jan 30 17:14:03 crc kubenswrapper[4712]: I0130 17:14:03.182006 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-sr5tj" Jan 30 17:14:03 crc kubenswrapper[4712]: I0130 17:14:03.185668 4712 generic.go:334] "Generic (PLEG): container finished" podID="a12f0a95-1db0-4dd9-993c-1413c0fa10b0" containerID="89474fb9cdcec3cdc5111db5852f223098f64f99a3a442672920a551abcf6861" exitCode=0 Jan 30 17:14:03 crc kubenswrapper[4712]: I0130 17:14:03.185757 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a12f0a95-1db0-4dd9-993c-1413c0fa10b0","Type":"ContainerDied","Data":"89474fb9cdcec3cdc5111db5852f223098f64f99a3a442672920a551abcf6861"} Jan 30 17:14:03 crc kubenswrapper[4712]: I0130 17:14:03.189440 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"220f56ca-28d1-4856-98cc-e420bd3cce95","Type":"ContainerStarted","Data":"37d2614e49c654297ebca98dbb597a1a961d44e31dd49be8c16e502479da0cf8"} Jan 30 17:14:03 crc 
Jan 30 17:14:03 crc kubenswrapper[4712]: I0130 17:14:03.193300 4712 generic.go:334] "Generic (PLEG): container finished" podID="36067e45-f8de-4952-9372-564e0e9d850e" containerID="756974e384baa333e17b5b30f4652f8baefced7847c8cad0e516930c6aaba04e" exitCode=0
Jan 30 17:14:03 crc kubenswrapper[4712]: I0130 17:14:03.193363 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qfgk4" event={"ID":"36067e45-f8de-4952-9372-564e0e9d850e","Type":"ContainerDied","Data":"756974e384baa333e17b5b30f4652f8baefced7847c8cad0e516930c6aaba04e"}
Jan 30 17:14:03 crc kubenswrapper[4712]: I0130 17:14:03.195287 4712 generic.go:334] "Generic (PLEG): container finished" podID="e0e4667e-8702-43ae-b7b7-1aa930f9a3c3" containerID="eed58d3b48c65ec6266ae6d9cd6fee887d50f3f752e3618578448ca69a527ad8" exitCode=0
Jan 30 17:14:03 crc kubenswrapper[4712]: I0130 17:14:03.195323 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e0e4667e-8702-43ae-b7b7-1aa930f9a3c3","Type":"ContainerDied","Data":"eed58d3b48c65ec6266ae6d9cd6fee887d50f3f752e3618578448ca69a527ad8"}
Jan 30 17:14:03 crc kubenswrapper[4712]: I0130 17:14:03.204290 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-sr5tj" podStartSLOduration=24.36531756 podStartE2EDuration="29.204271527s" podCreationTimestamp="2026-01-30 17:13:34 +0000 UTC" firstStartedPulling="2026-01-30 17:13:57.760828999 +0000 UTC m=+1174.667838468" lastFinishedPulling="2026-01-30 17:14:02.599782966 +0000 UTC m=+1179.506792435" observedRunningTime="2026-01-30 17:14:03.202814332 +0000 UTC m=+1180.109823811" watchObservedRunningTime="2026-01-30 17:14:03.204271527 +0000 UTC m=+1180.111281006"
Jan 30 17:14:04 crc kubenswrapper[4712]: I0130 17:14:04.208014 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a12f0a95-1db0-4dd9-993c-1413c0fa10b0","Type":"ContainerStarted","Data":"3d316f18629c5696446d3e76a4fc94419e782ea4a27f59f7fa064eba029285da"}
Jan 30 17:14:04 crc kubenswrapper[4712]: I0130 17:14:04.212591 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qfgk4" event={"ID":"36067e45-f8de-4952-9372-564e0e9d850e","Type":"ContainerStarted","Data":"bac81252e0483e4ee3349212acc702886022983f08d092a78c047e0461b7096f"}
Jan 30 17:14:04 crc kubenswrapper[4712]: I0130 17:14:04.212867 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-qfgk4" event={"ID":"36067e45-f8de-4952-9372-564e0e9d850e","Type":"ContainerStarted","Data":"33f45611996abe484920587f0895d4a1bbe2c218ecbb43cd3e0a107e2adbca89"}
Jan 30 17:14:04 crc kubenswrapper[4712]: I0130 17:14:04.212948 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-qfgk4"
Jan 30 17:14:04 crc kubenswrapper[4712]: I0130 17:14:04.212997 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-qfgk4"
Jan 30 17:14:04 crc kubenswrapper[4712]: I0130 17:14:04.223653 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e0e4667e-8702-43ae-b7b7-1aa930f9a3c3","Type":"ContainerStarted","Data":"70075a4b3de7920625ff31028d71e274c26740ac40037488429efaaac994792a"}
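The "Generic (PLEG): container finished ... exitCode=0" lines followed immediately by a ContainerDied event and then a fresh ContainerStarted for the same pod (openstack-galera-0, openstack-cell1-galera-0, ovn-controller-ovs-qfgk4) are the signature of init containers completing successfully before the main containers launch; a non-zero exitCode in the same position would instead indicate a crash. The event payload is valid JSON, so it can be pulled apart directly. A minimal sketch, assuming the field names exactly as they appear in the log:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // plegEvent mirrors the event={...} payload printed by the kubelet's sync loop.
    type plegEvent struct {
        ID   string // pod UID
        Type string // ContainerStarted / ContainerDied / ...
        Data string // container ID
    }

    func main() {
        raw := `{"ID":"a12f0a95-1db0-4dd9-993c-1413c0fa10b0","Type":"ContainerDied","Data":"89474fb9cdcec3cdc5111db5852f223098f64f99a3a442672920a551abcf6861"}`
        var ev plegEvent
        if err := json.Unmarshal([]byte(raw), &ev); err != nil {
            panic(err)
        }
        fmt.Printf("pod %s: %s (%s...)\n", ev.ID, ev.Type, ev.Data[:12])
    }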
Jan 30 17:14:04 crc kubenswrapper[4712]: I0130 17:14:04.240100 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=9.32731108 podStartE2EDuration="39.240015184s" podCreationTimestamp="2026-01-30 17:13:25 +0000 UTC" firstStartedPulling="2026-01-30 17:13:27.920373773 +0000 UTC m=+1144.827383242" lastFinishedPulling="2026-01-30 17:13:57.833077877 +0000 UTC m=+1174.740087346" observedRunningTime="2026-01-30 17:14:04.2323943 +0000 UTC m=+1181.139403819" watchObservedRunningTime="2026-01-30 17:14:04.240015184 +0000 UTC m=+1181.147024663"
Jan 30 17:14:04 crc kubenswrapper[4712]: I0130 17:14:04.276597 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-qfgk4" podStartSLOduration=26.626813141 podStartE2EDuration="30.276568213s" podCreationTimestamp="2026-01-30 17:13:34 +0000 UTC" firstStartedPulling="2026-01-30 17:13:58.942630647 +0000 UTC m=+1175.849640116" lastFinishedPulling="2026-01-30 17:14:02.592385729 +0000 UTC m=+1179.499395188" observedRunningTime="2026-01-30 17:14:04.264771138 +0000 UTC m=+1181.171780617" watchObservedRunningTime="2026-01-30 17:14:04.276568213 +0000 UTC m=+1181.183577702"
Jan 30 17:14:04 crc kubenswrapper[4712]: I0130 17:14:04.295265 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=11.279883176 podStartE2EDuration="37.295246021s" podCreationTimestamp="2026-01-30 17:13:27 +0000 UTC" firstStartedPulling="2026-01-30 17:13:30.308156076 +0000 UTC m=+1147.215165545" lastFinishedPulling="2026-01-30 17:13:56.323518921 +0000 UTC m=+1173.230528390" observedRunningTime="2026-01-30 17:14:04.288666093 +0000 UTC m=+1181.195675572" watchObservedRunningTime="2026-01-30 17:14:04.295246021 +0000 UTC m=+1181.202255500"
Jan 30 17:14:05 crc kubenswrapper[4712]: I0130 17:14:05.231569 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"6820a928-0d59-463e-8d88-aef9b2242388","Type":"ContainerStarted","Data":"46402f790280cecbd55e15bdfd8c426380b42a08c6f77887f10369f182f6560a"}
Jan 30 17:14:05 crc kubenswrapper[4712]: I0130 17:14:05.233829 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"220f56ca-28d1-4856-98cc-e420bd3cce95","Type":"ContainerStarted","Data":"b765d956c08237d2436e58bb802c5e492a198e67f95ef42f6257ceacfc66adc8"}
Jan 30 17:14:05 crc kubenswrapper[4712]: I0130 17:14:05.254652 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=24.540782408 podStartE2EDuration="31.254634453s" podCreationTimestamp="2026-01-30 17:13:34 +0000 UTC" firstStartedPulling="2026-01-30 17:13:57.766318961 +0000 UTC m=+1174.673328430" lastFinishedPulling="2026-01-30 17:14:04.480171006 +0000 UTC m=+1181.387180475" observedRunningTime="2026-01-30 17:14:05.248842664 +0000 UTC m=+1182.155852153" watchObservedRunningTime="2026-01-30 17:14:05.254634453 +0000 UTC m=+1182.161643922"
Jan 30 17:14:05 crc kubenswrapper[4712]: I0130 17:14:05.272256 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=23.328611875 podStartE2EDuration="29.272234366s" podCreationTimestamp="2026-01-30 17:13:36 +0000 UTC" firstStartedPulling="2026-01-30 17:13:58.510106 +0000 UTC m=+1175.417115469" lastFinishedPulling="2026-01-30 17:14:04.453728491 +0000 UTC m=+1181.360737960" observedRunningTime="2026-01-30 17:14:05.268711501 +0000 UTC m=+1182.175720990" watchObservedRunningTime="2026-01-30 17:14:05.272234366 +0000 UTC m=+1182.179243835"
Jan 30 17:14:05 crc kubenswrapper[4712]: I0130 17:14:05.705976 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0"
Jan 30 17:14:05 crc kubenswrapper[4712]: I0130 17:14:05.706043 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0"
Jan 30 17:14:05 crc kubenswrapper[4712]: I0130 17:14:05.748368 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0"
Jan 30 17:14:06 crc kubenswrapper[4712]: I0130 17:14:06.271706 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 17:14:06 crc kubenswrapper[4712]: I0130 17:14:06.272206 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.120397 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.120897 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.281125 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0"
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.537023 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-hkhst"]
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.712683 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-xvktw"]
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.714409 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-xvktw"
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.720126 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.737363 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-kjztv"]
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.738655 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-kjztv"
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.744254 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.747488 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-xvktw"]
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.756853 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-kjztv"]
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.771902 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a65dfa6-7553-4761-876e-326ed5175b85-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-xvktw\" (UID: \"8a65dfa6-7553-4761-876e-326ed5175b85\") " pod="openstack/dnsmasq-dns-7fd796d7df-xvktw"
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.771994 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/4c718c29-458b-43e8-979f-f636b17928e1-ovn-rundir\") pod \"ovn-controller-metrics-kjztv\" (UID: \"4c718c29-458b-43e8-979f-f636b17928e1\") " pod="openstack/ovn-controller-metrics-kjztv"
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.772032 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/4c718c29-458b-43e8-979f-f636b17928e1-ovs-rundir\") pod \"ovn-controller-metrics-kjztv\" (UID: \"4c718c29-458b-43e8-979f-f636b17928e1\") " pod="openstack/ovn-controller-metrics-kjztv"
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.772087 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a65dfa6-7553-4761-876e-326ed5175b85-config\") pod \"dnsmasq-dns-7fd796d7df-xvktw\" (UID: \"8a65dfa6-7553-4761-876e-326ed5175b85\") " pod="openstack/dnsmasq-dns-7fd796d7df-xvktw"
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.772143 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcx9c\" (UniqueName: \"kubernetes.io/projected/8a65dfa6-7553-4761-876e-326ed5175b85-kube-api-access-bcx9c\") pod \"dnsmasq-dns-7fd796d7df-xvktw\" (UID: \"8a65dfa6-7553-4761-876e-326ed5175b85\") " pod="openstack/dnsmasq-dns-7fd796d7df-xvktw"
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.772172 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a65dfa6-7553-4761-876e-326ed5175b85-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-xvktw\" (UID: \"8a65dfa6-7553-4761-876e-326ed5175b85\") " pod="openstack/dnsmasq-dns-7fd796d7df-xvktw"
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.772194 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c718c29-458b-43e8-979f-f636b17928e1-combined-ca-bundle\") pod \"ovn-controller-metrics-kjztv\" (UID: \"4c718c29-458b-43e8-979f-f636b17928e1\") " pod="openstack/ovn-controller-metrics-kjztv"
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.772226 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6nq5\" (UniqueName: \"kubernetes.io/projected/4c718c29-458b-43e8-979f-f636b17928e1-kube-api-access-b6nq5\") pod \"ovn-controller-metrics-kjztv\" (UID: \"4c718c29-458b-43e8-979f-f636b17928e1\") " pod="openstack/ovn-controller-metrics-kjztv"
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.772263 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c718c29-458b-43e8-979f-f636b17928e1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-kjztv\" (UID: \"4c718c29-458b-43e8-979f-f636b17928e1\") " pod="openstack/ovn-controller-metrics-kjztv"
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.772304 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c718c29-458b-43e8-979f-f636b17928e1-config\") pod \"ovn-controller-metrics-kjztv\" (UID: \"4c718c29-458b-43e8-979f-f636b17928e1\") " pod="openstack/ovn-controller-metrics-kjztv"
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.797164 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0"
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.797215 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0"
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.879097 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/4c718c29-458b-43e8-979f-f636b17928e1-ovn-rundir\") pod \"ovn-controller-metrics-kjztv\" (UID: \"4c718c29-458b-43e8-979f-f636b17928e1\") " pod="openstack/ovn-controller-metrics-kjztv"
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.879154 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/4c718c29-458b-43e8-979f-f636b17928e1-ovs-rundir\") pod \"ovn-controller-metrics-kjztv\" (UID: \"4c718c29-458b-43e8-979f-f636b17928e1\") " pod="openstack/ovn-controller-metrics-kjztv"
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.879186 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a65dfa6-7553-4761-876e-326ed5175b85-config\") pod \"dnsmasq-dns-7fd796d7df-xvktw\" (UID: \"8a65dfa6-7553-4761-876e-326ed5175b85\") " pod="openstack/dnsmasq-dns-7fd796d7df-xvktw"
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.879252 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcx9c\" (UniqueName: \"kubernetes.io/projected/8a65dfa6-7553-4761-876e-326ed5175b85-kube-api-access-bcx9c\") pod \"dnsmasq-dns-7fd796d7df-xvktw\" (UID: \"8a65dfa6-7553-4761-876e-326ed5175b85\") " pod="openstack/dnsmasq-dns-7fd796d7df-xvktw"
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.879277 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a65dfa6-7553-4761-876e-326ed5175b85-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-xvktw\" (UID: \"8a65dfa6-7553-4761-876e-326ed5175b85\") " pod="openstack/dnsmasq-dns-7fd796d7df-xvktw"
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.879296 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c718c29-458b-43e8-979f-f636b17928e1-combined-ca-bundle\") pod \"ovn-controller-metrics-kjztv\" (UID: \"4c718c29-458b-43e8-979f-f636b17928e1\") " pod="openstack/ovn-controller-metrics-kjztv"
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.879326 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6nq5\" (UniqueName: \"kubernetes.io/projected/4c718c29-458b-43e8-979f-f636b17928e1-kube-api-access-b6nq5\") pod \"ovn-controller-metrics-kjztv\" (UID: \"4c718c29-458b-43e8-979f-f636b17928e1\") " pod="openstack/ovn-controller-metrics-kjztv"
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.879360 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c718c29-458b-43e8-979f-f636b17928e1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-kjztv\" (UID: \"4c718c29-458b-43e8-979f-f636b17928e1\") " pod="openstack/ovn-controller-metrics-kjztv"
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.879415 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c718c29-458b-43e8-979f-f636b17928e1-config\") pod \"ovn-controller-metrics-kjztv\" (UID: \"4c718c29-458b-43e8-979f-f636b17928e1\") " pod="openstack/ovn-controller-metrics-kjztv"
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.879522 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a65dfa6-7553-4761-876e-326ed5175b85-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-xvktw\" (UID: \"8a65dfa6-7553-4761-876e-326ed5175b85\") " pod="openstack/dnsmasq-dns-7fd796d7df-xvktw"
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.880658 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a65dfa6-7553-4761-876e-326ed5175b85-ovsdbserver-nb\") pod \"dnsmasq-dns-7fd796d7df-xvktw\" (UID: \"8a65dfa6-7553-4761-876e-326ed5175b85\") " pod="openstack/dnsmasq-dns-7fd796d7df-xvktw"
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.882388 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a65dfa6-7553-4761-876e-326ed5175b85-config\") pod \"dnsmasq-dns-7fd796d7df-xvktw\" (UID: \"8a65dfa6-7553-4761-876e-326ed5175b85\") " pod="openstack/dnsmasq-dns-7fd796d7df-xvktw"
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.883076 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/4c718c29-458b-43e8-979f-f636b17928e1-ovn-rundir\") pod \"ovn-controller-metrics-kjztv\" (UID: \"4c718c29-458b-43e8-979f-f636b17928e1\") " pod="openstack/ovn-controller-metrics-kjztv"
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.883132 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/4c718c29-458b-43e8-979f-f636b17928e1-ovs-rundir\") pod \"ovn-controller-metrics-kjztv\" (UID: \"4c718c29-458b-43e8-979f-f636b17928e1\") " pod="openstack/ovn-controller-metrics-kjztv"
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.883566 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c718c29-458b-43e8-979f-f636b17928e1-config\") pod \"ovn-controller-metrics-kjztv\" (UID: \"4c718c29-458b-43e8-979f-f636b17928e1\") " pod="openstack/ovn-controller-metrics-kjztv"
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.885253 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a65dfa6-7553-4761-876e-326ed5175b85-dns-svc\") pod \"dnsmasq-dns-7fd796d7df-xvktw\" (UID: \"8a65dfa6-7553-4761-876e-326ed5175b85\") " pod="openstack/dnsmasq-dns-7fd796d7df-xvktw"
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.891480 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c718c29-458b-43e8-979f-f636b17928e1-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-kjztv\" (UID: \"4c718c29-458b-43e8-979f-f636b17928e1\") " pod="openstack/ovn-controller-metrics-kjztv"
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.904343 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c718c29-458b-43e8-979f-f636b17928e1-combined-ca-bundle\") pod \"ovn-controller-metrics-kjztv\" (UID: \"4c718c29-458b-43e8-979f-f636b17928e1\") " pod="openstack/ovn-controller-metrics-kjztv"
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.926382 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6nq5\" (UniqueName: \"kubernetes.io/projected/4c718c29-458b-43e8-979f-f636b17928e1-kube-api-access-b6nq5\") pod \"ovn-controller-metrics-kjztv\" (UID: \"4c718c29-458b-43e8-979f-f636b17928e1\") " pod="openstack/ovn-controller-metrics-kjztv"
Jan 30 17:14:07 crc kubenswrapper[4712]: I0130 17:14:07.937432 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcx9c\" (UniqueName: \"kubernetes.io/projected/8a65dfa6-7553-4761-876e-326ed5175b85-kube-api-access-bcx9c\") pod \"dnsmasq-dns-7fd796d7df-xvktw\" (UID: \"8a65dfa6-7553-4761-876e-326ed5175b85\") " pod="openstack/dnsmasq-dns-7fd796d7df-xvktw"
Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.018134 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-mcl9p"]
Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.025016 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0"
Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.051219 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-xvktw"
Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.072828 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-kjztv"
Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.081094 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-mrlvk"]
Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.086790 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-mrlvk"
Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.089568 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.135883 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-mrlvk"]
Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.197712 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3931ce9e-e449-4e24-b826-0d78e42d0b52-config\") pod \"dnsmasq-dns-86db49b7ff-mrlvk\" (UID: \"3931ce9e-e449-4e24-b826-0d78e42d0b52\") " pod="openstack/dnsmasq-dns-86db49b7ff-mrlvk"
Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.198128 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3931ce9e-e449-4e24-b826-0d78e42d0b52-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-mrlvk\" (UID: \"3931ce9e-e449-4e24-b826-0d78e42d0b52\") " pod="openstack/dnsmasq-dns-86db49b7ff-mrlvk"
Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.198157 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3931ce9e-e449-4e24-b826-0d78e42d0b52-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-mrlvk\" (UID: \"3931ce9e-e449-4e24-b826-0d78e42d0b52\") " pod="openstack/dnsmasq-dns-86db49b7ff-mrlvk"
Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.198243 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjgz8\" (UniqueName: \"kubernetes.io/projected/3931ce9e-e449-4e24-b826-0d78e42d0b52-kube-api-access-tjgz8\") pod \"dnsmasq-dns-86db49b7ff-mrlvk\" (UID: \"3931ce9e-e449-4e24-b826-0d78e42d0b52\") " pod="openstack/dnsmasq-dns-86db49b7ff-mrlvk"
Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.198272 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3931ce9e-e449-4e24-b826-0d78e42d0b52-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-mrlvk\" (UID: \"3931ce9e-e449-4e24-b826-0d78e42d0b52\") " pod="openstack/dnsmasq-dns-86db49b7ff-mrlvk"
Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.202588 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-hkhst"
Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.266471 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-hkhst"
Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.266943 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-hkhst" event={"ID":"0456e317-8ed6-456a-ba12-c46dc30f11a3","Type":"ContainerDied","Data":"f0f51bfa901a43270a0d6dc031dfd3150f47eea343d29cf45c019e9090e60a7c"}
Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.300829 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpknx\" (UniqueName: \"kubernetes.io/projected/0456e317-8ed6-456a-ba12-c46dc30f11a3-kube-api-access-bpknx\") pod \"0456e317-8ed6-456a-ba12-c46dc30f11a3\" (UID: \"0456e317-8ed6-456a-ba12-c46dc30f11a3\") "
Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.300903 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0456e317-8ed6-456a-ba12-c46dc30f11a3-config\") pod \"0456e317-8ed6-456a-ba12-c46dc30f11a3\" (UID: \"0456e317-8ed6-456a-ba12-c46dc30f11a3\") "
Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.301017 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0456e317-8ed6-456a-ba12-c46dc30f11a3-dns-svc\") pod \"0456e317-8ed6-456a-ba12-c46dc30f11a3\" (UID: \"0456e317-8ed6-456a-ba12-c46dc30f11a3\") "
Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.302107 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0456e317-8ed6-456a-ba12-c46dc30f11a3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0456e317-8ed6-456a-ba12-c46dc30f11a3" (UID: "0456e317-8ed6-456a-ba12-c46dc30f11a3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
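Teardown mirrors the mount pipeline in reverse: operationExecutor.UnmountVolume started, then UnmountVolume.TearDown succeeded (the OuterVolumeSpecName is the volume's name in the pod spec, the InnerVolumeSpecName the plugin-level name), and finally "Volume detached". Because every one of these entries embeds the pod UID both in the UniqueName and in the pod/UID fields, the full teardown of one pod can be reconstructed with a plain substring filter. A sketch using the UID of the dnsmasq-dns-57d769cc4f-hkhst pod from the entries above, reading the reflowed log from standard input:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const uid = "0456e317-8ed6-456a-ba12-c46dc30f11a3" // dnsmasq-dns-57d769cc4f-hkhst
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
        for sc.Scan() {
            if strings.Contains(sc.Text(), uid) {
                fmt.Println(sc.Text()) // every unmount/teardown/detach entry for this pod
            }
        }
    }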
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.303718 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3931ce9e-e449-4e24-b826-0d78e42d0b52-config\") pod \"dnsmasq-dns-86db49b7ff-mrlvk\" (UID: \"3931ce9e-e449-4e24-b826-0d78e42d0b52\") " pod="openstack/dnsmasq-dns-86db49b7ff-mrlvk" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.303777 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3931ce9e-e449-4e24-b826-0d78e42d0b52-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-mrlvk\" (UID: \"3931ce9e-e449-4e24-b826-0d78e42d0b52\") " pod="openstack/dnsmasq-dns-86db49b7ff-mrlvk" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.303876 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3931ce9e-e449-4e24-b826-0d78e42d0b52-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-mrlvk\" (UID: \"3931ce9e-e449-4e24-b826-0d78e42d0b52\") " pod="openstack/dnsmasq-dns-86db49b7ff-mrlvk" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.304039 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjgz8\" (UniqueName: \"kubernetes.io/projected/3931ce9e-e449-4e24-b826-0d78e42d0b52-kube-api-access-tjgz8\") pod \"dnsmasq-dns-86db49b7ff-mrlvk\" (UID: \"3931ce9e-e449-4e24-b826-0d78e42d0b52\") " pod="openstack/dnsmasq-dns-86db49b7ff-mrlvk" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.304082 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3931ce9e-e449-4e24-b826-0d78e42d0b52-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-mrlvk\" (UID: \"3931ce9e-e449-4e24-b826-0d78e42d0b52\") " pod="openstack/dnsmasq-dns-86db49b7ff-mrlvk" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.304202 4712 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0456e317-8ed6-456a-ba12-c46dc30f11a3-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.304213 4712 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0456e317-8ed6-456a-ba12-c46dc30f11a3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.304714 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3931ce9e-e449-4e24-b826-0d78e42d0b52-config\") pod \"dnsmasq-dns-86db49b7ff-mrlvk\" (UID: \"3931ce9e-e449-4e24-b826-0d78e42d0b52\") " pod="openstack/dnsmasq-dns-86db49b7ff-mrlvk" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.305486 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3931ce9e-e449-4e24-b826-0d78e42d0b52-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-mrlvk\" (UID: \"3931ce9e-e449-4e24-b826-0d78e42d0b52\") " pod="openstack/dnsmasq-dns-86db49b7ff-mrlvk" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.309089 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3931ce9e-e449-4e24-b826-0d78e42d0b52-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-mrlvk\" (UID: 
\"3931ce9e-e449-4e24-b826-0d78e42d0b52\") " pod="openstack/dnsmasq-dns-86db49b7ff-mrlvk" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.312433 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0456e317-8ed6-456a-ba12-c46dc30f11a3-kube-api-access-bpknx" (OuterVolumeSpecName: "kube-api-access-bpknx") pod "0456e317-8ed6-456a-ba12-c46dc30f11a3" (UID: "0456e317-8ed6-456a-ba12-c46dc30f11a3"). InnerVolumeSpecName "kube-api-access-bpknx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.315089 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3931ce9e-e449-4e24-b826-0d78e42d0b52-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-mrlvk\" (UID: \"3931ce9e-e449-4e24-b826-0d78e42d0b52\") " pod="openstack/dnsmasq-dns-86db49b7ff-mrlvk" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.335527 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.336112 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjgz8\" (UniqueName: \"kubernetes.io/projected/3931ce9e-e449-4e24-b826-0d78e42d0b52-kube-api-access-tjgz8\") pod \"dnsmasq-dns-86db49b7ff-mrlvk\" (UID: \"3931ce9e-e449-4e24-b826-0d78e42d0b52\") " pod="openstack/dnsmasq-dns-86db49b7ff-mrlvk" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.405292 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpknx\" (UniqueName: \"kubernetes.io/projected/0456e317-8ed6-456a-ba12-c46dc30f11a3-kube-api-access-bpknx\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.489552 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-mcl9p" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.491589 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-mrlvk" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.530158 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.531672 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.558561 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.561440 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-77wlq" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.562035 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.563346 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.564719 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.611644 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758-dns-svc\") pod \"ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758\" (UID: \"ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758\") " Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.611982 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758-config\") pod \"ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758\" (UID: \"ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758\") " Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.612223 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcqtk\" (UniqueName: \"kubernetes.io/projected/ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758-kube-api-access-gcqtk\") pod \"ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758\" (UID: \"ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758\") " Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.612550 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b017036-bac3-47fb-b6dc-97a3b85af99d-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1b017036-bac3-47fb-b6dc-97a3b85af99d\") " pod="openstack/ovn-northd-0" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.612691 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b017036-bac3-47fb-b6dc-97a3b85af99d-config\") pod \"ovn-northd-0\" (UID: \"1b017036-bac3-47fb-b6dc-97a3b85af99d\") " pod="openstack/ovn-northd-0" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.612830 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1b017036-bac3-47fb-b6dc-97a3b85af99d-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1b017036-bac3-47fb-b6dc-97a3b85af99d\") " pod="openstack/ovn-northd-0" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.612955 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjdzd\" (UniqueName: \"kubernetes.io/projected/1b017036-bac3-47fb-b6dc-97a3b85af99d-kube-api-access-hjdzd\") pod \"ovn-northd-0\" (UID: \"1b017036-bac3-47fb-b6dc-97a3b85af99d\") " pod="openstack/ovn-northd-0" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.613089 4712 reconciler_common.go:245] 
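The reflector "Caches populated" lines are the gate for what follows: secret- and configmap-backed volumes are resolved from the kubelet's informer caches, so the four ovnnorthd objects are announced (the dockercfg secret, the ovndbs cert, the scripts and config configmaps) immediately before the corresponding VerifyControllerAttachedVolume entries for ovn-northd-0. The same wait shows up in any client-go consumer; a minimal sketch of the pattern (not the kubelet's own code) using a shared informer factory scoped to the openstack namespace, with a hypothetical kubeconfig path:

    package main

    import (
        "log"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/cache"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        factory := informers.NewSharedInformerFactoryWithOptions(cs, 0, informers.WithNamespace("openstack"))
        secrets := factory.Core().V1().Secrets().Informer()
        stop := make(chan struct{})
        defer close(stop)
        factory.Start(stop)
        // Corresponds to the "Caches populated for *v1.Secret" reflector lines above.
        if !cache.WaitForCacheSync(stop, secrets.HasSynced) {
            log.Fatal("cache never synced")
        }
        log.Println("secret cache populated; volume resolution can proceed")
    }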
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1b017036-bac3-47fb-b6dc-97a3b85af99d-scripts\") pod \"ovn-northd-0\" (UID: \"1b017036-bac3-47fb-b6dc-97a3b85af99d\") " pod="openstack/ovn-northd-0" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.613281 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b017036-bac3-47fb-b6dc-97a3b85af99d-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1b017036-bac3-47fb-b6dc-97a3b85af99d\") " pod="openstack/ovn-northd-0" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.613393 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b017036-bac3-47fb-b6dc-97a3b85af99d-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1b017036-bac3-47fb-b6dc-97a3b85af99d\") " pod="openstack/ovn-northd-0" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.613484 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758-config" (OuterVolumeSpecName: "config") pod "ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758" (UID: "ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.613706 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758" (UID: "ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.625073 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758-kube-api-access-gcqtk" (OuterVolumeSpecName: "kube-api-access-gcqtk") pod "ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758" (UID: "ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758"). InnerVolumeSpecName "kube-api-access-gcqtk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.736142 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b017036-bac3-47fb-b6dc-97a3b85af99d-config\") pod \"ovn-northd-0\" (UID: \"1b017036-bac3-47fb-b6dc-97a3b85af99d\") " pod="openstack/ovn-northd-0" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.736282 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1b017036-bac3-47fb-b6dc-97a3b85af99d-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1b017036-bac3-47fb-b6dc-97a3b85af99d\") " pod="openstack/ovn-northd-0" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.736325 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjdzd\" (UniqueName: \"kubernetes.io/projected/1b017036-bac3-47fb-b6dc-97a3b85af99d-kube-api-access-hjdzd\") pod \"ovn-northd-0\" (UID: \"1b017036-bac3-47fb-b6dc-97a3b85af99d\") " pod="openstack/ovn-northd-0" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.736479 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1b017036-bac3-47fb-b6dc-97a3b85af99d-scripts\") pod \"ovn-northd-0\" (UID: \"1b017036-bac3-47fb-b6dc-97a3b85af99d\") " pod="openstack/ovn-northd-0" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.737073 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/1b017036-bac3-47fb-b6dc-97a3b85af99d-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"1b017036-bac3-47fb-b6dc-97a3b85af99d\") " pod="openstack/ovn-northd-0" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.737902 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b017036-bac3-47fb-b6dc-97a3b85af99d-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1b017036-bac3-47fb-b6dc-97a3b85af99d\") " pod="openstack/ovn-northd-0" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.737938 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b017036-bac3-47fb-b6dc-97a3b85af99d-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1b017036-bac3-47fb-b6dc-97a3b85af99d\") " pod="openstack/ovn-northd-0" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.737992 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b017036-bac3-47fb-b6dc-97a3b85af99d-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1b017036-bac3-47fb-b6dc-97a3b85af99d\") " pod="openstack/ovn-northd-0" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.739672 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/1b017036-bac3-47fb-b6dc-97a3b85af99d-scripts\") pod \"ovn-northd-0\" (UID: \"1b017036-bac3-47fb-b6dc-97a3b85af99d\") " pod="openstack/ovn-northd-0" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.750381 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcqtk\" (UniqueName: \"kubernetes.io/projected/ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758-kube-api-access-gcqtk\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:08 crc 
kubenswrapper[4712]: I0130 17:14:08.750515 4712 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.750526 4712 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.757247 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1b017036-bac3-47fb-b6dc-97a3b85af99d-config\") pod \"ovn-northd-0\" (UID: \"1b017036-bac3-47fb-b6dc-97a3b85af99d\") " pod="openstack/ovn-northd-0" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.761754 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b017036-bac3-47fb-b6dc-97a3b85af99d-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"1b017036-bac3-47fb-b6dc-97a3b85af99d\") " pod="openstack/ovn-northd-0" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.761860 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b017036-bac3-47fb-b6dc-97a3b85af99d-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"1b017036-bac3-47fb-b6dc-97a3b85af99d\") " pod="openstack/ovn-northd-0" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.765639 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b017036-bac3-47fb-b6dc-97a3b85af99d-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"1b017036-bac3-47fb-b6dc-97a3b85af99d\") " pod="openstack/ovn-northd-0" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.803105 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjdzd\" (UniqueName: \"kubernetes.io/projected/1b017036-bac3-47fb-b6dc-97a3b85af99d-kube-api-access-hjdzd\") pod \"ovn-northd-0\" (UID: \"1b017036-bac3-47fb-b6dc-97a3b85af99d\") " pod="openstack/ovn-northd-0" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.820136 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-hkhst"] Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.843980 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-hkhst"] Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.861730 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-xvktw"] Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.878034 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-kjztv"] Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.895265 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.922696 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.935204 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 30 17:14:08 crc kubenswrapper[4712]: I0130 17:14:08.942128 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 30 17:14:09 crc kubenswrapper[4712]: I0130 17:14:09.289236 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-mcl9p" event={"ID":"ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758","Type":"ContainerDied","Data":"3504f1664d2ebd1176e2e7a8e8defb8193be2fc5798f86fc3f9bb83f4f89eaf5"} Jan 30 17:14:09 crc kubenswrapper[4712]: I0130 17:14:09.289269 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-mcl9p" Jan 30 17:14:09 crc kubenswrapper[4712]: I0130 17:14:09.299997 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-xvktw" event={"ID":"8a65dfa6-7553-4761-876e-326ed5175b85","Type":"ContainerStarted","Data":"7007d8a7998b4bc19d0e3e9a26fed27c0d7fb71ab09ec9106ab697acb81a0b38"} Jan 30 17:14:09 crc kubenswrapper[4712]: I0130 17:14:09.315259 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-kjztv" event={"ID":"4c718c29-458b-43e8-979f-f636b17928e1","Type":"ContainerStarted","Data":"54e9eecce72d6a94238a1aac479202ca4aadeaf01cc01db02330ac5c617fcdd8"} Jan 30 17:14:09 crc kubenswrapper[4712]: I0130 17:14:09.343632 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 30 17:14:09 crc kubenswrapper[4712]: I0130 17:14:09.376084 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-mrlvk"] Jan 30 17:14:09 crc kubenswrapper[4712]: I0130 17:14:09.424079 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-mcl9p"] Jan 30 17:14:09 crc kubenswrapper[4712]: I0130 17:14:09.434686 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-mcl9p"] Jan 30 17:14:09 crc kubenswrapper[4712]: I0130 17:14:09.437586 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 30 17:14:09 crc kubenswrapper[4712]: I0130 17:14:09.528386 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 30 17:14:09 crc kubenswrapper[4712]: I0130 17:14:09.809292 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0456e317-8ed6-456a-ba12-c46dc30f11a3" path="/var/lib/kubelet/pods/0456e317-8ed6-456a-ba12-c46dc30f11a3/volumes" Jan 30 17:14:09 crc kubenswrapper[4712]: I0130 17:14:09.809696 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758" path="/var/lib/kubelet/pods/ee4dc480-bdc4-4526-8e4e-d6b5d7f8d758/volumes" Jan 30 17:14:10 crc kubenswrapper[4712]: I0130 17:14:10.322447 4712 generic.go:334] "Generic (PLEG): container finished" podID="8a65dfa6-7553-4761-876e-326ed5175b85" containerID="96205477f9b5b77c66c1b9c9a1616dbd2e3c253ccad9b9b43046c1404f47abed" exitCode=0 Jan 30 17:14:10 crc kubenswrapper[4712]: I0130 17:14:10.322637 4712 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/dnsmasq-dns-7fd796d7df-xvktw" event={"ID":"8a65dfa6-7553-4761-876e-326ed5175b85","Type":"ContainerDied","Data":"96205477f9b5b77c66c1b9c9a1616dbd2e3c253ccad9b9b43046c1404f47abed"} Jan 30 17:14:10 crc kubenswrapper[4712]: I0130 17:14:10.327894 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-kjztv" event={"ID":"4c718c29-458b-43e8-979f-f636b17928e1","Type":"ContainerStarted","Data":"f72b8905a4ee11eeafd94c890f45ce29ba1f1257b8322d0c6771c79a3975b58d"} Jan 30 17:14:10 crc kubenswrapper[4712]: I0130 17:14:10.330194 4712 generic.go:334] "Generic (PLEG): container finished" podID="3931ce9e-e449-4e24-b826-0d78e42d0b52" containerID="868b2484a897a37242e37ee0e0479cf6b14c9bf9f5210e9fdc12de3962137cb3" exitCode=0 Jan 30 17:14:10 crc kubenswrapper[4712]: I0130 17:14:10.330246 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-mrlvk" event={"ID":"3931ce9e-e449-4e24-b826-0d78e42d0b52","Type":"ContainerDied","Data":"868b2484a897a37242e37ee0e0479cf6b14c9bf9f5210e9fdc12de3962137cb3"} Jan 30 17:14:10 crc kubenswrapper[4712]: I0130 17:14:10.330268 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-mrlvk" event={"ID":"3931ce9e-e449-4e24-b826-0d78e42d0b52","Type":"ContainerStarted","Data":"fded3081b0168713cbdcd37ca0e5bb21770c89f89e617a10531ef7702780137f"} Jan 30 17:14:10 crc kubenswrapper[4712]: I0130 17:14:10.332658 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1b017036-bac3-47fb-b6dc-97a3b85af99d","Type":"ContainerStarted","Data":"6862f3300c08cba5fd0eff7adc8b30e0f0a2cd8f76bc7480600cd38262e21e9e"} Jan 30 17:14:10 crc kubenswrapper[4712]: I0130 17:14:10.394710 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-kjztv" podStartSLOduration=3.3946906869999998 podStartE2EDuration="3.394690687s" podCreationTimestamp="2026-01-30 17:14:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:14:10.392503295 +0000 UTC m=+1187.299512764" watchObservedRunningTime="2026-01-30 17:14:10.394690687 +0000 UTC m=+1187.301700156" Jan 30 17:14:11 crc kubenswrapper[4712]: I0130 17:14:11.175089 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-xvktw"] Jan 30 17:14:11 crc kubenswrapper[4712]: I0130 17:14:11.232065 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-nk4ll"] Jan 30 17:14:11 crc kubenswrapper[4712]: I0130 17:14:11.233905 4712 util.go:30] "No sandbox for pod can be found. 
Jan 30 17:14:11 crc kubenswrapper[4712]: I0130 17:14:11.233905 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-nk4ll"
Jan 30 17:14:11 crc kubenswrapper[4712]: I0130 17:14:11.259300 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-nk4ll"]
Jan 30 17:14:11 crc kubenswrapper[4712]: I0130 17:14:11.303569 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b972b675-2edc-44ba-bc15-aa835aeef29d-config\") pod \"dnsmasq-dns-698758b865-nk4ll\" (UID: \"b972b675-2edc-44ba-bc15-aa835aeef29d\") " pod="openstack/dnsmasq-dns-698758b865-nk4ll"
Jan 30 17:14:11 crc kubenswrapper[4712]: I0130 17:14:11.303658 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b972b675-2edc-44ba-bc15-aa835aeef29d-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-nk4ll\" (UID: \"b972b675-2edc-44ba-bc15-aa835aeef29d\") " pod="openstack/dnsmasq-dns-698758b865-nk4ll"
Jan 30 17:14:11 crc kubenswrapper[4712]: I0130 17:14:11.303702 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b972b675-2edc-44ba-bc15-aa835aeef29d-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-nk4ll\" (UID: \"b972b675-2edc-44ba-bc15-aa835aeef29d\") " pod="openstack/dnsmasq-dns-698758b865-nk4ll"
Jan 30 17:14:11 crc kubenswrapper[4712]: I0130 17:14:11.303727 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9496l\" (UniqueName: \"kubernetes.io/projected/b972b675-2edc-44ba-bc15-aa835aeef29d-kube-api-access-9496l\") pod \"dnsmasq-dns-698758b865-nk4ll\" (UID: \"b972b675-2edc-44ba-bc15-aa835aeef29d\") " pod="openstack/dnsmasq-dns-698758b865-nk4ll"
Jan 30 17:14:11 crc kubenswrapper[4712]: I0130 17:14:11.303864 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b972b675-2edc-44ba-bc15-aa835aeef29d-dns-svc\") pod \"dnsmasq-dns-698758b865-nk4ll\" (UID: \"b972b675-2edc-44ba-bc15-aa835aeef29d\") " pod="openstack/dnsmasq-dns-698758b865-nk4ll"
Jan 30 17:14:11 crc kubenswrapper[4712]: I0130 17:14:11.341864 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1b017036-bac3-47fb-b6dc-97a3b85af99d","Type":"ContainerStarted","Data":"55f0917b88abb6358e6a48964e0a58c6a881619c55d65faf87a0de74de8b9cb0"}
Jan 30 17:14:11 crc kubenswrapper[4712]: I0130 17:14:11.342214 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"1b017036-bac3-47fb-b6dc-97a3b85af99d","Type":"ContainerStarted","Data":"59e86b3180d9e9d575e68cda0151b2b1d11a3c26567dffc8ccd9953563417448"}
Jan 30 17:14:11 crc kubenswrapper[4712]: I0130 17:14:11.342237 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0"
Jan 30 17:14:11 crc kubenswrapper[4712]: I0130 17:14:11.344726 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-xvktw" event={"ID":"8a65dfa6-7553-4761-876e-326ed5175b85","Type":"ContainerStarted","Data":"dc327f651e8ca5cb0e165be770016a5ef166a8aa7b394cde5eabf6795c72174f"}
Jan 30 17:14:11 crc kubenswrapper[4712]: I0130 17:14:11.344813 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7fd796d7df-xvktw"
Jan 30 17:14:11 crc kubenswrapper[4712]: I0130 17:14:11.346614 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-mrlvk" event={"ID":"3931ce9e-e449-4e24-b826-0d78e42d0b52","Type":"ContainerStarted","Data":"ecb11af706d96cdb7575c5d704f2a66c9a511f5773a9ea1a64cf54f7cdff62d7"}
Jan 30 17:14:11 crc kubenswrapper[4712]: I0130 17:14:11.361867 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.278134626 podStartE2EDuration="3.361844066s" podCreationTimestamp="2026-01-30 17:14:08 +0000 UTC" firstStartedPulling="2026-01-30 17:14:09.377849805 +0000 UTC m=+1186.284859274" lastFinishedPulling="2026-01-30 17:14:10.461559245 +0000 UTC m=+1187.368568714" observedRunningTime="2026-01-30 17:14:11.359628923 +0000 UTC m=+1188.266638392" watchObservedRunningTime="2026-01-30 17:14:11.361844066 +0000 UTC m=+1188.268853535"
Jan 30 17:14:11 crc kubenswrapper[4712]: I0130 17:14:11.404834 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b972b675-2edc-44ba-bc15-aa835aeef29d-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-nk4ll\" (UID: \"b972b675-2edc-44ba-bc15-aa835aeef29d\") " pod="openstack/dnsmasq-dns-698758b865-nk4ll"
Jan 30 17:14:11 crc kubenswrapper[4712]: I0130 17:14:11.404892 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9496l\" (UniqueName: \"kubernetes.io/projected/b972b675-2edc-44ba-bc15-aa835aeef29d-kube-api-access-9496l\") pod \"dnsmasq-dns-698758b865-nk4ll\" (UID: \"b972b675-2edc-44ba-bc15-aa835aeef29d\") " pod="openstack/dnsmasq-dns-698758b865-nk4ll"
Jan 30 17:14:11 crc kubenswrapper[4712]: I0130 17:14:11.405083 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b972b675-2edc-44ba-bc15-aa835aeef29d-dns-svc\") pod \"dnsmasq-dns-698758b865-nk4ll\" (UID: \"b972b675-2edc-44ba-bc15-aa835aeef29d\") " pod="openstack/dnsmasq-dns-698758b865-nk4ll"
Jan 30 17:14:11 crc kubenswrapper[4712]: I0130 17:14:11.405163 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b972b675-2edc-44ba-bc15-aa835aeef29d-config\") pod \"dnsmasq-dns-698758b865-nk4ll\" (UID: \"b972b675-2edc-44ba-bc15-aa835aeef29d\") " pod="openstack/dnsmasq-dns-698758b865-nk4ll"
Jan 30 17:14:11 crc kubenswrapper[4712]: I0130 17:14:11.405243 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b972b675-2edc-44ba-bc15-aa835aeef29d-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-nk4ll\" (UID: \"b972b675-2edc-44ba-bc15-aa835aeef29d\") " pod="openstack/dnsmasq-dns-698758b865-nk4ll"
Jan 30 17:14:11 crc kubenswrapper[4712]: I0130 17:14:11.406740 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b972b675-2edc-44ba-bc15-aa835aeef29d-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-nk4ll\" (UID: \"b972b675-2edc-44ba-bc15-aa835aeef29d\") " pod="openstack/dnsmasq-dns-698758b865-nk4ll"
Jan 30 17:14:11 crc kubenswrapper[4712]: I0130 17:14:11.407004 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b972b675-2edc-44ba-bc15-aa835aeef29d-dns-svc\") pod \"dnsmasq-dns-698758b865-nk4ll\" (UID: \"b972b675-2edc-44ba-bc15-aa835aeef29d\") " pod="openstack/dnsmasq-dns-698758b865-nk4ll"
Jan 30 17:14:11 crc kubenswrapper[4712]: I0130 17:14:11.408536 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b972b675-2edc-44ba-bc15-aa835aeef29d-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-nk4ll\" (UID: \"b972b675-2edc-44ba-bc15-aa835aeef29d\") " pod="openstack/dnsmasq-dns-698758b865-nk4ll"
Jan 30 17:14:11 crc kubenswrapper[4712]: I0130 17:14:11.409191 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b972b675-2edc-44ba-bc15-aa835aeef29d-config\") pod \"dnsmasq-dns-698758b865-nk4ll\" (UID: \"b972b675-2edc-44ba-bc15-aa835aeef29d\") " pod="openstack/dnsmasq-dns-698758b865-nk4ll"
Jan 30 17:14:11 crc kubenswrapper[4712]: I0130 17:14:11.421080 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7fd796d7df-xvktw" podStartSLOduration=3.855258019 podStartE2EDuration="4.421058129s" podCreationTimestamp="2026-01-30 17:14:07 +0000 UTC" firstStartedPulling="2026-01-30 17:14:08.882072708 +0000 UTC m=+1185.789082177" lastFinishedPulling="2026-01-30 17:14:09.447872818 +0000 UTC m=+1186.354882287" observedRunningTime="2026-01-30 17:14:11.381781355 +0000 UTC m=+1188.288790844" watchObservedRunningTime="2026-01-30 17:14:11.421058129 +0000 UTC m=+1188.328067598"
Jan 30 17:14:11 crc kubenswrapper[4712]: I0130 17:14:11.456694 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9496l\" (UniqueName: \"kubernetes.io/projected/b972b675-2edc-44ba-bc15-aa835aeef29d-kube-api-access-9496l\") pod \"dnsmasq-dns-698758b865-nk4ll\" (UID: \"b972b675-2edc-44ba-bc15-aa835aeef29d\") " pod="openstack/dnsmasq-dns-698758b865-nk4ll"
Jan 30 17:14:11 crc kubenswrapper[4712]: I0130 17:14:11.543569 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Jan 30 17:14:11 crc kubenswrapper[4712]: I0130 17:14:11.558722 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-nk4ll"
Jan 30 17:14:11 crc kubenswrapper[4712]: I0130 17:14:11.571656 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-mrlvk" podStartSLOduration=3.16148869 podStartE2EDuration="3.571635199s" podCreationTimestamp="2026-01-30 17:14:08 +0000 UTC" firstStartedPulling="2026-01-30 17:14:09.369355801 +0000 UTC m=+1186.276365270" lastFinishedPulling="2026-01-30 17:14:09.77950231 +0000 UTC m=+1186.686511779" observedRunningTime="2026-01-30 17:14:11.417796771 +0000 UTC m=+1188.324806240" watchObservedRunningTime="2026-01-30 17:14:11.571635199 +0000 UTC m=+1188.478644668"
Jan 30 17:14:11 crc kubenswrapper[4712]: I0130 17:14:11.703179 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Jan 30 17:14:11 crc kubenswrapper[4712]: I0130 17:14:11.845433 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-nk4ll"]
Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.350217 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"]
Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.356504 4712 generic.go:334] "Generic (PLEG): container finished" podID="b972b675-2edc-44ba-bc15-aa835aeef29d" containerID="f20ab14196d9afc0b69831cf6e4bd5e2b276a9fcac3986b2ad49e7c9ae1f4113" exitCode=0
Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.357564 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7fd796d7df-xvktw" podUID="8a65dfa6-7553-4761-876e-326ed5175b85" containerName="dnsmasq-dns" containerID="cri-o://dc327f651e8ca5cb0e165be770016a5ef166a8aa7b394cde5eabf6795c72174f" gracePeriod=10
Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.359157 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-nk4ll" event={"ID":"b972b675-2edc-44ba-bc15-aa835aeef29d","Type":"ContainerDied","Data":"f20ab14196d9afc0b69831cf6e4bd5e2b276a9fcac3986b2ad49e7c9ae1f4113"}
Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.359208 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-nk4ll" event={"ID":"b972b675-2edc-44ba-bc15-aa835aeef29d","Type":"ContainerStarted","Data":"43dcdb45b593e0a7efb4cc41da1d1364106f57a31f84f636cde638e001c70517"}
Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.359327 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.360049 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-mrlvk" Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.370415 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.370703 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.370935 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.371055 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-726br" Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.388517 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.522456 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b46c7f41-9ce5-4625-98d5-74bafa8bd0de-etc-swift\") pod \"swift-storage-0\" (UID: \"b46c7f41-9ce5-4625-98d5-74bafa8bd0de\") " pod="openstack/swift-storage-0" Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.522572 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cf4hg\" (UniqueName: \"kubernetes.io/projected/b46c7f41-9ce5-4625-98d5-74bafa8bd0de-kube-api-access-cf4hg\") pod \"swift-storage-0\" (UID: \"b46c7f41-9ce5-4625-98d5-74bafa8bd0de\") " pod="openstack/swift-storage-0" Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.522644 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b46c7f41-9ce5-4625-98d5-74bafa8bd0de-lock\") pod \"swift-storage-0\" (UID: \"b46c7f41-9ce5-4625-98d5-74bafa8bd0de\") " pod="openstack/swift-storage-0" Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.522689 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b46c7f41-9ce5-4625-98d5-74bafa8bd0de-cache\") pod \"swift-storage-0\" (UID: \"b46c7f41-9ce5-4625-98d5-74bafa8bd0de\") " pod="openstack/swift-storage-0" Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.522716 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"b46c7f41-9ce5-4625-98d5-74bafa8bd0de\") " pod="openstack/swift-storage-0" Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.522754 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b46c7f41-9ce5-4625-98d5-74bafa8bd0de-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"b46c7f41-9ce5-4625-98d5-74bafa8bd0de\") " pod="openstack/swift-storage-0" Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.625971 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b46c7f41-9ce5-4625-98d5-74bafa8bd0de-lock\") pod \"swift-storage-0\" (UID: 
\"b46c7f41-9ce5-4625-98d5-74bafa8bd0de\") " pod="openstack/swift-storage-0" Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.626201 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b46c7f41-9ce5-4625-98d5-74bafa8bd0de-cache\") pod \"swift-storage-0\" (UID: \"b46c7f41-9ce5-4625-98d5-74bafa8bd0de\") " pod="openstack/swift-storage-0" Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.626247 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"b46c7f41-9ce5-4625-98d5-74bafa8bd0de\") " pod="openstack/swift-storage-0" Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.626286 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b46c7f41-9ce5-4625-98d5-74bafa8bd0de-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"b46c7f41-9ce5-4625-98d5-74bafa8bd0de\") " pod="openstack/swift-storage-0" Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.626478 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b46c7f41-9ce5-4625-98d5-74bafa8bd0de-etc-swift\") pod \"swift-storage-0\" (UID: \"b46c7f41-9ce5-4625-98d5-74bafa8bd0de\") " pod="openstack/swift-storage-0" Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.626548 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cf4hg\" (UniqueName: \"kubernetes.io/projected/b46c7f41-9ce5-4625-98d5-74bafa8bd0de-kube-api-access-cf4hg\") pod \"swift-storage-0\" (UID: \"b46c7f41-9ce5-4625-98d5-74bafa8bd0de\") " pod="openstack/swift-storage-0" Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.626995 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/b46c7f41-9ce5-4625-98d5-74bafa8bd0de-lock\") pod \"swift-storage-0\" (UID: \"b46c7f41-9ce5-4625-98d5-74bafa8bd0de\") " pod="openstack/swift-storage-0" Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.627374 4712 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"b46c7f41-9ce5-4625-98d5-74bafa8bd0de\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/swift-storage-0" Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.628226 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/b46c7f41-9ce5-4625-98d5-74bafa8bd0de-cache\") pod \"swift-storage-0\" (UID: \"b46c7f41-9ce5-4625-98d5-74bafa8bd0de\") " pod="openstack/swift-storage-0" Jan 30 17:14:12 crc kubenswrapper[4712]: E0130 17:14:12.628347 4712 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 17:14:12 crc kubenswrapper[4712]: E0130 17:14:12.628374 4712 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 17:14:12 crc kubenswrapper[4712]: E0130 17:14:12.628421 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b46c7f41-9ce5-4625-98d5-74bafa8bd0de-etc-swift podName:b46c7f41-9ce5-4625-98d5-74bafa8bd0de nodeName:}" 
failed. No retries permitted until 2026-01-30 17:14:13.128402161 +0000 UTC m=+1190.035411630 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b46c7f41-9ce5-4625-98d5-74bafa8bd0de-etc-swift") pod "swift-storage-0" (UID: "b46c7f41-9ce5-4625-98d5-74bafa8bd0de") : configmap "swift-ring-files" not found Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.632341 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b46c7f41-9ce5-4625-98d5-74bafa8bd0de-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"b46c7f41-9ce5-4625-98d5-74bafa8bd0de\") " pod="openstack/swift-storage-0" Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.684066 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"swift-storage-0\" (UID: \"b46c7f41-9ce5-4625-98d5-74bafa8bd0de\") " pod="openstack/swift-storage-0" Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.692032 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cf4hg\" (UniqueName: \"kubernetes.io/projected/b46c7f41-9ce5-4625-98d5-74bafa8bd0de-kube-api-access-cf4hg\") pod \"swift-storage-0\" (UID: \"b46c7f41-9ce5-4625-98d5-74bafa8bd0de\") " pod="openstack/swift-storage-0" Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.792075 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-xvktw" Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.935877 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8a65dfa6-7553-4761-876e-326ed5175b85-dns-svc\") pod \"8a65dfa6-7553-4761-876e-326ed5175b85\" (UID: \"8a65dfa6-7553-4761-876e-326ed5175b85\") " Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.936040 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a65dfa6-7553-4761-876e-326ed5175b85-config\") pod \"8a65dfa6-7553-4761-876e-326ed5175b85\" (UID: \"8a65dfa6-7553-4761-876e-326ed5175b85\") " Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.936639 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcx9c\" (UniqueName: \"kubernetes.io/projected/8a65dfa6-7553-4761-876e-326ed5175b85-kube-api-access-bcx9c\") pod \"8a65dfa6-7553-4761-876e-326ed5175b85\" (UID: \"8a65dfa6-7553-4761-876e-326ed5175b85\") " Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.936824 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a65dfa6-7553-4761-876e-326ed5175b85-ovsdbserver-nb\") pod \"8a65dfa6-7553-4761-876e-326ed5175b85\" (UID: \"8a65dfa6-7553-4761-876e-326ed5175b85\") " Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.958094 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a65dfa6-7553-4761-876e-326ed5175b85-kube-api-access-bcx9c" (OuterVolumeSpecName: "kube-api-access-bcx9c") pod "8a65dfa6-7553-4761-876e-326ed5175b85" (UID: "8a65dfa6-7553-4761-876e-326ed5175b85"). InnerVolumeSpecName "kube-api-access-bcx9c". 
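etc-swift is a projected volume whose source is the swift-ring-files ConfigMap, produced later by the swift-ring-rebalance job. Projected sources are materialized together, so a single missing ConfigMap fails the whole SetUp and keeps swift-storage-0 pending until the ConfigMap exists. Roughly the volume shape behind these entries, reconstructed from the names in the log (the structure is an assumption, not the operator's actual manifest):

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
)

// etcSwiftVolume sketches a projected volume with a single ConfigMap
// source. Because projected volumes are set up atomically, a missing
// source fails the entire mount, which is exactly the error above.
func etcSwiftVolume() corev1.Volume {
	return corev1.Volume{
		Name: "etc-swift",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{
							Name: "swift-ring-files",
						},
					},
				}},
			},
		},
	}
}

func main() { _ = etcSwiftVolume() }
```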
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.964771 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-9fw4k"] Jan 30 17:14:12 crc kubenswrapper[4712]: E0130 17:14:12.965153 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a65dfa6-7553-4761-876e-326ed5175b85" containerName="init" Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.965169 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a65dfa6-7553-4761-876e-326ed5175b85" containerName="init" Jan 30 17:14:12 crc kubenswrapper[4712]: E0130 17:14:12.965196 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a65dfa6-7553-4761-876e-326ed5175b85" containerName="dnsmasq-dns" Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.965202 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a65dfa6-7553-4761-876e-326ed5175b85" containerName="dnsmasq-dns" Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.965351 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a65dfa6-7553-4761-876e-326ed5175b85" containerName="dnsmasq-dns" Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.965879 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-9fw4k" Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.983502 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.983772 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.983886 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 30 17:14:12 crc kubenswrapper[4712]: I0130 17:14:12.989374 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-9fw4k"] Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.038811 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b6cda925-aa9c-401f-90bb-158535201367-ring-data-devices\") pod \"swift-ring-rebalance-9fw4k\" (UID: \"b6cda925-aa9c-401f-90bb-158535201367\") " pod="openstack/swift-ring-rebalance-9fw4k" Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.038877 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b6cda925-aa9c-401f-90bb-158535201367-etc-swift\") pod \"swift-ring-rebalance-9fw4k\" (UID: \"b6cda925-aa9c-401f-90bb-158535201367\") " pod="openstack/swift-ring-rebalance-9fw4k" Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.038898 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6cda925-aa9c-401f-90bb-158535201367-combined-ca-bundle\") pod \"swift-ring-rebalance-9fw4k\" (UID: \"b6cda925-aa9c-401f-90bb-158535201367\") " pod="openstack/swift-ring-rebalance-9fw4k" Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.038930 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b6cda925-aa9c-401f-90bb-158535201367-dispersionconf\") pod \"swift-ring-rebalance-9fw4k\" (UID: 
\"b6cda925-aa9c-401f-90bb-158535201367\") " pod="openstack/swift-ring-rebalance-9fw4k" Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.038978 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b6cda925-aa9c-401f-90bb-158535201367-scripts\") pod \"swift-ring-rebalance-9fw4k\" (UID: \"b6cda925-aa9c-401f-90bb-158535201367\") " pod="openstack/swift-ring-rebalance-9fw4k" Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.038999 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b6cda925-aa9c-401f-90bb-158535201367-swiftconf\") pod \"swift-ring-rebalance-9fw4k\" (UID: \"b6cda925-aa9c-401f-90bb-158535201367\") " pod="openstack/swift-ring-rebalance-9fw4k" Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.039018 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzp8j\" (UniqueName: \"kubernetes.io/projected/b6cda925-aa9c-401f-90bb-158535201367-kube-api-access-fzp8j\") pod \"swift-ring-rebalance-9fw4k\" (UID: \"b6cda925-aa9c-401f-90bb-158535201367\") " pod="openstack/swift-ring-rebalance-9fw4k" Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.039067 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcx9c\" (UniqueName: \"kubernetes.io/projected/8a65dfa6-7553-4761-876e-326ed5175b85-kube-api-access-bcx9c\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.051470 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a65dfa6-7553-4761-876e-326ed5175b85-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8a65dfa6-7553-4761-876e-326ed5175b85" (UID: "8a65dfa6-7553-4761-876e-326ed5175b85"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.069326 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a65dfa6-7553-4761-876e-326ed5175b85-config" (OuterVolumeSpecName: "config") pod "8a65dfa6-7553-4761-876e-326ed5175b85" (UID: "8a65dfa6-7553-4761-876e-326ed5175b85"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.076915 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a65dfa6-7553-4761-876e-326ed5175b85-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8a65dfa6-7553-4761-876e-326ed5175b85" (UID: "8a65dfa6-7553-4761-876e-326ed5175b85"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.140753 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b6cda925-aa9c-401f-90bb-158535201367-scripts\") pod \"swift-ring-rebalance-9fw4k\" (UID: \"b6cda925-aa9c-401f-90bb-158535201367\") " pod="openstack/swift-ring-rebalance-9fw4k" Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.141104 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b6cda925-aa9c-401f-90bb-158535201367-swiftconf\") pod \"swift-ring-rebalance-9fw4k\" (UID: \"b6cda925-aa9c-401f-90bb-158535201367\") " pod="openstack/swift-ring-rebalance-9fw4k" Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.141580 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzp8j\" (UniqueName: \"kubernetes.io/projected/b6cda925-aa9c-401f-90bb-158535201367-kube-api-access-fzp8j\") pod \"swift-ring-rebalance-9fw4k\" (UID: \"b6cda925-aa9c-401f-90bb-158535201367\") " pod="openstack/swift-ring-rebalance-9fw4k" Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.141720 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b6cda925-aa9c-401f-90bb-158535201367-ring-data-devices\") pod \"swift-ring-rebalance-9fw4k\" (UID: \"b6cda925-aa9c-401f-90bb-158535201367\") " pod="openstack/swift-ring-rebalance-9fw4k" Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.141774 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b46c7f41-9ce5-4625-98d5-74bafa8bd0de-etc-swift\") pod \"swift-storage-0\" (UID: \"b46c7f41-9ce5-4625-98d5-74bafa8bd0de\") " pod="openstack/swift-storage-0" Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.141817 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b6cda925-aa9c-401f-90bb-158535201367-etc-swift\") pod \"swift-ring-rebalance-9fw4k\" (UID: \"b6cda925-aa9c-401f-90bb-158535201367\") " pod="openstack/swift-ring-rebalance-9fw4k" Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.141843 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6cda925-aa9c-401f-90bb-158535201367-combined-ca-bundle\") pod \"swift-ring-rebalance-9fw4k\" (UID: \"b6cda925-aa9c-401f-90bb-158535201367\") " pod="openstack/swift-ring-rebalance-9fw4k" Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.141921 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b6cda925-aa9c-401f-90bb-158535201367-dispersionconf\") pod \"swift-ring-rebalance-9fw4k\" (UID: \"b6cda925-aa9c-401f-90bb-158535201367\") " pod="openstack/swift-ring-rebalance-9fw4k" Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.142045 4712 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8a65dfa6-7553-4761-876e-326ed5175b85-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.142062 4712 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/8a65dfa6-7553-4761-876e-326ed5175b85-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.142072 4712 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8a65dfa6-7553-4761-876e-326ed5175b85-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.142070 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b6cda925-aa9c-401f-90bb-158535201367-scripts\") pod \"swift-ring-rebalance-9fw4k\" (UID: \"b6cda925-aa9c-401f-90bb-158535201367\") " pod="openstack/swift-ring-rebalance-9fw4k" Jan 30 17:14:13 crc kubenswrapper[4712]: E0130 17:14:13.142369 4712 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.142519 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b6cda925-aa9c-401f-90bb-158535201367-etc-swift\") pod \"swift-ring-rebalance-9fw4k\" (UID: \"b6cda925-aa9c-401f-90bb-158535201367\") " pod="openstack/swift-ring-rebalance-9fw4k" Jan 30 17:14:13 crc kubenswrapper[4712]: E0130 17:14:13.142536 4712 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 17:14:13 crc kubenswrapper[4712]: E0130 17:14:13.142632 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b46c7f41-9ce5-4625-98d5-74bafa8bd0de-etc-swift podName:b46c7f41-9ce5-4625-98d5-74bafa8bd0de nodeName:}" failed. No retries permitted until 2026-01-30 17:14:14.142610721 +0000 UTC m=+1191.049620190 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b46c7f41-9ce5-4625-98d5-74bafa8bd0de-etc-swift") pod "swift-storage-0" (UID: "b46c7f41-9ce5-4625-98d5-74bafa8bd0de") : configmap "swift-ring-files" not found Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.143247 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b6cda925-aa9c-401f-90bb-158535201367-ring-data-devices\") pod \"swift-ring-rebalance-9fw4k\" (UID: \"b6cda925-aa9c-401f-90bb-158535201367\") " pod="openstack/swift-ring-rebalance-9fw4k" Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.146619 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6cda925-aa9c-401f-90bb-158535201367-combined-ca-bundle\") pod \"swift-ring-rebalance-9fw4k\" (UID: \"b6cda925-aa9c-401f-90bb-158535201367\") " pod="openstack/swift-ring-rebalance-9fw4k" Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.147065 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b6cda925-aa9c-401f-90bb-158535201367-dispersionconf\") pod \"swift-ring-rebalance-9fw4k\" (UID: \"b6cda925-aa9c-401f-90bb-158535201367\") " pod="openstack/swift-ring-rebalance-9fw4k" Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.149115 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b6cda925-aa9c-401f-90bb-158535201367-swiftconf\") pod \"swift-ring-rebalance-9fw4k\" (UID: \"b6cda925-aa9c-401f-90bb-158535201367\") " pod="openstack/swift-ring-rebalance-9fw4k" Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.169741 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzp8j\" (UniqueName: \"kubernetes.io/projected/b6cda925-aa9c-401f-90bb-158535201367-kube-api-access-fzp8j\") pod \"swift-ring-rebalance-9fw4k\" (UID: \"b6cda925-aa9c-401f-90bb-158535201367\") " pod="openstack/swift-ring-rebalance-9fw4k" Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.366331 4712 generic.go:334] "Generic (PLEG): container finished" podID="8a65dfa6-7553-4761-876e-326ed5175b85" containerID="dc327f651e8ca5cb0e165be770016a5ef166a8aa7b394cde5eabf6795c72174f" exitCode=0 Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.366374 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-xvktw" event={"ID":"8a65dfa6-7553-4761-876e-326ed5175b85","Type":"ContainerDied","Data":"dc327f651e8ca5cb0e165be770016a5ef166a8aa7b394cde5eabf6795c72174f"} Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.367533 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7fd796d7df-xvktw" event={"ID":"8a65dfa6-7553-4761-876e-326ed5175b85","Type":"ContainerDied","Data":"7007d8a7998b4bc19d0e3e9a26fed27c0d7fb71ab09ec9106ab697acb81a0b38"} Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.367658 4712 scope.go:117] "RemoveContainer" containerID="dc327f651e8ca5cb0e165be770016a5ef166a8aa7b394cde5eabf6795c72174f" Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.366460 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7fd796d7df-xvktw" Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.371820 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-nk4ll" event={"ID":"b972b675-2edc-44ba-bc15-aa835aeef29d","Type":"ContainerStarted","Data":"cd49cca49c962514975207ccdbc9e67b1400d099d5f63a47cf5027e0d9e4230c"} Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.371944 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-nk4ll" Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.389825 4712 scope.go:117] "RemoveContainer" containerID="96205477f9b5b77c66c1b9c9a1616dbd2e3c253ccad9b9b43046c1404f47abed" Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.405252 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-nk4ll" podStartSLOduration=2.405225794 podStartE2EDuration="2.405225794s" podCreationTimestamp="2026-01-30 17:14:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:14:13.394837024 +0000 UTC m=+1190.301846503" watchObservedRunningTime="2026-01-30 17:14:13.405225794 +0000 UTC m=+1190.312235263" Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.416647 4712 scope.go:117] "RemoveContainer" containerID="dc327f651e8ca5cb0e165be770016a5ef166a8aa7b394cde5eabf6795c72174f" Jan 30 17:14:13 crc kubenswrapper[4712]: E0130 17:14:13.418328 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc327f651e8ca5cb0e165be770016a5ef166a8aa7b394cde5eabf6795c72174f\": container with ID starting with dc327f651e8ca5cb0e165be770016a5ef166a8aa7b394cde5eabf6795c72174f not found: ID does not exist" containerID="dc327f651e8ca5cb0e165be770016a5ef166a8aa7b394cde5eabf6795c72174f" Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.418376 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc327f651e8ca5cb0e165be770016a5ef166a8aa7b394cde5eabf6795c72174f"} err="failed to get container status \"dc327f651e8ca5cb0e165be770016a5ef166a8aa7b394cde5eabf6795c72174f\": rpc error: code = NotFound desc = could not find container \"dc327f651e8ca5cb0e165be770016a5ef166a8aa7b394cde5eabf6795c72174f\": container with ID starting with dc327f651e8ca5cb0e165be770016a5ef166a8aa7b394cde5eabf6795c72174f not found: ID does not exist" Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.418443 4712 scope.go:117] "RemoveContainer" containerID="96205477f9b5b77c66c1b9c9a1616dbd2e3c253ccad9b9b43046c1404f47abed" Jan 30 17:14:13 crc kubenswrapper[4712]: E0130 17:14:13.419403 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96205477f9b5b77c66c1b9c9a1616dbd2e3c253ccad9b9b43046c1404f47abed\": container with ID starting with 96205477f9b5b77c66c1b9c9a1616dbd2e3c253ccad9b9b43046c1404f47abed not found: ID does not exist" containerID="96205477f9b5b77c66c1b9c9a1616dbd2e3c253ccad9b9b43046c1404f47abed" Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.419441 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96205477f9b5b77c66c1b9c9a1616dbd2e3c253ccad9b9b43046c1404f47abed"} err="failed to get container status \"96205477f9b5b77c66c1b9c9a1616dbd2e3c253ccad9b9b43046c1404f47abed\": rpc error: code = 
NotFound desc = could not find container \"96205477f9b5b77c66c1b9c9a1616dbd2e3c253ccad9b9b43046c1404f47abed\": container with ID starting with 96205477f9b5b77c66c1b9c9a1616dbd2e3c253ccad9b9b43046c1404f47abed not found: ID does not exist" Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.427618 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-9fw4k" Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.428712 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-xvktw"] Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.443255 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7fd796d7df-xvktw"] Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.809852 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a65dfa6-7553-4761-876e-326ed5175b85" path="/var/lib/kubelet/pods/8a65dfa6-7553-4761-876e-326ed5175b85/volumes" Jan 30 17:14:13 crc kubenswrapper[4712]: I0130 17:14:13.858770 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-9fw4k"] Jan 30 17:14:13 crc kubenswrapper[4712]: W0130 17:14:13.863284 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6cda925_aa9c_401f_90bb_158535201367.slice/crio-c058c7118ef9cf09c0556e46d14c2cc3d643f62de7e43ceaa6191f158d8481a5 WatchSource:0}: Error finding container c058c7118ef9cf09c0556e46d14c2cc3d643f62de7e43ceaa6191f158d8481a5: Status 404 returned error can't find the container with id c058c7118ef9cf09c0556e46d14c2cc3d643f62de7e43ceaa6191f158d8481a5 Jan 30 17:14:14 crc kubenswrapper[4712]: I0130 17:14:14.157400 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b46c7f41-9ce5-4625-98d5-74bafa8bd0de-etc-swift\") pod \"swift-storage-0\" (UID: \"b46c7f41-9ce5-4625-98d5-74bafa8bd0de\") " pod="openstack/swift-storage-0" Jan 30 17:14:14 crc kubenswrapper[4712]: E0130 17:14:14.157835 4712 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 17:14:14 crc kubenswrapper[4712]: E0130 17:14:14.157993 4712 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 17:14:14 crc kubenswrapper[4712]: E0130 17:14:14.158055 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b46c7f41-9ce5-4625-98d5-74bafa8bd0de-etc-swift podName:b46c7f41-9ce5-4625-98d5-74bafa8bd0de nodeName:}" failed. No retries permitted until 2026-01-30 17:14:16.158036559 +0000 UTC m=+1193.065046028 (durationBeforeRetry 2s). 
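The RemoveContainer / "ID does not exist" pairs and the cAdvisor 404 above are benign races: by the time the follow-up status call runs, the container is already gone. The common idiom is to treat NotFound as success during cleanup. A minimal sketch of that check for a gRPC-backed runtime client (generic, not kubelet's code):

```go
package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// ignoreNotFound treats gRPC NotFound as success: during cleanup, a missing
// container means the work is already done, exactly the situation the
// "could not find container ... ID does not exist" entries describe.
func ignoreNotFound(err error) error {
	if status.Code(err) == codes.NotFound {
		return nil
	}
	return err
}

func main() {
	err := status.Error(codes.NotFound, "could not find container \"dc327f65...\"")
	fmt.Println(ignoreNotFound(err), ignoreNotFound(errors.New("other failure")))
}
```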
Jan 30 17:14:14 crc kubenswrapper[4712]: I0130 17:14:14.157400 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b46c7f41-9ce5-4625-98d5-74bafa8bd0de-etc-swift\") pod \"swift-storage-0\" (UID: \"b46c7f41-9ce5-4625-98d5-74bafa8bd0de\") " pod="openstack/swift-storage-0"
Jan 30 17:14:14 crc kubenswrapper[4712]: E0130 17:14:14.157835 4712 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 30 17:14:14 crc kubenswrapper[4712]: E0130 17:14:14.157993 4712 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 30 17:14:14 crc kubenswrapper[4712]: E0130 17:14:14.158055 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b46c7f41-9ce5-4625-98d5-74bafa8bd0de-etc-swift podName:b46c7f41-9ce5-4625-98d5-74bafa8bd0de nodeName:}" failed. No retries permitted until 2026-01-30 17:14:16.158036559 +0000 UTC m=+1193.065046028 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b46c7f41-9ce5-4625-98d5-74bafa8bd0de-etc-swift") pod "swift-storage-0" (UID: "b46c7f41-9ce5-4625-98d5-74bafa8bd0de") : configmap "swift-ring-files" not found
Jan 30 17:14:14 crc kubenswrapper[4712]: I0130 17:14:14.389270 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e88ea344-4eb8-4174-9ce7-855aa6afed59","Type":"ContainerStarted","Data":"d4d0184806d44cb107882cf97cfdd22f429f4ff19dc32d6419d8f4820d31d23f"}
Jan 30 17:14:14 crc kubenswrapper[4712]: I0130 17:14:14.390420 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 30 17:14:14 crc kubenswrapper[4712]: I0130 17:14:14.391921 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-9fw4k" event={"ID":"b6cda925-aa9c-401f-90bb-158535201367","Type":"ContainerStarted","Data":"c058c7118ef9cf09c0556e46d14c2cc3d643f62de7e43ceaa6191f158d8481a5"}
Jan 30 17:14:14 crc kubenswrapper[4712]: I0130 17:14:14.408051 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=3.26571361 podStartE2EDuration="44.408032949s" podCreationTimestamp="2026-01-30 17:13:30 +0000 UTC" firstStartedPulling="2026-01-30 17:13:32.095934491 +0000 UTC m=+1149.002943960" lastFinishedPulling="2026-01-30 17:14:13.23825383 +0000 UTC m=+1190.145263299" observedRunningTime="2026-01-30 17:14:14.405739854 +0000 UTC m=+1191.312749333" watchObservedRunningTime="2026-01-30 17:14:14.408032949 +0000 UTC m=+1191.315042418"
Jan 30 17:14:15 crc kubenswrapper[4712]: I0130 17:14:15.753389 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-2dg7m"]
Jan 30 17:14:15 crc kubenswrapper[4712]: I0130 17:14:15.757183 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2dg7m"
Jan 30 17:14:15 crc kubenswrapper[4712]: I0130 17:14:15.760899 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Jan 30 17:14:15 crc kubenswrapper[4712]: I0130 17:14:15.764748 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-2dg7m"]
Jan 30 17:14:15 crc kubenswrapper[4712]: I0130 17:14:15.885981 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z82xs\" (UniqueName: \"kubernetes.io/projected/be0a59e5-e2ac-498a-9dbb-61dfd886ce38-kube-api-access-z82xs\") pod \"root-account-create-update-2dg7m\" (UID: \"be0a59e5-e2ac-498a-9dbb-61dfd886ce38\") " pod="openstack/root-account-create-update-2dg7m"
Jan 30 17:14:15 crc kubenswrapper[4712]: I0130 17:14:15.886432 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be0a59e5-e2ac-498a-9dbb-61dfd886ce38-operator-scripts\") pod \"root-account-create-update-2dg7m\" (UID: \"be0a59e5-e2ac-498a-9dbb-61dfd886ce38\") " pod="openstack/root-account-create-update-2dg7m"
Jan 30 17:14:15 crc kubenswrapper[4712]: I0130 17:14:15.987939 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z82xs\" (UniqueName: \"kubernetes.io/projected/be0a59e5-e2ac-498a-9dbb-61dfd886ce38-kube-api-access-z82xs\") pod \"root-account-create-update-2dg7m\" (UID: \"be0a59e5-e2ac-498a-9dbb-61dfd886ce38\") " pod="openstack/root-account-create-update-2dg7m"
Jan 30 17:14:15 crc kubenswrapper[4712]: I0130 17:14:15.988021 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be0a59e5-e2ac-498a-9dbb-61dfd886ce38-operator-scripts\") pod \"root-account-create-update-2dg7m\" (UID: \"be0a59e5-e2ac-498a-9dbb-61dfd886ce38\") " pod="openstack/root-account-create-update-2dg7m"
Jan 30 17:14:15 crc kubenswrapper[4712]: I0130 17:14:15.988887 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be0a59e5-e2ac-498a-9dbb-61dfd886ce38-operator-scripts\") pod \"root-account-create-update-2dg7m\" (UID: \"be0a59e5-e2ac-498a-9dbb-61dfd886ce38\") " pod="openstack/root-account-create-update-2dg7m"
Jan 30 17:14:16 crc kubenswrapper[4712]: I0130 17:14:16.009590 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z82xs\" (UniqueName: \"kubernetes.io/projected/be0a59e5-e2ac-498a-9dbb-61dfd886ce38-kube-api-access-z82xs\") pod \"root-account-create-update-2dg7m\" (UID: \"be0a59e5-e2ac-498a-9dbb-61dfd886ce38\") " pod="openstack/root-account-create-update-2dg7m"
Jan 30 17:14:16 crc kubenswrapper[4712]: I0130 17:14:16.110752 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2dg7m"
Jan 30 17:14:16 crc kubenswrapper[4712]: I0130 17:14:16.191250 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b46c7f41-9ce5-4625-98d5-74bafa8bd0de-etc-swift\") pod \"swift-storage-0\" (UID: \"b46c7f41-9ce5-4625-98d5-74bafa8bd0de\") " pod="openstack/swift-storage-0"
Jan 30 17:14:16 crc kubenswrapper[4712]: E0130 17:14:16.191481 4712 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 30 17:14:16 crc kubenswrapper[4712]: E0130 17:14:16.191509 4712 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 30 17:14:16 crc kubenswrapper[4712]: E0130 17:14:16.191566 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b46c7f41-9ce5-4625-98d5-74bafa8bd0de-etc-swift podName:b46c7f41-9ce5-4625-98d5-74bafa8bd0de nodeName:}" failed. No retries permitted until 2026-01-30 17:14:20.191548791 +0000 UTC m=+1197.098558260 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b46c7f41-9ce5-4625-98d5-74bafa8bd0de-etc-swift") pod "swift-storage-0" (UID: "b46c7f41-9ce5-4625-98d5-74bafa8bd0de") : configmap "swift-ring-files" not found
Need to start a new one" pod="openstack/keystone-db-create-wv76z" Jan 30 17:14:18 crc kubenswrapper[4712]: I0130 17:14:18.393233 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-wv76z"] Jan 30 17:14:18 crc kubenswrapper[4712]: I0130 17:14:18.430233 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afdb21ea-b35a-4413-b25a-f8e0fcf10c13-operator-scripts\") pod \"keystone-db-create-wv76z\" (UID: \"afdb21ea-b35a-4413-b25a-f8e0fcf10c13\") " pod="openstack/keystone-db-create-wv76z" Jan 30 17:14:18 crc kubenswrapper[4712]: I0130 17:14:18.430324 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49f48\" (UniqueName: \"kubernetes.io/projected/afdb21ea-b35a-4413-b25a-f8e0fcf10c13-kube-api-access-49f48\") pod \"keystone-db-create-wv76z\" (UID: \"afdb21ea-b35a-4413-b25a-f8e0fcf10c13\") " pod="openstack/keystone-db-create-wv76z" Jan 30 17:14:18 crc kubenswrapper[4712]: I0130 17:14:18.435611 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2dg7m" event={"ID":"be0a59e5-e2ac-498a-9dbb-61dfd886ce38","Type":"ContainerStarted","Data":"0b8da8be5294af16dc372027943eb73c6f0adbfba94a362c430a5c105cb8ce35"} Jan 30 17:14:18 crc kubenswrapper[4712]: I0130 17:14:18.435791 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2dg7m" event={"ID":"be0a59e5-e2ac-498a-9dbb-61dfd886ce38","Type":"ContainerStarted","Data":"72727c51b58caf4a3fa26b5b14c23f473aaca7fdfa97a8481dd2fedbfc89e7a1"} Jan 30 17:14:18 crc kubenswrapper[4712]: I0130 17:14:18.461608 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-2dg7m" podStartSLOduration=3.461588497 podStartE2EDuration="3.461588497s" podCreationTimestamp="2026-01-30 17:14:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:14:18.453723548 +0000 UTC m=+1195.360733017" watchObservedRunningTime="2026-01-30 17:14:18.461588497 +0000 UTC m=+1195.368597966" Jan 30 17:14:18 crc kubenswrapper[4712]: I0130 17:14:18.493996 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-mrlvk" Jan 30 17:14:18 crc kubenswrapper[4712]: I0130 17:14:18.522316 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-73dc-account-create-update-675c8"] Jan 30 17:14:18 crc kubenswrapper[4712]: I0130 17:14:18.523618 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-73dc-account-create-update-675c8" Jan 30 17:14:18 crc kubenswrapper[4712]: I0130 17:14:18.530023 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 30 17:14:18 crc kubenswrapper[4712]: I0130 17:14:18.532347 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afdb21ea-b35a-4413-b25a-f8e0fcf10c13-operator-scripts\") pod \"keystone-db-create-wv76z\" (UID: \"afdb21ea-b35a-4413-b25a-f8e0fcf10c13\") " pod="openstack/keystone-db-create-wv76z" Jan 30 17:14:18 crc kubenswrapper[4712]: I0130 17:14:18.532499 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49f48\" (UniqueName: \"kubernetes.io/projected/afdb21ea-b35a-4413-b25a-f8e0fcf10c13-kube-api-access-49f48\") pod \"keystone-db-create-wv76z\" (UID: \"afdb21ea-b35a-4413-b25a-f8e0fcf10c13\") " pod="openstack/keystone-db-create-wv76z" Jan 30 17:14:18 crc kubenswrapper[4712]: I0130 17:14:18.534576 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afdb21ea-b35a-4413-b25a-f8e0fcf10c13-operator-scripts\") pod \"keystone-db-create-wv76z\" (UID: \"afdb21ea-b35a-4413-b25a-f8e0fcf10c13\") " pod="openstack/keystone-db-create-wv76z" Jan 30 17:14:18 crc kubenswrapper[4712]: I0130 17:14:18.568449 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-73dc-account-create-update-675c8"] Jan 30 17:14:18 crc kubenswrapper[4712]: I0130 17:14:18.573670 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49f48\" (UniqueName: \"kubernetes.io/projected/afdb21ea-b35a-4413-b25a-f8e0fcf10c13-kube-api-access-49f48\") pod \"keystone-db-create-wv76z\" (UID: \"afdb21ea-b35a-4413-b25a-f8e0fcf10c13\") " pod="openstack/keystone-db-create-wv76z" Jan 30 17:14:18 crc kubenswrapper[4712]: I0130 17:14:18.633961 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnm6h\" (UniqueName: \"kubernetes.io/projected/96165653-9d73-4013-afb2-f922fc4d1eed-kube-api-access-qnm6h\") pod \"keystone-73dc-account-create-update-675c8\" (UID: \"96165653-9d73-4013-afb2-f922fc4d1eed\") " pod="openstack/keystone-73dc-account-create-update-675c8" Jan 30 17:14:18 crc kubenswrapper[4712]: I0130 17:14:18.634355 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96165653-9d73-4013-afb2-f922fc4d1eed-operator-scripts\") pod \"keystone-73dc-account-create-update-675c8\" (UID: \"96165653-9d73-4013-afb2-f922fc4d1eed\") " pod="openstack/keystone-73dc-account-create-update-675c8" Jan 30 17:14:18 crc kubenswrapper[4712]: I0130 17:14:18.713062 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-wv76z" Jan 30 17:14:18 crc kubenswrapper[4712]: I0130 17:14:18.736063 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnm6h\" (UniqueName: \"kubernetes.io/projected/96165653-9d73-4013-afb2-f922fc4d1eed-kube-api-access-qnm6h\") pod \"keystone-73dc-account-create-update-675c8\" (UID: \"96165653-9d73-4013-afb2-f922fc4d1eed\") " pod="openstack/keystone-73dc-account-create-update-675c8" Jan 30 17:14:18 crc kubenswrapper[4712]: I0130 17:14:18.736397 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96165653-9d73-4013-afb2-f922fc4d1eed-operator-scripts\") pod \"keystone-73dc-account-create-update-675c8\" (UID: \"96165653-9d73-4013-afb2-f922fc4d1eed\") " pod="openstack/keystone-73dc-account-create-update-675c8" Jan 30 17:14:18 crc kubenswrapper[4712]: I0130 17:14:18.737313 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96165653-9d73-4013-afb2-f922fc4d1eed-operator-scripts\") pod \"keystone-73dc-account-create-update-675c8\" (UID: \"96165653-9d73-4013-afb2-f922fc4d1eed\") " pod="openstack/keystone-73dc-account-create-update-675c8" Jan 30 17:14:18 crc kubenswrapper[4712]: I0130 17:14:18.755760 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnm6h\" (UniqueName: \"kubernetes.io/projected/96165653-9d73-4013-afb2-f922fc4d1eed-kube-api-access-qnm6h\") pod \"keystone-73dc-account-create-update-675c8\" (UID: \"96165653-9d73-4013-afb2-f922fc4d1eed\") " pod="openstack/keystone-73dc-account-create-update-675c8" Jan 30 17:14:18 crc kubenswrapper[4712]: I0130 17:14:18.790425 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-kvjrp"] Jan 30 17:14:18 crc kubenswrapper[4712]: I0130 17:14:18.791710 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-kvjrp" Jan 30 17:14:18 crc kubenswrapper[4712]: I0130 17:14:18.804711 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-kvjrp"] Jan 30 17:14:18 crc kubenswrapper[4712]: I0130 17:14:18.838227 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e13e69b-0a9c-4100-a869-67d199b76f55-operator-scripts\") pod \"placement-db-create-kvjrp\" (UID: \"3e13e69b-0a9c-4100-a869-67d199b76f55\") " pod="openstack/placement-db-create-kvjrp" Jan 30 17:14:18 crc kubenswrapper[4712]: I0130 17:14:18.838475 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwvpd\" (UniqueName: \"kubernetes.io/projected/3e13e69b-0a9c-4100-a869-67d199b76f55-kube-api-access-qwvpd\") pod \"placement-db-create-kvjrp\" (UID: \"3e13e69b-0a9c-4100-a869-67d199b76f55\") " pod="openstack/placement-db-create-kvjrp" Jan 30 17:14:18 crc kubenswrapper[4712]: I0130 17:14:18.864865 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-55c7-account-create-update-kz29l"] Jan 30 17:14:18 crc kubenswrapper[4712]: I0130 17:14:18.871941 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-55c7-account-create-update-kz29l" Jan 30 17:14:18 crc kubenswrapper[4712]: I0130 17:14:18.875543 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 30 17:14:18 crc kubenswrapper[4712]: I0130 17:14:18.884059 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-73dc-account-create-update-675c8" Jan 30 17:14:18 crc kubenswrapper[4712]: I0130 17:14:18.903972 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-55c7-account-create-update-kz29l"] Jan 30 17:14:18 crc kubenswrapper[4712]: I0130 17:14:18.942056 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e13e69b-0a9c-4100-a869-67d199b76f55-operator-scripts\") pod \"placement-db-create-kvjrp\" (UID: \"3e13e69b-0a9c-4100-a869-67d199b76f55\") " pod="openstack/placement-db-create-kvjrp" Jan 30 17:14:18 crc kubenswrapper[4712]: I0130 17:14:18.942127 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwvpd\" (UniqueName: \"kubernetes.io/projected/3e13e69b-0a9c-4100-a869-67d199b76f55-kube-api-access-qwvpd\") pod \"placement-db-create-kvjrp\" (UID: \"3e13e69b-0a9c-4100-a869-67d199b76f55\") " pod="openstack/placement-db-create-kvjrp" Jan 30 17:14:18 crc kubenswrapper[4712]: I0130 17:14:18.942218 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a6d2018-2c94-4c5f-8a8a-03c69bfac444-operator-scripts\") pod \"placement-55c7-account-create-update-kz29l\" (UID: \"3a6d2018-2c94-4c5f-8a8a-03c69bfac444\") " pod="openstack/placement-55c7-account-create-update-kz29l" Jan 30 17:14:18 crc kubenswrapper[4712]: I0130 17:14:18.942264 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfstm\" (UniqueName: \"kubernetes.io/projected/3a6d2018-2c94-4c5f-8a8a-03c69bfac444-kube-api-access-lfstm\") pod \"placement-55c7-account-create-update-kz29l\" (UID: \"3a6d2018-2c94-4c5f-8a8a-03c69bfac444\") " pod="openstack/placement-55c7-account-create-update-kz29l" Jan 30 17:14:18 crc kubenswrapper[4712]: I0130 17:14:18.943366 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e13e69b-0a9c-4100-a869-67d199b76f55-operator-scripts\") pod \"placement-db-create-kvjrp\" (UID: \"3e13e69b-0a9c-4100-a869-67d199b76f55\") " pod="openstack/placement-db-create-kvjrp" Jan 30 17:14:18 crc kubenswrapper[4712]: I0130 17:14:18.968970 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwvpd\" (UniqueName: \"kubernetes.io/projected/3e13e69b-0a9c-4100-a869-67d199b76f55-kube-api-access-qwvpd\") pod \"placement-db-create-kvjrp\" (UID: \"3e13e69b-0a9c-4100-a869-67d199b76f55\") " pod="openstack/placement-db-create-kvjrp" Jan 30 17:14:19 crc kubenswrapper[4712]: I0130 17:14:19.043737 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a6d2018-2c94-4c5f-8a8a-03c69bfac444-operator-scripts\") pod \"placement-55c7-account-create-update-kz29l\" (UID: \"3a6d2018-2c94-4c5f-8a8a-03c69bfac444\") " pod="openstack/placement-55c7-account-create-update-kz29l" Jan 30 17:14:19 crc kubenswrapper[4712]: I0130 17:14:19.043901 4712 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfstm\" (UniqueName: \"kubernetes.io/projected/3a6d2018-2c94-4c5f-8a8a-03c69bfac444-kube-api-access-lfstm\") pod \"placement-55c7-account-create-update-kz29l\" (UID: \"3a6d2018-2c94-4c5f-8a8a-03c69bfac444\") " pod="openstack/placement-55c7-account-create-update-kz29l" Jan 30 17:14:19 crc kubenswrapper[4712]: I0130 17:14:19.044842 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a6d2018-2c94-4c5f-8a8a-03c69bfac444-operator-scripts\") pod \"placement-55c7-account-create-update-kz29l\" (UID: \"3a6d2018-2c94-4c5f-8a8a-03c69bfac444\") " pod="openstack/placement-55c7-account-create-update-kz29l" Jan 30 17:14:19 crc kubenswrapper[4712]: I0130 17:14:19.126902 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfstm\" (UniqueName: \"kubernetes.io/projected/3a6d2018-2c94-4c5f-8a8a-03c69bfac444-kube-api-access-lfstm\") pod \"placement-55c7-account-create-update-kz29l\" (UID: \"3a6d2018-2c94-4c5f-8a8a-03c69bfac444\") " pod="openstack/placement-55c7-account-create-update-kz29l" Jan 30 17:14:19 crc kubenswrapper[4712]: I0130 17:14:19.155561 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-kvjrp" Jan 30 17:14:19 crc kubenswrapper[4712]: I0130 17:14:19.158441 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-c85vb"] Jan 30 17:14:19 crc kubenswrapper[4712]: I0130 17:14:19.159772 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-c85vb" Jan 30 17:14:19 crc kubenswrapper[4712]: I0130 17:14:19.178554 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-c85vb"] Jan 30 17:14:19 crc kubenswrapper[4712]: I0130 17:14:19.255037 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pblm\" (UniqueName: \"kubernetes.io/projected/40f78f2d-d7fe-4199-853a-b45c352c93a5-kube-api-access-4pblm\") pod \"glance-db-create-c85vb\" (UID: \"40f78f2d-d7fe-4199-853a-b45c352c93a5\") " pod="openstack/glance-db-create-c85vb" Jan 30 17:14:19 crc kubenswrapper[4712]: I0130 17:14:19.255114 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40f78f2d-d7fe-4199-853a-b45c352c93a5-operator-scripts\") pod \"glance-db-create-c85vb\" (UID: \"40f78f2d-d7fe-4199-853a-b45c352c93a5\") " pod="openstack/glance-db-create-c85vb" Jan 30 17:14:19 crc kubenswrapper[4712]: I0130 17:14:19.280730 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-be6c-account-create-update-x29l7"] Jan 30 17:14:19 crc kubenswrapper[4712]: I0130 17:14:19.281897 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-be6c-account-create-update-x29l7" Jan 30 17:14:19 crc kubenswrapper[4712]: I0130 17:14:19.286158 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 30 17:14:19 crc kubenswrapper[4712]: I0130 17:14:19.286624 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-55c7-account-create-update-kz29l" Jan 30 17:14:19 crc kubenswrapper[4712]: I0130 17:14:19.290747 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-be6c-account-create-update-x29l7"] Jan 30 17:14:19 crc kubenswrapper[4712]: I0130 17:14:19.356227 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40f78f2d-d7fe-4199-853a-b45c352c93a5-operator-scripts\") pod \"glance-db-create-c85vb\" (UID: \"40f78f2d-d7fe-4199-853a-b45c352c93a5\") " pod="openstack/glance-db-create-c85vb" Jan 30 17:14:19 crc kubenswrapper[4712]: I0130 17:14:19.356289 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ae6aebe-2ff5-42e3-bfd1-48b0b2b579c8-operator-scripts\") pod \"glance-be6c-account-create-update-x29l7\" (UID: \"0ae6aebe-2ff5-42e3-bfd1-48b0b2b579c8\") " pod="openstack/glance-be6c-account-create-update-x29l7" Jan 30 17:14:19 crc kubenswrapper[4712]: I0130 17:14:19.356386 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chs8n\" (UniqueName: \"kubernetes.io/projected/0ae6aebe-2ff5-42e3-bfd1-48b0b2b579c8-kube-api-access-chs8n\") pod \"glance-be6c-account-create-update-x29l7\" (UID: \"0ae6aebe-2ff5-42e3-bfd1-48b0b2b579c8\") " pod="openstack/glance-be6c-account-create-update-x29l7" Jan 30 17:14:19 crc kubenswrapper[4712]: I0130 17:14:19.356457 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4pblm\" (UniqueName: \"kubernetes.io/projected/40f78f2d-d7fe-4199-853a-b45c352c93a5-kube-api-access-4pblm\") pod \"glance-db-create-c85vb\" (UID: \"40f78f2d-d7fe-4199-853a-b45c352c93a5\") " pod="openstack/glance-db-create-c85vb" Jan 30 17:14:19 crc kubenswrapper[4712]: I0130 17:14:19.357330 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40f78f2d-d7fe-4199-853a-b45c352c93a5-operator-scripts\") pod \"glance-db-create-c85vb\" (UID: \"40f78f2d-d7fe-4199-853a-b45c352c93a5\") " pod="openstack/glance-db-create-c85vb" Jan 30 17:14:19 crc kubenswrapper[4712]: I0130 17:14:19.376400 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4pblm\" (UniqueName: \"kubernetes.io/projected/40f78f2d-d7fe-4199-853a-b45c352c93a5-kube-api-access-4pblm\") pod \"glance-db-create-c85vb\" (UID: \"40f78f2d-d7fe-4199-853a-b45c352c93a5\") " pod="openstack/glance-db-create-c85vb" Jan 30 17:14:19 crc kubenswrapper[4712]: I0130 17:14:19.453038 4712 generic.go:334] "Generic (PLEG): container finished" podID="be0a59e5-e2ac-498a-9dbb-61dfd886ce38" containerID="0b8da8be5294af16dc372027943eb73c6f0adbfba94a362c430a5c105cb8ce35" exitCode=0 Jan 30 17:14:19 crc kubenswrapper[4712]: I0130 17:14:19.453508 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2dg7m" event={"ID":"be0a59e5-e2ac-498a-9dbb-61dfd886ce38","Type":"ContainerDied","Data":"0b8da8be5294af16dc372027943eb73c6f0adbfba94a362c430a5c105cb8ce35"} Jan 30 17:14:19 crc kubenswrapper[4712]: I0130 17:14:19.458916 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-9fw4k" event={"ID":"b6cda925-aa9c-401f-90bb-158535201367","Type":"ContainerStarted","Data":"ed108d5288775912260bfe543e6779ad396fbd11bd6fa08fe9ec6e4ac29ae508"} Jan 
30 17:14:19 crc kubenswrapper[4712]: I0130 17:14:19.460112 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ae6aebe-2ff5-42e3-bfd1-48b0b2b579c8-operator-scripts\") pod \"glance-be6c-account-create-update-x29l7\" (UID: \"0ae6aebe-2ff5-42e3-bfd1-48b0b2b579c8\") " pod="openstack/glance-be6c-account-create-update-x29l7" Jan 30 17:14:19 crc kubenswrapper[4712]: I0130 17:14:19.460281 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chs8n\" (UniqueName: \"kubernetes.io/projected/0ae6aebe-2ff5-42e3-bfd1-48b0b2b579c8-kube-api-access-chs8n\") pod \"glance-be6c-account-create-update-x29l7\" (UID: \"0ae6aebe-2ff5-42e3-bfd1-48b0b2b579c8\") " pod="openstack/glance-be6c-account-create-update-x29l7" Jan 30 17:14:19 crc kubenswrapper[4712]: I0130 17:14:19.461393 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ae6aebe-2ff5-42e3-bfd1-48b0b2b579c8-operator-scripts\") pod \"glance-be6c-account-create-update-x29l7\" (UID: \"0ae6aebe-2ff5-42e3-bfd1-48b0b2b579c8\") " pod="openstack/glance-be6c-account-create-update-x29l7" Jan 30 17:14:19 crc kubenswrapper[4712]: I0130 17:14:19.468880 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-wv76z"] Jan 30 17:14:19 crc kubenswrapper[4712]: W0130 17:14:19.470618 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafdb21ea_b35a_4413_b25a_f8e0fcf10c13.slice/crio-715e020c92472b2f898313a61be5d24dfbad21adc96bad7c7dd4eed8976a3b53 WatchSource:0}: Error finding container 715e020c92472b2f898313a61be5d24dfbad21adc96bad7c7dd4eed8976a3b53: Status 404 returned error can't find the container with id 715e020c92472b2f898313a61be5d24dfbad21adc96bad7c7dd4eed8976a3b53 Jan 30 17:14:19 crc kubenswrapper[4712]: I0130 17:14:19.489366 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chs8n\" (UniqueName: \"kubernetes.io/projected/0ae6aebe-2ff5-42e3-bfd1-48b0b2b579c8-kube-api-access-chs8n\") pod \"glance-be6c-account-create-update-x29l7\" (UID: \"0ae6aebe-2ff5-42e3-bfd1-48b0b2b579c8\") " pod="openstack/glance-be6c-account-create-update-x29l7" Jan 30 17:14:19 crc kubenswrapper[4712]: I0130 17:14:19.498484 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-c85vb" Jan 30 17:14:19 crc kubenswrapper[4712]: I0130 17:14:19.506180 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-9fw4k" podStartSLOduration=2.944992236 podStartE2EDuration="7.506157216s" podCreationTimestamp="2026-01-30 17:14:12 +0000 UTC" firstStartedPulling="2026-01-30 17:14:13.86600557 +0000 UTC m=+1190.773015039" lastFinishedPulling="2026-01-30 17:14:18.42717055 +0000 UTC m=+1195.334180019" observedRunningTime="2026-01-30 17:14:19.498057131 +0000 UTC m=+1196.405066600" watchObservedRunningTime="2026-01-30 17:14:19.506157216 +0000 UTC m=+1196.413166685" Jan 30 17:14:19 crc kubenswrapper[4712]: I0130 17:14:19.605663 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-73dc-account-create-update-675c8"] Jan 30 17:14:19 crc kubenswrapper[4712]: I0130 17:14:19.606117 4712 util.go:30] "No sandbox for pod can be found. 
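
[annotation] The pod_startup_latency_tracker record above for swift-ring-rebalance-9fw4k is internally consistent: podStartSLOduration is the end-to-end startup time minus the image-pull window, i.e. 7.506157216s - (17:14:18.42717055 - 17:14:13.86600557 = 4.56116498s) = 2.944992236s, exactly the logged value. A minimal standalone Go sketch of that arithmetic, using only the timestamps printed in the record (not kubelet code):

    package main

    import (
        "fmt"
        "time"
    )

    const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

    func ts(s string) time.Time {
        t, err := time.Parse(layout, s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        // Timestamps copied from the tracker record for swift-ring-rebalance-9fw4k.
        created := ts("2026-01-30 17:14:12 +0000 UTC")
        firstPull := ts("2026-01-30 17:14:13.86600557 +0000 UTC")
        lastPull := ts("2026-01-30 17:14:18.42717055 +0000 UTC")
        running := ts("2026-01-30 17:14:19.506157216 +0000 UTC")

        e2e := running.Sub(created)          // podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // startup time excluding the image-pull window
        fmt.Println(e2e, slo)                // 7.506157216s 2.944992236s
    }

(Compare the glance-be6c record further down, where both pull timestamps are the zero sentinel "0001-01-01 00:00:00 +0000 UTC" and SLO and E2E durations are therefore equal.)
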
Need to start a new one" pod="openstack/glance-be6c-account-create-update-x29l7" Jan 30 17:14:19 crc kubenswrapper[4712]: I0130 17:14:19.738994 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-kvjrp"] Jan 30 17:14:19 crc kubenswrapper[4712]: I0130 17:14:19.851754 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-55c7-account-create-update-kz29l"] Jan 30 17:14:20 crc kubenswrapper[4712]: E0130 17:14:20.087588 4712 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podafdb21ea_b35a_4413_b25a_f8e0fcf10c13.slice/crio-5c423648df8ac34e33c902d64166e29dfe51de0b8347638f425a8fce7cdc5e66.scope\": RecentStats: unable to find data in memory cache]" Jan 30 17:14:20 crc kubenswrapper[4712]: I0130 17:14:20.097499 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-c85vb"] Jan 30 17:14:20 crc kubenswrapper[4712]: W0130 17:14:20.107071 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40f78f2d_d7fe_4199_853a_b45c352c93a5.slice/crio-4422ca86d3b41e79f2b7613a66b6e0f35e2c52c85a2d69fe08c3790a3ff66683 WatchSource:0}: Error finding container 4422ca86d3b41e79f2b7613a66b6e0f35e2c52c85a2d69fe08c3790a3ff66683: Status 404 returned error can't find the container with id 4422ca86d3b41e79f2b7613a66b6e0f35e2c52c85a2d69fe08c3790a3ff66683 Jan 30 17:14:20 crc kubenswrapper[4712]: I0130 17:14:20.254293 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-be6c-account-create-update-x29l7"] Jan 30 17:14:20 crc kubenswrapper[4712]: W0130 17:14:20.255082 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0ae6aebe_2ff5_42e3_bfd1_48b0b2b579c8.slice/crio-a1de635e16e2a4f6fb9aa607c82093ae50c33d53888882ffa521a73624b0eed1 WatchSource:0}: Error finding container a1de635e16e2a4f6fb9aa607c82093ae50c33d53888882ffa521a73624b0eed1: Status 404 returned error can't find the container with id a1de635e16e2a4f6fb9aa607c82093ae50c33d53888882ffa521a73624b0eed1 Jan 30 17:14:20 crc kubenswrapper[4712]: I0130 17:14:20.274560 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b46c7f41-9ce5-4625-98d5-74bafa8bd0de-etc-swift\") pod \"swift-storage-0\" (UID: \"b46c7f41-9ce5-4625-98d5-74bafa8bd0de\") " pod="openstack/swift-storage-0" Jan 30 17:14:20 crc kubenswrapper[4712]: E0130 17:14:20.274808 4712 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 17:14:20 crc kubenswrapper[4712]: E0130 17:14:20.274844 4712 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 17:14:20 crc kubenswrapper[4712]: E0130 17:14:20.274908 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b46c7f41-9ce5-4625-98d5-74bafa8bd0de-etc-swift podName:b46c7f41-9ce5-4625-98d5-74bafa8bd0de nodeName:}" failed. No retries permitted until 2026-01-30 17:14:28.274888315 +0000 UTC m=+1205.181897784 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b46c7f41-9ce5-4625-98d5-74bafa8bd0de-etc-swift") pod "swift-storage-0" (UID: "b46c7f41-9ce5-4625-98d5-74bafa8bd0de") : configmap "swift-ring-files" not found Jan 30 17:14:20 crc kubenswrapper[4712]: I0130 17:14:20.467224 4712 generic.go:334] "Generic (PLEG): container finished" podID="3a6d2018-2c94-4c5f-8a8a-03c69bfac444" containerID="84e344f6c464576030ecbde14be96325e966b006ad94b1a3323a65a08650dfdb" exitCode=0 Jan 30 17:14:20 crc kubenswrapper[4712]: I0130 17:14:20.467310 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55c7-account-create-update-kz29l" event={"ID":"3a6d2018-2c94-4c5f-8a8a-03c69bfac444","Type":"ContainerDied","Data":"84e344f6c464576030ecbde14be96325e966b006ad94b1a3323a65a08650dfdb"} Jan 30 17:14:20 crc kubenswrapper[4712]: I0130 17:14:20.467340 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55c7-account-create-update-kz29l" event={"ID":"3a6d2018-2c94-4c5f-8a8a-03c69bfac444","Type":"ContainerStarted","Data":"c547263736448d857113f1baf324708a1da1f2a3bc77012a04c7ea85bf182a6b"} Jan 30 17:14:20 crc kubenswrapper[4712]: I0130 17:14:20.468749 4712 generic.go:334] "Generic (PLEG): container finished" podID="afdb21ea-b35a-4413-b25a-f8e0fcf10c13" containerID="5c423648df8ac34e33c902d64166e29dfe51de0b8347638f425a8fce7cdc5e66" exitCode=0 Jan 30 17:14:20 crc kubenswrapper[4712]: I0130 17:14:20.468820 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wv76z" event={"ID":"afdb21ea-b35a-4413-b25a-f8e0fcf10c13","Type":"ContainerDied","Data":"5c423648df8ac34e33c902d64166e29dfe51de0b8347638f425a8fce7cdc5e66"} Jan 30 17:14:20 crc kubenswrapper[4712]: I0130 17:14:20.468847 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wv76z" event={"ID":"afdb21ea-b35a-4413-b25a-f8e0fcf10c13","Type":"ContainerStarted","Data":"715e020c92472b2f898313a61be5d24dfbad21adc96bad7c7dd4eed8976a3b53"} Jan 30 17:14:20 crc kubenswrapper[4712]: I0130 17:14:20.471898 4712 generic.go:334] "Generic (PLEG): container finished" podID="3e13e69b-0a9c-4100-a869-67d199b76f55" containerID="71faa3f91d5802f8b121e02f21a587237bcc60ee3b6089b689c551ae42bb7afe" exitCode=0 Jan 30 17:14:20 crc kubenswrapper[4712]: I0130 17:14:20.471953 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-kvjrp" event={"ID":"3e13e69b-0a9c-4100-a869-67d199b76f55","Type":"ContainerDied","Data":"71faa3f91d5802f8b121e02f21a587237bcc60ee3b6089b689c551ae42bb7afe"} Jan 30 17:14:20 crc kubenswrapper[4712]: I0130 17:14:20.471972 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-kvjrp" event={"ID":"3e13e69b-0a9c-4100-a869-67d199b76f55","Type":"ContainerStarted","Data":"3ed9cd4cbfed30669200d9372ca273209d8c7bdfc26642962828749bb9a7757d"} Jan 30 17:14:20 crc kubenswrapper[4712]: I0130 17:14:20.483445 4712 generic.go:334] "Generic (PLEG): container finished" podID="40f78f2d-d7fe-4199-853a-b45c352c93a5" containerID="517e6c5cc9aab763664b393dad4c13bef938cd575257263639432c3808c2c01f" exitCode=0 Jan 30 17:14:20 crc kubenswrapper[4712]: I0130 17:14:20.483697 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-c85vb" event={"ID":"40f78f2d-d7fe-4199-853a-b45c352c93a5","Type":"ContainerDied","Data":"517e6c5cc9aab763664b393dad4c13bef938cd575257263639432c3808c2c01f"} Jan 30 17:14:20 crc kubenswrapper[4712]: I0130 17:14:20.483729 4712 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-c85vb" event={"ID":"40f78f2d-d7fe-4199-853a-b45c352c93a5","Type":"ContainerStarted","Data":"4422ca86d3b41e79f2b7613a66b6e0f35e2c52c85a2d69fe08c3790a3ff66683"} Jan 30 17:14:20 crc kubenswrapper[4712]: I0130 17:14:20.488863 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-be6c-account-create-update-x29l7" event={"ID":"0ae6aebe-2ff5-42e3-bfd1-48b0b2b579c8","Type":"ContainerStarted","Data":"d48b19835ff38127cc7b74972b89601c860cb22bb181c5259c48cde4d7f18bc6"} Jan 30 17:14:20 crc kubenswrapper[4712]: I0130 17:14:20.488926 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-be6c-account-create-update-x29l7" event={"ID":"0ae6aebe-2ff5-42e3-bfd1-48b0b2b579c8","Type":"ContainerStarted","Data":"a1de635e16e2a4f6fb9aa607c82093ae50c33d53888882ffa521a73624b0eed1"} Jan 30 17:14:20 crc kubenswrapper[4712]: I0130 17:14:20.494651 4712 generic.go:334] "Generic (PLEG): container finished" podID="96165653-9d73-4013-afb2-f922fc4d1eed" containerID="df0a7af201ffa9e1d7e8047915d9fdc09b3789563ee68287e2c4ef43a9ec650a" exitCode=0 Jan 30 17:14:20 crc kubenswrapper[4712]: I0130 17:14:20.496186 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-73dc-account-create-update-675c8" event={"ID":"96165653-9d73-4013-afb2-f922fc4d1eed","Type":"ContainerDied","Data":"df0a7af201ffa9e1d7e8047915d9fdc09b3789563ee68287e2c4ef43a9ec650a"} Jan 30 17:14:20 crc kubenswrapper[4712]: I0130 17:14:20.496225 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-73dc-account-create-update-675c8" event={"ID":"96165653-9d73-4013-afb2-f922fc4d1eed","Type":"ContainerStarted","Data":"c6d5ab21e52f2d70d8da4879548dbb6653621bd6f6f40ed1849901eaba619350"} Jan 30 17:14:20 crc kubenswrapper[4712]: I0130 17:14:20.535114 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-be6c-account-create-update-x29l7" podStartSLOduration=1.535092429 podStartE2EDuration="1.535092429s" podCreationTimestamp="2026-01-30 17:14:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:14:20.506674606 +0000 UTC m=+1197.413684095" watchObservedRunningTime="2026-01-30 17:14:20.535092429 +0000 UTC m=+1197.442101898" Jan 30 17:14:20 crc kubenswrapper[4712]: I0130 17:14:20.880163 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-2dg7m" Jan 30 17:14:20 crc kubenswrapper[4712]: I0130 17:14:20.986899 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be0a59e5-e2ac-498a-9dbb-61dfd886ce38-operator-scripts\") pod \"be0a59e5-e2ac-498a-9dbb-61dfd886ce38\" (UID: \"be0a59e5-e2ac-498a-9dbb-61dfd886ce38\") " Jan 30 17:14:20 crc kubenswrapper[4712]: I0130 17:14:20.987437 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be0a59e5-e2ac-498a-9dbb-61dfd886ce38-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "be0a59e5-e2ac-498a-9dbb-61dfd886ce38" (UID: "be0a59e5-e2ac-498a-9dbb-61dfd886ce38"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:14:20 crc kubenswrapper[4712]: I0130 17:14:20.987711 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z82xs\" (UniqueName: \"kubernetes.io/projected/be0a59e5-e2ac-498a-9dbb-61dfd886ce38-kube-api-access-z82xs\") pod \"be0a59e5-e2ac-498a-9dbb-61dfd886ce38\" (UID: \"be0a59e5-e2ac-498a-9dbb-61dfd886ce38\") " Jan 30 17:14:20 crc kubenswrapper[4712]: I0130 17:14:20.988191 4712 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be0a59e5-e2ac-498a-9dbb-61dfd886ce38-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:20 crc kubenswrapper[4712]: I0130 17:14:20.995591 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be0a59e5-e2ac-498a-9dbb-61dfd886ce38-kube-api-access-z82xs" (OuterVolumeSpecName: "kube-api-access-z82xs") pod "be0a59e5-e2ac-498a-9dbb-61dfd886ce38" (UID: "be0a59e5-e2ac-498a-9dbb-61dfd886ce38"). InnerVolumeSpecName "kube-api-access-z82xs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:14:21 crc kubenswrapper[4712]: I0130 17:14:21.089880 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z82xs\" (UniqueName: \"kubernetes.io/projected/be0a59e5-e2ac-498a-9dbb-61dfd886ce38-kube-api-access-z82xs\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:21 crc kubenswrapper[4712]: I0130 17:14:21.271296 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 30 17:14:21 crc kubenswrapper[4712]: I0130 17:14:21.519624 4712 generic.go:334] "Generic (PLEG): container finished" podID="0ae6aebe-2ff5-42e3-bfd1-48b0b2b579c8" containerID="d48b19835ff38127cc7b74972b89601c860cb22bb181c5259c48cde4d7f18bc6" exitCode=0 Jan 30 17:14:21 crc kubenswrapper[4712]: I0130 17:14:21.519693 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-be6c-account-create-update-x29l7" event={"ID":"0ae6aebe-2ff5-42e3-bfd1-48b0b2b579c8","Type":"ContainerDied","Data":"d48b19835ff38127cc7b74972b89601c860cb22bb181c5259c48cde4d7f18bc6"} Jan 30 17:14:21 crc kubenswrapper[4712]: I0130 17:14:21.523878 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-2dg7m" Jan 30 17:14:21 crc kubenswrapper[4712]: I0130 17:14:21.524030 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-2dg7m" event={"ID":"be0a59e5-e2ac-498a-9dbb-61dfd886ce38","Type":"ContainerDied","Data":"72727c51b58caf4a3fa26b5b14c23f473aaca7fdfa97a8481dd2fedbfc89e7a1"} Jan 30 17:14:21 crc kubenswrapper[4712]: I0130 17:14:21.524274 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72727c51b58caf4a3fa26b5b14c23f473aaca7fdfa97a8481dd2fedbfc89e7a1" Jan 30 17:14:21 crc kubenswrapper[4712]: I0130 17:14:21.561155 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-nk4ll" Jan 30 17:14:21 crc kubenswrapper[4712]: I0130 17:14:21.619401 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-mrlvk"] Jan 30 17:14:21 crc kubenswrapper[4712]: I0130 17:14:21.619903 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-mrlvk" podUID="3931ce9e-e449-4e24-b826-0d78e42d0b52" containerName="dnsmasq-dns" containerID="cri-o://ecb11af706d96cdb7575c5d704f2a66c9a511f5773a9ea1a64cf54f7cdff62d7" gracePeriod=10 Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.101182 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-c85vb" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.107427 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-55c7-account-create-update-kz29l" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.113880 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-wv76z" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.136908 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-73dc-account-create-update-675c8" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.210968 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-kvjrp" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.217708 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfstm\" (UniqueName: \"kubernetes.io/projected/3a6d2018-2c94-4c5f-8a8a-03c69bfac444-kube-api-access-lfstm\") pod \"3a6d2018-2c94-4c5f-8a8a-03c69bfac444\" (UID: \"3a6d2018-2c94-4c5f-8a8a-03c69bfac444\") " Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.217783 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40f78f2d-d7fe-4199-853a-b45c352c93a5-operator-scripts\") pod \"40f78f2d-d7fe-4199-853a-b45c352c93a5\" (UID: \"40f78f2d-d7fe-4199-853a-b45c352c93a5\") " Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.217831 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49f48\" (UniqueName: \"kubernetes.io/projected/afdb21ea-b35a-4413-b25a-f8e0fcf10c13-kube-api-access-49f48\") pod \"afdb21ea-b35a-4413-b25a-f8e0fcf10c13\" (UID: \"afdb21ea-b35a-4413-b25a-f8e0fcf10c13\") " Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.217911 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a6d2018-2c94-4c5f-8a8a-03c69bfac444-operator-scripts\") pod \"3a6d2018-2c94-4c5f-8a8a-03c69bfac444\" (UID: \"3a6d2018-2c94-4c5f-8a8a-03c69bfac444\") " Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.218079 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4pblm\" (UniqueName: \"kubernetes.io/projected/40f78f2d-d7fe-4199-853a-b45c352c93a5-kube-api-access-4pblm\") pod \"40f78f2d-d7fe-4199-853a-b45c352c93a5\" (UID: \"40f78f2d-d7fe-4199-853a-b45c352c93a5\") " Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.218144 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afdb21ea-b35a-4413-b25a-f8e0fcf10c13-operator-scripts\") pod \"afdb21ea-b35a-4413-b25a-f8e0fcf10c13\" (UID: \"afdb21ea-b35a-4413-b25a-f8e0fcf10c13\") " Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.219945 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afdb21ea-b35a-4413-b25a-f8e0fcf10c13-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "afdb21ea-b35a-4413-b25a-f8e0fcf10c13" (UID: "afdb21ea-b35a-4413-b25a-f8e0fcf10c13"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.223132 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/40f78f2d-d7fe-4199-853a-b45c352c93a5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "40f78f2d-d7fe-4199-853a-b45c352c93a5" (UID: "40f78f2d-d7fe-4199-853a-b45c352c93a5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.223582 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a6d2018-2c94-4c5f-8a8a-03c69bfac444-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3a6d2018-2c94-4c5f-8a8a-03c69bfac444" (UID: "3a6d2018-2c94-4c5f-8a8a-03c69bfac444"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.227318 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a6d2018-2c94-4c5f-8a8a-03c69bfac444-kube-api-access-lfstm" (OuterVolumeSpecName: "kube-api-access-lfstm") pod "3a6d2018-2c94-4c5f-8a8a-03c69bfac444" (UID: "3a6d2018-2c94-4c5f-8a8a-03c69bfac444"). InnerVolumeSpecName "kube-api-access-lfstm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.231169 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afdb21ea-b35a-4413-b25a-f8e0fcf10c13-kube-api-access-49f48" (OuterVolumeSpecName: "kube-api-access-49f48") pod "afdb21ea-b35a-4413-b25a-f8e0fcf10c13" (UID: "afdb21ea-b35a-4413-b25a-f8e0fcf10c13"). InnerVolumeSpecName "kube-api-access-49f48". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.232596 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40f78f2d-d7fe-4199-853a-b45c352c93a5-kube-api-access-4pblm" (OuterVolumeSpecName: "kube-api-access-4pblm") pod "40f78f2d-d7fe-4199-853a-b45c352c93a5" (UID: "40f78f2d-d7fe-4199-853a-b45c352c93a5"). InnerVolumeSpecName "kube-api-access-4pblm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.292133 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-2dg7m"] Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.298534 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-2dg7m"] Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.319169 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnm6h\" (UniqueName: \"kubernetes.io/projected/96165653-9d73-4013-afb2-f922fc4d1eed-kube-api-access-qnm6h\") pod \"96165653-9d73-4013-afb2-f922fc4d1eed\" (UID: \"96165653-9d73-4013-afb2-f922fc4d1eed\") " Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.319282 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e13e69b-0a9c-4100-a869-67d199b76f55-operator-scripts\") pod \"3e13e69b-0a9c-4100-a869-67d199b76f55\" (UID: \"3e13e69b-0a9c-4100-a869-67d199b76f55\") " Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.319328 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwvpd\" (UniqueName: \"kubernetes.io/projected/3e13e69b-0a9c-4100-a869-67d199b76f55-kube-api-access-qwvpd\") pod \"3e13e69b-0a9c-4100-a869-67d199b76f55\" (UID: \"3e13e69b-0a9c-4100-a869-67d199b76f55\") " Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.319447 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96165653-9d73-4013-afb2-f922fc4d1eed-operator-scripts\") pod \"96165653-9d73-4013-afb2-f922fc4d1eed\" (UID: \"96165653-9d73-4013-afb2-f922fc4d1eed\") " Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.319904 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfstm\" (UniqueName: \"kubernetes.io/projected/3a6d2018-2c94-4c5f-8a8a-03c69bfac444-kube-api-access-lfstm\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:22 crc 
kubenswrapper[4712]: I0130 17:14:22.319930 4712 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/40f78f2d-d7fe-4199-853a-b45c352c93a5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.319943 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49f48\" (UniqueName: \"kubernetes.io/projected/afdb21ea-b35a-4413-b25a-f8e0fcf10c13-kube-api-access-49f48\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.319956 4712 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3a6d2018-2c94-4c5f-8a8a-03c69bfac444-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.319967 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4pblm\" (UniqueName: \"kubernetes.io/projected/40f78f2d-d7fe-4199-853a-b45c352c93a5-kube-api-access-4pblm\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.319978 4712 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afdb21ea-b35a-4413-b25a-f8e0fcf10c13-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.320204 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e13e69b-0a9c-4100-a869-67d199b76f55-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3e13e69b-0a9c-4100-a869-67d199b76f55" (UID: "3e13e69b-0a9c-4100-a869-67d199b76f55"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.320318 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96165653-9d73-4013-afb2-f922fc4d1eed-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "96165653-9d73-4013-afb2-f922fc4d1eed" (UID: "96165653-9d73-4013-afb2-f922fc4d1eed"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.324549 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e13e69b-0a9c-4100-a869-67d199b76f55-kube-api-access-qwvpd" (OuterVolumeSpecName: "kube-api-access-qwvpd") pod "3e13e69b-0a9c-4100-a869-67d199b76f55" (UID: "3e13e69b-0a9c-4100-a869-67d199b76f55"). InnerVolumeSpecName "kube-api-access-qwvpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.327233 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96165653-9d73-4013-afb2-f922fc4d1eed-kube-api-access-qnm6h" (OuterVolumeSpecName: "kube-api-access-qnm6h") pod "96165653-9d73-4013-afb2-f922fc4d1eed" (UID: "96165653-9d73-4013-afb2-f922fc4d1eed"). InnerVolumeSpecName "kube-api-access-qnm6h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.386053 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-mrlvk" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.423792 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnm6h\" (UniqueName: \"kubernetes.io/projected/96165653-9d73-4013-afb2-f922fc4d1eed-kube-api-access-qnm6h\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.423843 4712 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3e13e69b-0a9c-4100-a869-67d199b76f55-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.423854 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwvpd\" (UniqueName: \"kubernetes.io/projected/3e13e69b-0a9c-4100-a869-67d199b76f55-kube-api-access-qwvpd\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.423864 4712 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/96165653-9d73-4013-afb2-f922fc4d1eed-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.525045 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3931ce9e-e449-4e24-b826-0d78e42d0b52-config\") pod \"3931ce9e-e449-4e24-b826-0d78e42d0b52\" (UID: \"3931ce9e-e449-4e24-b826-0d78e42d0b52\") " Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.525208 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjgz8\" (UniqueName: \"kubernetes.io/projected/3931ce9e-e449-4e24-b826-0d78e42d0b52-kube-api-access-tjgz8\") pod \"3931ce9e-e449-4e24-b826-0d78e42d0b52\" (UID: \"3931ce9e-e449-4e24-b826-0d78e42d0b52\") " Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.525236 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3931ce9e-e449-4e24-b826-0d78e42d0b52-ovsdbserver-sb\") pod \"3931ce9e-e449-4e24-b826-0d78e42d0b52\" (UID: \"3931ce9e-e449-4e24-b826-0d78e42d0b52\") " Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.525335 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3931ce9e-e449-4e24-b826-0d78e42d0b52-dns-svc\") pod \"3931ce9e-e449-4e24-b826-0d78e42d0b52\" (UID: \"3931ce9e-e449-4e24-b826-0d78e42d0b52\") " Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.525383 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3931ce9e-e449-4e24-b826-0d78e42d0b52-ovsdbserver-nb\") pod \"3931ce9e-e449-4e24-b826-0d78e42d0b52\" (UID: \"3931ce9e-e449-4e24-b826-0d78e42d0b52\") " Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.537722 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3931ce9e-e449-4e24-b826-0d78e42d0b52-kube-api-access-tjgz8" (OuterVolumeSpecName: "kube-api-access-tjgz8") pod "3931ce9e-e449-4e24-b826-0d78e42d0b52" (UID: "3931ce9e-e449-4e24-b826-0d78e42d0b52"). InnerVolumeSpecName "kube-api-access-tjgz8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.549307 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-73dc-account-create-update-675c8" event={"ID":"96165653-9d73-4013-afb2-f922fc4d1eed","Type":"ContainerDied","Data":"c6d5ab21e52f2d70d8da4879548dbb6653621bd6f6f40ed1849901eaba619350"} Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.549352 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6d5ab21e52f2d70d8da4879548dbb6653621bd6f6f40ed1849901eaba619350" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.549815 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-73dc-account-create-update-675c8" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.552406 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-55c7-account-create-update-kz29l" event={"ID":"3a6d2018-2c94-4c5f-8a8a-03c69bfac444","Type":"ContainerDied","Data":"c547263736448d857113f1baf324708a1da1f2a3bc77012a04c7ea85bf182a6b"} Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.552449 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c547263736448d857113f1baf324708a1da1f2a3bc77012a04c7ea85bf182a6b" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.552510 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-55c7-account-create-update-kz29l" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.565117 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-wv76z" event={"ID":"afdb21ea-b35a-4413-b25a-f8e0fcf10c13","Type":"ContainerDied","Data":"715e020c92472b2f898313a61be5d24dfbad21adc96bad7c7dd4eed8976a3b53"} Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.565163 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="715e020c92472b2f898313a61be5d24dfbad21adc96bad7c7dd4eed8976a3b53" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.565250 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-wv76z" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.571175 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-kvjrp" event={"ID":"3e13e69b-0a9c-4100-a869-67d199b76f55","Type":"ContainerDied","Data":"3ed9cd4cbfed30669200d9372ca273209d8c7bdfc26642962828749bb9a7757d"} Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.571211 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ed9cd4cbfed30669200d9372ca273209d8c7bdfc26642962828749bb9a7757d" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.571207 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-kvjrp" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.579329 4712 generic.go:334] "Generic (PLEG): container finished" podID="3931ce9e-e449-4e24-b826-0d78e42d0b52" containerID="ecb11af706d96cdb7575c5d704f2a66c9a511f5773a9ea1a64cf54f7cdff62d7" exitCode=0 Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.579392 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-mrlvk" event={"ID":"3931ce9e-e449-4e24-b826-0d78e42d0b52","Type":"ContainerDied","Data":"ecb11af706d96cdb7575c5d704f2a66c9a511f5773a9ea1a64cf54f7cdff62d7"} Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.579423 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-mrlvk" event={"ID":"3931ce9e-e449-4e24-b826-0d78e42d0b52","Type":"ContainerDied","Data":"fded3081b0168713cbdcd37ca0e5bb21770c89f89e617a10531ef7702780137f"} Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.579488 4712 scope.go:117] "RemoveContainer" containerID="ecb11af706d96cdb7575c5d704f2a66c9a511f5773a9ea1a64cf54f7cdff62d7" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.579628 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-mrlvk" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.588589 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-c85vb" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.588659 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-c85vb" event={"ID":"40f78f2d-d7fe-4199-853a-b45c352c93a5","Type":"ContainerDied","Data":"4422ca86d3b41e79f2b7613a66b6e0f35e2c52c85a2d69fe08c3790a3ff66683"} Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.588697 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4422ca86d3b41e79f2b7613a66b6e0f35e2c52c85a2d69fe08c3790a3ff66683" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.598779 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3931ce9e-e449-4e24-b826-0d78e42d0b52-config" (OuterVolumeSpecName: "config") pod "3931ce9e-e449-4e24-b826-0d78e42d0b52" (UID: "3931ce9e-e449-4e24-b826-0d78e42d0b52"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.610660 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3931ce9e-e449-4e24-b826-0d78e42d0b52-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3931ce9e-e449-4e24-b826-0d78e42d0b52" (UID: "3931ce9e-e449-4e24-b826-0d78e42d0b52"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.613598 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3931ce9e-e449-4e24-b826-0d78e42d0b52-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3931ce9e-e449-4e24-b826-0d78e42d0b52" (UID: "3931ce9e-e449-4e24-b826-0d78e42d0b52"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.617716 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3931ce9e-e449-4e24-b826-0d78e42d0b52-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3931ce9e-e449-4e24-b826-0d78e42d0b52" (UID: "3931ce9e-e449-4e24-b826-0d78e42d0b52"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.619865 4712 scope.go:117] "RemoveContainer" containerID="868b2484a897a37242e37ee0e0479cf6b14c9bf9f5210e9fdc12de3962137cb3" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.626972 4712 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3931ce9e-e449-4e24-b826-0d78e42d0b52-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.627005 4712 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3931ce9e-e449-4e24-b826-0d78e42d0b52-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.627017 4712 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3931ce9e-e449-4e24-b826-0d78e42d0b52-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.627029 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjgz8\" (UniqueName: \"kubernetes.io/projected/3931ce9e-e449-4e24-b826-0d78e42d0b52-kube-api-access-tjgz8\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.627041 4712 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3931ce9e-e449-4e24-b826-0d78e42d0b52-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.643250 4712 scope.go:117] "RemoveContainer" containerID="ecb11af706d96cdb7575c5d704f2a66c9a511f5773a9ea1a64cf54f7cdff62d7" Jan 30 17:14:22 crc kubenswrapper[4712]: E0130 17:14:22.645832 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecb11af706d96cdb7575c5d704f2a66c9a511f5773a9ea1a64cf54f7cdff62d7\": container with ID starting with ecb11af706d96cdb7575c5d704f2a66c9a511f5773a9ea1a64cf54f7cdff62d7 not found: ID does not exist" containerID="ecb11af706d96cdb7575c5d704f2a66c9a511f5773a9ea1a64cf54f7cdff62d7" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.645879 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecb11af706d96cdb7575c5d704f2a66c9a511f5773a9ea1a64cf54f7cdff62d7"} err="failed to get container status \"ecb11af706d96cdb7575c5d704f2a66c9a511f5773a9ea1a64cf54f7cdff62d7\": rpc error: code = NotFound desc = could not find container \"ecb11af706d96cdb7575c5d704f2a66c9a511f5773a9ea1a64cf54f7cdff62d7\": container with ID starting with ecb11af706d96cdb7575c5d704f2a66c9a511f5773a9ea1a64cf54f7cdff62d7 not found: ID does not exist" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.645907 4712 scope.go:117] "RemoveContainer" containerID="868b2484a897a37242e37ee0e0479cf6b14c9bf9f5210e9fdc12de3962137cb3" Jan 30 17:14:22 crc kubenswrapper[4712]: E0130 17:14:22.646438 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"868b2484a897a37242e37ee0e0479cf6b14c9bf9f5210e9fdc12de3962137cb3\": container with ID starting with 868b2484a897a37242e37ee0e0479cf6b14c9bf9f5210e9fdc12de3962137cb3 not found: ID does not exist" containerID="868b2484a897a37242e37ee0e0479cf6b14c9bf9f5210e9fdc12de3962137cb3" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.646469 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"868b2484a897a37242e37ee0e0479cf6b14c9bf9f5210e9fdc12de3962137cb3"} err="failed to get container status \"868b2484a897a37242e37ee0e0479cf6b14c9bf9f5210e9fdc12de3962137cb3\": rpc error: code = NotFound desc = could not find container \"868b2484a897a37242e37ee0e0479cf6b14c9bf9f5210e9fdc12de3962137cb3\": container with ID starting with 868b2484a897a37242e37ee0e0479cf6b14c9bf9f5210e9fdc12de3962137cb3 not found: ID does not exist" Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.919077 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-mrlvk"] Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.928852 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-mrlvk"] Jan 30 17:14:22 crc kubenswrapper[4712]: I0130 17:14:22.971330 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-be6c-account-create-update-x29l7" Jan 30 17:14:23 crc kubenswrapper[4712]: I0130 17:14:23.137902 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ae6aebe-2ff5-42e3-bfd1-48b0b2b579c8-operator-scripts\") pod \"0ae6aebe-2ff5-42e3-bfd1-48b0b2b579c8\" (UID: \"0ae6aebe-2ff5-42e3-bfd1-48b0b2b579c8\") " Jan 30 17:14:23 crc kubenswrapper[4712]: I0130 17:14:23.138068 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chs8n\" (UniqueName: \"kubernetes.io/projected/0ae6aebe-2ff5-42e3-bfd1-48b0b2b579c8-kube-api-access-chs8n\") pod \"0ae6aebe-2ff5-42e3-bfd1-48b0b2b579c8\" (UID: \"0ae6aebe-2ff5-42e3-bfd1-48b0b2b579c8\") " Jan 30 17:14:23 crc kubenswrapper[4712]: I0130 17:14:23.138693 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ae6aebe-2ff5-42e3-bfd1-48b0b2b579c8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0ae6aebe-2ff5-42e3-bfd1-48b0b2b579c8" (UID: "0ae6aebe-2ff5-42e3-bfd1-48b0b2b579c8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:14:23 crc kubenswrapper[4712]: I0130 17:14:23.142096 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ae6aebe-2ff5-42e3-bfd1-48b0b2b579c8-kube-api-access-chs8n" (OuterVolumeSpecName: "kube-api-access-chs8n") pod "0ae6aebe-2ff5-42e3-bfd1-48b0b2b579c8" (UID: "0ae6aebe-2ff5-42e3-bfd1-48b0b2b579c8"). InnerVolumeSpecName "kube-api-access-chs8n". 
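
[annotation] The RemoveContainer / "ContainerStatus from runtime service failed ... NotFound" exchange above is benign: the dnsmasq-dns containers were already gone by the time the kubelet re-queried them, and during cleanup a NotFound from the CRI runtime means the desired end state (container deleted) already holds. A minimal sketch of that tolerance, assuming a gRPC-backed runtime client (the helper name is illustrative):

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // removeIfPresent treats NotFound as success: the container is gone
    // either way, which is exactly what cleanup wants.
    func removeIfPresent(remove func(id string) error, id string) error {
        if err := remove(id); err != nil {
            if s, ok := status.FromError(err); ok && s.Code() == codes.NotFound {
                return nil // already deleted; nothing left to do
            }
            return err
        }
        return nil
    }

    func main() {
        notFound := func(id string) error {
            return status.Error(codes.NotFound, "could not find container "+id)
        }
        fmt.Println(removeIfPresent(notFound, "ecb11af706d9")) // <nil>
    }
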
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:14:23 crc kubenswrapper[4712]: I0130 17:14:23.240263 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chs8n\" (UniqueName: \"kubernetes.io/projected/0ae6aebe-2ff5-42e3-bfd1-48b0b2b579c8-kube-api-access-chs8n\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:23 crc kubenswrapper[4712]: I0130 17:14:23.240306 4712 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0ae6aebe-2ff5-42e3-bfd1-48b0b2b579c8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:23 crc kubenswrapper[4712]: I0130 17:14:23.596730 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-be6c-account-create-update-x29l7" event={"ID":"0ae6aebe-2ff5-42e3-bfd1-48b0b2b579c8","Type":"ContainerDied","Data":"a1de635e16e2a4f6fb9aa607c82093ae50c33d53888882ffa521a73624b0eed1"} Jan 30 17:14:23 crc kubenswrapper[4712]: I0130 17:14:23.596771 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1de635e16e2a4f6fb9aa607c82093ae50c33d53888882ffa521a73624b0eed1" Jan 30 17:14:23 crc kubenswrapper[4712]: I0130 17:14:23.596842 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-be6c-account-create-update-x29l7" Jan 30 17:14:23 crc kubenswrapper[4712]: I0130 17:14:23.809506 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3931ce9e-e449-4e24-b826-0d78e42d0b52" path="/var/lib/kubelet/pods/3931ce9e-e449-4e24-b826-0d78e42d0b52/volumes" Jan 30 17:14:23 crc kubenswrapper[4712]: I0130 17:14:23.810187 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be0a59e5-e2ac-498a-9dbb-61dfd886ce38" path="/var/lib/kubelet/pods/be0a59e5-e2ac-498a-9dbb-61dfd886ce38/volumes" Jan 30 17:14:24 crc kubenswrapper[4712]: I0130 17:14:24.654686 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-7v96g"] Jan 30 17:14:24 crc kubenswrapper[4712]: E0130 17:14:24.655593 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40f78f2d-d7fe-4199-853a-b45c352c93a5" containerName="mariadb-database-create" Jan 30 17:14:24 crc kubenswrapper[4712]: I0130 17:14:24.655611 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="40f78f2d-d7fe-4199-853a-b45c352c93a5" containerName="mariadb-database-create" Jan 30 17:14:24 crc kubenswrapper[4712]: E0130 17:14:24.655645 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3931ce9e-e449-4e24-b826-0d78e42d0b52" containerName="dnsmasq-dns" Jan 30 17:14:24 crc kubenswrapper[4712]: I0130 17:14:24.655652 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="3931ce9e-e449-4e24-b826-0d78e42d0b52" containerName="dnsmasq-dns" Jan 30 17:14:24 crc kubenswrapper[4712]: E0130 17:14:24.655666 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e13e69b-0a9c-4100-a869-67d199b76f55" containerName="mariadb-database-create" Jan 30 17:14:24 crc kubenswrapper[4712]: I0130 17:14:24.655675 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e13e69b-0a9c-4100-a869-67d199b76f55" containerName="mariadb-database-create" Jan 30 17:14:24 crc kubenswrapper[4712]: E0130 17:14:24.655693 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be0a59e5-e2ac-498a-9dbb-61dfd886ce38" containerName="mariadb-account-create-update" Jan 30 17:14:24 crc kubenswrapper[4712]: I0130 17:14:24.655701 4712 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="be0a59e5-e2ac-498a-9dbb-61dfd886ce38" containerName="mariadb-account-create-update" Jan 30 17:14:24 crc kubenswrapper[4712]: E0130 17:14:24.655711 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ae6aebe-2ff5-42e3-bfd1-48b0b2b579c8" containerName="mariadb-account-create-update" Jan 30 17:14:24 crc kubenswrapper[4712]: I0130 17:14:24.655718 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ae6aebe-2ff5-42e3-bfd1-48b0b2b579c8" containerName="mariadb-account-create-update" Jan 30 17:14:24 crc kubenswrapper[4712]: E0130 17:14:24.655731 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96165653-9d73-4013-afb2-f922fc4d1eed" containerName="mariadb-account-create-update" Jan 30 17:14:24 crc kubenswrapper[4712]: I0130 17:14:24.655738 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="96165653-9d73-4013-afb2-f922fc4d1eed" containerName="mariadb-account-create-update" Jan 30 17:14:24 crc kubenswrapper[4712]: E0130 17:14:24.655752 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afdb21ea-b35a-4413-b25a-f8e0fcf10c13" containerName="mariadb-database-create" Jan 30 17:14:24 crc kubenswrapper[4712]: I0130 17:14:24.655760 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="afdb21ea-b35a-4413-b25a-f8e0fcf10c13" containerName="mariadb-database-create" Jan 30 17:14:24 crc kubenswrapper[4712]: E0130 17:14:24.655773 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3931ce9e-e449-4e24-b826-0d78e42d0b52" containerName="init" Jan 30 17:14:24 crc kubenswrapper[4712]: I0130 17:14:24.655781 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="3931ce9e-e449-4e24-b826-0d78e42d0b52" containerName="init" Jan 30 17:14:24 crc kubenswrapper[4712]: E0130 17:14:24.655809 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a6d2018-2c94-4c5f-8a8a-03c69bfac444" containerName="mariadb-account-create-update" Jan 30 17:14:24 crc kubenswrapper[4712]: I0130 17:14:24.655818 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a6d2018-2c94-4c5f-8a8a-03c69bfac444" containerName="mariadb-account-create-update" Jan 30 17:14:24 crc kubenswrapper[4712]: I0130 17:14:24.655996 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a6d2018-2c94-4c5f-8a8a-03c69bfac444" containerName="mariadb-account-create-update" Jan 30 17:14:24 crc kubenswrapper[4712]: I0130 17:14:24.656011 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e13e69b-0a9c-4100-a869-67d199b76f55" containerName="mariadb-database-create" Jan 30 17:14:24 crc kubenswrapper[4712]: I0130 17:14:24.656019 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ae6aebe-2ff5-42e3-bfd1-48b0b2b579c8" containerName="mariadb-account-create-update" Jan 30 17:14:24 crc kubenswrapper[4712]: I0130 17:14:24.656035 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="be0a59e5-e2ac-498a-9dbb-61dfd886ce38" containerName="mariadb-account-create-update" Jan 30 17:14:24 crc kubenswrapper[4712]: I0130 17:14:24.656046 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="96165653-9d73-4013-afb2-f922fc4d1eed" containerName="mariadb-account-create-update" Jan 30 17:14:24 crc kubenswrapper[4712]: I0130 17:14:24.656059 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="afdb21ea-b35a-4413-b25a-f8e0fcf10c13" containerName="mariadb-database-create" Jan 30 17:14:24 crc kubenswrapper[4712]: I0130 17:14:24.656072 4712 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="3931ce9e-e449-4e24-b826-0d78e42d0b52" containerName="dnsmasq-dns" Jan 30 17:14:24 crc kubenswrapper[4712]: I0130 17:14:24.656084 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="40f78f2d-d7fe-4199-853a-b45c352c93a5" containerName="mariadb-database-create" Jan 30 17:14:24 crc kubenswrapper[4712]: I0130 17:14:24.656696 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-7v96g" Jan 30 17:14:24 crc kubenswrapper[4712]: I0130 17:14:24.659369 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Jan 30 17:14:24 crc kubenswrapper[4712]: I0130 17:14:24.661153 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-q5zp4" Jan 30 17:14:24 crc kubenswrapper[4712]: I0130 17:14:24.666713 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-7v96g"] Jan 30 17:14:24 crc kubenswrapper[4712]: I0130 17:14:24.773086 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a71905e7-0e29-40df-8d89-4a9a15cf0079-combined-ca-bundle\") pod \"glance-db-sync-7v96g\" (UID: \"a71905e7-0e29-40df-8d89-4a9a15cf0079\") " pod="openstack/glance-db-sync-7v96g" Jan 30 17:14:24 crc kubenswrapper[4712]: I0130 17:14:24.773172 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a71905e7-0e29-40df-8d89-4a9a15cf0079-config-data\") pod \"glance-db-sync-7v96g\" (UID: \"a71905e7-0e29-40df-8d89-4a9a15cf0079\") " pod="openstack/glance-db-sync-7v96g" Jan 30 17:14:24 crc kubenswrapper[4712]: I0130 17:14:24.773234 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a71905e7-0e29-40df-8d89-4a9a15cf0079-db-sync-config-data\") pod \"glance-db-sync-7v96g\" (UID: \"a71905e7-0e29-40df-8d89-4a9a15cf0079\") " pod="openstack/glance-db-sync-7v96g" Jan 30 17:14:24 crc kubenswrapper[4712]: I0130 17:14:24.773338 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vh6wc\" (UniqueName: \"kubernetes.io/projected/a71905e7-0e29-40df-8d89-4a9a15cf0079-kube-api-access-vh6wc\") pod \"glance-db-sync-7v96g\" (UID: \"a71905e7-0e29-40df-8d89-4a9a15cf0079\") " pod="openstack/glance-db-sync-7v96g" Jan 30 17:14:24 crc kubenswrapper[4712]: I0130 17:14:24.874643 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a71905e7-0e29-40df-8d89-4a9a15cf0079-config-data\") pod \"glance-db-sync-7v96g\" (UID: \"a71905e7-0e29-40df-8d89-4a9a15cf0079\") " pod="openstack/glance-db-sync-7v96g" Jan 30 17:14:24 crc kubenswrapper[4712]: I0130 17:14:24.874723 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a71905e7-0e29-40df-8d89-4a9a15cf0079-db-sync-config-data\") pod \"glance-db-sync-7v96g\" (UID: \"a71905e7-0e29-40df-8d89-4a9a15cf0079\") " pod="openstack/glance-db-sync-7v96g" Jan 30 17:14:24 crc kubenswrapper[4712]: I0130 17:14:24.874824 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vh6wc\" (UniqueName: 
\"kubernetes.io/projected/a71905e7-0e29-40df-8d89-4a9a15cf0079-kube-api-access-vh6wc\") pod \"glance-db-sync-7v96g\" (UID: \"a71905e7-0e29-40df-8d89-4a9a15cf0079\") " pod="openstack/glance-db-sync-7v96g" Jan 30 17:14:24 crc kubenswrapper[4712]: I0130 17:14:24.874882 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a71905e7-0e29-40df-8d89-4a9a15cf0079-combined-ca-bundle\") pod \"glance-db-sync-7v96g\" (UID: \"a71905e7-0e29-40df-8d89-4a9a15cf0079\") " pod="openstack/glance-db-sync-7v96g" Jan 30 17:14:24 crc kubenswrapper[4712]: I0130 17:14:24.879327 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a71905e7-0e29-40df-8d89-4a9a15cf0079-db-sync-config-data\") pod \"glance-db-sync-7v96g\" (UID: \"a71905e7-0e29-40df-8d89-4a9a15cf0079\") " pod="openstack/glance-db-sync-7v96g" Jan 30 17:14:24 crc kubenswrapper[4712]: I0130 17:14:24.879412 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a71905e7-0e29-40df-8d89-4a9a15cf0079-config-data\") pod \"glance-db-sync-7v96g\" (UID: \"a71905e7-0e29-40df-8d89-4a9a15cf0079\") " pod="openstack/glance-db-sync-7v96g" Jan 30 17:14:24 crc kubenswrapper[4712]: I0130 17:14:24.882299 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a71905e7-0e29-40df-8d89-4a9a15cf0079-combined-ca-bundle\") pod \"glance-db-sync-7v96g\" (UID: \"a71905e7-0e29-40df-8d89-4a9a15cf0079\") " pod="openstack/glance-db-sync-7v96g" Jan 30 17:14:24 crc kubenswrapper[4712]: I0130 17:14:24.894647 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vh6wc\" (UniqueName: \"kubernetes.io/projected/a71905e7-0e29-40df-8d89-4a9a15cf0079-kube-api-access-vh6wc\") pod \"glance-db-sync-7v96g\" (UID: \"a71905e7-0e29-40df-8d89-4a9a15cf0079\") " pod="openstack/glance-db-sync-7v96g" Jan 30 17:14:24 crc kubenswrapper[4712]: I0130 17:14:24.972108 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-7v96g" Jan 30 17:14:25 crc kubenswrapper[4712]: I0130 17:14:25.516465 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-7v96g"] Jan 30 17:14:25 crc kubenswrapper[4712]: I0130 17:14:25.614106 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7v96g" event={"ID":"a71905e7-0e29-40df-8d89-4a9a15cf0079","Type":"ContainerStarted","Data":"324a4db10e4615a3c78d5f41d46575e6e4df7ce8baf472e39b0cc6089a5524e4"} Jan 30 17:14:25 crc kubenswrapper[4712]: I0130 17:14:25.771622 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-pxqbg"] Jan 30 17:14:25 crc kubenswrapper[4712]: I0130 17:14:25.772920 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-pxqbg" Jan 30 17:14:25 crc kubenswrapper[4712]: I0130 17:14:25.775481 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 30 17:14:25 crc kubenswrapper[4712]: I0130 17:14:25.789355 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-pxqbg"] Jan 30 17:14:25 crc kubenswrapper[4712]: I0130 17:14:25.889166 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2fd312cb-18f6-46d7-a783-5f242ae46a24-operator-scripts\") pod \"root-account-create-update-pxqbg\" (UID: \"2fd312cb-18f6-46d7-a783-5f242ae46a24\") " pod="openstack/root-account-create-update-pxqbg" Jan 30 17:14:25 crc kubenswrapper[4712]: I0130 17:14:25.890291 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r7s2\" (UniqueName: \"kubernetes.io/projected/2fd312cb-18f6-46d7-a783-5f242ae46a24-kube-api-access-4r7s2\") pod \"root-account-create-update-pxqbg\" (UID: \"2fd312cb-18f6-46d7-a783-5f242ae46a24\") " pod="openstack/root-account-create-update-pxqbg" Jan 30 17:14:25 crc kubenswrapper[4712]: I0130 17:14:25.992121 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2fd312cb-18f6-46d7-a783-5f242ae46a24-operator-scripts\") pod \"root-account-create-update-pxqbg\" (UID: \"2fd312cb-18f6-46d7-a783-5f242ae46a24\") " pod="openstack/root-account-create-update-pxqbg" Jan 30 17:14:25 crc kubenswrapper[4712]: I0130 17:14:25.992176 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r7s2\" (UniqueName: \"kubernetes.io/projected/2fd312cb-18f6-46d7-a783-5f242ae46a24-kube-api-access-4r7s2\") pod \"root-account-create-update-pxqbg\" (UID: \"2fd312cb-18f6-46d7-a783-5f242ae46a24\") " pod="openstack/root-account-create-update-pxqbg" Jan 30 17:14:25 crc kubenswrapper[4712]: I0130 17:14:25.994059 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2fd312cb-18f6-46d7-a783-5f242ae46a24-operator-scripts\") pod \"root-account-create-update-pxqbg\" (UID: \"2fd312cb-18f6-46d7-a783-5f242ae46a24\") " pod="openstack/root-account-create-update-pxqbg" Jan 30 17:14:26 crc kubenswrapper[4712]: I0130 17:14:26.014104 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r7s2\" (UniqueName: \"kubernetes.io/projected/2fd312cb-18f6-46d7-a783-5f242ae46a24-kube-api-access-4r7s2\") pod \"root-account-create-update-pxqbg\" (UID: \"2fd312cb-18f6-46d7-a783-5f242ae46a24\") " pod="openstack/root-account-create-update-pxqbg" Jan 30 17:14:26 crc kubenswrapper[4712]: I0130 17:14:26.149899 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-pxqbg" Jan 30 17:14:26 crc kubenswrapper[4712]: I0130 17:14:26.596642 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-pxqbg"] Jan 30 17:14:26 crc kubenswrapper[4712]: I0130 17:14:26.624628 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pxqbg" event={"ID":"2fd312cb-18f6-46d7-a783-5f242ae46a24","Type":"ContainerStarted","Data":"ef13f9cbfa6943c532f623f9659d57186d574c55ce92284daf27849d71fc34ae"} Jan 30 17:14:27 crc kubenswrapper[4712]: I0130 17:14:27.633746 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pxqbg" event={"ID":"2fd312cb-18f6-46d7-a783-5f242ae46a24","Type":"ContainerStarted","Data":"be0fb1fbda6d0f9e95cae83778f43fa6053be8953acb630f5bfdf0b3314d29af"} Jan 30 17:14:27 crc kubenswrapper[4712]: I0130 17:14:27.652726 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-pxqbg" podStartSLOduration=2.65270032 podStartE2EDuration="2.65270032s" podCreationTimestamp="2026-01-30 17:14:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:14:27.647117236 +0000 UTC m=+1204.554126695" watchObservedRunningTime="2026-01-30 17:14:27.65270032 +0000 UTC m=+1204.559709789" Jan 30 17:14:28 crc kubenswrapper[4712]: I0130 17:14:28.341436 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b46c7f41-9ce5-4625-98d5-74bafa8bd0de-etc-swift\") pod \"swift-storage-0\" (UID: \"b46c7f41-9ce5-4625-98d5-74bafa8bd0de\") " pod="openstack/swift-storage-0" Jan 30 17:14:28 crc kubenswrapper[4712]: E0130 17:14:28.341643 4712 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 30 17:14:28 crc kubenswrapper[4712]: E0130 17:14:28.341879 4712 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 30 17:14:28 crc kubenswrapper[4712]: E0130 17:14:28.341933 4712 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b46c7f41-9ce5-4625-98d5-74bafa8bd0de-etc-swift podName:b46c7f41-9ce5-4625-98d5-74bafa8bd0de nodeName:}" failed. No retries permitted until 2026-01-30 17:14:44.341916528 +0000 UTC m=+1221.248926007 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/b46c7f41-9ce5-4625-98d5-74bafa8bd0de-etc-swift") pod "swift-storage-0" (UID: "b46c7f41-9ce5-4625-98d5-74bafa8bd0de") : configmap "swift-ring-files" not found Jan 30 17:14:28 crc kubenswrapper[4712]: I0130 17:14:28.957856 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 30 17:14:29 crc kubenswrapper[4712]: I0130 17:14:29.657873 4712 generic.go:334] "Generic (PLEG): container finished" podID="b6cda925-aa9c-401f-90bb-158535201367" containerID="ed108d5288775912260bfe543e6779ad396fbd11bd6fa08fe9ec6e4ac29ae508" exitCode=0 Jan 30 17:14:29 crc kubenswrapper[4712]: I0130 17:14:29.657968 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-9fw4k" event={"ID":"b6cda925-aa9c-401f-90bb-158535201367","Type":"ContainerDied","Data":"ed108d5288775912260bfe543e6779ad396fbd11bd6fa08fe9ec6e4ac29ae508"} Jan 30 17:14:29 crc kubenswrapper[4712]: I0130 17:14:29.660323 4712 generic.go:334] "Generic (PLEG): container finished" podID="2fd312cb-18f6-46d7-a783-5f242ae46a24" containerID="be0fb1fbda6d0f9e95cae83778f43fa6053be8953acb630f5bfdf0b3314d29af" exitCode=0 Jan 30 17:14:29 crc kubenswrapper[4712]: I0130 17:14:29.660362 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pxqbg" event={"ID":"2fd312cb-18f6-46d7-a783-5f242ae46a24","Type":"ContainerDied","Data":"be0fb1fbda6d0f9e95cae83778f43fa6053be8953acb630f5bfdf0b3314d29af"} Jan 30 17:14:31 crc kubenswrapper[4712]: I0130 17:14:31.083013 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-pxqbg" Jan 30 17:14:31 crc kubenswrapper[4712]: I0130 17:14:31.092657 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2fd312cb-18f6-46d7-a783-5f242ae46a24-operator-scripts\") pod \"2fd312cb-18f6-46d7-a783-5f242ae46a24\" (UID: \"2fd312cb-18f6-46d7-a783-5f242ae46a24\") " Jan 30 17:14:31 crc kubenswrapper[4712]: I0130 17:14:31.092756 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4r7s2\" (UniqueName: \"kubernetes.io/projected/2fd312cb-18f6-46d7-a783-5f242ae46a24-kube-api-access-4r7s2\") pod \"2fd312cb-18f6-46d7-a783-5f242ae46a24\" (UID: \"2fd312cb-18f6-46d7-a783-5f242ae46a24\") " Jan 30 17:14:31 crc kubenswrapper[4712]: I0130 17:14:31.093478 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2fd312cb-18f6-46d7-a783-5f242ae46a24-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2fd312cb-18f6-46d7-a783-5f242ae46a24" (UID: "2fd312cb-18f6-46d7-a783-5f242ae46a24"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:14:31 crc kubenswrapper[4712]: I0130 17:14:31.093984 4712 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2fd312cb-18f6-46d7-a783-5f242ae46a24-operator-scripts\") on node \"crc\" DevicePath \"\""
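The nestedpendingoperations entry above is kubelet's exponential backoff at work: the etc-swift projected volume cannot be set up until the swift-ring-files ConfigMap exists (the swift-ring-rebalance job finishing just above presumably publishes it, and the mount does succeed at 17:14:44 further down), so each failed attempt roughly doubles the wait, reaching the logged durationBeforeRetry of 16s. A minimal sketch of that retry shape, assuming a 2s initial delay and a cap; mountVolume and both constants are illustrative, not kubelet's API:

```go
// Sketch only: the doubling retry schedule behind "durationBeforeRetry".
// The 2s initial delay, the cap, and mountVolume are assumptions for
// illustration, not kubelet's actual implementation.
package main

import (
	"errors"
	"fmt"
	"time"
)

// mountVolume stands in for the projected-volume SetUp that keeps failing
// while the "swift-ring-files" ConfigMap does not exist yet (hypothetical).
func mountVolume(configMapExists bool) error {
	if !configMapExists {
		return errors.New(`configmap "swift-ring-files" not found`)
	}
	return nil
}

func main() {
	delay := 2 * time.Second         // assumed initial backoff
	const maxDelay = 2 * time.Minute // assumed cap
	for attempt := 1; ; attempt++ {
		err := mountVolume(attempt >= 5) // succeeds once the ConfigMap appears
		if err == nil {
			fmt.Printf("attempt %d: MountVolume.SetUp succeeded\n", attempt)
			return
		}
		fmt.Printf("attempt %d failed (%v); no retries permitted for %s\n",
			attempt, err, delay)
		time.Sleep(delay) // logged by kubelet as durationBeforeRetry
		delay *= 2        // 2s, 4s, 8s, 16s ... reaches the 16s seen above by attempt 4
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```

Jan 30 17:14:31 crc kubenswrapper[4712]: I0130 17:14:31.100281 4712 util.go:48] "No ready sandbox for pod can be found. 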
Need to start a new one" pod="openstack/swift-ring-rebalance-9fw4k" Jan 30 17:14:31 crc kubenswrapper[4712]: I0130 17:14:31.109229 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fd312cb-18f6-46d7-a783-5f242ae46a24-kube-api-access-4r7s2" (OuterVolumeSpecName: "kube-api-access-4r7s2") pod "2fd312cb-18f6-46d7-a783-5f242ae46a24" (UID: "2fd312cb-18f6-46d7-a783-5f242ae46a24"). InnerVolumeSpecName "kube-api-access-4r7s2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:14:31 crc kubenswrapper[4712]: I0130 17:14:31.194678 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b6cda925-aa9c-401f-90bb-158535201367-dispersionconf\") pod \"b6cda925-aa9c-401f-90bb-158535201367\" (UID: \"b6cda925-aa9c-401f-90bb-158535201367\") " Jan 30 17:14:31 crc kubenswrapper[4712]: I0130 17:14:31.194726 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6cda925-aa9c-401f-90bb-158535201367-combined-ca-bundle\") pod \"b6cda925-aa9c-401f-90bb-158535201367\" (UID: \"b6cda925-aa9c-401f-90bb-158535201367\") " Jan 30 17:14:31 crc kubenswrapper[4712]: I0130 17:14:31.194813 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b6cda925-aa9c-401f-90bb-158535201367-swiftconf\") pod \"b6cda925-aa9c-401f-90bb-158535201367\" (UID: \"b6cda925-aa9c-401f-90bb-158535201367\") " Jan 30 17:14:31 crc kubenswrapper[4712]: I0130 17:14:31.194850 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b6cda925-aa9c-401f-90bb-158535201367-etc-swift\") pod \"b6cda925-aa9c-401f-90bb-158535201367\" (UID: \"b6cda925-aa9c-401f-90bb-158535201367\") " Jan 30 17:14:31 crc kubenswrapper[4712]: I0130 17:14:31.194872 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzp8j\" (UniqueName: \"kubernetes.io/projected/b6cda925-aa9c-401f-90bb-158535201367-kube-api-access-fzp8j\") pod \"b6cda925-aa9c-401f-90bb-158535201367\" (UID: \"b6cda925-aa9c-401f-90bb-158535201367\") " Jan 30 17:14:31 crc kubenswrapper[4712]: I0130 17:14:31.194939 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b6cda925-aa9c-401f-90bb-158535201367-ring-data-devices\") pod \"b6cda925-aa9c-401f-90bb-158535201367\" (UID: \"b6cda925-aa9c-401f-90bb-158535201367\") " Jan 30 17:14:31 crc kubenswrapper[4712]: I0130 17:14:31.195002 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b6cda925-aa9c-401f-90bb-158535201367-scripts\") pod \"b6cda925-aa9c-401f-90bb-158535201367\" (UID: \"b6cda925-aa9c-401f-90bb-158535201367\") " Jan 30 17:14:31 crc kubenswrapper[4712]: I0130 17:14:31.195340 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4r7s2\" (UniqueName: \"kubernetes.io/projected/2fd312cb-18f6-46d7-a783-5f242ae46a24-kube-api-access-4r7s2\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:31 crc kubenswrapper[4712]: I0130 17:14:31.198373 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cda925-aa9c-401f-90bb-158535201367-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod 
"b6cda925-aa9c-401f-90bb-158535201367" (UID: "b6cda925-aa9c-401f-90bb-158535201367"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:14:31 crc kubenswrapper[4712]: I0130 17:14:31.198432 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cda925-aa9c-401f-90bb-158535201367-kube-api-access-fzp8j" (OuterVolumeSpecName: "kube-api-access-fzp8j") pod "b6cda925-aa9c-401f-90bb-158535201367" (UID: "b6cda925-aa9c-401f-90bb-158535201367"). InnerVolumeSpecName "kube-api-access-fzp8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:14:31 crc kubenswrapper[4712]: I0130 17:14:31.201969 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cda925-aa9c-401f-90bb-158535201367-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "b6cda925-aa9c-401f-90bb-158535201367" (UID: "b6cda925-aa9c-401f-90bb-158535201367"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:14:31 crc kubenswrapper[4712]: I0130 17:14:31.206370 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6cda925-aa9c-401f-90bb-158535201367-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "b6cda925-aa9c-401f-90bb-158535201367" (UID: "b6cda925-aa9c-401f-90bb-158535201367"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:14:31 crc kubenswrapper[4712]: I0130 17:14:31.218439 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cda925-aa9c-401f-90bb-158535201367-scripts" (OuterVolumeSpecName: "scripts") pod "b6cda925-aa9c-401f-90bb-158535201367" (UID: "b6cda925-aa9c-401f-90bb-158535201367"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:14:31 crc kubenswrapper[4712]: I0130 17:14:31.225692 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cda925-aa9c-401f-90bb-158535201367-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "b6cda925-aa9c-401f-90bb-158535201367" (UID: "b6cda925-aa9c-401f-90bb-158535201367"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:14:31 crc kubenswrapper[4712]: I0130 17:14:31.232761 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cda925-aa9c-401f-90bb-158535201367-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6cda925-aa9c-401f-90bb-158535201367" (UID: "b6cda925-aa9c-401f-90bb-158535201367"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:14:31 crc kubenswrapper[4712]: I0130 17:14:31.297682 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6cda925-aa9c-401f-90bb-158535201367-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:31 crc kubenswrapper[4712]: I0130 17:14:31.297728 4712 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/b6cda925-aa9c-401f-90bb-158535201367-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:31 crc kubenswrapper[4712]: I0130 17:14:31.297738 4712 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/b6cda925-aa9c-401f-90bb-158535201367-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:31 crc kubenswrapper[4712]: I0130 17:14:31.297749 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzp8j\" (UniqueName: \"kubernetes.io/projected/b6cda925-aa9c-401f-90bb-158535201367-kube-api-access-fzp8j\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:31 crc kubenswrapper[4712]: I0130 17:14:31.297762 4712 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/b6cda925-aa9c-401f-90bb-158535201367-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:31 crc kubenswrapper[4712]: I0130 17:14:31.297775 4712 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b6cda925-aa9c-401f-90bb-158535201367-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:31 crc kubenswrapper[4712]: I0130 17:14:31.297784 4712 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/b6cda925-aa9c-401f-90bb-158535201367-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:31 crc kubenswrapper[4712]: I0130 17:14:31.674900 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-pxqbg" Jan 30 17:14:31 crc kubenswrapper[4712]: I0130 17:14:31.674914 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-pxqbg" event={"ID":"2fd312cb-18f6-46d7-a783-5f242ae46a24","Type":"ContainerDied","Data":"ef13f9cbfa6943c532f623f9659d57186d574c55ce92284daf27849d71fc34ae"} Jan 30 17:14:31 crc kubenswrapper[4712]: I0130 17:14:31.675262 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef13f9cbfa6943c532f623f9659d57186d574c55ce92284daf27849d71fc34ae" Jan 30 17:14:31 crc kubenswrapper[4712]: I0130 17:14:31.677025 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-9fw4k" event={"ID":"b6cda925-aa9c-401f-90bb-158535201367","Type":"ContainerDied","Data":"c058c7118ef9cf09c0556e46d14c2cc3d643f62de7e43ceaa6191f158d8481a5"} Jan 30 17:14:31 crc kubenswrapper[4712]: I0130 17:14:31.677058 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c058c7118ef9cf09c0556e46d14c2cc3d643f62de7e43ceaa6191f158d8481a5" Jan 30 17:14:31 crc kubenswrapper[4712]: I0130 17:14:31.677120 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-9fw4k" Jan 30 17:14:32 crc kubenswrapper[4712]: I0130 17:14:32.301924 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-pxqbg"] Jan 30 17:14:32 crc kubenswrapper[4712]: I0130 17:14:32.308289 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-pxqbg"] Jan 30 17:14:32 crc kubenswrapper[4712]: I0130 17:14:32.691403 4712 generic.go:334] "Generic (PLEG): container finished" podID="01b5b85b-caea-4f70-a61f-875ed30f9e64" containerID="2c33cef250b494d1f9745250b3e4f91a559a0867e0967b581569893e497b3935" exitCode=0 Jan 30 17:14:32 crc kubenswrapper[4712]: I0130 17:14:32.691522 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"01b5b85b-caea-4f70-a61f-875ed30f9e64","Type":"ContainerDied","Data":"2c33cef250b494d1f9745250b3e4f91a559a0867e0967b581569893e497b3935"} Jan 30 17:14:32 crc kubenswrapper[4712]: I0130 17:14:32.699989 4712 generic.go:334] "Generic (PLEG): container finished" podID="d5b67399-3a53-4694-8f1c-c04592426dcd" containerID="a54f2f1b1572ac7848902c6c2afb8f7c794bf2545a7e8d5ffe8bb69d2425625c" exitCode=0 Jan 30 17:14:32 crc kubenswrapper[4712]: I0130 17:14:32.700286 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d5b67399-3a53-4694-8f1c-c04592426dcd","Type":"ContainerDied","Data":"a54f2f1b1572ac7848902c6c2afb8f7c794bf2545a7e8d5ffe8bb69d2425625c"} Jan 30 17:14:33 crc kubenswrapper[4712]: I0130 17:14:33.714949 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d5b67399-3a53-4694-8f1c-c04592426dcd","Type":"ContainerStarted","Data":"30a870e41b1135bc49ebd6559cdc528cc6a15945f64888aa69b8f30394d40c77"} Jan 30 17:14:33 crc kubenswrapper[4712]: I0130 17:14:33.715378 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:14:33 crc kubenswrapper[4712]: I0130 17:14:33.722305 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"01b5b85b-caea-4f70-a61f-875ed30f9e64","Type":"ContainerStarted","Data":"ee45677930b012a8b24aca70da595e9ecab6ea6d65563bcf3b42bf277ddc1042"} Jan 30 17:14:33 crc kubenswrapper[4712]: I0130 17:14:33.722610 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 30 17:14:33 crc kubenswrapper[4712]: I0130 17:14:33.749184 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=38.647392847 podStartE2EDuration="1m9.749165815s" podCreationTimestamp="2026-01-30 17:13:24 +0000 UTC" firstStartedPulling="2026-01-30 17:13:26.813065633 +0000 UTC m=+1143.720075102" lastFinishedPulling="2026-01-30 17:13:57.914838601 +0000 UTC m=+1174.821848070" observedRunningTime="2026-01-30 17:14:33.741137563 +0000 UTC m=+1210.648147042" watchObservedRunningTime="2026-01-30 17:14:33.749165815 +0000 UTC m=+1210.656175294" Jan 30 17:14:33 crc kubenswrapper[4712]: I0130 17:14:33.768647 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.500324357 podStartE2EDuration="1m9.768629713s" podCreationTimestamp="2026-01-30 17:13:24 +0000 UTC" firstStartedPulling="2026-01-30 17:13:26.481353445 +0000 UTC m=+1143.388362914" lastFinishedPulling="2026-01-30 17:13:57.749658801 +0000 UTC m=+1174.656668270" 
observedRunningTime="2026-01-30 17:14:33.76518771 +0000 UTC m=+1210.672197189" watchObservedRunningTime="2026-01-30 17:14:33.768629713 +0000 UTC m=+1210.675639182" Jan 30 17:14:33 crc kubenswrapper[4712]: I0130 17:14:33.843636 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fd312cb-18f6-46d7-a783-5f242ae46a24" path="/var/lib/kubelet/pods/2fd312cb-18f6-46d7-a783-5f242ae46a24/volumes" Jan 30 17:14:34 crc kubenswrapper[4712]: I0130 17:14:34.742443 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-sr5tj" podUID="ce49eaf1-5cf3-4399-b2c9-c253df2440bd" containerName="ovn-controller" probeResult="failure" output=< Jan 30 17:14:34 crc kubenswrapper[4712]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 30 17:14:34 crc kubenswrapper[4712]: > Jan 30 17:14:34 crc kubenswrapper[4712]: I0130 17:14:34.748136 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-qfgk4" Jan 30 17:14:34 crc kubenswrapper[4712]: I0130 17:14:34.755012 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-qfgk4" Jan 30 17:14:35 crc kubenswrapper[4712]: I0130 17:14:35.016938 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-sr5tj-config-zdzt8"] Jan 30 17:14:35 crc kubenswrapper[4712]: E0130 17:14:35.017602 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6cda925-aa9c-401f-90bb-158535201367" containerName="swift-ring-rebalance" Jan 30 17:14:35 crc kubenswrapper[4712]: I0130 17:14:35.017614 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6cda925-aa9c-401f-90bb-158535201367" containerName="swift-ring-rebalance" Jan 30 17:14:35 crc kubenswrapper[4712]: E0130 17:14:35.017636 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fd312cb-18f6-46d7-a783-5f242ae46a24" containerName="mariadb-account-create-update" Jan 30 17:14:35 crc kubenswrapper[4712]: I0130 17:14:35.017642 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fd312cb-18f6-46d7-a783-5f242ae46a24" containerName="mariadb-account-create-update" Jan 30 17:14:35 crc kubenswrapper[4712]: I0130 17:14:35.017789 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6cda925-aa9c-401f-90bb-158535201367" containerName="swift-ring-rebalance" Jan 30 17:14:35 crc kubenswrapper[4712]: I0130 17:14:35.017821 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fd312cb-18f6-46d7-a783-5f242ae46a24" containerName="mariadb-account-create-update" Jan 30 17:14:35 crc kubenswrapper[4712]: I0130 17:14:35.018340 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sr5tj-config-zdzt8" Jan 30 17:14:35 crc kubenswrapper[4712]: I0130 17:14:35.020826 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 30 17:14:35 crc kubenswrapper[4712]: I0130 17:14:35.026411 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sr5tj-config-zdzt8"] Jan 30 17:14:35 crc kubenswrapper[4712]: I0130 17:14:35.184464 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wkcc\" (UniqueName: \"kubernetes.io/projected/b0f49acb-f840-40e3-9a07-dd59301892db-kube-api-access-8wkcc\") pod \"ovn-controller-sr5tj-config-zdzt8\" (UID: \"b0f49acb-f840-40e3-9a07-dd59301892db\") " pod="openstack/ovn-controller-sr5tj-config-zdzt8" Jan 30 17:14:35 crc kubenswrapper[4712]: I0130 17:14:35.184531 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b0f49acb-f840-40e3-9a07-dd59301892db-additional-scripts\") pod \"ovn-controller-sr5tj-config-zdzt8\" (UID: \"b0f49acb-f840-40e3-9a07-dd59301892db\") " pod="openstack/ovn-controller-sr5tj-config-zdzt8" Jan 30 17:14:35 crc kubenswrapper[4712]: I0130 17:14:35.184687 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b0f49acb-f840-40e3-9a07-dd59301892db-var-log-ovn\") pod \"ovn-controller-sr5tj-config-zdzt8\" (UID: \"b0f49acb-f840-40e3-9a07-dd59301892db\") " pod="openstack/ovn-controller-sr5tj-config-zdzt8" Jan 30 17:14:35 crc kubenswrapper[4712]: I0130 17:14:35.184888 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b0f49acb-f840-40e3-9a07-dd59301892db-var-run\") pod \"ovn-controller-sr5tj-config-zdzt8\" (UID: \"b0f49acb-f840-40e3-9a07-dd59301892db\") " pod="openstack/ovn-controller-sr5tj-config-zdzt8" Jan 30 17:14:35 crc kubenswrapper[4712]: I0130 17:14:35.185239 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b0f49acb-f840-40e3-9a07-dd59301892db-scripts\") pod \"ovn-controller-sr5tj-config-zdzt8\" (UID: \"b0f49acb-f840-40e3-9a07-dd59301892db\") " pod="openstack/ovn-controller-sr5tj-config-zdzt8" Jan 30 17:14:35 crc kubenswrapper[4712]: I0130 17:14:35.185361 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b0f49acb-f840-40e3-9a07-dd59301892db-var-run-ovn\") pod \"ovn-controller-sr5tj-config-zdzt8\" (UID: \"b0f49acb-f840-40e3-9a07-dd59301892db\") " pod="openstack/ovn-controller-sr5tj-config-zdzt8" Jan 30 17:14:35 crc kubenswrapper[4712]: I0130 17:14:35.287420 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8wkcc\" (UniqueName: \"kubernetes.io/projected/b0f49acb-f840-40e3-9a07-dd59301892db-kube-api-access-8wkcc\") pod \"ovn-controller-sr5tj-config-zdzt8\" (UID: \"b0f49acb-f840-40e3-9a07-dd59301892db\") " pod="openstack/ovn-controller-sr5tj-config-zdzt8" Jan 30 17:14:35 crc kubenswrapper[4712]: I0130 17:14:35.287501 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/b0f49acb-f840-40e3-9a07-dd59301892db-additional-scripts\") pod \"ovn-controller-sr5tj-config-zdzt8\" (UID: \"b0f49acb-f840-40e3-9a07-dd59301892db\") " pod="openstack/ovn-controller-sr5tj-config-zdzt8" Jan 30 17:14:35 crc kubenswrapper[4712]: I0130 17:14:35.287536 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b0f49acb-f840-40e3-9a07-dd59301892db-var-log-ovn\") pod \"ovn-controller-sr5tj-config-zdzt8\" (UID: \"b0f49acb-f840-40e3-9a07-dd59301892db\") " pod="openstack/ovn-controller-sr5tj-config-zdzt8" Jan 30 17:14:35 crc kubenswrapper[4712]: I0130 17:14:35.287585 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b0f49acb-f840-40e3-9a07-dd59301892db-var-run\") pod \"ovn-controller-sr5tj-config-zdzt8\" (UID: \"b0f49acb-f840-40e3-9a07-dd59301892db\") " pod="openstack/ovn-controller-sr5tj-config-zdzt8" Jan 30 17:14:35 crc kubenswrapper[4712]: I0130 17:14:35.287683 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b0f49acb-f840-40e3-9a07-dd59301892db-scripts\") pod \"ovn-controller-sr5tj-config-zdzt8\" (UID: \"b0f49acb-f840-40e3-9a07-dd59301892db\") " pod="openstack/ovn-controller-sr5tj-config-zdzt8" Jan 30 17:14:35 crc kubenswrapper[4712]: I0130 17:14:35.287719 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b0f49acb-f840-40e3-9a07-dd59301892db-var-run-ovn\") pod \"ovn-controller-sr5tj-config-zdzt8\" (UID: \"b0f49acb-f840-40e3-9a07-dd59301892db\") " pod="openstack/ovn-controller-sr5tj-config-zdzt8" Jan 30 17:14:35 crc kubenswrapper[4712]: I0130 17:14:35.287963 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b0f49acb-f840-40e3-9a07-dd59301892db-var-log-ovn\") pod \"ovn-controller-sr5tj-config-zdzt8\" (UID: \"b0f49acb-f840-40e3-9a07-dd59301892db\") " pod="openstack/ovn-controller-sr5tj-config-zdzt8" Jan 30 17:14:35 crc kubenswrapper[4712]: I0130 17:14:35.288090 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b0f49acb-f840-40e3-9a07-dd59301892db-var-run-ovn\") pod \"ovn-controller-sr5tj-config-zdzt8\" (UID: \"b0f49acb-f840-40e3-9a07-dd59301892db\") " pod="openstack/ovn-controller-sr5tj-config-zdzt8" Jan 30 17:14:35 crc kubenswrapper[4712]: I0130 17:14:35.288400 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b0f49acb-f840-40e3-9a07-dd59301892db-var-run\") pod \"ovn-controller-sr5tj-config-zdzt8\" (UID: \"b0f49acb-f840-40e3-9a07-dd59301892db\") " pod="openstack/ovn-controller-sr5tj-config-zdzt8" Jan 30 17:14:35 crc kubenswrapper[4712]: I0130 17:14:35.288682 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b0f49acb-f840-40e3-9a07-dd59301892db-additional-scripts\") pod \"ovn-controller-sr5tj-config-zdzt8\" (UID: \"b0f49acb-f840-40e3-9a07-dd59301892db\") " pod="openstack/ovn-controller-sr5tj-config-zdzt8" Jan 30 17:14:35 crc kubenswrapper[4712]: I0130 17:14:35.290056 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/b0f49acb-f840-40e3-9a07-dd59301892db-scripts\") pod \"ovn-controller-sr5tj-config-zdzt8\" (UID: \"b0f49acb-f840-40e3-9a07-dd59301892db\") " pod="openstack/ovn-controller-sr5tj-config-zdzt8" Jan 30 17:14:35 crc kubenswrapper[4712]: I0130 17:14:35.316547 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8wkcc\" (UniqueName: \"kubernetes.io/projected/b0f49acb-f840-40e3-9a07-dd59301892db-kube-api-access-8wkcc\") pod \"ovn-controller-sr5tj-config-zdzt8\" (UID: \"b0f49acb-f840-40e3-9a07-dd59301892db\") " pod="openstack/ovn-controller-sr5tj-config-zdzt8" Jan 30 17:14:35 crc kubenswrapper[4712]: I0130 17:14:35.341388 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sr5tj-config-zdzt8" Jan 30 17:14:35 crc kubenswrapper[4712]: I0130 17:14:35.817306 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-tq4mn"] Jan 30 17:14:35 crc kubenswrapper[4712]: I0130 17:14:35.821693 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-tq4mn" Jan 30 17:14:35 crc kubenswrapper[4712]: I0130 17:14:35.824261 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 30 17:14:35 crc kubenswrapper[4712]: I0130 17:14:35.900108 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhllt\" (UniqueName: \"kubernetes.io/projected/51b87601-d661-4138-a6f5-5871d5242dbc-kube-api-access-qhllt\") pod \"root-account-create-update-tq4mn\" (UID: \"51b87601-d661-4138-a6f5-5871d5242dbc\") " pod="openstack/root-account-create-update-tq4mn" Jan 30 17:14:35 crc kubenswrapper[4712]: I0130 17:14:35.900179 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51b87601-d661-4138-a6f5-5871d5242dbc-operator-scripts\") pod \"root-account-create-update-tq4mn\" (UID: \"51b87601-d661-4138-a6f5-5871d5242dbc\") " pod="openstack/root-account-create-update-tq4mn" Jan 30 17:14:35 crc kubenswrapper[4712]: I0130 17:14:35.936331 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-tq4mn"] Jan 30 17:14:36 crc kubenswrapper[4712]: I0130 17:14:36.000995 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhllt\" (UniqueName: \"kubernetes.io/projected/51b87601-d661-4138-a6f5-5871d5242dbc-kube-api-access-qhllt\") pod \"root-account-create-update-tq4mn\" (UID: \"51b87601-d661-4138-a6f5-5871d5242dbc\") " pod="openstack/root-account-create-update-tq4mn" Jan 30 17:14:36 crc kubenswrapper[4712]: I0130 17:14:36.001038 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51b87601-d661-4138-a6f5-5871d5242dbc-operator-scripts\") pod \"root-account-create-update-tq4mn\" (UID: \"51b87601-d661-4138-a6f5-5871d5242dbc\") " pod="openstack/root-account-create-update-tq4mn" Jan 30 17:14:36 crc kubenswrapper[4712]: I0130 17:14:36.001953 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51b87601-d661-4138-a6f5-5871d5242dbc-operator-scripts\") pod \"root-account-create-update-tq4mn\" (UID: \"51b87601-d661-4138-a6f5-5871d5242dbc\") " 
pod="openstack/root-account-create-update-tq4mn" Jan 30 17:14:36 crc kubenswrapper[4712]: I0130 17:14:36.025642 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhllt\" (UniqueName: \"kubernetes.io/projected/51b87601-d661-4138-a6f5-5871d5242dbc-kube-api-access-qhllt\") pod \"root-account-create-update-tq4mn\" (UID: \"51b87601-d661-4138-a6f5-5871d5242dbc\") " pod="openstack/root-account-create-update-tq4mn" Jan 30 17:14:36 crc kubenswrapper[4712]: I0130 17:14:36.148531 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-tq4mn" Jan 30 17:14:36 crc kubenswrapper[4712]: I0130 17:14:36.273956 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:14:36 crc kubenswrapper[4712]: I0130 17:14:36.274015 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:14:36 crc kubenswrapper[4712]: I0130 17:14:36.274062 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" Jan 30 17:14:36 crc kubenswrapper[4712]: I0130 17:14:36.274741 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"65dbc6a56b610e6c479fb5dd8ad2aa9258f4202d2a0ef57103525088af93b4a2"} pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 17:14:36 crc kubenswrapper[4712]: I0130 17:14:36.274889 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" containerID="cri-o://65dbc6a56b610e6c479fb5dd8ad2aa9258f4202d2a0ef57103525088af93b4a2" gracePeriod=600 Jan 30 17:14:37 crc kubenswrapper[4712]: I0130 17:14:37.765060 4712 generic.go:334] "Generic (PLEG): container finished" podID="75ff6334-72a0-4748-bba6-0efb493c8033" containerID="65dbc6a56b610e6c479fb5dd8ad2aa9258f4202d2a0ef57103525088af93b4a2" exitCode=0 Jan 30 17:14:37 crc kubenswrapper[4712]: I0130 17:14:37.765365 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerDied","Data":"65dbc6a56b610e6c479fb5dd8ad2aa9258f4202d2a0ef57103525088af93b4a2"} Jan 30 17:14:37 crc kubenswrapper[4712]: I0130 17:14:37.765396 4712 scope.go:117] "RemoveContainer" containerID="1f74eb8e5d1037eaec314ae58dc333985d1e77823d3293834609e8af2e98478d" Jan 30 17:14:39 crc kubenswrapper[4712]: I0130 17:14:39.735832 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-sr5tj" podUID="ce49eaf1-5cf3-4399-b2c9-c253df2440bd" containerName="ovn-controller" probeResult="failure" output=< Jan 30 17:14:39 crc kubenswrapper[4712]: ERROR - ovn-controller connection status is 'not 
connected', expecting 'connected' status Jan 30 17:14:39 crc kubenswrapper[4712]: > Jan 30 17:14:43 crc kubenswrapper[4712]: I0130 17:14:43.052814 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-tq4mn"] Jan 30 17:14:43 crc kubenswrapper[4712]: W0130 17:14:43.060587 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod51b87601_d661_4138_a6f5_5871d5242dbc.slice/crio-de78f785d568b7939624927d58b799afac658d66a663f054312c05982cf663ba WatchSource:0}: Error finding container de78f785d568b7939624927d58b799afac658d66a663f054312c05982cf663ba: Status 404 returned error can't find the container with id de78f785d568b7939624927d58b799afac658d66a663f054312c05982cf663ba Jan 30 17:14:43 crc kubenswrapper[4712]: I0130 17:14:43.133317 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sr5tj-config-zdzt8"] Jan 30 17:14:43 crc kubenswrapper[4712]: I0130 17:14:43.828202 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerStarted","Data":"2b2080500e3e21108518c785b6a9d42dc4c1501c9ea170a8ffe8ca230910ec5c"} Jan 30 17:14:43 crc kubenswrapper[4712]: I0130 17:14:43.835898 4712 generic.go:334] "Generic (PLEG): container finished" podID="b0f49acb-f840-40e3-9a07-dd59301892db" containerID="8558901fcf7ccc5260b00e0f57c2e854f5c9fcb998f2224c46191b11d94562a5" exitCode=0 Jan 30 17:14:43 crc kubenswrapper[4712]: I0130 17:14:43.836103 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sr5tj-config-zdzt8" event={"ID":"b0f49acb-f840-40e3-9a07-dd59301892db","Type":"ContainerDied","Data":"8558901fcf7ccc5260b00e0f57c2e854f5c9fcb998f2224c46191b11d94562a5"} Jan 30 17:14:43 crc kubenswrapper[4712]: I0130 17:14:43.836126 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sr5tj-config-zdzt8" event={"ID":"b0f49acb-f840-40e3-9a07-dd59301892db","Type":"ContainerStarted","Data":"caacda362e46ac2746bee82aa1deb7eff83cc610ce87661e621cba9457677179"} Jan 30 17:14:43 crc kubenswrapper[4712]: I0130 17:14:43.839454 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7v96g" event={"ID":"a71905e7-0e29-40df-8d89-4a9a15cf0079","Type":"ContainerStarted","Data":"6827db413ce501836609062f68853422a888dfeffcce0c2fca3c7ec9cc0b9452"} Jan 30 17:14:43 crc kubenswrapper[4712]: I0130 17:14:43.846484 4712 generic.go:334] "Generic (PLEG): container finished" podID="51b87601-d661-4138-a6f5-5871d5242dbc" containerID="694c98386931ad5c85c548dbbce61d4788ebed927a8acb5d0982c0fe4719f188" exitCode=0 Jan 30 17:14:43 crc kubenswrapper[4712]: I0130 17:14:43.846536 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tq4mn" event={"ID":"51b87601-d661-4138-a6f5-5871d5242dbc","Type":"ContainerDied","Data":"694c98386931ad5c85c548dbbce61d4788ebed927a8acb5d0982c0fe4719f188"} Jan 30 17:14:43 crc kubenswrapper[4712]: I0130 17:14:43.846561 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tq4mn" event={"ID":"51b87601-d661-4138-a6f5-5871d5242dbc","Type":"ContainerStarted","Data":"de78f785d568b7939624927d58b799afac658d66a663f054312c05982cf663ba"}
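The pod_startup_latency_tracker entries in this log pair two durations: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that same span minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A worked check in Go against the glance-db-sync-7v96g entry that follows; the timestamps are copied verbatim from the log, the layout string is just Go's default time format, and only the subtraction is new:

```go
// Worked check of the startup-latency arithmetic:
// SLO duration = end-to-end start duration - image-pull window.
package main

import (
	"fmt"
	"time"
)

// Layout matching Go's default time.Time formatting as it appears in the log.
const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-30 17:14:24 +0000 UTC")             // podCreationTimestamp
	firstPull := mustParse("2026-01-30 17:14:25.526131512 +0000 UTC") // firstStartedPulling
	lastPull := mustParse("2026-01-30 17:14:42.934871068 +0000 UTC")  // lastFinishedPulling
	running := mustParse("2026-01-30 17:14:43.936607546 +0000 UTC")   // watchObservedRunningTime

	e2e := running.Sub(created)     // 19.936607546s = podStartE2EDuration
	pull := lastPull.Sub(firstPull) // 17.408739556s spent pulling images
	fmt.Println("podStartE2EDuration:", e2e)
	fmt.Println("podStartSLOduration:", e2e-pull) // 2.52786799s, matching the log
}
```

The same subtraction reproduces the rabbitmq figures logged at 17:14:33 above (69.749165815s − 31.101772968s = 38.647392847s).

Jan 30 17:14:43 crc kubenswrapper[4712]: I0130 17:14:43.936626 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 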
pod="openstack/glance-db-sync-7v96g" podStartSLOduration=2.52786799 podStartE2EDuration="19.936607546s" podCreationTimestamp="2026-01-30 17:14:24 +0000 UTC" firstStartedPulling="2026-01-30 17:14:25.526131512 +0000 UTC m=+1202.433140981" lastFinishedPulling="2026-01-30 17:14:42.934871068 +0000 UTC m=+1219.841880537" observedRunningTime="2026-01-30 17:14:43.935791597 +0000 UTC m=+1220.842801066" watchObservedRunningTime="2026-01-30 17:14:43.936607546 +0000 UTC m=+1220.843617015" Jan 30 17:14:44 crc kubenswrapper[4712]: I0130 17:14:44.369943 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b46c7f41-9ce5-4625-98d5-74bafa8bd0de-etc-swift\") pod \"swift-storage-0\" (UID: \"b46c7f41-9ce5-4625-98d5-74bafa8bd0de\") " pod="openstack/swift-storage-0" Jan 30 17:14:44 crc kubenswrapper[4712]: I0130 17:14:44.397361 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/b46c7f41-9ce5-4625-98d5-74bafa8bd0de-etc-swift\") pod \"swift-storage-0\" (UID: \"b46c7f41-9ce5-4625-98d5-74bafa8bd0de\") " pod="openstack/swift-storage-0" Jan 30 17:14:44 crc kubenswrapper[4712]: I0130 17:14:44.682431 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 30 17:14:44 crc kubenswrapper[4712]: I0130 17:14:44.746083 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-sr5tj" Jan 30 17:14:45 crc kubenswrapper[4712]: I0130 17:14:45.366857 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-tq4mn" Jan 30 17:14:45 crc kubenswrapper[4712]: I0130 17:14:45.372920 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sr5tj-config-zdzt8" Jan 30 17:14:45 crc kubenswrapper[4712]: I0130 17:14:45.488476 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8wkcc\" (UniqueName: \"kubernetes.io/projected/b0f49acb-f840-40e3-9a07-dd59301892db-kube-api-access-8wkcc\") pod \"b0f49acb-f840-40e3-9a07-dd59301892db\" (UID: \"b0f49acb-f840-40e3-9a07-dd59301892db\") " Jan 30 17:14:45 crc kubenswrapper[4712]: I0130 17:14:45.488556 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51b87601-d661-4138-a6f5-5871d5242dbc-operator-scripts\") pod \"51b87601-d661-4138-a6f5-5871d5242dbc\" (UID: \"51b87601-d661-4138-a6f5-5871d5242dbc\") " Jan 30 17:14:45 crc kubenswrapper[4712]: I0130 17:14:45.488602 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b0f49acb-f840-40e3-9a07-dd59301892db-additional-scripts\") pod \"b0f49acb-f840-40e3-9a07-dd59301892db\" (UID: \"b0f49acb-f840-40e3-9a07-dd59301892db\") " Jan 30 17:14:45 crc kubenswrapper[4712]: I0130 17:14:45.488625 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b0f49acb-f840-40e3-9a07-dd59301892db-scripts\") pod \"b0f49acb-f840-40e3-9a07-dd59301892db\" (UID: \"b0f49acb-f840-40e3-9a07-dd59301892db\") " Jan 30 17:14:45 crc kubenswrapper[4712]: I0130 17:14:45.488696 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhllt\" (UniqueName: \"kubernetes.io/projected/51b87601-d661-4138-a6f5-5871d5242dbc-kube-api-access-qhllt\") pod \"51b87601-d661-4138-a6f5-5871d5242dbc\" (UID: \"51b87601-d661-4138-a6f5-5871d5242dbc\") " Jan 30 17:14:45 crc kubenswrapper[4712]: I0130 17:14:45.488718 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b0f49acb-f840-40e3-9a07-dd59301892db-var-run\") pod \"b0f49acb-f840-40e3-9a07-dd59301892db\" (UID: \"b0f49acb-f840-40e3-9a07-dd59301892db\") " Jan 30 17:14:45 crc kubenswrapper[4712]: I0130 17:14:45.488738 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b0f49acb-f840-40e3-9a07-dd59301892db-var-log-ovn\") pod \"b0f49acb-f840-40e3-9a07-dd59301892db\" (UID: \"b0f49acb-f840-40e3-9a07-dd59301892db\") " Jan 30 17:14:45 crc kubenswrapper[4712]: I0130 17:14:45.488861 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b0f49acb-f840-40e3-9a07-dd59301892db-var-run-ovn\") pod \"b0f49acb-f840-40e3-9a07-dd59301892db\" (UID: \"b0f49acb-f840-40e3-9a07-dd59301892db\") " Jan 30 17:14:45 crc kubenswrapper[4712]: I0130 17:14:45.488841 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0f49acb-f840-40e3-9a07-dd59301892db-var-run" (OuterVolumeSpecName: "var-run") pod "b0f49acb-f840-40e3-9a07-dd59301892db" (UID: "b0f49acb-f840-40e3-9a07-dd59301892db"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:14:45 crc kubenswrapper[4712]: I0130 17:14:45.488920 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0f49acb-f840-40e3-9a07-dd59301892db-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "b0f49acb-f840-40e3-9a07-dd59301892db" (UID: "b0f49acb-f840-40e3-9a07-dd59301892db"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:14:45 crc kubenswrapper[4712]: I0130 17:14:45.489018 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b0f49acb-f840-40e3-9a07-dd59301892db-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "b0f49acb-f840-40e3-9a07-dd59301892db" (UID: "b0f49acb-f840-40e3-9a07-dd59301892db"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:14:45 crc kubenswrapper[4712]: I0130 17:14:45.489398 4712 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/b0f49acb-f840-40e3-9a07-dd59301892db-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:45 crc kubenswrapper[4712]: I0130 17:14:45.489412 4712 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/b0f49acb-f840-40e3-9a07-dd59301892db-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:45 crc kubenswrapper[4712]: I0130 17:14:45.489420 4712 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/b0f49acb-f840-40e3-9a07-dd59301892db-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:45 crc kubenswrapper[4712]: I0130 17:14:45.489730 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0f49acb-f840-40e3-9a07-dd59301892db-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "b0f49acb-f840-40e3-9a07-dd59301892db" (UID: "b0f49acb-f840-40e3-9a07-dd59301892db"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:14:45 crc kubenswrapper[4712]: I0130 17:14:45.490128 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b0f49acb-f840-40e3-9a07-dd59301892db-scripts" (OuterVolumeSpecName: "scripts") pod "b0f49acb-f840-40e3-9a07-dd59301892db" (UID: "b0f49acb-f840-40e3-9a07-dd59301892db"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:14:45 crc kubenswrapper[4712]: I0130 17:14:45.490512 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51b87601-d661-4138-a6f5-5871d5242dbc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "51b87601-d661-4138-a6f5-5871d5242dbc" (UID: "51b87601-d661-4138-a6f5-5871d5242dbc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:14:45 crc kubenswrapper[4712]: I0130 17:14:45.500116 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51b87601-d661-4138-a6f5-5871d5242dbc-kube-api-access-qhllt" (OuterVolumeSpecName: "kube-api-access-qhllt") pod "51b87601-d661-4138-a6f5-5871d5242dbc" (UID: "51b87601-d661-4138-a6f5-5871d5242dbc"). InnerVolumeSpecName "kube-api-access-qhllt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:14:45 crc kubenswrapper[4712]: I0130 17:14:45.500204 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0f49acb-f840-40e3-9a07-dd59301892db-kube-api-access-8wkcc" (OuterVolumeSpecName: "kube-api-access-8wkcc") pod "b0f49acb-f840-40e3-9a07-dd59301892db" (UID: "b0f49acb-f840-40e3-9a07-dd59301892db"). InnerVolumeSpecName "kube-api-access-8wkcc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:14:45 crc kubenswrapper[4712]: I0130 17:14:45.590510 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8wkcc\" (UniqueName: \"kubernetes.io/projected/b0f49acb-f840-40e3-9a07-dd59301892db-kube-api-access-8wkcc\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:45 crc kubenswrapper[4712]: I0130 17:14:45.590540 4712 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/51b87601-d661-4138-a6f5-5871d5242dbc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:45 crc kubenswrapper[4712]: I0130 17:14:45.590550 4712 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/b0f49acb-f840-40e3-9a07-dd59301892db-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:45 crc kubenswrapper[4712]: I0130 17:14:45.590558 4712 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b0f49acb-f840-40e3-9a07-dd59301892db-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:45 crc kubenswrapper[4712]: I0130 17:14:45.590568 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhllt\" (UniqueName: \"kubernetes.io/projected/51b87601-d661-4138-a6f5-5871d5242dbc-kube-api-access-qhllt\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:45 crc kubenswrapper[4712]: I0130 17:14:45.608950 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 30 17:14:45 crc kubenswrapper[4712]: W0130 17:14:45.613612 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb46c7f41_9ce5_4625_98d5_74bafa8bd0de.slice/crio-6b82d4b3111e633df943091525465ad99b45cd29600c4c9c52a502eb849c844a WatchSource:0}: Error finding container 6b82d4b3111e633df943091525465ad99b45cd29600c4c9c52a502eb849c844a: Status 404 returned error can't find the container with id 6b82d4b3111e633df943091525465ad99b45cd29600c4c9c52a502eb849c844a Jan 30 17:14:45 crc kubenswrapper[4712]: I0130 17:14:45.821376 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="01b5b85b-caea-4f70-a61f-875ed30f9e64" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Jan 30 17:14:45 crc kubenswrapper[4712]: I0130 17:14:45.864336 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b46c7f41-9ce5-4625-98d5-74bafa8bd0de","Type":"ContainerStarted","Data":"6b82d4b3111e633df943091525465ad99b45cd29600c4c9c52a502eb849c844a"} Jan 30 17:14:45 crc kubenswrapper[4712]: I0130 17:14:45.866079 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-tq4mn" event={"ID":"51b87601-d661-4138-a6f5-5871d5242dbc","Type":"ContainerDied","Data":"de78f785d568b7939624927d58b799afac658d66a663f054312c05982cf663ba"} Jan 30 17:14:45 crc kubenswrapper[4712]: I0130 
17:14:45.866102 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de78f785d568b7939624927d58b799afac658d66a663f054312c05982cf663ba" Jan 30 17:14:45 crc kubenswrapper[4712]: I0130 17:14:45.866127 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-tq4mn" Jan 30 17:14:45 crc kubenswrapper[4712]: I0130 17:14:45.867762 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sr5tj-config-zdzt8" event={"ID":"b0f49acb-f840-40e3-9a07-dd59301892db","Type":"ContainerDied","Data":"caacda362e46ac2746bee82aa1deb7eff83cc610ce87661e621cba9457677179"} Jan 30 17:14:45 crc kubenswrapper[4712]: I0130 17:14:45.867783 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="caacda362e46ac2746bee82aa1deb7eff83cc610ce87661e621cba9457677179" Jan 30 17:14:45 crc kubenswrapper[4712]: I0130 17:14:45.867819 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sr5tj-config-zdzt8" Jan 30 17:14:46 crc kubenswrapper[4712]: I0130 17:14:46.221784 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="d5b67399-3a53-4694-8f1c-c04592426dcd" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Jan 30 17:14:46 crc kubenswrapper[4712]: I0130 17:14:46.504599 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-sr5tj-config-zdzt8"] Jan 30 17:14:46 crc kubenswrapper[4712]: I0130 17:14:46.517187 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-sr5tj-config-zdzt8"] Jan 30 17:14:46 crc kubenswrapper[4712]: I0130 17:14:46.638967 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-sr5tj-config-f7tk9"] Jan 30 17:14:46 crc kubenswrapper[4712]: E0130 17:14:46.639390 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0f49acb-f840-40e3-9a07-dd59301892db" containerName="ovn-config" Jan 30 17:14:46 crc kubenswrapper[4712]: I0130 17:14:46.639413 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0f49acb-f840-40e3-9a07-dd59301892db" containerName="ovn-config" Jan 30 17:14:46 crc kubenswrapper[4712]: E0130 17:14:46.639446 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51b87601-d661-4138-a6f5-5871d5242dbc" containerName="mariadb-account-create-update" Jan 30 17:14:46 crc kubenswrapper[4712]: I0130 17:14:46.639455 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="51b87601-d661-4138-a6f5-5871d5242dbc" containerName="mariadb-account-create-update" Jan 30 17:14:46 crc kubenswrapper[4712]: I0130 17:14:46.639663 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0f49acb-f840-40e3-9a07-dd59301892db" containerName="ovn-config" Jan 30 17:14:46 crc kubenswrapper[4712]: I0130 17:14:46.639686 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="51b87601-d661-4138-a6f5-5871d5242dbc" containerName="mariadb-account-create-update" Jan 30 17:14:46 crc kubenswrapper[4712]: I0130 17:14:46.640375 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-sr5tj-config-f7tk9" Jan 30 17:14:46 crc kubenswrapper[4712]: I0130 17:14:46.642240 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 30 17:14:46 crc kubenswrapper[4712]: I0130 17:14:46.655632 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sr5tj-config-f7tk9"] Jan 30 17:14:46 crc kubenswrapper[4712]: I0130 17:14:46.808000 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1-var-run-ovn\") pod \"ovn-controller-sr5tj-config-f7tk9\" (UID: \"4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1\") " pod="openstack/ovn-controller-sr5tj-config-f7tk9" Jan 30 17:14:46 crc kubenswrapper[4712]: I0130 17:14:46.808089 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1-var-run\") pod \"ovn-controller-sr5tj-config-f7tk9\" (UID: \"4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1\") " pod="openstack/ovn-controller-sr5tj-config-f7tk9" Jan 30 17:14:46 crc kubenswrapper[4712]: I0130 17:14:46.808119 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1-var-log-ovn\") pod \"ovn-controller-sr5tj-config-f7tk9\" (UID: \"4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1\") " pod="openstack/ovn-controller-sr5tj-config-f7tk9" Jan 30 17:14:46 crc kubenswrapper[4712]: I0130 17:14:46.808165 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1-additional-scripts\") pod \"ovn-controller-sr5tj-config-f7tk9\" (UID: \"4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1\") " pod="openstack/ovn-controller-sr5tj-config-f7tk9" Jan 30 17:14:46 crc kubenswrapper[4712]: I0130 17:14:46.808244 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvqsp\" (UniqueName: \"kubernetes.io/projected/4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1-kube-api-access-pvqsp\") pod \"ovn-controller-sr5tj-config-f7tk9\" (UID: \"4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1\") " pod="openstack/ovn-controller-sr5tj-config-f7tk9" Jan 30 17:14:46 crc kubenswrapper[4712]: I0130 17:14:46.808285 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1-scripts\") pod \"ovn-controller-sr5tj-config-f7tk9\" (UID: \"4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1\") " pod="openstack/ovn-controller-sr5tj-config-f7tk9" Jan 30 17:14:46 crc kubenswrapper[4712]: I0130 17:14:46.909368 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1-var-run-ovn\") pod \"ovn-controller-sr5tj-config-f7tk9\" (UID: \"4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1\") " pod="openstack/ovn-controller-sr5tj-config-f7tk9" Jan 30 17:14:46 crc kubenswrapper[4712]: I0130 17:14:46.909463 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1-var-run\") pod 
\"ovn-controller-sr5tj-config-f7tk9\" (UID: \"4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1\") " pod="openstack/ovn-controller-sr5tj-config-f7tk9" Jan 30 17:14:46 crc kubenswrapper[4712]: I0130 17:14:46.909490 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1-var-log-ovn\") pod \"ovn-controller-sr5tj-config-f7tk9\" (UID: \"4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1\") " pod="openstack/ovn-controller-sr5tj-config-f7tk9" Jan 30 17:14:46 crc kubenswrapper[4712]: I0130 17:14:46.909516 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1-additional-scripts\") pod \"ovn-controller-sr5tj-config-f7tk9\" (UID: \"4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1\") " pod="openstack/ovn-controller-sr5tj-config-f7tk9" Jan 30 17:14:46 crc kubenswrapper[4712]: I0130 17:14:46.909578 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvqsp\" (UniqueName: \"kubernetes.io/projected/4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1-kube-api-access-pvqsp\") pod \"ovn-controller-sr5tj-config-f7tk9\" (UID: \"4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1\") " pod="openstack/ovn-controller-sr5tj-config-f7tk9" Jan 30 17:14:46 crc kubenswrapper[4712]: I0130 17:14:46.909609 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1-scripts\") pod \"ovn-controller-sr5tj-config-f7tk9\" (UID: \"4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1\") " pod="openstack/ovn-controller-sr5tj-config-f7tk9" Jan 30 17:14:46 crc kubenswrapper[4712]: I0130 17:14:46.910194 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1-var-log-ovn\") pod \"ovn-controller-sr5tj-config-f7tk9\" (UID: \"4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1\") " pod="openstack/ovn-controller-sr5tj-config-f7tk9" Jan 30 17:14:46 crc kubenswrapper[4712]: I0130 17:14:46.910255 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1-var-run-ovn\") pod \"ovn-controller-sr5tj-config-f7tk9\" (UID: \"4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1\") " pod="openstack/ovn-controller-sr5tj-config-f7tk9" Jan 30 17:14:46 crc kubenswrapper[4712]: I0130 17:14:46.910456 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1-var-run\") pod \"ovn-controller-sr5tj-config-f7tk9\" (UID: \"4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1\") " pod="openstack/ovn-controller-sr5tj-config-f7tk9" Jan 30 17:14:46 crc kubenswrapper[4712]: I0130 17:14:46.911057 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1-additional-scripts\") pod \"ovn-controller-sr5tj-config-f7tk9\" (UID: \"4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1\") " pod="openstack/ovn-controller-sr5tj-config-f7tk9" Jan 30 17:14:46 crc kubenswrapper[4712]: I0130 17:14:46.911745 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1-scripts\") pod 
\"ovn-controller-sr5tj-config-f7tk9\" (UID: \"4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1\") " pod="openstack/ovn-controller-sr5tj-config-f7tk9" Jan 30 17:14:46 crc kubenswrapper[4712]: I0130 17:14:46.929742 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvqsp\" (UniqueName: \"kubernetes.io/projected/4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1-kube-api-access-pvqsp\") pod \"ovn-controller-sr5tj-config-f7tk9\" (UID: \"4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1\") " pod="openstack/ovn-controller-sr5tj-config-f7tk9" Jan 30 17:14:46 crc kubenswrapper[4712]: I0130 17:14:46.958615 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sr5tj-config-f7tk9" Jan 30 17:14:47 crc kubenswrapper[4712]: I0130 17:14:47.312924 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-tq4mn"] Jan 30 17:14:47 crc kubenswrapper[4712]: I0130 17:14:47.318038 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-tq4mn"] Jan 30 17:14:47 crc kubenswrapper[4712]: I0130 17:14:47.486282 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-sr5tj-config-f7tk9"] Jan 30 17:14:47 crc kubenswrapper[4712]: W0130 17:14:47.763245 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f2eca0a_e459_4b1c_aacf_ca0035b9e8d1.slice/crio-80026c17ea7fafe2ec2f1472a7442c94cc9f1fddb99193cca12f70baca186372 WatchSource:0}: Error finding container 80026c17ea7fafe2ec2f1472a7442c94cc9f1fddb99193cca12f70baca186372: Status 404 returned error can't find the container with id 80026c17ea7fafe2ec2f1472a7442c94cc9f1fddb99193cca12f70baca186372 Jan 30 17:14:47 crc kubenswrapper[4712]: I0130 17:14:47.814995 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51b87601-d661-4138-a6f5-5871d5242dbc" path="/var/lib/kubelet/pods/51b87601-d661-4138-a6f5-5871d5242dbc/volumes" Jan 30 17:14:47 crc kubenswrapper[4712]: I0130 17:14:47.815670 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0f49acb-f840-40e3-9a07-dd59301892db" path="/var/lib/kubelet/pods/b0f49acb-f840-40e3-9a07-dd59301892db/volumes" Jan 30 17:14:47 crc kubenswrapper[4712]: I0130 17:14:47.887701 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sr5tj-config-f7tk9" event={"ID":"4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1","Type":"ContainerStarted","Data":"80026c17ea7fafe2ec2f1472a7442c94cc9f1fddb99193cca12f70baca186372"} Jan 30 17:14:48 crc kubenswrapper[4712]: I0130 17:14:48.900719 4712 generic.go:334] "Generic (PLEG): container finished" podID="4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1" containerID="1ede7b2f9b14ef955d37db9dfacc2cbd61eb73decc65aee52e83fe5bc65c747e" exitCode=0 Jan 30 17:14:48 crc kubenswrapper[4712]: I0130 17:14:48.901133 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sr5tj-config-f7tk9" event={"ID":"4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1","Type":"ContainerDied","Data":"1ede7b2f9b14ef955d37db9dfacc2cbd61eb73decc65aee52e83fe5bc65c747e"} Jan 30 17:14:48 crc kubenswrapper[4712]: I0130 17:14:48.903350 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b46c7f41-9ce5-4625-98d5-74bafa8bd0de","Type":"ContainerStarted","Data":"fce8ccd1902a1bb55e4c7e3f977b03b51f3efa4bc8e9122f35bdaccf1d60b04f"} Jan 30 17:14:48 crc kubenswrapper[4712]: I0130 17:14:48.903377 4712 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b46c7f41-9ce5-4625-98d5-74bafa8bd0de","Type":"ContainerStarted","Data":"982b4904b7414e2acc0fe3265183fc5ac1c822f56c229dcb13635678f7024e91"} Jan 30 17:14:48 crc kubenswrapper[4712]: I0130 17:14:48.903386 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b46c7f41-9ce5-4625-98d5-74bafa8bd0de","Type":"ContainerStarted","Data":"3c2ed5cc46d5df36cfcfd6d791c77e37a84b8894cc650d257b70b65320caed04"} Jan 30 17:14:48 crc kubenswrapper[4712]: I0130 17:14:48.903395 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b46c7f41-9ce5-4625-98d5-74bafa8bd0de","Type":"ContainerStarted","Data":"9435391ab139c35e65da0d52d5c54e97541b49d4c3034320c92deda11cb199d4"} Jan 30 17:14:50 crc kubenswrapper[4712]: I0130 17:14:50.665091 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sr5tj-config-f7tk9" Jan 30 17:14:50 crc kubenswrapper[4712]: I0130 17:14:50.774274 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1-scripts\") pod \"4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1\" (UID: \"4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1\") " Jan 30 17:14:50 crc kubenswrapper[4712]: I0130 17:14:50.774591 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1-var-log-ovn\") pod \"4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1\" (UID: \"4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1\") " Jan 30 17:14:50 crc kubenswrapper[4712]: I0130 17:14:50.774679 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1" (UID: "4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:14:50 crc kubenswrapper[4712]: I0130 17:14:50.774703 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvqsp\" (UniqueName: \"kubernetes.io/projected/4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1-kube-api-access-pvqsp\") pod \"4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1\" (UID: \"4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1\") " Jan 30 17:14:50 crc kubenswrapper[4712]: I0130 17:14:50.775303 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1-additional-scripts\") pod \"4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1\" (UID: \"4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1\") " Jan 30 17:14:50 crc kubenswrapper[4712]: I0130 17:14:50.775395 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1-var-run-ovn\") pod \"4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1\" (UID: \"4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1\") " Jan 30 17:14:50 crc kubenswrapper[4712]: I0130 17:14:50.775450 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1-scripts" (OuterVolumeSpecName: "scripts") pod "4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1" (UID: "4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:14:50 crc kubenswrapper[4712]: I0130 17:14:50.775489 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1" (UID: "4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:14:50 crc kubenswrapper[4712]: I0130 17:14:50.775510 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1-var-run" (OuterVolumeSpecName: "var-run") pod "4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1" (UID: "4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:14:50 crc kubenswrapper[4712]: I0130 17:14:50.775477 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1-var-run\") pod \"4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1\" (UID: \"4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1\") " Jan 30 17:14:50 crc kubenswrapper[4712]: I0130 17:14:50.776186 4712 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:50 crc kubenswrapper[4712]: I0130 17:14:50.776205 4712 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1-var-run\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:50 crc kubenswrapper[4712]: I0130 17:14:50.776214 4712 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:50 crc kubenswrapper[4712]: I0130 17:14:50.776222 4712 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:50 crc kubenswrapper[4712]: I0130 17:14:50.776946 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1" (UID: "4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:14:50 crc kubenswrapper[4712]: I0130 17:14:50.780556 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1-kube-api-access-pvqsp" (OuterVolumeSpecName: "kube-api-access-pvqsp") pod "4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1" (UID: "4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1"). InnerVolumeSpecName "kube-api-access-pvqsp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:14:50 crc kubenswrapper[4712]: I0130 17:14:50.826684 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-jwnth"] Jan 30 17:14:50 crc kubenswrapper[4712]: E0130 17:14:50.827057 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1" containerName="ovn-config" Jan 30 17:14:50 crc kubenswrapper[4712]: I0130 17:14:50.827073 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1" containerName="ovn-config" Jan 30 17:14:50 crc kubenswrapper[4712]: I0130 17:14:50.827375 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1" containerName="ovn-config" Jan 30 17:14:50 crc kubenswrapper[4712]: I0130 17:14:50.827988 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-jwnth" Jan 30 17:14:50 crc kubenswrapper[4712]: I0130 17:14:50.829946 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Jan 30 17:14:50 crc kubenswrapper[4712]: I0130 17:14:50.838531 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-jwnth"] Jan 30 17:14:50 crc kubenswrapper[4712]: I0130 17:14:50.877738 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvqsp\" (UniqueName: \"kubernetes.io/projected/4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1-kube-api-access-pvqsp\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:50 crc kubenswrapper[4712]: I0130 17:14:50.877772 4712 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:50 crc kubenswrapper[4712]: I0130 17:14:50.921634 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-sr5tj-config-f7tk9" event={"ID":"4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1","Type":"ContainerDied","Data":"80026c17ea7fafe2ec2f1472a7442c94cc9f1fddb99193cca12f70baca186372"} Jan 30 17:14:50 crc kubenswrapper[4712]: I0130 17:14:50.921677 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="80026c17ea7fafe2ec2f1472a7442c94cc9f1fddb99193cca12f70baca186372" Jan 30 17:14:50 crc kubenswrapper[4712]: I0130 17:14:50.921704 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-sr5tj-config-f7tk9" Jan 30 17:14:50 crc kubenswrapper[4712]: I0130 17:14:50.979178 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m5dg\" (UniqueName: \"kubernetes.io/projected/62bdcfb3-e69f-4cd5-bb4f-e201bfa993b8-kube-api-access-6m5dg\") pod \"root-account-create-update-jwnth\" (UID: \"62bdcfb3-e69f-4cd5-bb4f-e201bfa993b8\") " pod="openstack/root-account-create-update-jwnth" Jan 30 17:14:50 crc kubenswrapper[4712]: I0130 17:14:50.979314 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62bdcfb3-e69f-4cd5-bb4f-e201bfa993b8-operator-scripts\") pod \"root-account-create-update-jwnth\" (UID: \"62bdcfb3-e69f-4cd5-bb4f-e201bfa993b8\") " pod="openstack/root-account-create-update-jwnth" Jan 30 17:14:51 crc kubenswrapper[4712]: I0130 17:14:51.081027 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m5dg\" (UniqueName: \"kubernetes.io/projected/62bdcfb3-e69f-4cd5-bb4f-e201bfa993b8-kube-api-access-6m5dg\") pod \"root-account-create-update-jwnth\" (UID: \"62bdcfb3-e69f-4cd5-bb4f-e201bfa993b8\") " pod="openstack/root-account-create-update-jwnth" Jan 30 17:14:51 crc kubenswrapper[4712]: I0130 17:14:51.081113 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62bdcfb3-e69f-4cd5-bb4f-e201bfa993b8-operator-scripts\") pod \"root-account-create-update-jwnth\" (UID: \"62bdcfb3-e69f-4cd5-bb4f-e201bfa993b8\") " pod="openstack/root-account-create-update-jwnth" Jan 30 17:14:51 crc kubenswrapper[4712]: I0130 17:14:51.081938 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/62bdcfb3-e69f-4cd5-bb4f-e201bfa993b8-operator-scripts\") pod \"root-account-create-update-jwnth\" (UID: \"62bdcfb3-e69f-4cd5-bb4f-e201bfa993b8\") " pod="openstack/root-account-create-update-jwnth" Jan 30 17:14:51 crc kubenswrapper[4712]: I0130 17:14:51.100602 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m5dg\" (UniqueName: \"kubernetes.io/projected/62bdcfb3-e69f-4cd5-bb4f-e201bfa993b8-kube-api-access-6m5dg\") pod \"root-account-create-update-jwnth\" (UID: \"62bdcfb3-e69f-4cd5-bb4f-e201bfa993b8\") " pod="openstack/root-account-create-update-jwnth" Jan 30 17:14:51 crc kubenswrapper[4712]: I0130 17:14:51.155627 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-jwnth" Jan 30 17:14:51 crc kubenswrapper[4712]: I0130 17:14:51.423929 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-jwnth"] Jan 30 17:14:51 crc kubenswrapper[4712]: W0130 17:14:51.427705 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62bdcfb3_e69f_4cd5_bb4f_e201bfa993b8.slice/crio-8881e1ff9a575cefb2b4893c0115dccfcd2dbd2b3af5b6e685771db5c3d5944c WatchSource:0}: Error finding container 8881e1ff9a575cefb2b4893c0115dccfcd2dbd2b3af5b6e685771db5c3d5944c: Status 404 returned error can't find the container with id 8881e1ff9a575cefb2b4893c0115dccfcd2dbd2b3af5b6e685771db5c3d5944c Jan 30 17:14:51 crc kubenswrapper[4712]: I0130 17:14:51.733201 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-sr5tj-config-f7tk9"] Jan 30 17:14:51 crc kubenswrapper[4712]: I0130 17:14:51.741320 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-sr5tj-config-f7tk9"] Jan 30 17:14:51 crc kubenswrapper[4712]: I0130 17:14:51.810727 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1" path="/var/lib/kubelet/pods/4f2eca0a-e459-4b1c-aacf-ca0035b9e8d1/volumes" Jan 30 17:14:51 crc kubenswrapper[4712]: I0130 17:14:51.930219 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jwnth" event={"ID":"62bdcfb3-e69f-4cd5-bb4f-e201bfa993b8","Type":"ContainerStarted","Data":"8881e1ff9a575cefb2b4893c0115dccfcd2dbd2b3af5b6e685771db5c3d5944c"} Jan 30 17:14:52 crc kubenswrapper[4712]: I0130 17:14:52.939095 4712 generic.go:334] "Generic (PLEG): container finished" podID="62bdcfb3-e69f-4cd5-bb4f-e201bfa993b8" containerID="203f921b777a111d157536b392e36f4480f132934bed3c262a83b3ae4fa5fbe2" exitCode=0 Jan 30 17:14:52 crc kubenswrapper[4712]: I0130 17:14:52.939493 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jwnth" event={"ID":"62bdcfb3-e69f-4cd5-bb4f-e201bfa993b8","Type":"ContainerDied","Data":"203f921b777a111d157536b392e36f4480f132934bed3c262a83b3ae4fa5fbe2"} Jan 30 17:14:52 crc kubenswrapper[4712]: I0130 17:14:52.946682 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b46c7f41-9ce5-4625-98d5-74bafa8bd0de","Type":"ContainerStarted","Data":"c979b96770f8e8b92437d563630f4e506b954d1db01b8d23a427f4b95627b7fb"} Jan 30 17:14:52 crc kubenswrapper[4712]: I0130 17:14:52.946737 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"b46c7f41-9ce5-4625-98d5-74bafa8bd0de","Type":"ContainerStarted","Data":"6cc17828021d76b86c7a836f33ca518af8cf86c719dec5f882d31813678302fd"} Jan 30 17:14:52 crc kubenswrapper[4712]: I0130 17:14:52.946751 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b46c7f41-9ce5-4625-98d5-74bafa8bd0de","Type":"ContainerStarted","Data":"26d7de88f0fcddb95ede8e7b4a10804a1b1ff4e383dbb3b851f99805a7234a03"} Jan 30 17:14:53 crc kubenswrapper[4712]: I0130 17:14:53.958922 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b46c7f41-9ce5-4625-98d5-74bafa8bd0de","Type":"ContainerStarted","Data":"38802e35b4ac5ef7dc23d0618c68caf54d6f0812d71aa5d68e23b97645845029"} Jan 30 17:14:54 crc kubenswrapper[4712]: I0130 17:14:54.256151 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-jwnth" Jan 30 17:14:54 crc kubenswrapper[4712]: I0130 17:14:54.338943 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62bdcfb3-e69f-4cd5-bb4f-e201bfa993b8-operator-scripts\") pod \"62bdcfb3-e69f-4cd5-bb4f-e201bfa993b8\" (UID: \"62bdcfb3-e69f-4cd5-bb4f-e201bfa993b8\") " Jan 30 17:14:54 crc kubenswrapper[4712]: I0130 17:14:54.339038 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6m5dg\" (UniqueName: \"kubernetes.io/projected/62bdcfb3-e69f-4cd5-bb4f-e201bfa993b8-kube-api-access-6m5dg\") pod \"62bdcfb3-e69f-4cd5-bb4f-e201bfa993b8\" (UID: \"62bdcfb3-e69f-4cd5-bb4f-e201bfa993b8\") " Jan 30 17:14:54 crc kubenswrapper[4712]: I0130 17:14:54.340065 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62bdcfb3-e69f-4cd5-bb4f-e201bfa993b8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "62bdcfb3-e69f-4cd5-bb4f-e201bfa993b8" (UID: "62bdcfb3-e69f-4cd5-bb4f-e201bfa993b8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:14:54 crc kubenswrapper[4712]: I0130 17:14:54.345283 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62bdcfb3-e69f-4cd5-bb4f-e201bfa993b8-kube-api-access-6m5dg" (OuterVolumeSpecName: "kube-api-access-6m5dg") pod "62bdcfb3-e69f-4cd5-bb4f-e201bfa993b8" (UID: "62bdcfb3-e69f-4cd5-bb4f-e201bfa993b8"). InnerVolumeSpecName "kube-api-access-6m5dg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:14:54 crc kubenswrapper[4712]: I0130 17:14:54.440902 4712 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/62bdcfb3-e69f-4cd5-bb4f-e201bfa993b8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:54 crc kubenswrapper[4712]: I0130 17:14:54.440940 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6m5dg\" (UniqueName: \"kubernetes.io/projected/62bdcfb3-e69f-4cd5-bb4f-e201bfa993b8-kube-api-access-6m5dg\") on node \"crc\" DevicePath \"\"" Jan 30 17:14:54 crc kubenswrapper[4712]: I0130 17:14:54.968062 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jwnth" event={"ID":"62bdcfb3-e69f-4cd5-bb4f-e201bfa993b8","Type":"ContainerDied","Data":"8881e1ff9a575cefb2b4893c0115dccfcd2dbd2b3af5b6e685771db5c3d5944c"} Jan 30 17:14:54 crc kubenswrapper[4712]: I0130 17:14:54.968105 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8881e1ff9a575cefb2b4893c0115dccfcd2dbd2b3af5b6e685771db5c3d5944c" Jan 30 17:14:54 crc kubenswrapper[4712]: I0130 17:14:54.968189 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-jwnth" Jan 30 17:14:55 crc kubenswrapper[4712]: I0130 17:14:55.822677 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="01b5b85b-caea-4f70-a61f-875ed30f9e64" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Jan 30 17:14:56 crc kubenswrapper[4712]: I0130 17:14:56.220102 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="d5b67399-3a53-4694-8f1c-c04592426dcd" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Jan 30 17:14:57 crc kubenswrapper[4712]: I0130 17:14:57.372783 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-jwnth"] Jan 30 17:14:57 crc kubenswrapper[4712]: I0130 17:14:57.384237 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-jwnth"] Jan 30 17:14:57 crc kubenswrapper[4712]: I0130 17:14:57.812538 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62bdcfb3-e69f-4cd5-bb4f-e201bfa993b8" path="/var/lib/kubelet/pods/62bdcfb3-e69f-4cd5-bb4f-e201bfa993b8/volumes" Jan 30 17:14:58 crc kubenswrapper[4712]: I0130 17:14:58.004574 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b46c7f41-9ce5-4625-98d5-74bafa8bd0de","Type":"ContainerStarted","Data":"188c0e584c31a4927ab50a82d9289b874924f9af4fbec28cf9c8559a9e4c2408"} Jan 30 17:14:58 crc kubenswrapper[4712]: I0130 17:14:58.004631 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b46c7f41-9ce5-4625-98d5-74bafa8bd0de","Type":"ContainerStarted","Data":"95feaa4299692ad6b985e07848a775624d9a8460360fa2cfc700f64c92312202"} Jan 30 17:14:59 crc kubenswrapper[4712]: I0130 17:14:59.032711 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b46c7f41-9ce5-4625-98d5-74bafa8bd0de","Type":"ContainerStarted","Data":"559c5baebb36766d7eff5d93d477fc41b6e46ac704f069ce16962ceeadbe188c"} Jan 30 17:14:59 crc kubenswrapper[4712]: I0130 17:14:59.033067 4712 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b46c7f41-9ce5-4625-98d5-74bafa8bd0de","Type":"ContainerStarted","Data":"95d3d3b05cccfae9132501d2e3877920de5601a429b78ea759d21b63572676e4"} Jan 30 17:14:59 crc kubenswrapper[4712]: I0130 17:14:59.033081 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b46c7f41-9ce5-4625-98d5-74bafa8bd0de","Type":"ContainerStarted","Data":"8eacb27e286b16e75edfefe260bc4cc795d354d5c4c7d9eca59f02780fedcd30"} Jan 30 17:14:59 crc kubenswrapper[4712]: I0130 17:14:59.033092 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b46c7f41-9ce5-4625-98d5-74bafa8bd0de","Type":"ContainerStarted","Data":"7d77f6978e2a843091e85de462e58c04dbc95fccd9e5c25160bb90a45254b1d8"} Jan 30 17:15:00 crc kubenswrapper[4712]: I0130 17:15:00.047280 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"b46c7f41-9ce5-4625-98d5-74bafa8bd0de","Type":"ContainerStarted","Data":"e541fcc5a8f2d2a63bb9ca761a50096db6611f7a20e683cbb9c718e7d45ff942"} Jan 30 17:15:00 crc kubenswrapper[4712]: I0130 17:15:00.155005 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.397175709 podStartE2EDuration="49.15498118s" podCreationTimestamp="2026-01-30 17:14:11 +0000 UTC" firstStartedPulling="2026-01-30 17:14:45.616264762 +0000 UTC m=+1222.523274231" lastFinishedPulling="2026-01-30 17:14:57.374070233 +0000 UTC m=+1234.281079702" observedRunningTime="2026-01-30 17:15:00.114867695 +0000 UTC m=+1237.021877174" watchObservedRunningTime="2026-01-30 17:15:00.15498118 +0000 UTC m=+1237.061990649" Jan 30 17:15:00 crc kubenswrapper[4712]: I0130 17:15:00.156620 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496555-8dxgd"] Jan 30 17:15:00 crc kubenswrapper[4712]: E0130 17:15:00.157089 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62bdcfb3-e69f-4cd5-bb4f-e201bfa993b8" containerName="mariadb-account-create-update" Jan 30 17:15:00 crc kubenswrapper[4712]: I0130 17:15:00.157109 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="62bdcfb3-e69f-4cd5-bb4f-e201bfa993b8" containerName="mariadb-account-create-update" Jan 30 17:15:00 crc kubenswrapper[4712]: I0130 17:15:00.157344 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="62bdcfb3-e69f-4cd5-bb4f-e201bfa993b8" containerName="mariadb-account-create-update" Jan 30 17:15:00 crc kubenswrapper[4712]: I0130 17:15:00.158073 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-8dxgd" Jan 30 17:15:00 crc kubenswrapper[4712]: I0130 17:15:00.160946 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 17:15:00 crc kubenswrapper[4712]: I0130 17:15:00.161065 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 17:15:00 crc kubenswrapper[4712]: I0130 17:15:00.169627 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496555-8dxgd"] Jan 30 17:15:00 crc kubenswrapper[4712]: I0130 17:15:00.233650 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7b9ab52-8e89-454b-95d3-bd12c0f96ebb-config-volume\") pod \"collect-profiles-29496555-8dxgd\" (UID: \"f7b9ab52-8e89-454b-95d3-bd12c0f96ebb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-8dxgd" Jan 30 17:15:00 crc kubenswrapper[4712]: I0130 17:15:00.233754 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7b9ab52-8e89-454b-95d3-bd12c0f96ebb-secret-volume\") pod \"collect-profiles-29496555-8dxgd\" (UID: \"f7b9ab52-8e89-454b-95d3-bd12c0f96ebb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-8dxgd" Jan 30 17:15:00 crc kubenswrapper[4712]: I0130 17:15:00.233828 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ff2rq\" (UniqueName: \"kubernetes.io/projected/f7b9ab52-8e89-454b-95d3-bd12c0f96ebb-kube-api-access-ff2rq\") pod \"collect-profiles-29496555-8dxgd\" (UID: \"f7b9ab52-8e89-454b-95d3-bd12c0f96ebb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-8dxgd" Jan 30 17:15:00 crc kubenswrapper[4712]: I0130 17:15:00.335301 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7b9ab52-8e89-454b-95d3-bd12c0f96ebb-secret-volume\") pod \"collect-profiles-29496555-8dxgd\" (UID: \"f7b9ab52-8e89-454b-95d3-bd12c0f96ebb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-8dxgd" Jan 30 17:15:00 crc kubenswrapper[4712]: I0130 17:15:00.335386 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ff2rq\" (UniqueName: \"kubernetes.io/projected/f7b9ab52-8e89-454b-95d3-bd12c0f96ebb-kube-api-access-ff2rq\") pod \"collect-profiles-29496555-8dxgd\" (UID: \"f7b9ab52-8e89-454b-95d3-bd12c0f96ebb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-8dxgd" Jan 30 17:15:00 crc kubenswrapper[4712]: I0130 17:15:00.335409 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7b9ab52-8e89-454b-95d3-bd12c0f96ebb-config-volume\") pod \"collect-profiles-29496555-8dxgd\" (UID: \"f7b9ab52-8e89-454b-95d3-bd12c0f96ebb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-8dxgd" Jan 30 17:15:00 crc kubenswrapper[4712]: I0130 17:15:00.336392 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7b9ab52-8e89-454b-95d3-bd12c0f96ebb-config-volume\") pod 
\"collect-profiles-29496555-8dxgd\" (UID: \"f7b9ab52-8e89-454b-95d3-bd12c0f96ebb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-8dxgd" Jan 30 17:15:00 crc kubenswrapper[4712]: I0130 17:15:00.341748 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7b9ab52-8e89-454b-95d3-bd12c0f96ebb-secret-volume\") pod \"collect-profiles-29496555-8dxgd\" (UID: \"f7b9ab52-8e89-454b-95d3-bd12c0f96ebb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-8dxgd" Jan 30 17:15:00 crc kubenswrapper[4712]: I0130 17:15:00.357230 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ff2rq\" (UniqueName: \"kubernetes.io/projected/f7b9ab52-8e89-454b-95d3-bd12c0f96ebb-kube-api-access-ff2rq\") pod \"collect-profiles-29496555-8dxgd\" (UID: \"f7b9ab52-8e89-454b-95d3-bd12c0f96ebb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-8dxgd" Jan 30 17:15:00 crc kubenswrapper[4712]: I0130 17:15:00.453846 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-4f797"] Jan 30 17:15:00 crc kubenswrapper[4712]: I0130 17:15:00.455820 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-4f797" Jan 30 17:15:00 crc kubenswrapper[4712]: I0130 17:15:00.457624 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 30 17:15:00 crc kubenswrapper[4712]: I0130 17:15:00.481323 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-4f797"] Jan 30 17:15:00 crc kubenswrapper[4712]: I0130 17:15:00.530900 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-8dxgd" Jan 30 17:15:00 crc kubenswrapper[4712]: I0130 17:15:00.539078 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44533c75-12d1-496a-88b4-1a0c38c3c336-dns-svc\") pod \"dnsmasq-dns-764c5664d7-4f797\" (UID: \"44533c75-12d1-496a-88b4-1a0c38c3c336\") " pod="openstack/dnsmasq-dns-764c5664d7-4f797" Jan 30 17:15:00 crc kubenswrapper[4712]: I0130 17:15:00.539189 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44533c75-12d1-496a-88b4-1a0c38c3c336-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-4f797\" (UID: \"44533c75-12d1-496a-88b4-1a0c38c3c336\") " pod="openstack/dnsmasq-dns-764c5664d7-4f797" Jan 30 17:15:00 crc kubenswrapper[4712]: I0130 17:15:00.539281 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/44533c75-12d1-496a-88b4-1a0c38c3c336-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-4f797\" (UID: \"44533c75-12d1-496a-88b4-1a0c38c3c336\") " pod="openstack/dnsmasq-dns-764c5664d7-4f797" Jan 30 17:15:00 crc kubenswrapper[4712]: I0130 17:15:00.539360 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55w8c\" (UniqueName: \"kubernetes.io/projected/44533c75-12d1-496a-88b4-1a0c38c3c336-kube-api-access-55w8c\") pod \"dnsmasq-dns-764c5664d7-4f797\" (UID: \"44533c75-12d1-496a-88b4-1a0c38c3c336\") " pod="openstack/dnsmasq-dns-764c5664d7-4f797" Jan 30 17:15:00 crc kubenswrapper[4712]: I0130 17:15:00.539453 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44533c75-12d1-496a-88b4-1a0c38c3c336-config\") pod \"dnsmasq-dns-764c5664d7-4f797\" (UID: \"44533c75-12d1-496a-88b4-1a0c38c3c336\") " pod="openstack/dnsmasq-dns-764c5664d7-4f797" Jan 30 17:15:00 crc kubenswrapper[4712]: I0130 17:15:00.539569 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44533c75-12d1-496a-88b4-1a0c38c3c336-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-4f797\" (UID: \"44533c75-12d1-496a-88b4-1a0c38c3c336\") " pod="openstack/dnsmasq-dns-764c5664d7-4f797" Jan 30 17:15:00 crc kubenswrapper[4712]: I0130 17:15:00.641763 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44533c75-12d1-496a-88b4-1a0c38c3c336-config\") pod \"dnsmasq-dns-764c5664d7-4f797\" (UID: \"44533c75-12d1-496a-88b4-1a0c38c3c336\") " pod="openstack/dnsmasq-dns-764c5664d7-4f797" Jan 30 17:15:00 crc kubenswrapper[4712]: I0130 17:15:00.641864 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44533c75-12d1-496a-88b4-1a0c38c3c336-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-4f797\" (UID: \"44533c75-12d1-496a-88b4-1a0c38c3c336\") " pod="openstack/dnsmasq-dns-764c5664d7-4f797" Jan 30 17:15:00 crc kubenswrapper[4712]: I0130 17:15:00.641942 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44533c75-12d1-496a-88b4-1a0c38c3c336-dns-svc\") pod 
\"dnsmasq-dns-764c5664d7-4f797\" (UID: \"44533c75-12d1-496a-88b4-1a0c38c3c336\") " pod="openstack/dnsmasq-dns-764c5664d7-4f797" Jan 30 17:15:00 crc kubenswrapper[4712]: I0130 17:15:00.642008 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44533c75-12d1-496a-88b4-1a0c38c3c336-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-4f797\" (UID: \"44533c75-12d1-496a-88b4-1a0c38c3c336\") " pod="openstack/dnsmasq-dns-764c5664d7-4f797" Jan 30 17:15:00 crc kubenswrapper[4712]: I0130 17:15:00.642044 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/44533c75-12d1-496a-88b4-1a0c38c3c336-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-4f797\" (UID: \"44533c75-12d1-496a-88b4-1a0c38c3c336\") " pod="openstack/dnsmasq-dns-764c5664d7-4f797" Jan 30 17:15:00 crc kubenswrapper[4712]: I0130 17:15:00.642076 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55w8c\" (UniqueName: \"kubernetes.io/projected/44533c75-12d1-496a-88b4-1a0c38c3c336-kube-api-access-55w8c\") pod \"dnsmasq-dns-764c5664d7-4f797\" (UID: \"44533c75-12d1-496a-88b4-1a0c38c3c336\") " pod="openstack/dnsmasq-dns-764c5664d7-4f797" Jan 30 17:15:00 crc kubenswrapper[4712]: I0130 17:15:00.642937 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44533c75-12d1-496a-88b4-1a0c38c3c336-ovsdbserver-sb\") pod \"dnsmasq-dns-764c5664d7-4f797\" (UID: \"44533c75-12d1-496a-88b4-1a0c38c3c336\") " pod="openstack/dnsmasq-dns-764c5664d7-4f797" Jan 30 17:15:00 crc kubenswrapper[4712]: I0130 17:15:00.643402 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44533c75-12d1-496a-88b4-1a0c38c3c336-dns-svc\") pod \"dnsmasq-dns-764c5664d7-4f797\" (UID: \"44533c75-12d1-496a-88b4-1a0c38c3c336\") " pod="openstack/dnsmasq-dns-764c5664d7-4f797" Jan 30 17:15:00 crc kubenswrapper[4712]: I0130 17:15:00.643490 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44533c75-12d1-496a-88b4-1a0c38c3c336-config\") pod \"dnsmasq-dns-764c5664d7-4f797\" (UID: \"44533c75-12d1-496a-88b4-1a0c38c3c336\") " pod="openstack/dnsmasq-dns-764c5664d7-4f797" Jan 30 17:15:00 crc kubenswrapper[4712]: I0130 17:15:00.644046 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44533c75-12d1-496a-88b4-1a0c38c3c336-ovsdbserver-nb\") pod \"dnsmasq-dns-764c5664d7-4f797\" (UID: \"44533c75-12d1-496a-88b4-1a0c38c3c336\") " pod="openstack/dnsmasq-dns-764c5664d7-4f797" Jan 30 17:15:00 crc kubenswrapper[4712]: I0130 17:15:00.644101 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/44533c75-12d1-496a-88b4-1a0c38c3c336-dns-swift-storage-0\") pod \"dnsmasq-dns-764c5664d7-4f797\" (UID: \"44533c75-12d1-496a-88b4-1a0c38c3c336\") " pod="openstack/dnsmasq-dns-764c5664d7-4f797" Jan 30 17:15:00 crc kubenswrapper[4712]: I0130 17:15:00.662545 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55w8c\" (UniqueName: \"kubernetes.io/projected/44533c75-12d1-496a-88b4-1a0c38c3c336-kube-api-access-55w8c\") pod \"dnsmasq-dns-764c5664d7-4f797\" (UID: \"44533c75-12d1-496a-88b4-1a0c38c3c336\") " 
pod="openstack/dnsmasq-dns-764c5664d7-4f797" Jan 30 17:15:00 crc kubenswrapper[4712]: I0130 17:15:00.775258 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-4f797" Jan 30 17:15:01 crc kubenswrapper[4712]: I0130 17:15:01.068431 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496555-8dxgd"] Jan 30 17:15:01 crc kubenswrapper[4712]: W0130 17:15:01.085215 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7b9ab52_8e89_454b_95d3_bd12c0f96ebb.slice/crio-bcab4b361bc9f49e40e6d9413633fb0ad45fccbc62b8f8b141eb93ee00757742 WatchSource:0}: Error finding container bcab4b361bc9f49e40e6d9413633fb0ad45fccbc62b8f8b141eb93ee00757742: Status 404 returned error can't find the container with id bcab4b361bc9f49e40e6d9413633fb0ad45fccbc62b8f8b141eb93ee00757742 Jan 30 17:15:01 crc kubenswrapper[4712]: I0130 17:15:01.255068 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-4f797"] Jan 30 17:15:01 crc kubenswrapper[4712]: W0130 17:15:01.261555 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44533c75_12d1_496a_88b4_1a0c38c3c336.slice/crio-cdf01856d76c5d4803e59a8cc6e2ffb460de6e8b4f6c2b84dd8f10c1a108671f WatchSource:0}: Error finding container cdf01856d76c5d4803e59a8cc6e2ffb460de6e8b4f6c2b84dd8f10c1a108671f: Status 404 returned error can't find the container with id cdf01856d76c5d4803e59a8cc6e2ffb460de6e8b4f6c2b84dd8f10c1a108671f Jan 30 17:15:02 crc kubenswrapper[4712]: I0130 17:15:02.073329 4712 generic.go:334] "Generic (PLEG): container finished" podID="f7b9ab52-8e89-454b-95d3-bd12c0f96ebb" containerID="2a6e156fa9211e0d06ac89346f88b31d9df99adbdc9fe859db6c85e1c1eeb744" exitCode=0 Jan 30 17:15:02 crc kubenswrapper[4712]: I0130 17:15:02.073372 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-8dxgd" event={"ID":"f7b9ab52-8e89-454b-95d3-bd12c0f96ebb","Type":"ContainerDied","Data":"2a6e156fa9211e0d06ac89346f88b31d9df99adbdc9fe859db6c85e1c1eeb744"} Jan 30 17:15:02 crc kubenswrapper[4712]: I0130 17:15:02.073836 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-8dxgd" event={"ID":"f7b9ab52-8e89-454b-95d3-bd12c0f96ebb","Type":"ContainerStarted","Data":"bcab4b361bc9f49e40e6d9413633fb0ad45fccbc62b8f8b141eb93ee00757742"} Jan 30 17:15:02 crc kubenswrapper[4712]: I0130 17:15:02.075434 4712 generic.go:334] "Generic (PLEG): container finished" podID="44533c75-12d1-496a-88b4-1a0c38c3c336" containerID="847ef82149b77433a1890d5053dae355e37dc7f5354af07f0ab3f1137c6e5abf" exitCode=0 Jan 30 17:15:02 crc kubenswrapper[4712]: I0130 17:15:02.075469 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-4f797" event={"ID":"44533c75-12d1-496a-88b4-1a0c38c3c336","Type":"ContainerDied","Data":"847ef82149b77433a1890d5053dae355e37dc7f5354af07f0ab3f1137c6e5abf"} Jan 30 17:15:02 crc kubenswrapper[4712]: I0130 17:15:02.075495 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-4f797" event={"ID":"44533c75-12d1-496a-88b4-1a0c38c3c336","Type":"ContainerStarted","Data":"cdf01856d76c5d4803e59a8cc6e2ffb460de6e8b4f6c2b84dd8f10c1a108671f"} Jan 30 17:15:02 crc kubenswrapper[4712]: I0130 
17:15:02.356625 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-q59jh"] Jan 30 17:15:02 crc kubenswrapper[4712]: I0130 17:15:02.357920 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-q59jh" Jan 30 17:15:02 crc kubenswrapper[4712]: I0130 17:15:02.363039 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 30 17:15:02 crc kubenswrapper[4712]: I0130 17:15:02.367078 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-q59jh"] Jan 30 17:15:02 crc kubenswrapper[4712]: I0130 17:15:02.384159 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bxkg\" (UniqueName: \"kubernetes.io/projected/7426546b-0d60-4c6e-b888-c2293defc468-kube-api-access-6bxkg\") pod \"root-account-create-update-q59jh\" (UID: \"7426546b-0d60-4c6e-b888-c2293defc468\") " pod="openstack/root-account-create-update-q59jh" Jan 30 17:15:02 crc kubenswrapper[4712]: I0130 17:15:02.384256 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7426546b-0d60-4c6e-b888-c2293defc468-operator-scripts\") pod \"root-account-create-update-q59jh\" (UID: \"7426546b-0d60-4c6e-b888-c2293defc468\") " pod="openstack/root-account-create-update-q59jh" Jan 30 17:15:02 crc kubenswrapper[4712]: I0130 17:15:02.486329 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bxkg\" (UniqueName: \"kubernetes.io/projected/7426546b-0d60-4c6e-b888-c2293defc468-kube-api-access-6bxkg\") pod \"root-account-create-update-q59jh\" (UID: \"7426546b-0d60-4c6e-b888-c2293defc468\") " pod="openstack/root-account-create-update-q59jh" Jan 30 17:15:02 crc kubenswrapper[4712]: I0130 17:15:02.486420 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7426546b-0d60-4c6e-b888-c2293defc468-operator-scripts\") pod \"root-account-create-update-q59jh\" (UID: \"7426546b-0d60-4c6e-b888-c2293defc468\") " pod="openstack/root-account-create-update-q59jh" Jan 30 17:15:02 crc kubenswrapper[4712]: I0130 17:15:02.487218 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7426546b-0d60-4c6e-b888-c2293defc468-operator-scripts\") pod \"root-account-create-update-q59jh\" (UID: \"7426546b-0d60-4c6e-b888-c2293defc468\") " pod="openstack/root-account-create-update-q59jh" Jan 30 17:15:02 crc kubenswrapper[4712]: I0130 17:15:02.511321 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bxkg\" (UniqueName: \"kubernetes.io/projected/7426546b-0d60-4c6e-b888-c2293defc468-kube-api-access-6bxkg\") pod \"root-account-create-update-q59jh\" (UID: \"7426546b-0d60-4c6e-b888-c2293defc468\") " pod="openstack/root-account-create-update-q59jh" Jan 30 17:15:02 crc kubenswrapper[4712]: I0130 17:15:02.678338 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-q59jh" Jan 30 17:15:03 crc kubenswrapper[4712]: I0130 17:15:03.088212 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-4f797" event={"ID":"44533c75-12d1-496a-88b4-1a0c38c3c336","Type":"ContainerStarted","Data":"5d87721b9b2c7d805bc32383e44870b7e08530f081a070b1f9dce272e2b68a98"} Jan 30 17:15:03 crc kubenswrapper[4712]: I0130 17:15:03.089175 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-764c5664d7-4f797" Jan 30 17:15:03 crc kubenswrapper[4712]: I0130 17:15:03.121917 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-764c5664d7-4f797" podStartSLOduration=3.121894307 podStartE2EDuration="3.121894307s" podCreationTimestamp="2026-01-30 17:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:15:03.112253785 +0000 UTC m=+1240.019263254" watchObservedRunningTime="2026-01-30 17:15:03.121894307 +0000 UTC m=+1240.028903776" Jan 30 17:15:03 crc kubenswrapper[4712]: I0130 17:15:03.158130 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-q59jh"] Jan 30 17:15:03 crc kubenswrapper[4712]: W0130 17:15:03.166419 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7426546b_0d60_4c6e_b888_c2293defc468.slice/crio-d7a41d15ced1aba2d34e84a0bfde48d450892c5dc4fa786b39a2560a5c1aa205 WatchSource:0}: Error finding container d7a41d15ced1aba2d34e84a0bfde48d450892c5dc4fa786b39a2560a5c1aa205: Status 404 returned error can't find the container with id d7a41d15ced1aba2d34e84a0bfde48d450892c5dc4fa786b39a2560a5c1aa205 Jan 30 17:15:03 crc kubenswrapper[4712]: I0130 17:15:03.349176 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-8dxgd" Jan 30 17:15:03 crc kubenswrapper[4712]: I0130 17:15:03.403611 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7b9ab52-8e89-454b-95d3-bd12c0f96ebb-config-volume\") pod \"f7b9ab52-8e89-454b-95d3-bd12c0f96ebb\" (UID: \"f7b9ab52-8e89-454b-95d3-bd12c0f96ebb\") " Jan 30 17:15:03 crc kubenswrapper[4712]: I0130 17:15:03.403871 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7b9ab52-8e89-454b-95d3-bd12c0f96ebb-secret-volume\") pod \"f7b9ab52-8e89-454b-95d3-bd12c0f96ebb\" (UID: \"f7b9ab52-8e89-454b-95d3-bd12c0f96ebb\") " Jan 30 17:15:03 crc kubenswrapper[4712]: I0130 17:15:03.403935 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ff2rq\" (UniqueName: \"kubernetes.io/projected/f7b9ab52-8e89-454b-95d3-bd12c0f96ebb-kube-api-access-ff2rq\") pod \"f7b9ab52-8e89-454b-95d3-bd12c0f96ebb\" (UID: \"f7b9ab52-8e89-454b-95d3-bd12c0f96ebb\") " Jan 30 17:15:03 crc kubenswrapper[4712]: I0130 17:15:03.404770 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7b9ab52-8e89-454b-95d3-bd12c0f96ebb-config-volume" (OuterVolumeSpecName: "config-volume") pod "f7b9ab52-8e89-454b-95d3-bd12c0f96ebb" (UID: "f7b9ab52-8e89-454b-95d3-bd12c0f96ebb"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:15:03 crc kubenswrapper[4712]: I0130 17:15:03.409723 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7b9ab52-8e89-454b-95d3-bd12c0f96ebb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f7b9ab52-8e89-454b-95d3-bd12c0f96ebb" (UID: "f7b9ab52-8e89-454b-95d3-bd12c0f96ebb"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:15:03 crc kubenswrapper[4712]: I0130 17:15:03.410861 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7b9ab52-8e89-454b-95d3-bd12c0f96ebb-kube-api-access-ff2rq" (OuterVolumeSpecName: "kube-api-access-ff2rq") pod "f7b9ab52-8e89-454b-95d3-bd12c0f96ebb" (UID: "f7b9ab52-8e89-454b-95d3-bd12c0f96ebb"). InnerVolumeSpecName "kube-api-access-ff2rq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:15:03 crc kubenswrapper[4712]: I0130 17:15:03.506629 4712 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7b9ab52-8e89-454b-95d3-bd12c0f96ebb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:03 crc kubenswrapper[4712]: I0130 17:15:03.506896 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ff2rq\" (UniqueName: \"kubernetes.io/projected/f7b9ab52-8e89-454b-95d3-bd12c0f96ebb-kube-api-access-ff2rq\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:03 crc kubenswrapper[4712]: I0130 17:15:03.506913 4712 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f7b9ab52-8e89-454b-95d3-bd12c0f96ebb-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:04 crc kubenswrapper[4712]: I0130 17:15:04.096882 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-q59jh" event={"ID":"7426546b-0d60-4c6e-b888-c2293defc468","Type":"ContainerStarted","Data":"7d9ebe2758317c93c69b5bc90b0d2af9f645bd05c2d96f112b2a027f42a6debc"} Jan 30 17:15:04 crc kubenswrapper[4712]: I0130 17:15:04.096933 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-q59jh" event={"ID":"7426546b-0d60-4c6e-b888-c2293defc468","Type":"ContainerStarted","Data":"d7a41d15ced1aba2d34e84a0bfde48d450892c5dc4fa786b39a2560a5c1aa205"} Jan 30 17:15:04 crc kubenswrapper[4712]: I0130 17:15:04.101439 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-8dxgd" event={"ID":"f7b9ab52-8e89-454b-95d3-bd12c0f96ebb","Type":"ContainerDied","Data":"bcab4b361bc9f49e40e6d9413633fb0ad45fccbc62b8f8b141eb93ee00757742"} Jan 30 17:15:04 crc kubenswrapper[4712]: I0130 17:15:04.101516 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcab4b361bc9f49e40e6d9413633fb0ad45fccbc62b8f8b141eb93ee00757742" Jan 30 17:15:04 crc kubenswrapper[4712]: I0130 17:15:04.101767 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496555-8dxgd" Jan 30 17:15:04 crc kubenswrapper[4712]: I0130 17:15:04.119771 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-q59jh" podStartSLOduration=2.119748214 podStartE2EDuration="2.119748214s" podCreationTimestamp="2026-01-30 17:15:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:15:04.11336999 +0000 UTC m=+1241.020379459" watchObservedRunningTime="2026-01-30 17:15:04.119748214 +0000 UTC m=+1241.026757683" Jan 30 17:15:05 crc kubenswrapper[4712]: I0130 17:15:05.110348 4712 generic.go:334] "Generic (PLEG): container finished" podID="7426546b-0d60-4c6e-b888-c2293defc468" containerID="7d9ebe2758317c93c69b5bc90b0d2af9f645bd05c2d96f112b2a027f42a6debc" exitCode=0 Jan 30 17:15:05 crc kubenswrapper[4712]: I0130 17:15:05.110754 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-q59jh" event={"ID":"7426546b-0d60-4c6e-b888-c2293defc468","Type":"ContainerDied","Data":"7d9ebe2758317c93c69b5bc90b0d2af9f645bd05c2d96f112b2a027f42a6debc"} Jan 30 17:15:05 crc kubenswrapper[4712]: I0130 17:15:05.826317 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.200008 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-sdcjd"] Jan 30 17:15:06 crc kubenswrapper[4712]: E0130 17:15:06.200422 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7b9ab52-8e89-454b-95d3-bd12c0f96ebb" containerName="collect-profiles" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.200441 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7b9ab52-8e89-454b-95d3-bd12c0f96ebb" containerName="collect-profiles" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.200668 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7b9ab52-8e89-454b-95d3-bd12c0f96ebb" containerName="collect-profiles" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.201363 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-sdcjd" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.209138 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-sdcjd"] Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.222998 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.285378 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-w277s"] Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.292670 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-w277s" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.314076 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-w277s"] Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.355109 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b36df6b0-4d60-47bd-a5e3-c8570fa81424-operator-scripts\") pod \"heat-db-create-sdcjd\" (UID: \"b36df6b0-4d60-47bd-a5e3-c8570fa81424\") " pod="openstack/heat-db-create-sdcjd" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.355275 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnkdg\" (UniqueName: \"kubernetes.io/projected/b36df6b0-4d60-47bd-a5e3-c8570fa81424-kube-api-access-lnkdg\") pod \"heat-db-create-sdcjd\" (UID: \"b36df6b0-4d60-47bd-a5e3-c8570fa81424\") " pod="openstack/heat-db-create-sdcjd" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.392211 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-8597-account-create-update-dktjd"] Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.393360 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8597-account-create-update-dktjd" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.395394 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.411413 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8597-account-create-update-dktjd"] Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.457858 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b36df6b0-4d60-47bd-a5e3-c8570fa81424-operator-scripts\") pod \"heat-db-create-sdcjd\" (UID: \"b36df6b0-4d60-47bd-a5e3-c8570fa81424\") " pod="openstack/heat-db-create-sdcjd" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.457966 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnkdg\" (UniqueName: \"kubernetes.io/projected/b36df6b0-4d60-47bd-a5e3-c8570fa81424-kube-api-access-lnkdg\") pod \"heat-db-create-sdcjd\" (UID: \"b36df6b0-4d60-47bd-a5e3-c8570fa81424\") " pod="openstack/heat-db-create-sdcjd" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.458011 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6d0d92d-69bc-4285-98df-0f3bda502989-operator-scripts\") pod \"cinder-db-create-w277s\" (UID: \"a6d0d92d-69bc-4285-98df-0f3bda502989\") " pod="openstack/cinder-db-create-w277s" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.458052 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grg4n\" (UniqueName: \"kubernetes.io/projected/a6d0d92d-69bc-4285-98df-0f3bda502989-kube-api-access-grg4n\") pod \"cinder-db-create-w277s\" (UID: \"a6d0d92d-69bc-4285-98df-0f3bda502989\") " pod="openstack/cinder-db-create-w277s" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.458791 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/b36df6b0-4d60-47bd-a5e3-c8570fa81424-operator-scripts\") pod \"heat-db-create-sdcjd\" (UID: \"b36df6b0-4d60-47bd-a5e3-c8570fa81424\") " pod="openstack/heat-db-create-sdcjd" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.502658 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-zd4d8"] Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.503429 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnkdg\" (UniqueName: \"kubernetes.io/projected/b36df6b0-4d60-47bd-a5e3-c8570fa81424-kube-api-access-lnkdg\") pod \"heat-db-create-sdcjd\" (UID: \"b36df6b0-4d60-47bd-a5e3-c8570fa81424\") " pod="openstack/heat-db-create-sdcjd" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.503885 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-zd4d8" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.514341 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-zd4d8"] Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.524173 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-sdcjd" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.577279 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9adf62bc-41cc-4682-8943-b72859412ebc-operator-scripts\") pod \"barbican-db-create-zd4d8\" (UID: \"9adf62bc-41cc-4682-8943-b72859412ebc\") " pod="openstack/barbican-db-create-zd4d8" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.577345 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6d0d92d-69bc-4285-98df-0f3bda502989-operator-scripts\") pod \"cinder-db-create-w277s\" (UID: \"a6d0d92d-69bc-4285-98df-0f3bda502989\") " pod="openstack/cinder-db-create-w277s" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.577364 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpqpl\" (UniqueName: \"kubernetes.io/projected/9adf62bc-41cc-4682-8943-b72859412ebc-kube-api-access-gpqpl\") pod \"barbican-db-create-zd4d8\" (UID: \"9adf62bc-41cc-4682-8943-b72859412ebc\") " pod="openstack/barbican-db-create-zd4d8" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.577401 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grg4n\" (UniqueName: \"kubernetes.io/projected/a6d0d92d-69bc-4285-98df-0f3bda502989-kube-api-access-grg4n\") pod \"cinder-db-create-w277s\" (UID: \"a6d0d92d-69bc-4285-98df-0f3bda502989\") " pod="openstack/cinder-db-create-w277s" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.577419 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85c99\" (UniqueName: \"kubernetes.io/projected/2bdbe95d-75db-4a6b-8204-0c9bdfc8f6bd-kube-api-access-85c99\") pod \"barbican-8597-account-create-update-dktjd\" (UID: \"2bdbe95d-75db-4a6b-8204-0c9bdfc8f6bd\") " pod="openstack/barbican-8597-account-create-update-dktjd" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.577467 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/2bdbe95d-75db-4a6b-8204-0c9bdfc8f6bd-operator-scripts\") pod \"barbican-8597-account-create-update-dktjd\" (UID: \"2bdbe95d-75db-4a6b-8204-0c9bdfc8f6bd\") " pod="openstack/barbican-8597-account-create-update-dktjd" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.579259 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-6ee6-account-create-update-dl58q"] Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.580500 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6ee6-account-create-update-dl58q" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.585049 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.588204 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6d0d92d-69bc-4285-98df-0f3bda502989-operator-scripts\") pod \"cinder-db-create-w277s\" (UID: \"a6d0d92d-69bc-4285-98df-0f3bda502989\") " pod="openstack/cinder-db-create-w277s" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.629619 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6ee6-account-create-update-dl58q"] Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.674550 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grg4n\" (UniqueName: \"kubernetes.io/projected/a6d0d92d-69bc-4285-98df-0f3bda502989-kube-api-access-grg4n\") pod \"cinder-db-create-w277s\" (UID: \"a6d0d92d-69bc-4285-98df-0f3bda502989\") " pod="openstack/cinder-db-create-w277s" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.678884 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bdbe95d-75db-4a6b-8204-0c9bdfc8f6bd-operator-scripts\") pod \"barbican-8597-account-create-update-dktjd\" (UID: \"2bdbe95d-75db-4a6b-8204-0c9bdfc8f6bd\") " pod="openstack/barbican-8597-account-create-update-dktjd" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.678977 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9adf62bc-41cc-4682-8943-b72859412ebc-operator-scripts\") pod \"barbican-db-create-zd4d8\" (UID: \"9adf62bc-41cc-4682-8943-b72859412ebc\") " pod="openstack/barbican-db-create-zd4d8" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.679015 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01856653-57a6-4e16-810c-95e7cf57014f-operator-scripts\") pod \"cinder-6ee6-account-create-update-dl58q\" (UID: \"01856653-57a6-4e16-810c-95e7cf57014f\") " pod="openstack/cinder-6ee6-account-create-update-dl58q" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.679039 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gpqpl\" (UniqueName: \"kubernetes.io/projected/9adf62bc-41cc-4682-8943-b72859412ebc-kube-api-access-gpqpl\") pod \"barbican-db-create-zd4d8\" (UID: \"9adf62bc-41cc-4682-8943-b72859412ebc\") " pod="openstack/barbican-db-create-zd4d8" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.679076 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85c99\" (UniqueName: 
\"kubernetes.io/projected/2bdbe95d-75db-4a6b-8204-0c9bdfc8f6bd-kube-api-access-85c99\") pod \"barbican-8597-account-create-update-dktjd\" (UID: \"2bdbe95d-75db-4a6b-8204-0c9bdfc8f6bd\") " pod="openstack/barbican-8597-account-create-update-dktjd" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.679094 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcnsg\" (UniqueName: \"kubernetes.io/projected/01856653-57a6-4e16-810c-95e7cf57014f-kube-api-access-bcnsg\") pod \"cinder-6ee6-account-create-update-dl58q\" (UID: \"01856653-57a6-4e16-810c-95e7cf57014f\") " pod="openstack/cinder-6ee6-account-create-update-dl58q" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.679620 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bdbe95d-75db-4a6b-8204-0c9bdfc8f6bd-operator-scripts\") pod \"barbican-8597-account-create-update-dktjd\" (UID: \"2bdbe95d-75db-4a6b-8204-0c9bdfc8f6bd\") " pod="openstack/barbican-8597-account-create-update-dktjd" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.679819 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9adf62bc-41cc-4682-8943-b72859412ebc-operator-scripts\") pod \"barbican-db-create-zd4d8\" (UID: \"9adf62bc-41cc-4682-8943-b72859412ebc\") " pod="openstack/barbican-db-create-zd4d8" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.713601 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-5890-account-create-update-n55qw"] Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.714558 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-5890-account-create-update-n55qw" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.716308 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85c99\" (UniqueName: \"kubernetes.io/projected/2bdbe95d-75db-4a6b-8204-0c9bdfc8f6bd-kube-api-access-85c99\") pod \"barbican-8597-account-create-update-dktjd\" (UID: \"2bdbe95d-75db-4a6b-8204-0c9bdfc8f6bd\") " pod="openstack/barbican-8597-account-create-update-dktjd" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.724662 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.733166 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gpqpl\" (UniqueName: \"kubernetes.io/projected/9adf62bc-41cc-4682-8943-b72859412ebc-kube-api-access-gpqpl\") pod \"barbican-db-create-zd4d8\" (UID: \"9adf62bc-41cc-4682-8943-b72859412ebc\") " pod="openstack/barbican-db-create-zd4d8" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.739969 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-5890-account-create-update-n55qw"] Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.781384 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01856653-57a6-4e16-810c-95e7cf57014f-operator-scripts\") pod \"cinder-6ee6-account-create-update-dl58q\" (UID: \"01856653-57a6-4e16-810c-95e7cf57014f\") " pod="openstack/cinder-6ee6-account-create-update-dl58q" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.781462 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-bcnsg\" (UniqueName: \"kubernetes.io/projected/01856653-57a6-4e16-810c-95e7cf57014f-kube-api-access-bcnsg\") pod \"cinder-6ee6-account-create-update-dl58q\" (UID: \"01856653-57a6-4e16-810c-95e7cf57014f\") " pod="openstack/cinder-6ee6-account-create-update-dl58q" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.782784 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01856653-57a6-4e16-810c-95e7cf57014f-operator-scripts\") pod \"cinder-6ee6-account-create-update-dl58q\" (UID: \"01856653-57a6-4e16-810c-95e7cf57014f\") " pod="openstack/cinder-6ee6-account-create-update-dl58q" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.790884 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-97hwk"] Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.791848 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-97hwk" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.799058 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-zd4d8" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.820403 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.820634 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.820649 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.820815 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dxmtz" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.824370 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcnsg\" (UniqueName: \"kubernetes.io/projected/01856653-57a6-4e16-810c-95e7cf57014f-kube-api-access-bcnsg\") pod \"cinder-6ee6-account-create-update-dl58q\" (UID: \"01856653-57a6-4e16-810c-95e7cf57014f\") " pod="openstack/cinder-6ee6-account-create-update-dl58q" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.839855 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-97hwk"] Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.861959 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-q59jh" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.883790 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5df96043-da07-44d6-bd5e-f90001f55f1f-operator-scripts\") pod \"heat-5890-account-create-update-n55qw\" (UID: \"5df96043-da07-44d6-bd5e-f90001f55f1f\") " pod="openstack/heat-5890-account-create-update-n55qw" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.884173 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9whgj\" (UniqueName: \"kubernetes.io/projected/5df96043-da07-44d6-bd5e-f90001f55f1f-kube-api-access-9whgj\") pod \"heat-5890-account-create-update-n55qw\" (UID: \"5df96043-da07-44d6-bd5e-f90001f55f1f\") " pod="openstack/heat-5890-account-create-update-n55qw" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.907730 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-w277s" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.932344 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-2rr2s"] Jan 30 17:15:06 crc kubenswrapper[4712]: E0130 17:15:06.932717 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7426546b-0d60-4c6e-b888-c2293defc468" containerName="mariadb-account-create-update" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.932729 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="7426546b-0d60-4c6e-b888-c2293defc468" containerName="mariadb-account-create-update" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.932892 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="7426546b-0d60-4c6e-b888-c2293defc468" containerName="mariadb-account-create-update" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.933404 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-2rr2s" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.986676 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bxkg\" (UniqueName: \"kubernetes.io/projected/7426546b-0d60-4c6e-b888-c2293defc468-kube-api-access-6bxkg\") pod \"7426546b-0d60-4c6e-b888-c2293defc468\" (UID: \"7426546b-0d60-4c6e-b888-c2293defc468\") " Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.987573 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7426546b-0d60-4c6e-b888-c2293defc468-operator-scripts\") pod \"7426546b-0d60-4c6e-b888-c2293defc468\" (UID: \"7426546b-0d60-4c6e-b888-c2293defc468\") " Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.988077 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4p6g4\" (UniqueName: \"kubernetes.io/projected/7607c458-cbb6-43d4-8a85-e631507e9d66-kube-api-access-4p6g4\") pod \"keystone-db-sync-97hwk\" (UID: \"7607c458-cbb6-43d4-8a85-e631507e9d66\") " pod="openstack/keystone-db-sync-97hwk" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.988241 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7607c458-cbb6-43d4-8a85-e631507e9d66-config-data\") pod \"keystone-db-sync-97hwk\" (UID: \"7607c458-cbb6-43d4-8a85-e631507e9d66\") " pod="openstack/keystone-db-sync-97hwk" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.988275 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7607c458-cbb6-43d4-8a85-e631507e9d66-combined-ca-bundle\") pod \"keystone-db-sync-97hwk\" (UID: \"7607c458-cbb6-43d4-8a85-e631507e9d66\") " pod="openstack/keystone-db-sync-97hwk" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.988527 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5df96043-da07-44d6-bd5e-f90001f55f1f-operator-scripts\") pod \"heat-5890-account-create-update-n55qw\" (UID: \"5df96043-da07-44d6-bd5e-f90001f55f1f\") " pod="openstack/heat-5890-account-create-update-n55qw" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.988585 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9whgj\" (UniqueName: \"kubernetes.io/projected/5df96043-da07-44d6-bd5e-f90001f55f1f-kube-api-access-9whgj\") pod \"heat-5890-account-create-update-n55qw\" (UID: \"5df96043-da07-44d6-bd5e-f90001f55f1f\") " pod="openstack/heat-5890-account-create-update-n55qw" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.989926 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7426546b-0d60-4c6e-b888-c2293defc468-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7426546b-0d60-4c6e-b888-c2293defc468" (UID: "7426546b-0d60-4c6e-b888-c2293defc468"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.994220 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5df96043-da07-44d6-bd5e-f90001f55f1f-operator-scripts\") pod \"heat-5890-account-create-update-n55qw\" (UID: \"5df96043-da07-44d6-bd5e-f90001f55f1f\") " pod="openstack/heat-5890-account-create-update-n55qw" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.995417 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7426546b-0d60-4c6e-b888-c2293defc468-kube-api-access-6bxkg" (OuterVolumeSpecName: "kube-api-access-6bxkg") pod "7426546b-0d60-4c6e-b888-c2293defc468" (UID: "7426546b-0d60-4c6e-b888-c2293defc468"). InnerVolumeSpecName "kube-api-access-6bxkg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:15:06 crc kubenswrapper[4712]: I0130 17:15:06.996886 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-2rr2s"] Jan 30 17:15:07 crc kubenswrapper[4712]: I0130 17:15:07.015540 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8597-account-create-update-dktjd" Jan 30 17:15:07 crc kubenswrapper[4712]: I0130 17:15:07.018870 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9whgj\" (UniqueName: \"kubernetes.io/projected/5df96043-da07-44d6-bd5e-f90001f55f1f-kube-api-access-9whgj\") pod \"heat-5890-account-create-update-n55qw\" (UID: \"5df96043-da07-44d6-bd5e-f90001f55f1f\") " pod="openstack/heat-5890-account-create-update-n55qw" Jan 30 17:15:07 crc kubenswrapper[4712]: I0130 17:15:07.043126 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7b37-account-create-update-mdbpm"] Jan 30 17:15:07 crc kubenswrapper[4712]: I0130 17:15:07.045609 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7b37-account-create-update-mdbpm" Jan 30 17:15:07 crc kubenswrapper[4712]: I0130 17:15:07.050324 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 30 17:15:07 crc kubenswrapper[4712]: I0130 17:15:07.096031 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm2xf\" (UniqueName: \"kubernetes.io/projected/dfc6cfd7-a3e2-4520-ac86-ff011cd96593-kube-api-access-vm2xf\") pod \"neutron-db-create-2rr2s\" (UID: \"dfc6cfd7-a3e2-4520-ac86-ff011cd96593\") " pod="openstack/neutron-db-create-2rr2s" Jan 30 17:15:07 crc kubenswrapper[4712]: I0130 17:15:07.096142 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfc6cfd7-a3e2-4520-ac86-ff011cd96593-operator-scripts\") pod \"neutron-db-create-2rr2s\" (UID: \"dfc6cfd7-a3e2-4520-ac86-ff011cd96593\") " pod="openstack/neutron-db-create-2rr2s" Jan 30 17:15:07 crc kubenswrapper[4712]: I0130 17:15:07.096171 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4p6g4\" (UniqueName: \"kubernetes.io/projected/7607c458-cbb6-43d4-8a85-e631507e9d66-kube-api-access-4p6g4\") pod \"keystone-db-sync-97hwk\" (UID: \"7607c458-cbb6-43d4-8a85-e631507e9d66\") " pod="openstack/keystone-db-sync-97hwk" Jan 30 17:15:07 crc kubenswrapper[4712]: I0130 17:15:07.096257 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7607c458-cbb6-43d4-8a85-e631507e9d66-config-data\") pod \"keystone-db-sync-97hwk\" (UID: \"7607c458-cbb6-43d4-8a85-e631507e9d66\") " pod="openstack/keystone-db-sync-97hwk" Jan 30 17:15:07 crc kubenswrapper[4712]: I0130 17:15:07.096287 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7607c458-cbb6-43d4-8a85-e631507e9d66-combined-ca-bundle\") pod \"keystone-db-sync-97hwk\" (UID: \"7607c458-cbb6-43d4-8a85-e631507e9d66\") " pod="openstack/keystone-db-sync-97hwk" Jan 30 17:15:07 crc kubenswrapper[4712]: I0130 17:15:07.096384 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bxkg\" (UniqueName: \"kubernetes.io/projected/7426546b-0d60-4c6e-b888-c2293defc468-kube-api-access-6bxkg\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:07 crc kubenswrapper[4712]: I0130 17:15:07.096401 4712 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7426546b-0d60-4c6e-b888-c2293defc468-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:07 crc kubenswrapper[4712]: I0130 17:15:07.107031 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-6ee6-account-create-update-dl58q" Jan 30 17:15:07 crc kubenswrapper[4712]: I0130 17:15:07.112014 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7607c458-cbb6-43d4-8a85-e631507e9d66-config-data\") pod \"keystone-db-sync-97hwk\" (UID: \"7607c458-cbb6-43d4-8a85-e631507e9d66\") " pod="openstack/keystone-db-sync-97hwk" Jan 30 17:15:07 crc kubenswrapper[4712]: I0130 17:15:07.114942 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7607c458-cbb6-43d4-8a85-e631507e9d66-combined-ca-bundle\") pod \"keystone-db-sync-97hwk\" (UID: \"7607c458-cbb6-43d4-8a85-e631507e9d66\") " pod="openstack/keystone-db-sync-97hwk" Jan 30 17:15:07 crc kubenswrapper[4712]: I0130 17:15:07.129854 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7b37-account-create-update-mdbpm"] Jan 30 17:15:07 crc kubenswrapper[4712]: I0130 17:15:07.135594 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-5890-account-create-update-n55qw" Jan 30 17:15:07 crc kubenswrapper[4712]: I0130 17:15:07.141097 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4p6g4\" (UniqueName: \"kubernetes.io/projected/7607c458-cbb6-43d4-8a85-e631507e9d66-kube-api-access-4p6g4\") pod \"keystone-db-sync-97hwk\" (UID: \"7607c458-cbb6-43d4-8a85-e631507e9d66\") " pod="openstack/keystone-db-sync-97hwk" Jan 30 17:15:07 crc kubenswrapper[4712]: I0130 17:15:07.161790 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-97hwk" Jan 30 17:15:07 crc kubenswrapper[4712]: I0130 17:15:07.168587 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-q59jh" event={"ID":"7426546b-0d60-4c6e-b888-c2293defc468","Type":"ContainerDied","Data":"d7a41d15ced1aba2d34e84a0bfde48d450892c5dc4fa786b39a2560a5c1aa205"} Jan 30 17:15:07 crc kubenswrapper[4712]: I0130 17:15:07.168633 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7a41d15ced1aba2d34e84a0bfde48d450892c5dc4fa786b39a2560a5c1aa205" Jan 30 17:15:07 crc kubenswrapper[4712]: I0130 17:15:07.168718 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-q59jh" Jan 30 17:15:07 crc kubenswrapper[4712]: I0130 17:15:07.197713 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/325fa6b1-02e6-4ef7-aa98-99a417a5178b-operator-scripts\") pod \"neutron-7b37-account-create-update-mdbpm\" (UID: \"325fa6b1-02e6-4ef7-aa98-99a417a5178b\") " pod="openstack/neutron-7b37-account-create-update-mdbpm" Jan 30 17:15:07 crc kubenswrapper[4712]: I0130 17:15:07.197822 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vm2xf\" (UniqueName: \"kubernetes.io/projected/dfc6cfd7-a3e2-4520-ac86-ff011cd96593-kube-api-access-vm2xf\") pod \"neutron-db-create-2rr2s\" (UID: \"dfc6cfd7-a3e2-4520-ac86-ff011cd96593\") " pod="openstack/neutron-db-create-2rr2s" Jan 30 17:15:07 crc kubenswrapper[4712]: I0130 17:15:07.197891 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfc6cfd7-a3e2-4520-ac86-ff011cd96593-operator-scripts\") pod \"neutron-db-create-2rr2s\" (UID: \"dfc6cfd7-a3e2-4520-ac86-ff011cd96593\") " pod="openstack/neutron-db-create-2rr2s" Jan 30 17:15:07 crc kubenswrapper[4712]: I0130 17:15:07.197925 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjrwh\" (UniqueName: \"kubernetes.io/projected/325fa6b1-02e6-4ef7-aa98-99a417a5178b-kube-api-access-fjrwh\") pod \"neutron-7b37-account-create-update-mdbpm\" (UID: \"325fa6b1-02e6-4ef7-aa98-99a417a5178b\") " pod="openstack/neutron-7b37-account-create-update-mdbpm" Jan 30 17:15:07 crc kubenswrapper[4712]: I0130 17:15:07.198689 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfc6cfd7-a3e2-4520-ac86-ff011cd96593-operator-scripts\") pod \"neutron-db-create-2rr2s\" (UID: \"dfc6cfd7-a3e2-4520-ac86-ff011cd96593\") " pod="openstack/neutron-db-create-2rr2s" Jan 30 17:15:07 crc kubenswrapper[4712]: I0130 17:15:07.226573 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vm2xf\" (UniqueName: \"kubernetes.io/projected/dfc6cfd7-a3e2-4520-ac86-ff011cd96593-kube-api-access-vm2xf\") pod \"neutron-db-create-2rr2s\" (UID: \"dfc6cfd7-a3e2-4520-ac86-ff011cd96593\") " pod="openstack/neutron-db-create-2rr2s" Jan 30 17:15:07 crc kubenswrapper[4712]: I0130 17:15:07.265280 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-2rr2s" Jan 30 17:15:07 crc kubenswrapper[4712]: I0130 17:15:07.299360 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/325fa6b1-02e6-4ef7-aa98-99a417a5178b-operator-scripts\") pod \"neutron-7b37-account-create-update-mdbpm\" (UID: \"325fa6b1-02e6-4ef7-aa98-99a417a5178b\") " pod="openstack/neutron-7b37-account-create-update-mdbpm" Jan 30 17:15:07 crc kubenswrapper[4712]: I0130 17:15:07.299470 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjrwh\" (UniqueName: \"kubernetes.io/projected/325fa6b1-02e6-4ef7-aa98-99a417a5178b-kube-api-access-fjrwh\") pod \"neutron-7b37-account-create-update-mdbpm\" (UID: \"325fa6b1-02e6-4ef7-aa98-99a417a5178b\") " pod="openstack/neutron-7b37-account-create-update-mdbpm" Jan 30 17:15:07 crc kubenswrapper[4712]: I0130 17:15:07.300583 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/325fa6b1-02e6-4ef7-aa98-99a417a5178b-operator-scripts\") pod \"neutron-7b37-account-create-update-mdbpm\" (UID: \"325fa6b1-02e6-4ef7-aa98-99a417a5178b\") " pod="openstack/neutron-7b37-account-create-update-mdbpm" Jan 30 17:15:07 crc kubenswrapper[4712]: I0130 17:15:07.429583 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjrwh\" (UniqueName: \"kubernetes.io/projected/325fa6b1-02e6-4ef7-aa98-99a417a5178b-kube-api-access-fjrwh\") pod \"neutron-7b37-account-create-update-mdbpm\" (UID: \"325fa6b1-02e6-4ef7-aa98-99a417a5178b\") " pod="openstack/neutron-7b37-account-create-update-mdbpm" Jan 30 17:15:08 crc kubenswrapper[4712]: I0130 17:15:07.476001 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-sdcjd"] Jan 30 17:15:08 crc kubenswrapper[4712]: W0130 17:15:07.512961 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb36df6b0_4d60_47bd_a5e3_c8570fa81424.slice/crio-9da30fe215f15ae595a98eef4a61851d57907971a3835ec5574d706677b325d6 WatchSource:0}: Error finding container 9da30fe215f15ae595a98eef4a61851d57907971a3835ec5574d706677b325d6: Status 404 returned error can't find the container with id 9da30fe215f15ae595a98eef4a61851d57907971a3835ec5574d706677b325d6 Jan 30 17:15:08 crc kubenswrapper[4712]: I0130 17:15:07.571832 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-w277s"] Jan 30 17:15:08 crc kubenswrapper[4712]: I0130 17:15:07.656955 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-zd4d8"] Jan 30 17:15:08 crc kubenswrapper[4712]: I0130 17:15:07.684315 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7b37-account-create-update-mdbpm" Jan 30 17:15:08 crc kubenswrapper[4712]: I0130 17:15:07.696726 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8597-account-create-update-dktjd"] Jan 30 17:15:08 crc kubenswrapper[4712]: W0130 17:15:07.697572 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2bdbe95d_75db_4a6b_8204_0c9bdfc8f6bd.slice/crio-d2e2429f6332b725ae9dfbde56e04a990354ef1c9533ee0398d82356645a6ea0 WatchSource:0}: Error finding container d2e2429f6332b725ae9dfbde56e04a990354ef1c9533ee0398d82356645a6ea0: Status 404 returned error can't find the container with id d2e2429f6332b725ae9dfbde56e04a990354ef1c9533ee0398d82356645a6ea0 Jan 30 17:15:08 crc kubenswrapper[4712]: I0130 17:15:08.182231 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-sdcjd" event={"ID":"b36df6b0-4d60-47bd-a5e3-c8570fa81424","Type":"ContainerStarted","Data":"ae1866255ee9d0c0b636a2048b70966260ae56843080505eabdd58b9aadc3b4d"} Jan 30 17:15:08 crc kubenswrapper[4712]: I0130 17:15:08.182647 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-sdcjd" event={"ID":"b36df6b0-4d60-47bd-a5e3-c8570fa81424","Type":"ContainerStarted","Data":"9da30fe215f15ae595a98eef4a61851d57907971a3835ec5574d706677b325d6"} Jan 30 17:15:08 crc kubenswrapper[4712]: I0130 17:15:08.187360 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-zd4d8" event={"ID":"9adf62bc-41cc-4682-8943-b72859412ebc","Type":"ContainerStarted","Data":"96e758a07ebdd3575f4de3816c435903d2e17cf8c6ef90501540a3e5a431583f"} Jan 30 17:15:08 crc kubenswrapper[4712]: I0130 17:15:08.187405 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-zd4d8" event={"ID":"9adf62bc-41cc-4682-8943-b72859412ebc","Type":"ContainerStarted","Data":"da78b8290ecb7cfad371cab20cdd1ec956cf69351caeef2e1865c9e4738e25e7"} Jan 30 17:15:08 crc kubenswrapper[4712]: I0130 17:15:08.197293 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-w277s" event={"ID":"a6d0d92d-69bc-4285-98df-0f3bda502989","Type":"ContainerStarted","Data":"148a8ab19e12de20aeb4b7145ddbceda6f38491b4135f92fe2bd5f3a0553dd27"} Jan 30 17:15:08 crc kubenswrapper[4712]: I0130 17:15:08.197362 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-w277s" event={"ID":"a6d0d92d-69bc-4285-98df-0f3bda502989","Type":"ContainerStarted","Data":"9cf1e378fe941601a1e16807419439f7d10a7d77ddb0d5ac5ad1500cd4c198bd"} Jan 30 17:15:08 crc kubenswrapper[4712]: I0130 17:15:08.200389 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8597-account-create-update-dktjd" event={"ID":"2bdbe95d-75db-4a6b-8204-0c9bdfc8f6bd","Type":"ContainerStarted","Data":"0141c288682731d5610bf03db69cbd77b3ddfb2e6249b05475bb4061caf2b297"} Jan 30 17:15:08 crc kubenswrapper[4712]: I0130 17:15:08.200427 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8597-account-create-update-dktjd" event={"ID":"2bdbe95d-75db-4a6b-8204-0c9bdfc8f6bd","Type":"ContainerStarted","Data":"d2e2429f6332b725ae9dfbde56e04a990354ef1c9533ee0398d82356645a6ea0"} Jan 30 17:15:08 crc kubenswrapper[4712]: I0130 17:15:08.216620 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-sdcjd" podStartSLOduration=2.216598002 podStartE2EDuration="2.216598002s" 
podCreationTimestamp="2026-01-30 17:15:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:15:08.211590382 +0000 UTC m=+1245.118599861" watchObservedRunningTime="2026-01-30 17:15:08.216598002 +0000 UTC m=+1245.123607481" Jan 30 17:15:08 crc kubenswrapper[4712]: I0130 17:15:08.249704 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-w277s" podStartSLOduration=2.249681557 podStartE2EDuration="2.249681557s" podCreationTimestamp="2026-01-30 17:15:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:15:08.235532457 +0000 UTC m=+1245.142541926" watchObservedRunningTime="2026-01-30 17:15:08.249681557 +0000 UTC m=+1245.156691016" Jan 30 17:15:08 crc kubenswrapper[4712]: I0130 17:15:08.277198 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-8597-account-create-update-dktjd" podStartSLOduration=2.277176418 podStartE2EDuration="2.277176418s" podCreationTimestamp="2026-01-30 17:15:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:15:08.256052791 +0000 UTC m=+1245.163062260" watchObservedRunningTime="2026-01-30 17:15:08.277176418 +0000 UTC m=+1245.184185887" Jan 30 17:15:08 crc kubenswrapper[4712]: I0130 17:15:08.759738 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-zd4d8" podStartSLOduration=2.759721677 podStartE2EDuration="2.759721677s" podCreationTimestamp="2026-01-30 17:15:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:15:08.291170775 +0000 UTC m=+1245.198180244" watchObservedRunningTime="2026-01-30 17:15:08.759721677 +0000 UTC m=+1245.666731146" Jan 30 17:15:08 crc kubenswrapper[4712]: I0130 17:15:08.762864 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-6ee6-account-create-update-dl58q"] Jan 30 17:15:08 crc kubenswrapper[4712]: I0130 17:15:08.854117 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-97hwk"] Jan 30 17:15:08 crc kubenswrapper[4712]: W0130 17:15:08.882412 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7607c458_cbb6_43d4_8a85_e631507e9d66.slice/crio-b321386af735d23a30f66ceaea7037f6c3e43b979cc823fd997e0ff7be0f3a8e WatchSource:0}: Error finding container b321386af735d23a30f66ceaea7037f6c3e43b979cc823fd997e0ff7be0f3a8e: Status 404 returned error can't find the container with id b321386af735d23a30f66ceaea7037f6c3e43b979cc823fd997e0ff7be0f3a8e Jan 30 17:15:08 crc kubenswrapper[4712]: I0130 17:15:08.906780 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-5890-account-create-update-n55qw"] Jan 30 17:15:09 crc kubenswrapper[4712]: I0130 17:15:09.128595 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7b37-account-create-update-mdbpm"] Jan 30 17:15:09 crc kubenswrapper[4712]: I0130 17:15:09.145900 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-2rr2s"] Jan 30 17:15:09 crc kubenswrapper[4712]: W0130 17:15:09.151788 4712 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod325fa6b1_02e6_4ef7_aa98_99a417a5178b.slice/crio-b620bdd3498360f22b007e6cfcb83b1f133dd9de4e500ce4f4011b6cac2d6eb3 WatchSource:0}: Error finding container b620bdd3498360f22b007e6cfcb83b1f133dd9de4e500ce4f4011b6cac2d6eb3: Status 404 returned error can't find the container with id b620bdd3498360f22b007e6cfcb83b1f133dd9de4e500ce4f4011b6cac2d6eb3 Jan 30 17:15:09 crc kubenswrapper[4712]: I0130 17:15:09.227994 4712 generic.go:334] "Generic (PLEG): container finished" podID="b36df6b0-4d60-47bd-a5e3-c8570fa81424" containerID="ae1866255ee9d0c0b636a2048b70966260ae56843080505eabdd58b9aadc3b4d" exitCode=0 Jan 30 17:15:09 crc kubenswrapper[4712]: I0130 17:15:09.228051 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-sdcjd" event={"ID":"b36df6b0-4d60-47bd-a5e3-c8570fa81424","Type":"ContainerDied","Data":"ae1866255ee9d0c0b636a2048b70966260ae56843080505eabdd58b9aadc3b4d"} Jan 30 17:15:09 crc kubenswrapper[4712]: I0130 17:15:09.244058 4712 generic.go:334] "Generic (PLEG): container finished" podID="9adf62bc-41cc-4682-8943-b72859412ebc" containerID="96e758a07ebdd3575f4de3816c435903d2e17cf8c6ef90501540a3e5a431583f" exitCode=0 Jan 30 17:15:09 crc kubenswrapper[4712]: I0130 17:15:09.244153 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-zd4d8" event={"ID":"9adf62bc-41cc-4682-8943-b72859412ebc","Type":"ContainerDied","Data":"96e758a07ebdd3575f4de3816c435903d2e17cf8c6ef90501540a3e5a431583f"} Jan 30 17:15:09 crc kubenswrapper[4712]: I0130 17:15:09.285576 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ee6-account-create-update-dl58q" event={"ID":"01856653-57a6-4e16-810c-95e7cf57014f","Type":"ContainerStarted","Data":"8eafc63c8a5335f4ce9d4f66a0504f6ab4f3f6cc212fd11c269e2b35053ba8fd"} Jan 30 17:15:09 crc kubenswrapper[4712]: I0130 17:15:09.299497 4712 generic.go:334] "Generic (PLEG): container finished" podID="a6d0d92d-69bc-4285-98df-0f3bda502989" containerID="148a8ab19e12de20aeb4b7145ddbceda6f38491b4135f92fe2bd5f3a0553dd27" exitCode=0 Jan 30 17:15:09 crc kubenswrapper[4712]: I0130 17:15:09.299748 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-w277s" event={"ID":"a6d0d92d-69bc-4285-98df-0f3bda502989","Type":"ContainerDied","Data":"148a8ab19e12de20aeb4b7145ddbceda6f38491b4135f92fe2bd5f3a0553dd27"} Jan 30 17:15:09 crc kubenswrapper[4712]: I0130 17:15:09.302333 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-5890-account-create-update-n55qw" event={"ID":"5df96043-da07-44d6-bd5e-f90001f55f1f","Type":"ContainerStarted","Data":"7d352e8816c9d25dd657758fb695a85f6766b97ce7dbefc98898af892280b1d2"} Jan 30 17:15:09 crc kubenswrapper[4712]: I0130 17:15:09.353419 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b37-account-create-update-mdbpm" event={"ID":"325fa6b1-02e6-4ef7-aa98-99a417a5178b","Type":"ContainerStarted","Data":"b620bdd3498360f22b007e6cfcb83b1f133dd9de4e500ce4f4011b6cac2d6eb3"} Jan 30 17:15:09 crc kubenswrapper[4712]: I0130 17:15:09.365719 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-97hwk" event={"ID":"7607c458-cbb6-43d4-8a85-e631507e9d66","Type":"ContainerStarted","Data":"b321386af735d23a30f66ceaea7037f6c3e43b979cc823fd997e0ff7be0f3a8e"} Jan 30 17:15:09 crc kubenswrapper[4712]: I0130 17:15:09.373028 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2rr2s" 
event={"ID":"dfc6cfd7-a3e2-4520-ac86-ff011cd96593","Type":"ContainerStarted","Data":"7d0f6673610170759547684a5bb94ab0ba21fe31573ccd70db789abc2bca0cfc"} Jan 30 17:15:09 crc kubenswrapper[4712]: I0130 17:15:09.377299 4712 generic.go:334] "Generic (PLEG): container finished" podID="2bdbe95d-75db-4a6b-8204-0c9bdfc8f6bd" containerID="0141c288682731d5610bf03db69cbd77b3ddfb2e6249b05475bb4061caf2b297" exitCode=0 Jan 30 17:15:09 crc kubenswrapper[4712]: I0130 17:15:09.377346 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8597-account-create-update-dktjd" event={"ID":"2bdbe95d-75db-4a6b-8204-0c9bdfc8f6bd","Type":"ContainerDied","Data":"0141c288682731d5610bf03db69cbd77b3ddfb2e6249b05475bb4061caf2b297"} Jan 30 17:15:10 crc kubenswrapper[4712]: I0130 17:15:10.391724 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ee6-account-create-update-dl58q" event={"ID":"01856653-57a6-4e16-810c-95e7cf57014f","Type":"ContainerStarted","Data":"847e223ef2a3a759dc94f6e7d2c41e9a894c98a0bd770cef7059211c1dc282f6"} Jan 30 17:15:10 crc kubenswrapper[4712]: I0130 17:15:10.402406 4712 generic.go:334] "Generic (PLEG): container finished" podID="dfc6cfd7-a3e2-4520-ac86-ff011cd96593" containerID="28e96750c07fbc8c01b200ea3e91c04442cd43e0ff95f8c5447ff55cb81419be" exitCode=0 Jan 30 17:15:10 crc kubenswrapper[4712]: I0130 17:15:10.402484 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2rr2s" event={"ID":"dfc6cfd7-a3e2-4520-ac86-ff011cd96593","Type":"ContainerDied","Data":"28e96750c07fbc8c01b200ea3e91c04442cd43e0ff95f8c5447ff55cb81419be"} Jan 30 17:15:10 crc kubenswrapper[4712]: I0130 17:15:10.404255 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-5890-account-create-update-n55qw" event={"ID":"5df96043-da07-44d6-bd5e-f90001f55f1f","Type":"ContainerStarted","Data":"a5d8fd67f1f0de8064669c7048566a7a4366f1e0b2e483d9191c923041b8fe19"} Jan 30 17:15:10 crc kubenswrapper[4712]: I0130 17:15:10.416263 4712 generic.go:334] "Generic (PLEG): container finished" podID="a71905e7-0e29-40df-8d89-4a9a15cf0079" containerID="6827db413ce501836609062f68853422a888dfeffcce0c2fca3c7ec9cc0b9452" exitCode=0 Jan 30 17:15:10 crc kubenswrapper[4712]: I0130 17:15:10.416364 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7v96g" event={"ID":"a71905e7-0e29-40df-8d89-4a9a15cf0079","Type":"ContainerDied","Data":"6827db413ce501836609062f68853422a888dfeffcce0c2fca3c7ec9cc0b9452"} Jan 30 17:15:10 crc kubenswrapper[4712]: I0130 17:15:10.419990 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b37-account-create-update-mdbpm" event={"ID":"325fa6b1-02e6-4ef7-aa98-99a417a5178b","Type":"ContainerStarted","Data":"5358ca1f343981fb6b618413531a9590f90c9743b083d8eeac6cb4e9d1c4ccda"} Jan 30 17:15:10 crc kubenswrapper[4712]: I0130 17:15:10.428955 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-6ee6-account-create-update-dl58q" podStartSLOduration=4.428931832 podStartE2EDuration="4.428931832s" podCreationTimestamp="2026-01-30 17:15:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:15:10.422859245 +0000 UTC m=+1247.329868714" watchObservedRunningTime="2026-01-30 17:15:10.428931832 +0000 UTC m=+1247.335941301" Jan 30 17:15:10 crc kubenswrapper[4712]: I0130 17:15:10.453890 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/neutron-7b37-account-create-update-mdbpm" podStartSLOduration=4.45387037 podStartE2EDuration="4.45387037s" podCreationTimestamp="2026-01-30 17:15:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:15:10.452970269 +0000 UTC m=+1247.359979748" watchObservedRunningTime="2026-01-30 17:15:10.45387037 +0000 UTC m=+1247.360879839" Jan 30 17:15:10 crc kubenswrapper[4712]: I0130 17:15:10.484735 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-5890-account-create-update-n55qw" podStartSLOduration=4.484718292 podStartE2EDuration="4.484718292s" podCreationTimestamp="2026-01-30 17:15:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:15:10.480004179 +0000 UTC m=+1247.387013648" watchObservedRunningTime="2026-01-30 17:15:10.484718292 +0000 UTC m=+1247.391727761" Jan 30 17:15:10 crc kubenswrapper[4712]: I0130 17:15:10.777984 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-764c5664d7-4f797" Jan 30 17:15:10 crc kubenswrapper[4712]: I0130 17:15:10.857026 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-nk4ll"] Jan 30 17:15:10 crc kubenswrapper[4712]: I0130 17:15:10.857702 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-nk4ll" podUID="b972b675-2edc-44ba-bc15-aa835aeef29d" containerName="dnsmasq-dns" containerID="cri-o://cd49cca49c962514975207ccdbc9e67b1400d099d5f63a47cf5027e0d9e4230c" gracePeriod=10 Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.137053 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-zd4d8" Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.292050 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9adf62bc-41cc-4682-8943-b72859412ebc-operator-scripts\") pod \"9adf62bc-41cc-4682-8943-b72859412ebc\" (UID: \"9adf62bc-41cc-4682-8943-b72859412ebc\") " Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.292706 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gpqpl\" (UniqueName: \"kubernetes.io/projected/9adf62bc-41cc-4682-8943-b72859412ebc-kube-api-access-gpqpl\") pod \"9adf62bc-41cc-4682-8943-b72859412ebc\" (UID: \"9adf62bc-41cc-4682-8943-b72859412ebc\") " Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.292532 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9adf62bc-41cc-4682-8943-b72859412ebc-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9adf62bc-41cc-4682-8943-b72859412ebc" (UID: "9adf62bc-41cc-4682-8943-b72859412ebc"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.299859 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9adf62bc-41cc-4682-8943-b72859412ebc-kube-api-access-gpqpl" (OuterVolumeSpecName: "kube-api-access-gpqpl") pod "9adf62bc-41cc-4682-8943-b72859412ebc" (UID: "9adf62bc-41cc-4682-8943-b72859412ebc"). InnerVolumeSpecName "kube-api-access-gpqpl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.387668 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8597-account-create-update-dktjd" Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.388236 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-sdcjd" Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.395166 4712 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9adf62bc-41cc-4682-8943-b72859412ebc-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.395192 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gpqpl\" (UniqueName: \"kubernetes.io/projected/9adf62bc-41cc-4682-8943-b72859412ebc-kube-api-access-gpqpl\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.450399 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-w277s" Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.456750 4712 generic.go:334] "Generic (PLEG): container finished" podID="01856653-57a6-4e16-810c-95e7cf57014f" containerID="847e223ef2a3a759dc94f6e7d2c41e9a894c98a0bd770cef7059211c1dc282f6" exitCode=0 Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.456833 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ee6-account-create-update-dl58q" event={"ID":"01856653-57a6-4e16-810c-95e7cf57014f","Type":"ContainerDied","Data":"847e223ef2a3a759dc94f6e7d2c41e9a894c98a0bd770cef7059211c1dc282f6"} Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.473730 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-w277s" event={"ID":"a6d0d92d-69bc-4285-98df-0f3bda502989","Type":"ContainerDied","Data":"9cf1e378fe941601a1e16807419439f7d10a7d77ddb0d5ac5ad1500cd4c198bd"} Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.473765 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cf1e378fe941601a1e16807419439f7d10a7d77ddb0d5ac5ad1500cd4c198bd" Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.473854 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-w277s" Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.491281 4712 generic.go:334] "Generic (PLEG): container finished" podID="b972b675-2edc-44ba-bc15-aa835aeef29d" containerID="cd49cca49c962514975207ccdbc9e67b1400d099d5f63a47cf5027e0d9e4230c" exitCode=0 Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.491350 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-nk4ll" event={"ID":"b972b675-2edc-44ba-bc15-aa835aeef29d","Type":"ContainerDied","Data":"cd49cca49c962514975207ccdbc9e67b1400d099d5f63a47cf5027e0d9e4230c"} Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.495192 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-nk4ll" Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.495927 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bdbe95d-75db-4a6b-8204-0c9bdfc8f6bd-operator-scripts\") pod \"2bdbe95d-75db-4a6b-8204-0c9bdfc8f6bd\" (UID: \"2bdbe95d-75db-4a6b-8204-0c9bdfc8f6bd\") " Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.496070 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnkdg\" (UniqueName: \"kubernetes.io/projected/b36df6b0-4d60-47bd-a5e3-c8570fa81424-kube-api-access-lnkdg\") pod \"b36df6b0-4d60-47bd-a5e3-c8570fa81424\" (UID: \"b36df6b0-4d60-47bd-a5e3-c8570fa81424\") " Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.496132 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85c99\" (UniqueName: \"kubernetes.io/projected/2bdbe95d-75db-4a6b-8204-0c9bdfc8f6bd-kube-api-access-85c99\") pod \"2bdbe95d-75db-4a6b-8204-0c9bdfc8f6bd\" (UID: \"2bdbe95d-75db-4a6b-8204-0c9bdfc8f6bd\") " Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.496188 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b36df6b0-4d60-47bd-a5e3-c8570fa81424-operator-scripts\") pod \"b36df6b0-4d60-47bd-a5e3-c8570fa81424\" (UID: \"b36df6b0-4d60-47bd-a5e3-c8570fa81424\") " Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.497012 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bdbe95d-75db-4a6b-8204-0c9bdfc8f6bd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2bdbe95d-75db-4a6b-8204-0c9bdfc8f6bd" (UID: "2bdbe95d-75db-4a6b-8204-0c9bdfc8f6bd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.497017 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b36df6b0-4d60-47bd-a5e3-c8570fa81424-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b36df6b0-4d60-47bd-a5e3-c8570fa81424" (UID: "b36df6b0-4d60-47bd-a5e3-c8570fa81424"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.502214 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bdbe95d-75db-4a6b-8204-0c9bdfc8f6bd-kube-api-access-85c99" (OuterVolumeSpecName: "kube-api-access-85c99") pod "2bdbe95d-75db-4a6b-8204-0c9bdfc8f6bd" (UID: "2bdbe95d-75db-4a6b-8204-0c9bdfc8f6bd"). InnerVolumeSpecName "kube-api-access-85c99". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.502476 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b36df6b0-4d60-47bd-a5e3-c8570fa81424-kube-api-access-lnkdg" (OuterVolumeSpecName: "kube-api-access-lnkdg") pod "b36df6b0-4d60-47bd-a5e3-c8570fa81424" (UID: "b36df6b0-4d60-47bd-a5e3-c8570fa81424"). InnerVolumeSpecName "kube-api-access-lnkdg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.504607 4712 generic.go:334] "Generic (PLEG): container finished" podID="5df96043-da07-44d6-bd5e-f90001f55f1f" containerID="a5d8fd67f1f0de8064669c7048566a7a4366f1e0b2e483d9191c923041b8fe19" exitCode=0 Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.504713 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-5890-account-create-update-n55qw" event={"ID":"5df96043-da07-44d6-bd5e-f90001f55f1f","Type":"ContainerDied","Data":"a5d8fd67f1f0de8064669c7048566a7a4366f1e0b2e483d9191c923041b8fe19"} Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.516648 4712 generic.go:334] "Generic (PLEG): container finished" podID="325fa6b1-02e6-4ef7-aa98-99a417a5178b" containerID="5358ca1f343981fb6b618413531a9590f90c9743b083d8eeac6cb4e9d1c4ccda" exitCode=0 Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.516709 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b37-account-create-update-mdbpm" event={"ID":"325fa6b1-02e6-4ef7-aa98-99a417a5178b","Type":"ContainerDied","Data":"5358ca1f343981fb6b618413531a9590f90c9743b083d8eeac6cb4e9d1c4ccda"} Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.519425 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8597-account-create-update-dktjd" Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.520126 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8597-account-create-update-dktjd" event={"ID":"2bdbe95d-75db-4a6b-8204-0c9bdfc8f6bd","Type":"ContainerDied","Data":"d2e2429f6332b725ae9dfbde56e04a990354ef1c9533ee0398d82356645a6ea0"} Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.520172 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2e2429f6332b725ae9dfbde56e04a990354ef1c9533ee0398d82356645a6ea0" Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.559162 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-sdcjd" event={"ID":"b36df6b0-4d60-47bd-a5e3-c8570fa81424","Type":"ContainerDied","Data":"9da30fe215f15ae595a98eef4a61851d57907971a3835ec5574d706677b325d6"} Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.559211 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9da30fe215f15ae595a98eef4a61851d57907971a3835ec5574d706677b325d6" Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.559313 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-sdcjd" Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.573654 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-zd4d8" Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.577699 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-zd4d8" event={"ID":"9adf62bc-41cc-4682-8943-b72859412ebc","Type":"ContainerDied","Data":"da78b8290ecb7cfad371cab20cdd1ec956cf69351caeef2e1865c9e4738e25e7"} Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.577739 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da78b8290ecb7cfad371cab20cdd1ec956cf69351caeef2e1865c9e4738e25e7" Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.599664 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b972b675-2edc-44ba-bc15-aa835aeef29d-config\") pod \"b972b675-2edc-44ba-bc15-aa835aeef29d\" (UID: \"b972b675-2edc-44ba-bc15-aa835aeef29d\") " Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.599712 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6d0d92d-69bc-4285-98df-0f3bda502989-operator-scripts\") pod \"a6d0d92d-69bc-4285-98df-0f3bda502989\" (UID: \"a6d0d92d-69bc-4285-98df-0f3bda502989\") " Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.599778 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grg4n\" (UniqueName: \"kubernetes.io/projected/a6d0d92d-69bc-4285-98df-0f3bda502989-kube-api-access-grg4n\") pod \"a6d0d92d-69bc-4285-98df-0f3bda502989\" (UID: \"a6d0d92d-69bc-4285-98df-0f3bda502989\") " Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.599840 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b972b675-2edc-44ba-bc15-aa835aeef29d-dns-svc\") pod \"b972b675-2edc-44ba-bc15-aa835aeef29d\" (UID: \"b972b675-2edc-44ba-bc15-aa835aeef29d\") " Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.599869 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b972b675-2edc-44ba-bc15-aa835aeef29d-ovsdbserver-nb\") pod \"b972b675-2edc-44ba-bc15-aa835aeef29d\" (UID: \"b972b675-2edc-44ba-bc15-aa835aeef29d\") " Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.599899 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9496l\" (UniqueName: \"kubernetes.io/projected/b972b675-2edc-44ba-bc15-aa835aeef29d-kube-api-access-9496l\") pod \"b972b675-2edc-44ba-bc15-aa835aeef29d\" (UID: \"b972b675-2edc-44ba-bc15-aa835aeef29d\") " Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.599949 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b972b675-2edc-44ba-bc15-aa835aeef29d-ovsdbserver-sb\") pod \"b972b675-2edc-44ba-bc15-aa835aeef29d\" (UID: \"b972b675-2edc-44ba-bc15-aa835aeef29d\") " Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.608985 4712 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2bdbe95d-75db-4a6b-8204-0c9bdfc8f6bd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.614671 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/a6d0d92d-69bc-4285-98df-0f3bda502989-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a6d0d92d-69bc-4285-98df-0f3bda502989" (UID: "a6d0d92d-69bc-4285-98df-0f3bda502989"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.615172 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lnkdg\" (UniqueName: \"kubernetes.io/projected/b36df6b0-4d60-47bd-a5e3-c8570fa81424-kube-api-access-lnkdg\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.615233 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85c99\" (UniqueName: \"kubernetes.io/projected/2bdbe95d-75db-4a6b-8204-0c9bdfc8f6bd-kube-api-access-85c99\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.615245 4712 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b36df6b0-4d60-47bd-a5e3-c8570fa81424-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.661088 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b972b675-2edc-44ba-bc15-aa835aeef29d-kube-api-access-9496l" (OuterVolumeSpecName: "kube-api-access-9496l") pod "b972b675-2edc-44ba-bc15-aa835aeef29d" (UID: "b972b675-2edc-44ba-bc15-aa835aeef29d"). InnerVolumeSpecName "kube-api-access-9496l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.680567 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6d0d92d-69bc-4285-98df-0f3bda502989-kube-api-access-grg4n" (OuterVolumeSpecName: "kube-api-access-grg4n") pod "a6d0d92d-69bc-4285-98df-0f3bda502989" (UID: "a6d0d92d-69bc-4285-98df-0f3bda502989"). InnerVolumeSpecName "kube-api-access-grg4n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.723009 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9496l\" (UniqueName: \"kubernetes.io/projected/b972b675-2edc-44ba-bc15-aa835aeef29d-kube-api-access-9496l\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.723034 4712 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a6d0d92d-69bc-4285-98df-0f3bda502989-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.723043 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-grg4n\" (UniqueName: \"kubernetes.io/projected/a6d0d92d-69bc-4285-98df-0f3bda502989-kube-api-access-grg4n\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.727255 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b972b675-2edc-44ba-bc15-aa835aeef29d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b972b675-2edc-44ba-bc15-aa835aeef29d" (UID: "b972b675-2edc-44ba-bc15-aa835aeef29d"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.729257 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b972b675-2edc-44ba-bc15-aa835aeef29d-config" (OuterVolumeSpecName: "config") pod "b972b675-2edc-44ba-bc15-aa835aeef29d" (UID: "b972b675-2edc-44ba-bc15-aa835aeef29d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.749752 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b972b675-2edc-44ba-bc15-aa835aeef29d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b972b675-2edc-44ba-bc15-aa835aeef29d" (UID: "b972b675-2edc-44ba-bc15-aa835aeef29d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.752399 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b972b675-2edc-44ba-bc15-aa835aeef29d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b972b675-2edc-44ba-bc15-aa835aeef29d" (UID: "b972b675-2edc-44ba-bc15-aa835aeef29d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.826093 4712 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b972b675-2edc-44ba-bc15-aa835aeef29d-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.826119 4712 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b972b675-2edc-44ba-bc15-aa835aeef29d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.826129 4712 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b972b675-2edc-44ba-bc15-aa835aeef29d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.826137 4712 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b972b675-2edc-44ba-bc15-aa835aeef29d-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:11 crc kubenswrapper[4712]: I0130 17:15:11.939284 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-2rr2s" Jan 30 17:15:12 crc kubenswrapper[4712]: I0130 17:15:12.033034 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vm2xf\" (UniqueName: \"kubernetes.io/projected/dfc6cfd7-a3e2-4520-ac86-ff011cd96593-kube-api-access-vm2xf\") pod \"dfc6cfd7-a3e2-4520-ac86-ff011cd96593\" (UID: \"dfc6cfd7-a3e2-4520-ac86-ff011cd96593\") " Jan 30 17:15:12 crc kubenswrapper[4712]: I0130 17:15:12.033200 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfc6cfd7-a3e2-4520-ac86-ff011cd96593-operator-scripts\") pod \"dfc6cfd7-a3e2-4520-ac86-ff011cd96593\" (UID: \"dfc6cfd7-a3e2-4520-ac86-ff011cd96593\") " Jan 30 17:15:12 crc kubenswrapper[4712]: I0130 17:15:12.033703 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfc6cfd7-a3e2-4520-ac86-ff011cd96593-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dfc6cfd7-a3e2-4520-ac86-ff011cd96593" (UID: "dfc6cfd7-a3e2-4520-ac86-ff011cd96593"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:15:12 crc kubenswrapper[4712]: I0130 17:15:12.040016 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfc6cfd7-a3e2-4520-ac86-ff011cd96593-kube-api-access-vm2xf" (OuterVolumeSpecName: "kube-api-access-vm2xf") pod "dfc6cfd7-a3e2-4520-ac86-ff011cd96593" (UID: "dfc6cfd7-a3e2-4520-ac86-ff011cd96593"). InnerVolumeSpecName "kube-api-access-vm2xf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:15:12 crc kubenswrapper[4712]: I0130 17:15:12.115516 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-7v96g" Jan 30 17:15:12 crc kubenswrapper[4712]: I0130 17:15:12.135149 4712 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dfc6cfd7-a3e2-4520-ac86-ff011cd96593-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:12 crc kubenswrapper[4712]: I0130 17:15:12.135189 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vm2xf\" (UniqueName: \"kubernetes.io/projected/dfc6cfd7-a3e2-4520-ac86-ff011cd96593-kube-api-access-vm2xf\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:12 crc kubenswrapper[4712]: I0130 17:15:12.236351 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a71905e7-0e29-40df-8d89-4a9a15cf0079-db-sync-config-data\") pod \"a71905e7-0e29-40df-8d89-4a9a15cf0079\" (UID: \"a71905e7-0e29-40df-8d89-4a9a15cf0079\") " Jan 30 17:15:12 crc kubenswrapper[4712]: I0130 17:15:12.236414 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a71905e7-0e29-40df-8d89-4a9a15cf0079-config-data\") pod \"a71905e7-0e29-40df-8d89-4a9a15cf0079\" (UID: \"a71905e7-0e29-40df-8d89-4a9a15cf0079\") " Jan 30 17:15:12 crc kubenswrapper[4712]: I0130 17:15:12.236477 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a71905e7-0e29-40df-8d89-4a9a15cf0079-combined-ca-bundle\") pod \"a71905e7-0e29-40df-8d89-4a9a15cf0079\" (UID: \"a71905e7-0e29-40df-8d89-4a9a15cf0079\") " Jan 30 17:15:12 crc kubenswrapper[4712]: I0130 17:15:12.236508 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vh6wc\" (UniqueName: \"kubernetes.io/projected/a71905e7-0e29-40df-8d89-4a9a15cf0079-kube-api-access-vh6wc\") pod \"a71905e7-0e29-40df-8d89-4a9a15cf0079\" (UID: \"a71905e7-0e29-40df-8d89-4a9a15cf0079\") " Jan 30 17:15:12 crc kubenswrapper[4712]: I0130 17:15:12.240171 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a71905e7-0e29-40df-8d89-4a9a15cf0079-kube-api-access-vh6wc" (OuterVolumeSpecName: "kube-api-access-vh6wc") pod "a71905e7-0e29-40df-8d89-4a9a15cf0079" (UID: "a71905e7-0e29-40df-8d89-4a9a15cf0079"). InnerVolumeSpecName "kube-api-access-vh6wc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:15:12 crc kubenswrapper[4712]: I0130 17:15:12.244042 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a71905e7-0e29-40df-8d89-4a9a15cf0079-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "a71905e7-0e29-40df-8d89-4a9a15cf0079" (UID: "a71905e7-0e29-40df-8d89-4a9a15cf0079"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:15:12 crc kubenswrapper[4712]: I0130 17:15:12.268539 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a71905e7-0e29-40df-8d89-4a9a15cf0079-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a71905e7-0e29-40df-8d89-4a9a15cf0079" (UID: "a71905e7-0e29-40df-8d89-4a9a15cf0079"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:15:12 crc kubenswrapper[4712]: I0130 17:15:12.285594 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a71905e7-0e29-40df-8d89-4a9a15cf0079-config-data" (OuterVolumeSpecName: "config-data") pod "a71905e7-0e29-40df-8d89-4a9a15cf0079" (UID: "a71905e7-0e29-40df-8d89-4a9a15cf0079"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:15:12 crc kubenswrapper[4712]: I0130 17:15:12.339064 4712 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/a71905e7-0e29-40df-8d89-4a9a15cf0079-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:12 crc kubenswrapper[4712]: I0130 17:15:12.339102 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a71905e7-0e29-40df-8d89-4a9a15cf0079-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:12 crc kubenswrapper[4712]: I0130 17:15:12.339113 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a71905e7-0e29-40df-8d89-4a9a15cf0079-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:12 crc kubenswrapper[4712]: I0130 17:15:12.339523 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vh6wc\" (UniqueName: \"kubernetes.io/projected/a71905e7-0e29-40df-8d89-4a9a15cf0079-kube-api-access-vh6wc\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:12 crc kubenswrapper[4712]: I0130 17:15:12.583630 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-2rr2s" event={"ID":"dfc6cfd7-a3e2-4520-ac86-ff011cd96593","Type":"ContainerDied","Data":"7d0f6673610170759547684a5bb94ab0ba21fe31573ccd70db789abc2bca0cfc"} Jan 30 17:15:12 crc kubenswrapper[4712]: I0130 17:15:12.583670 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d0f6673610170759547684a5bb94ab0ba21fe31573ccd70db789abc2bca0cfc" Jan 30 17:15:12 crc kubenswrapper[4712]: I0130 17:15:12.583771 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-2rr2s" Jan 30 17:15:12 crc kubenswrapper[4712]: I0130 17:15:12.587074 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-7v96g" event={"ID":"a71905e7-0e29-40df-8d89-4a9a15cf0079","Type":"ContainerDied","Data":"324a4db10e4615a3c78d5f41d46575e6e4df7ce8baf472e39b0cc6089a5524e4"} Jan 30 17:15:12 crc kubenswrapper[4712]: I0130 17:15:12.587117 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="324a4db10e4615a3c78d5f41d46575e6e4df7ce8baf472e39b0cc6089a5524e4" Jan 30 17:15:12 crc kubenswrapper[4712]: I0130 17:15:12.587171 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-7v96g" Jan 30 17:15:12 crc kubenswrapper[4712]: I0130 17:15:12.594366 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-nk4ll" Jan 30 17:15:12 crc kubenswrapper[4712]: I0130 17:15:12.594613 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-nk4ll" event={"ID":"b972b675-2edc-44ba-bc15-aa835aeef29d","Type":"ContainerDied","Data":"43dcdb45b593e0a7efb4cc41da1d1364106f57a31f84f636cde638e001c70517"} Jan 30 17:15:12 crc kubenswrapper[4712]: I0130 17:15:12.594654 4712 scope.go:117] "RemoveContainer" containerID="cd49cca49c962514975207ccdbc9e67b1400d099d5f63a47cf5027e0d9e4230c" Jan 30 17:15:12 crc kubenswrapper[4712]: I0130 17:15:12.626202 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-nk4ll"] Jan 30 17:15:12 crc kubenswrapper[4712]: I0130 17:15:12.637248 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-nk4ll"] Jan 30 17:15:13 crc kubenswrapper[4712]: I0130 17:15:13.077618 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-99pvq"] Jan 30 17:15:13 crc kubenswrapper[4712]: E0130 17:15:13.078217 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b36df6b0-4d60-47bd-a5e3-c8570fa81424" containerName="mariadb-database-create" Jan 30 17:15:13 crc kubenswrapper[4712]: I0130 17:15:13.078229 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="b36df6b0-4d60-47bd-a5e3-c8570fa81424" containerName="mariadb-database-create" Jan 30 17:15:13 crc kubenswrapper[4712]: E0130 17:15:13.078238 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6d0d92d-69bc-4285-98df-0f3bda502989" containerName="mariadb-database-create" Jan 30 17:15:13 crc kubenswrapper[4712]: I0130 17:15:13.078244 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6d0d92d-69bc-4285-98df-0f3bda502989" containerName="mariadb-database-create" Jan 30 17:15:13 crc kubenswrapper[4712]: E0130 17:15:13.078257 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b972b675-2edc-44ba-bc15-aa835aeef29d" containerName="init" Jan 30 17:15:13 crc kubenswrapper[4712]: I0130 17:15:13.078263 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="b972b675-2edc-44ba-bc15-aa835aeef29d" containerName="init" Jan 30 17:15:13 crc kubenswrapper[4712]: E0130 17:15:13.078271 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bdbe95d-75db-4a6b-8204-0c9bdfc8f6bd" containerName="mariadb-account-create-update" Jan 30 17:15:13 crc kubenswrapper[4712]: I0130 17:15:13.078277 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bdbe95d-75db-4a6b-8204-0c9bdfc8f6bd" containerName="mariadb-account-create-update" Jan 30 17:15:13 crc kubenswrapper[4712]: E0130 17:15:13.078294 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b972b675-2edc-44ba-bc15-aa835aeef29d" containerName="dnsmasq-dns" Jan 30 17:15:13 crc kubenswrapper[4712]: I0130 17:15:13.078300 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="b972b675-2edc-44ba-bc15-aa835aeef29d" containerName="dnsmasq-dns" Jan 30 17:15:13 crc kubenswrapper[4712]: E0130 17:15:13.078314 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a71905e7-0e29-40df-8d89-4a9a15cf0079" containerName="glance-db-sync" Jan 30 17:15:13 crc kubenswrapper[4712]: I0130 17:15:13.078320 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="a71905e7-0e29-40df-8d89-4a9a15cf0079" containerName="glance-db-sync" Jan 30 17:15:13 crc kubenswrapper[4712]: E0130 17:15:13.078330 4712 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="9adf62bc-41cc-4682-8943-b72859412ebc" containerName="mariadb-database-create" Jan 30 17:15:13 crc kubenswrapper[4712]: I0130 17:15:13.078336 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="9adf62bc-41cc-4682-8943-b72859412ebc" containerName="mariadb-database-create" Jan 30 17:15:13 crc kubenswrapper[4712]: E0130 17:15:13.078345 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfc6cfd7-a3e2-4520-ac86-ff011cd96593" containerName="mariadb-database-create" Jan 30 17:15:13 crc kubenswrapper[4712]: I0130 17:15:13.078353 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfc6cfd7-a3e2-4520-ac86-ff011cd96593" containerName="mariadb-database-create" Jan 30 17:15:13 crc kubenswrapper[4712]: I0130 17:15:13.078497 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="9adf62bc-41cc-4682-8943-b72859412ebc" containerName="mariadb-database-create" Jan 30 17:15:13 crc kubenswrapper[4712]: I0130 17:15:13.078512 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="b36df6b0-4d60-47bd-a5e3-c8570fa81424" containerName="mariadb-database-create" Jan 30 17:15:13 crc kubenswrapper[4712]: I0130 17:15:13.078521 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6d0d92d-69bc-4285-98df-0f3bda502989" containerName="mariadb-database-create" Jan 30 17:15:13 crc kubenswrapper[4712]: I0130 17:15:13.078530 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="a71905e7-0e29-40df-8d89-4a9a15cf0079" containerName="glance-db-sync" Jan 30 17:15:13 crc kubenswrapper[4712]: I0130 17:15:13.078538 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bdbe95d-75db-4a6b-8204-0c9bdfc8f6bd" containerName="mariadb-account-create-update" Jan 30 17:15:13 crc kubenswrapper[4712]: I0130 17:15:13.078545 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfc6cfd7-a3e2-4520-ac86-ff011cd96593" containerName="mariadb-database-create" Jan 30 17:15:13 crc kubenswrapper[4712]: I0130 17:15:13.078556 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="b972b675-2edc-44ba-bc15-aa835aeef29d" containerName="dnsmasq-dns" Jan 30 17:15:13 crc kubenswrapper[4712]: I0130 17:15:13.079372 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-99pvq" Jan 30 17:15:13 crc kubenswrapper[4712]: I0130 17:15:13.153688 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd22ab3c-d638-45c9-b107-6c46494b1343-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-99pvq\" (UID: \"cd22ab3c-d638-45c9-b107-6c46494b1343\") " pod="openstack/dnsmasq-dns-74f6bcbc87-99pvq" Jan 30 17:15:13 crc kubenswrapper[4712]: I0130 17:15:13.153789 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd22ab3c-d638-45c9-b107-6c46494b1343-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-99pvq\" (UID: \"cd22ab3c-d638-45c9-b107-6c46494b1343\") " pod="openstack/dnsmasq-dns-74f6bcbc87-99pvq" Jan 30 17:15:13 crc kubenswrapper[4712]: I0130 17:15:13.153852 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd22ab3c-d638-45c9-b107-6c46494b1343-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-99pvq\" (UID: \"cd22ab3c-d638-45c9-b107-6c46494b1343\") " pod="openstack/dnsmasq-dns-74f6bcbc87-99pvq" Jan 30 17:15:13 crc kubenswrapper[4712]: I0130 17:15:13.153877 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd22ab3c-d638-45c9-b107-6c46494b1343-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-99pvq\" (UID: \"cd22ab3c-d638-45c9-b107-6c46494b1343\") " pod="openstack/dnsmasq-dns-74f6bcbc87-99pvq" Jan 30 17:15:13 crc kubenswrapper[4712]: I0130 17:15:13.153914 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqvg8\" (UniqueName: \"kubernetes.io/projected/cd22ab3c-d638-45c9-b107-6c46494b1343-kube-api-access-xqvg8\") pod \"dnsmasq-dns-74f6bcbc87-99pvq\" (UID: \"cd22ab3c-d638-45c9-b107-6c46494b1343\") " pod="openstack/dnsmasq-dns-74f6bcbc87-99pvq" Jan 30 17:15:13 crc kubenswrapper[4712]: I0130 17:15:13.153998 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd22ab3c-d638-45c9-b107-6c46494b1343-config\") pod \"dnsmasq-dns-74f6bcbc87-99pvq\" (UID: \"cd22ab3c-d638-45c9-b107-6c46494b1343\") " pod="openstack/dnsmasq-dns-74f6bcbc87-99pvq" Jan 30 17:15:13 crc kubenswrapper[4712]: I0130 17:15:13.255195 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqvg8\" (UniqueName: \"kubernetes.io/projected/cd22ab3c-d638-45c9-b107-6c46494b1343-kube-api-access-xqvg8\") pod \"dnsmasq-dns-74f6bcbc87-99pvq\" (UID: \"cd22ab3c-d638-45c9-b107-6c46494b1343\") " pod="openstack/dnsmasq-dns-74f6bcbc87-99pvq" Jan 30 17:15:13 crc kubenswrapper[4712]: I0130 17:15:13.255304 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd22ab3c-d638-45c9-b107-6c46494b1343-config\") pod \"dnsmasq-dns-74f6bcbc87-99pvq\" (UID: \"cd22ab3c-d638-45c9-b107-6c46494b1343\") " pod="openstack/dnsmasq-dns-74f6bcbc87-99pvq" Jan 30 17:15:13 crc kubenswrapper[4712]: I0130 17:15:13.255340 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd22ab3c-d638-45c9-b107-6c46494b1343-ovsdbserver-nb\") pod 
\"dnsmasq-dns-74f6bcbc87-99pvq\" (UID: \"cd22ab3c-d638-45c9-b107-6c46494b1343\") " pod="openstack/dnsmasq-dns-74f6bcbc87-99pvq" Jan 30 17:15:13 crc kubenswrapper[4712]: I0130 17:15:13.255372 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd22ab3c-d638-45c9-b107-6c46494b1343-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-99pvq\" (UID: \"cd22ab3c-d638-45c9-b107-6c46494b1343\") " pod="openstack/dnsmasq-dns-74f6bcbc87-99pvq" Jan 30 17:15:13 crc kubenswrapper[4712]: I0130 17:15:13.255411 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd22ab3c-d638-45c9-b107-6c46494b1343-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-99pvq\" (UID: \"cd22ab3c-d638-45c9-b107-6c46494b1343\") " pod="openstack/dnsmasq-dns-74f6bcbc87-99pvq" Jan 30 17:15:13 crc kubenswrapper[4712]: I0130 17:15:13.255437 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd22ab3c-d638-45c9-b107-6c46494b1343-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-99pvq\" (UID: \"cd22ab3c-d638-45c9-b107-6c46494b1343\") " pod="openstack/dnsmasq-dns-74f6bcbc87-99pvq" Jan 30 17:15:13 crc kubenswrapper[4712]: I0130 17:15:13.256388 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd22ab3c-d638-45c9-b107-6c46494b1343-dns-swift-storage-0\") pod \"dnsmasq-dns-74f6bcbc87-99pvq\" (UID: \"cd22ab3c-d638-45c9-b107-6c46494b1343\") " pod="openstack/dnsmasq-dns-74f6bcbc87-99pvq" Jan 30 17:15:13 crc kubenswrapper[4712]: I0130 17:15:13.256391 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd22ab3c-d638-45c9-b107-6c46494b1343-dns-svc\") pod \"dnsmasq-dns-74f6bcbc87-99pvq\" (UID: \"cd22ab3c-d638-45c9-b107-6c46494b1343\") " pod="openstack/dnsmasq-dns-74f6bcbc87-99pvq" Jan 30 17:15:13 crc kubenswrapper[4712]: I0130 17:15:13.257061 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd22ab3c-d638-45c9-b107-6c46494b1343-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6bcbc87-99pvq\" (UID: \"cd22ab3c-d638-45c9-b107-6c46494b1343\") " pod="openstack/dnsmasq-dns-74f6bcbc87-99pvq" Jan 30 17:15:13 crc kubenswrapper[4712]: I0130 17:15:13.259342 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd22ab3c-d638-45c9-b107-6c46494b1343-config\") pod \"dnsmasq-dns-74f6bcbc87-99pvq\" (UID: \"cd22ab3c-d638-45c9-b107-6c46494b1343\") " pod="openstack/dnsmasq-dns-74f6bcbc87-99pvq" Jan 30 17:15:13 crc kubenswrapper[4712]: I0130 17:15:13.259649 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd22ab3c-d638-45c9-b107-6c46494b1343-ovsdbserver-sb\") pod \"dnsmasq-dns-74f6bcbc87-99pvq\" (UID: \"cd22ab3c-d638-45c9-b107-6c46494b1343\") " pod="openstack/dnsmasq-dns-74f6bcbc87-99pvq" Jan 30 17:15:13 crc kubenswrapper[4712]: I0130 17:15:13.271556 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-99pvq"] Jan 30 17:15:13 crc kubenswrapper[4712]: I0130 17:15:13.284378 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqvg8\" (UniqueName: 
\"kubernetes.io/projected/cd22ab3c-d638-45c9-b107-6c46494b1343-kube-api-access-xqvg8\") pod \"dnsmasq-dns-74f6bcbc87-99pvq\" (UID: \"cd22ab3c-d638-45c9-b107-6c46494b1343\") " pod="openstack/dnsmasq-dns-74f6bcbc87-99pvq" Jan 30 17:15:13 crc kubenswrapper[4712]: I0130 17:15:13.433151 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-99pvq" Jan 30 17:15:13 crc kubenswrapper[4712]: I0130 17:15:13.809916 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b972b675-2edc-44ba-bc15-aa835aeef29d" path="/var/lib/kubelet/pods/b972b675-2edc-44ba-bc15-aa835aeef29d/volumes" Jan 30 17:15:16 crc kubenswrapper[4712]: I0130 17:15:16.797172 4712 scope.go:117] "RemoveContainer" containerID="f20ab14196d9afc0b69831cf6e4bd5e2b276a9fcac3986b2ad49e7c9ae1f4113" Jan 30 17:15:17 crc kubenswrapper[4712]: I0130 17:15:17.051305 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6ee6-account-create-update-dl58q" Jan 30 17:15:17 crc kubenswrapper[4712]: I0130 17:15:17.056107 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01856653-57a6-4e16-810c-95e7cf57014f-operator-scripts\") pod \"01856653-57a6-4e16-810c-95e7cf57014f\" (UID: \"01856653-57a6-4e16-810c-95e7cf57014f\") " Jan 30 17:15:17 crc kubenswrapper[4712]: I0130 17:15:17.057410 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01856653-57a6-4e16-810c-95e7cf57014f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "01856653-57a6-4e16-810c-95e7cf57014f" (UID: "01856653-57a6-4e16-810c-95e7cf57014f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:15:17 crc kubenswrapper[4712]: I0130 17:15:17.057622 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7b37-account-create-update-mdbpm" Jan 30 17:15:17 crc kubenswrapper[4712]: I0130 17:15:17.058049 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcnsg\" (UniqueName: \"kubernetes.io/projected/01856653-57a6-4e16-810c-95e7cf57014f-kube-api-access-bcnsg\") pod \"01856653-57a6-4e16-810c-95e7cf57014f\" (UID: \"01856653-57a6-4e16-810c-95e7cf57014f\") " Jan 30 17:15:17 crc kubenswrapper[4712]: I0130 17:15:17.059159 4712 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/01856653-57a6-4e16-810c-95e7cf57014f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:17 crc kubenswrapper[4712]: I0130 17:15:17.065761 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01856653-57a6-4e16-810c-95e7cf57014f-kube-api-access-bcnsg" (OuterVolumeSpecName: "kube-api-access-bcnsg") pod "01856653-57a6-4e16-810c-95e7cf57014f" (UID: "01856653-57a6-4e16-810c-95e7cf57014f"). InnerVolumeSpecName "kube-api-access-bcnsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:15:17 crc kubenswrapper[4712]: I0130 17:15:17.066780 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-5890-account-create-update-n55qw" Jan 30 17:15:17 crc kubenswrapper[4712]: I0130 17:15:17.160003 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjrwh\" (UniqueName: \"kubernetes.io/projected/325fa6b1-02e6-4ef7-aa98-99a417a5178b-kube-api-access-fjrwh\") pod \"325fa6b1-02e6-4ef7-aa98-99a417a5178b\" (UID: \"325fa6b1-02e6-4ef7-aa98-99a417a5178b\") " Jan 30 17:15:17 crc kubenswrapper[4712]: I0130 17:15:17.160365 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5df96043-da07-44d6-bd5e-f90001f55f1f-operator-scripts\") pod \"5df96043-da07-44d6-bd5e-f90001f55f1f\" (UID: \"5df96043-da07-44d6-bd5e-f90001f55f1f\") " Jan 30 17:15:17 crc kubenswrapper[4712]: I0130 17:15:17.160412 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9whgj\" (UniqueName: \"kubernetes.io/projected/5df96043-da07-44d6-bd5e-f90001f55f1f-kube-api-access-9whgj\") pod \"5df96043-da07-44d6-bd5e-f90001f55f1f\" (UID: \"5df96043-da07-44d6-bd5e-f90001f55f1f\") " Jan 30 17:15:17 crc kubenswrapper[4712]: I0130 17:15:17.160434 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/325fa6b1-02e6-4ef7-aa98-99a417a5178b-operator-scripts\") pod \"325fa6b1-02e6-4ef7-aa98-99a417a5178b\" (UID: \"325fa6b1-02e6-4ef7-aa98-99a417a5178b\") " Jan 30 17:15:17 crc kubenswrapper[4712]: I0130 17:15:17.160709 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcnsg\" (UniqueName: \"kubernetes.io/projected/01856653-57a6-4e16-810c-95e7cf57014f-kube-api-access-bcnsg\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:17 crc kubenswrapper[4712]: I0130 17:15:17.161258 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/325fa6b1-02e6-4ef7-aa98-99a417a5178b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "325fa6b1-02e6-4ef7-aa98-99a417a5178b" (UID: "325fa6b1-02e6-4ef7-aa98-99a417a5178b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:15:17 crc kubenswrapper[4712]: I0130 17:15:17.162186 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5df96043-da07-44d6-bd5e-f90001f55f1f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5df96043-da07-44d6-bd5e-f90001f55f1f" (UID: "5df96043-da07-44d6-bd5e-f90001f55f1f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:15:17 crc kubenswrapper[4712]: I0130 17:15:17.165400 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/325fa6b1-02e6-4ef7-aa98-99a417a5178b-kube-api-access-fjrwh" (OuterVolumeSpecName: "kube-api-access-fjrwh") pod "325fa6b1-02e6-4ef7-aa98-99a417a5178b" (UID: "325fa6b1-02e6-4ef7-aa98-99a417a5178b"). InnerVolumeSpecName "kube-api-access-fjrwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:15:17 crc kubenswrapper[4712]: I0130 17:15:17.167055 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5df96043-da07-44d6-bd5e-f90001f55f1f-kube-api-access-9whgj" (OuterVolumeSpecName: "kube-api-access-9whgj") pod "5df96043-da07-44d6-bd5e-f90001f55f1f" (UID: "5df96043-da07-44d6-bd5e-f90001f55f1f"). 
InnerVolumeSpecName "kube-api-access-9whgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:15:17 crc kubenswrapper[4712]: I0130 17:15:17.261878 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjrwh\" (UniqueName: \"kubernetes.io/projected/325fa6b1-02e6-4ef7-aa98-99a417a5178b-kube-api-access-fjrwh\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:17 crc kubenswrapper[4712]: I0130 17:15:17.261947 4712 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5df96043-da07-44d6-bd5e-f90001f55f1f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:17 crc kubenswrapper[4712]: I0130 17:15:17.261973 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9whgj\" (UniqueName: \"kubernetes.io/projected/5df96043-da07-44d6-bd5e-f90001f55f1f-kube-api-access-9whgj\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:17 crc kubenswrapper[4712]: I0130 17:15:17.261985 4712 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/325fa6b1-02e6-4ef7-aa98-99a417a5178b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:17 crc kubenswrapper[4712]: I0130 17:15:17.450526 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-99pvq"] Jan 30 17:15:17 crc kubenswrapper[4712]: I0130 17:15:17.652291 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-97hwk" event={"ID":"7607c458-cbb6-43d4-8a85-e631507e9d66","Type":"ContainerStarted","Data":"60c9308c0ed62adc024fd48aef67555cce594bb2843f360247f55e39397db1b0"} Jan 30 17:15:17 crc kubenswrapper[4712]: I0130 17:15:17.653882 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-99pvq" event={"ID":"cd22ab3c-d638-45c9-b107-6c46494b1343","Type":"ContainerStarted","Data":"fdd983c4f9b1c3eecfb7d3092b3771e39699da6f6a0e41d60aa0ced66fb42179"} Jan 30 17:15:17 crc kubenswrapper[4712]: I0130 17:15:17.653928 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-99pvq" event={"ID":"cd22ab3c-d638-45c9-b107-6c46494b1343","Type":"ContainerStarted","Data":"7635793d9cb6c3e3527a0a8cea742fa89e935565ebfcf7e55a01e7df10b4f18d"} Jan 30 17:15:17 crc kubenswrapper[4712]: I0130 17:15:17.658431 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-5890-account-create-update-n55qw" event={"ID":"5df96043-da07-44d6-bd5e-f90001f55f1f","Type":"ContainerDied","Data":"7d352e8816c9d25dd657758fb695a85f6766b97ce7dbefc98898af892280b1d2"} Jan 30 17:15:17 crc kubenswrapper[4712]: I0130 17:15:17.658456 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d352e8816c9d25dd657758fb695a85f6766b97ce7dbefc98898af892280b1d2" Jan 30 17:15:17 crc kubenswrapper[4712]: I0130 17:15:17.659050 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-5890-account-create-update-n55qw" Jan 30 17:15:17 crc kubenswrapper[4712]: I0130 17:15:17.660261 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7b37-account-create-update-mdbpm" event={"ID":"325fa6b1-02e6-4ef7-aa98-99a417a5178b","Type":"ContainerDied","Data":"b620bdd3498360f22b007e6cfcb83b1f133dd9de4e500ce4f4011b6cac2d6eb3"} Jan 30 17:15:17 crc kubenswrapper[4712]: I0130 17:15:17.660284 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b620bdd3498360f22b007e6cfcb83b1f133dd9de4e500ce4f4011b6cac2d6eb3" Jan 30 17:15:17 crc kubenswrapper[4712]: I0130 17:15:17.660344 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7b37-account-create-update-mdbpm" Jan 30 17:15:17 crc kubenswrapper[4712]: I0130 17:15:17.661969 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-6ee6-account-create-update-dl58q" event={"ID":"01856653-57a6-4e16-810c-95e7cf57014f","Type":"ContainerDied","Data":"8eafc63c8a5335f4ce9d4f66a0504f6ab4f3f6cc212fd11c269e2b35053ba8fd"} Jan 30 17:15:17 crc kubenswrapper[4712]: I0130 17:15:17.661998 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8eafc63c8a5335f4ce9d4f66a0504f6ab4f3f6cc212fd11c269e2b35053ba8fd" Jan 30 17:15:17 crc kubenswrapper[4712]: I0130 17:15:17.662050 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-6ee6-account-create-update-dl58q" Jan 30 17:15:17 crc kubenswrapper[4712]: I0130 17:15:17.685541 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-97hwk" podStartSLOduration=3.687311274 podStartE2EDuration="11.685518303s" podCreationTimestamp="2026-01-30 17:15:06 +0000 UTC" firstStartedPulling="2026-01-30 17:15:08.888119914 +0000 UTC m=+1245.795129383" lastFinishedPulling="2026-01-30 17:15:16.886326943 +0000 UTC m=+1253.793336412" observedRunningTime="2026-01-30 17:15:17.675300127 +0000 UTC m=+1254.582309596" watchObservedRunningTime="2026-01-30 17:15:17.685518303 +0000 UTC m=+1254.592527762" Jan 30 17:15:18 crc kubenswrapper[4712]: I0130 17:15:18.676403 4712 generic.go:334] "Generic (PLEG): container finished" podID="cd22ab3c-d638-45c9-b107-6c46494b1343" containerID="fdd983c4f9b1c3eecfb7d3092b3771e39699da6f6a0e41d60aa0ced66fb42179" exitCode=0 Jan 30 17:15:18 crc kubenswrapper[4712]: I0130 17:15:18.676592 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-99pvq" event={"ID":"cd22ab3c-d638-45c9-b107-6c46494b1343","Type":"ContainerDied","Data":"fdd983c4f9b1c3eecfb7d3092b3771e39699da6f6a0e41d60aa0ced66fb42179"} Jan 30 17:15:19 crc kubenswrapper[4712]: I0130 17:15:19.687823 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-99pvq" event={"ID":"cd22ab3c-d638-45c9-b107-6c46494b1343","Type":"ContainerStarted","Data":"f4760433c4d3e595ab4f1bbe427f619cf5766be16385fe803af0980ff8735001"} Jan 30 17:15:19 crc kubenswrapper[4712]: I0130 17:15:19.689013 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-99pvq" Jan 30 17:15:19 crc kubenswrapper[4712]: I0130 17:15:19.714456 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-74f6bcbc87-99pvq" podStartSLOduration=6.714436783 podStartE2EDuration="6.714436783s" podCreationTimestamp="2026-01-30 17:15:13 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:15:19.708067831 +0000 UTC m=+1256.615077300" watchObservedRunningTime="2026-01-30 17:15:19.714436783 +0000 UTC m=+1256.621446252" Jan 30 17:15:22 crc kubenswrapper[4712]: I0130 17:15:22.713110 4712 generic.go:334] "Generic (PLEG): container finished" podID="7607c458-cbb6-43d4-8a85-e631507e9d66" containerID="60c9308c0ed62adc024fd48aef67555cce594bb2843f360247f55e39397db1b0" exitCode=0 Jan 30 17:15:22 crc kubenswrapper[4712]: I0130 17:15:22.713180 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-97hwk" event={"ID":"7607c458-cbb6-43d4-8a85-e631507e9d66","Type":"ContainerDied","Data":"60c9308c0ed62adc024fd48aef67555cce594bb2843f360247f55e39397db1b0"} Jan 30 17:15:23 crc kubenswrapper[4712]: I0130 17:15:23.434979 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-74f6bcbc87-99pvq" Jan 30 17:15:23 crc kubenswrapper[4712]: I0130 17:15:23.487319 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-4f797"] Jan 30 17:15:23 crc kubenswrapper[4712]: I0130 17:15:23.487587 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-764c5664d7-4f797" podUID="44533c75-12d1-496a-88b4-1a0c38c3c336" containerName="dnsmasq-dns" containerID="cri-o://5d87721b9b2c7d805bc32383e44870b7e08530f081a070b1f9dce272e2b68a98" gracePeriod=10 Jan 30 17:15:23 crc kubenswrapper[4712]: I0130 17:15:23.730107 4712 generic.go:334] "Generic (PLEG): container finished" podID="44533c75-12d1-496a-88b4-1a0c38c3c336" containerID="5d87721b9b2c7d805bc32383e44870b7e08530f081a070b1f9dce272e2b68a98" exitCode=0 Jan 30 17:15:23 crc kubenswrapper[4712]: I0130 17:15:23.730289 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-4f797" event={"ID":"44533c75-12d1-496a-88b4-1a0c38c3c336","Type":"ContainerDied","Data":"5d87721b9b2c7d805bc32383e44870b7e08530f081a070b1f9dce272e2b68a98"} Jan 30 17:15:24 crc kubenswrapper[4712]: I0130 17:15:24.056856 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-4f797" Jan 30 17:15:24 crc kubenswrapper[4712]: I0130 17:15:24.149479 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-97hwk" Jan 30 17:15:24 crc kubenswrapper[4712]: I0130 17:15:24.202088 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55w8c\" (UniqueName: \"kubernetes.io/projected/44533c75-12d1-496a-88b4-1a0c38c3c336-kube-api-access-55w8c\") pod \"44533c75-12d1-496a-88b4-1a0c38c3c336\" (UID: \"44533c75-12d1-496a-88b4-1a0c38c3c336\") " Jan 30 17:15:24 crc kubenswrapper[4712]: I0130 17:15:24.202172 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/44533c75-12d1-496a-88b4-1a0c38c3c336-dns-swift-storage-0\") pod \"44533c75-12d1-496a-88b4-1a0c38c3c336\" (UID: \"44533c75-12d1-496a-88b4-1a0c38c3c336\") " Jan 30 17:15:24 crc kubenswrapper[4712]: I0130 17:15:24.202193 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44533c75-12d1-496a-88b4-1a0c38c3c336-dns-svc\") pod \"44533c75-12d1-496a-88b4-1a0c38c3c336\" (UID: \"44533c75-12d1-496a-88b4-1a0c38c3c336\") " Jan 30 17:15:24 crc kubenswrapper[4712]: I0130 17:15:24.202336 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44533c75-12d1-496a-88b4-1a0c38c3c336-ovsdbserver-nb\") pod \"44533c75-12d1-496a-88b4-1a0c38c3c336\" (UID: \"44533c75-12d1-496a-88b4-1a0c38c3c336\") " Jan 30 17:15:24 crc kubenswrapper[4712]: I0130 17:15:24.202358 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44533c75-12d1-496a-88b4-1a0c38c3c336-config\") pod \"44533c75-12d1-496a-88b4-1a0c38c3c336\" (UID: \"44533c75-12d1-496a-88b4-1a0c38c3c336\") " Jan 30 17:15:24 crc kubenswrapper[4712]: I0130 17:15:24.202433 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44533c75-12d1-496a-88b4-1a0c38c3c336-ovsdbserver-sb\") pod \"44533c75-12d1-496a-88b4-1a0c38c3c336\" (UID: \"44533c75-12d1-496a-88b4-1a0c38c3c336\") " Jan 30 17:15:24 crc kubenswrapper[4712]: I0130 17:15:24.207816 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44533c75-12d1-496a-88b4-1a0c38c3c336-kube-api-access-55w8c" (OuterVolumeSpecName: "kube-api-access-55w8c") pod "44533c75-12d1-496a-88b4-1a0c38c3c336" (UID: "44533c75-12d1-496a-88b4-1a0c38c3c336"). InnerVolumeSpecName "kube-api-access-55w8c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:15:24 crc kubenswrapper[4712]: I0130 17:15:24.248241 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44533c75-12d1-496a-88b4-1a0c38c3c336-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "44533c75-12d1-496a-88b4-1a0c38c3c336" (UID: "44533c75-12d1-496a-88b4-1a0c38c3c336"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:15:24 crc kubenswrapper[4712]: I0130 17:15:24.248618 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44533c75-12d1-496a-88b4-1a0c38c3c336-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "44533c75-12d1-496a-88b4-1a0c38c3c336" (UID: "44533c75-12d1-496a-88b4-1a0c38c3c336"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:15:24 crc kubenswrapper[4712]: I0130 17:15:24.251984 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44533c75-12d1-496a-88b4-1a0c38c3c336-config" (OuterVolumeSpecName: "config") pod "44533c75-12d1-496a-88b4-1a0c38c3c336" (UID: "44533c75-12d1-496a-88b4-1a0c38c3c336"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:15:24 crc kubenswrapper[4712]: I0130 17:15:24.263477 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44533c75-12d1-496a-88b4-1a0c38c3c336-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "44533c75-12d1-496a-88b4-1a0c38c3c336" (UID: "44533c75-12d1-496a-88b4-1a0c38c3c336"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:15:24 crc kubenswrapper[4712]: I0130 17:15:24.264367 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44533c75-12d1-496a-88b4-1a0c38c3c336-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "44533c75-12d1-496a-88b4-1a0c38c3c336" (UID: "44533c75-12d1-496a-88b4-1a0c38c3c336"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:15:24 crc kubenswrapper[4712]: I0130 17:15:24.303844 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7607c458-cbb6-43d4-8a85-e631507e9d66-config-data\") pod \"7607c458-cbb6-43d4-8a85-e631507e9d66\" (UID: \"7607c458-cbb6-43d4-8a85-e631507e9d66\") " Jan 30 17:15:24 crc kubenswrapper[4712]: I0130 17:15:24.303932 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7607c458-cbb6-43d4-8a85-e631507e9d66-combined-ca-bundle\") pod \"7607c458-cbb6-43d4-8a85-e631507e9d66\" (UID: \"7607c458-cbb6-43d4-8a85-e631507e9d66\") " Jan 30 17:15:24 crc kubenswrapper[4712]: I0130 17:15:24.303988 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4p6g4\" (UniqueName: \"kubernetes.io/projected/7607c458-cbb6-43d4-8a85-e631507e9d66-kube-api-access-4p6g4\") pod \"7607c458-cbb6-43d4-8a85-e631507e9d66\" (UID: \"7607c458-cbb6-43d4-8a85-e631507e9d66\") " Jan 30 17:15:24 crc kubenswrapper[4712]: I0130 17:15:24.304406 4712 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/44533c75-12d1-496a-88b4-1a0c38c3c336-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:24 crc kubenswrapper[4712]: I0130 17:15:24.304433 4712 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44533c75-12d1-496a-88b4-1a0c38c3c336-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:24 crc kubenswrapper[4712]: I0130 17:15:24.304445 4712 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/44533c75-12d1-496a-88b4-1a0c38c3c336-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:24 crc kubenswrapper[4712]: I0130 17:15:24.304457 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55w8c\" (UniqueName: \"kubernetes.io/projected/44533c75-12d1-496a-88b4-1a0c38c3c336-kube-api-access-55w8c\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:24 crc 
kubenswrapper[4712]: I0130 17:15:24.304472 4712 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/44533c75-12d1-496a-88b4-1a0c38c3c336-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:24 crc kubenswrapper[4712]: I0130 17:15:24.304483 4712 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/44533c75-12d1-496a-88b4-1a0c38c3c336-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:24 crc kubenswrapper[4712]: I0130 17:15:24.307738 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7607c458-cbb6-43d4-8a85-e631507e9d66-kube-api-access-4p6g4" (OuterVolumeSpecName: "kube-api-access-4p6g4") pod "7607c458-cbb6-43d4-8a85-e631507e9d66" (UID: "7607c458-cbb6-43d4-8a85-e631507e9d66"). InnerVolumeSpecName "kube-api-access-4p6g4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:15:24 crc kubenswrapper[4712]: I0130 17:15:24.324584 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7607c458-cbb6-43d4-8a85-e631507e9d66-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7607c458-cbb6-43d4-8a85-e631507e9d66" (UID: "7607c458-cbb6-43d4-8a85-e631507e9d66"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:15:24 crc kubenswrapper[4712]: I0130 17:15:24.360611 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7607c458-cbb6-43d4-8a85-e631507e9d66-config-data" (OuterVolumeSpecName: "config-data") pod "7607c458-cbb6-43d4-8a85-e631507e9d66" (UID: "7607c458-cbb6-43d4-8a85-e631507e9d66"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:15:24 crc kubenswrapper[4712]: I0130 17:15:24.409191 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7607c458-cbb6-43d4-8a85-e631507e9d66-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:24 crc kubenswrapper[4712]: I0130 17:15:24.409228 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7607c458-cbb6-43d4-8a85-e631507e9d66-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:24 crc kubenswrapper[4712]: I0130 17:15:24.409262 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4p6g4\" (UniqueName: \"kubernetes.io/projected/7607c458-cbb6-43d4-8a85-e631507e9d66-kube-api-access-4p6g4\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:24 crc kubenswrapper[4712]: I0130 17:15:24.746084 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-97hwk" event={"ID":"7607c458-cbb6-43d4-8a85-e631507e9d66","Type":"ContainerDied","Data":"b321386af735d23a30f66ceaea7037f6c3e43b979cc823fd997e0ff7be0f3a8e"} Jan 30 17:15:24 crc kubenswrapper[4712]: I0130 17:15:24.746122 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b321386af735d23a30f66ceaea7037f6c3e43b979cc823fd997e0ff7be0f3a8e" Jan 30 17:15:24 crc kubenswrapper[4712]: I0130 17:15:24.746182 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-97hwk" Jan 30 17:15:24 crc kubenswrapper[4712]: I0130 17:15:24.750249 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-764c5664d7-4f797" event={"ID":"44533c75-12d1-496a-88b4-1a0c38c3c336","Type":"ContainerDied","Data":"cdf01856d76c5d4803e59a8cc6e2ffb460de6e8b4f6c2b84dd8f10c1a108671f"} Jan 30 17:15:24 crc kubenswrapper[4712]: I0130 17:15:24.750290 4712 scope.go:117] "RemoveContainer" containerID="5d87721b9b2c7d805bc32383e44870b7e08530f081a070b1f9dce272e2b68a98" Jan 30 17:15:24 crc kubenswrapper[4712]: I0130 17:15:24.750423 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-764c5664d7-4f797" Jan 30 17:15:24 crc kubenswrapper[4712]: I0130 17:15:24.789857 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-4f797"] Jan 30 17:15:24 crc kubenswrapper[4712]: I0130 17:15:24.792418 4712 scope.go:117] "RemoveContainer" containerID="847ef82149b77433a1890d5053dae355e37dc7f5354af07f0ab3f1137c6e5abf" Jan 30 17:15:24 crc kubenswrapper[4712]: I0130 17:15:24.797406 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-764c5664d7-4f797"] Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.032477 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-4ktn9"] Jan 30 17:15:25 crc kubenswrapper[4712]: E0130 17:15:25.033079 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7607c458-cbb6-43d4-8a85-e631507e9d66" containerName="keystone-db-sync" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.033095 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="7607c458-cbb6-43d4-8a85-e631507e9d66" containerName="keystone-db-sync" Jan 30 17:15:25 crc kubenswrapper[4712]: E0130 17:15:25.033107 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44533c75-12d1-496a-88b4-1a0c38c3c336" containerName="init" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.033113 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="44533c75-12d1-496a-88b4-1a0c38c3c336" containerName="init" Jan 30 17:15:25 crc kubenswrapper[4712]: E0130 17:15:25.033129 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01856653-57a6-4e16-810c-95e7cf57014f" containerName="mariadb-account-create-update" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.033135 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="01856653-57a6-4e16-810c-95e7cf57014f" containerName="mariadb-account-create-update" Jan 30 17:15:25 crc kubenswrapper[4712]: E0130 17:15:25.033147 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5df96043-da07-44d6-bd5e-f90001f55f1f" containerName="mariadb-account-create-update" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.033153 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="5df96043-da07-44d6-bd5e-f90001f55f1f" containerName="mariadb-account-create-update" Jan 30 17:15:25 crc kubenswrapper[4712]: E0130 17:15:25.033173 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="325fa6b1-02e6-4ef7-aa98-99a417a5178b" containerName="mariadb-account-create-update" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.033181 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="325fa6b1-02e6-4ef7-aa98-99a417a5178b" containerName="mariadb-account-create-update" Jan 30 17:15:25 crc kubenswrapper[4712]: E0130 17:15:25.033195 4712 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="44533c75-12d1-496a-88b4-1a0c38c3c336" containerName="dnsmasq-dns" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.033200 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="44533c75-12d1-496a-88b4-1a0c38c3c336" containerName="dnsmasq-dns" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.033335 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="44533c75-12d1-496a-88b4-1a0c38c3c336" containerName="dnsmasq-dns" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.033343 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="5df96043-da07-44d6-bd5e-f90001f55f1f" containerName="mariadb-account-create-update" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.033355 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="01856653-57a6-4e16-810c-95e7cf57014f" containerName="mariadb-account-create-update" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.033374 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="325fa6b1-02e6-4ef7-aa98-99a417a5178b" containerName="mariadb-account-create-update" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.033381 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="7607c458-cbb6-43d4-8a85-e631507e9d66" containerName="keystone-db-sync" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.034155 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-4ktn9" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.039288 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-fkk2v"] Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.040282 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-fkk2v" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.043683 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.043887 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.044009 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.044119 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dxmtz" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.046949 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-4ktn9"] Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.051144 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.120140 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-fkk2v"] Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.221002 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cdd10608-b72c-4025-a140-2934ba8bc27c-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-4ktn9\" (UID: \"cdd10608-b72c-4025-a140-2934ba8bc27c\") " pod="openstack/dnsmasq-dns-847c4cc679-4ktn9" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.221039 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89d325b5-bb94-4295-a169-465b4b0b73be-combined-ca-bundle\") pod \"keystone-bootstrap-fkk2v\" (UID: \"89d325b5-bb94-4295-a169-465b4b0b73be\") " pod="openstack/keystone-bootstrap-fkk2v" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.221081 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdd10608-b72c-4025-a140-2934ba8bc27c-dns-svc\") pod \"dnsmasq-dns-847c4cc679-4ktn9\" (UID: \"cdd10608-b72c-4025-a140-2934ba8bc27c\") " pod="openstack/dnsmasq-dns-847c4cc679-4ktn9" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.221268 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5wp8\" (UniqueName: \"kubernetes.io/projected/89d325b5-bb94-4295-a169-465b4b0b73be-kube-api-access-g5wp8\") pod \"keystone-bootstrap-fkk2v\" (UID: \"89d325b5-bb94-4295-a169-465b4b0b73be\") " pod="openstack/keystone-bootstrap-fkk2v" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.221320 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/89d325b5-bb94-4295-a169-465b4b0b73be-credential-keys\") pod \"keystone-bootstrap-fkk2v\" (UID: \"89d325b5-bb94-4295-a169-465b4b0b73be\") " pod="openstack/keystone-bootstrap-fkk2v" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.221363 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdd10608-b72c-4025-a140-2934ba8bc27c-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-4ktn9\" (UID: 
\"cdd10608-b72c-4025-a140-2934ba8bc27c\") " pod="openstack/dnsmasq-dns-847c4cc679-4ktn9" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.221393 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znqmp\" (UniqueName: \"kubernetes.io/projected/cdd10608-b72c-4025-a140-2934ba8bc27c-kube-api-access-znqmp\") pod \"dnsmasq-dns-847c4cc679-4ktn9\" (UID: \"cdd10608-b72c-4025-a140-2934ba8bc27c\") " pod="openstack/dnsmasq-dns-847c4cc679-4ktn9" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.221417 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/89d325b5-bb94-4295-a169-465b4b0b73be-fernet-keys\") pod \"keystone-bootstrap-fkk2v\" (UID: \"89d325b5-bb94-4295-a169-465b4b0b73be\") " pod="openstack/keystone-bootstrap-fkk2v" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.221436 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdd10608-b72c-4025-a140-2934ba8bc27c-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-4ktn9\" (UID: \"cdd10608-b72c-4025-a140-2934ba8bc27c\") " pod="openstack/dnsmasq-dns-847c4cc679-4ktn9" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.221536 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdd10608-b72c-4025-a140-2934ba8bc27c-config\") pod \"dnsmasq-dns-847c4cc679-4ktn9\" (UID: \"cdd10608-b72c-4025-a140-2934ba8bc27c\") " pod="openstack/dnsmasq-dns-847c4cc679-4ktn9" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.221575 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89d325b5-bb94-4295-a169-465b4b0b73be-scripts\") pod \"keystone-bootstrap-fkk2v\" (UID: \"89d325b5-bb94-4295-a169-465b4b0b73be\") " pod="openstack/keystone-bootstrap-fkk2v" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.221621 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89d325b5-bb94-4295-a169-465b4b0b73be-config-data\") pod \"keystone-bootstrap-fkk2v\" (UID: \"89d325b5-bb94-4295-a169-465b4b0b73be\") " pod="openstack/keystone-bootstrap-fkk2v" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.240404 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-9gcv2"] Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.241565 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-9gcv2" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.246748 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-stb44" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.247091 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.278911 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-9gcv2"] Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.322922 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cdd10608-b72c-4025-a140-2934ba8bc27c-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-4ktn9\" (UID: \"cdd10608-b72c-4025-a140-2934ba8bc27c\") " pod="openstack/dnsmasq-dns-847c4cc679-4ktn9" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.322960 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89d325b5-bb94-4295-a169-465b4b0b73be-combined-ca-bundle\") pod \"keystone-bootstrap-fkk2v\" (UID: \"89d325b5-bb94-4295-a169-465b4b0b73be\") " pod="openstack/keystone-bootstrap-fkk2v" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.323012 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdd10608-b72c-4025-a140-2934ba8bc27c-dns-svc\") pod \"dnsmasq-dns-847c4cc679-4ktn9\" (UID: \"cdd10608-b72c-4025-a140-2934ba8bc27c\") " pod="openstack/dnsmasq-dns-847c4cc679-4ktn9" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.323046 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5wp8\" (UniqueName: \"kubernetes.io/projected/89d325b5-bb94-4295-a169-465b4b0b73be-kube-api-access-g5wp8\") pod \"keystone-bootstrap-fkk2v\" (UID: \"89d325b5-bb94-4295-a169-465b4b0b73be\") " pod="openstack/keystone-bootstrap-fkk2v" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.323063 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/89d325b5-bb94-4295-a169-465b4b0b73be-credential-keys\") pod \"keystone-bootstrap-fkk2v\" (UID: \"89d325b5-bb94-4295-a169-465b4b0b73be\") " pod="openstack/keystone-bootstrap-fkk2v" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.323080 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdd10608-b72c-4025-a140-2934ba8bc27c-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-4ktn9\" (UID: \"cdd10608-b72c-4025-a140-2934ba8bc27c\") " pod="openstack/dnsmasq-dns-847c4cc679-4ktn9" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.323098 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znqmp\" (UniqueName: \"kubernetes.io/projected/cdd10608-b72c-4025-a140-2934ba8bc27c-kube-api-access-znqmp\") pod \"dnsmasq-dns-847c4cc679-4ktn9\" (UID: \"cdd10608-b72c-4025-a140-2934ba8bc27c\") " pod="openstack/dnsmasq-dns-847c4cc679-4ktn9" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.323112 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/89d325b5-bb94-4295-a169-465b4b0b73be-fernet-keys\") pod \"keystone-bootstrap-fkk2v\" 
(UID: \"89d325b5-bb94-4295-a169-465b4b0b73be\") " pod="openstack/keystone-bootstrap-fkk2v" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.323124 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdd10608-b72c-4025-a140-2934ba8bc27c-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-4ktn9\" (UID: \"cdd10608-b72c-4025-a140-2934ba8bc27c\") " pod="openstack/dnsmasq-dns-847c4cc679-4ktn9" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.323158 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdd10608-b72c-4025-a140-2934ba8bc27c-config\") pod \"dnsmasq-dns-847c4cc679-4ktn9\" (UID: \"cdd10608-b72c-4025-a140-2934ba8bc27c\") " pod="openstack/dnsmasq-dns-847c4cc679-4ktn9" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.323179 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89d325b5-bb94-4295-a169-465b4b0b73be-scripts\") pod \"keystone-bootstrap-fkk2v\" (UID: \"89d325b5-bb94-4295-a169-465b4b0b73be\") " pod="openstack/keystone-bootstrap-fkk2v" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.323197 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89d325b5-bb94-4295-a169-465b4b0b73be-config-data\") pod \"keystone-bootstrap-fkk2v\" (UID: \"89d325b5-bb94-4295-a169-465b4b0b73be\") " pod="openstack/keystone-bootstrap-fkk2v" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.324767 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdd10608-b72c-4025-a140-2934ba8bc27c-ovsdbserver-sb\") pod \"dnsmasq-dns-847c4cc679-4ktn9\" (UID: \"cdd10608-b72c-4025-a140-2934ba8bc27c\") " pod="openstack/dnsmasq-dns-847c4cc679-4ktn9" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.325359 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdd10608-b72c-4025-a140-2934ba8bc27c-ovsdbserver-nb\") pod \"dnsmasq-dns-847c4cc679-4ktn9\" (UID: \"cdd10608-b72c-4025-a140-2934ba8bc27c\") " pod="openstack/dnsmasq-dns-847c4cc679-4ktn9" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.325496 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdd10608-b72c-4025-a140-2934ba8bc27c-config\") pod \"dnsmasq-dns-847c4cc679-4ktn9\" (UID: \"cdd10608-b72c-4025-a140-2934ba8bc27c\") " pod="openstack/dnsmasq-dns-847c4cc679-4ktn9" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.326590 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdd10608-b72c-4025-a140-2934ba8bc27c-dns-svc\") pod \"dnsmasq-dns-847c4cc679-4ktn9\" (UID: \"cdd10608-b72c-4025-a140-2934ba8bc27c\") " pod="openstack/dnsmasq-dns-847c4cc679-4ktn9" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.327568 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cdd10608-b72c-4025-a140-2934ba8bc27c-dns-swift-storage-0\") pod \"dnsmasq-dns-847c4cc679-4ktn9\" (UID: \"cdd10608-b72c-4025-a140-2934ba8bc27c\") " pod="openstack/dnsmasq-dns-847c4cc679-4ktn9" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.333707 4712 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89d325b5-bb94-4295-a169-465b4b0b73be-config-data\") pod \"keystone-bootstrap-fkk2v\" (UID: \"89d325b5-bb94-4295-a169-465b4b0b73be\") " pod="openstack/keystone-bootstrap-fkk2v" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.333970 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89d325b5-bb94-4295-a169-465b4b0b73be-combined-ca-bundle\") pod \"keystone-bootstrap-fkk2v\" (UID: \"89d325b5-bb94-4295-a169-465b4b0b73be\") " pod="openstack/keystone-bootstrap-fkk2v" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.335687 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-78bf8d4bc-dzt7l"] Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.337741 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/89d325b5-bb94-4295-a169-465b4b0b73be-fernet-keys\") pod \"keystone-bootstrap-fkk2v\" (UID: \"89d325b5-bb94-4295-a169-465b4b0b73be\") " pod="openstack/keystone-bootstrap-fkk2v" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.339017 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/89d325b5-bb94-4295-a169-465b4b0b73be-credential-keys\") pod \"keystone-bootstrap-fkk2v\" (UID: \"89d325b5-bb94-4295-a169-465b4b0b73be\") " pod="openstack/keystone-bootstrap-fkk2v" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.343251 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-78bf8d4bc-dzt7l" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.344358 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89d325b5-bb94-4295-a169-465b4b0b73be-scripts\") pod \"keystone-bootstrap-fkk2v\" (UID: \"89d325b5-bb94-4295-a169-465b4b0b73be\") " pod="openstack/keystone-bootstrap-fkk2v" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.354522 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-78bf8d4bc-dzt7l"] Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.361464 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-kqzm8" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.361825 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.362024 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.362229 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.393239 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znqmp\" (UniqueName: \"kubernetes.io/projected/cdd10608-b72c-4025-a140-2934ba8bc27c-kube-api-access-znqmp\") pod \"dnsmasq-dns-847c4cc679-4ktn9\" (UID: \"cdd10608-b72c-4025-a140-2934ba8bc27c\") " pod="openstack/dnsmasq-dns-847c4cc679-4ktn9" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.400496 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5wp8\" (UniqueName: 
\"kubernetes.io/projected/89d325b5-bb94-4295-a169-465b4b0b73be-kube-api-access-g5wp8\") pod \"keystone-bootstrap-fkk2v\" (UID: \"89d325b5-bb94-4295-a169-465b4b0b73be\") " pod="openstack/keystone-bootstrap-fkk2v" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.407149 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-4ktn9" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.428187 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-fkk2v" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.430893 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c24ed25-f06f-494d-9fd5-2077c052db31-combined-ca-bundle\") pod \"heat-db-sync-9gcv2\" (UID: \"3c24ed25-f06f-494d-9fd5-2077c052db31\") " pod="openstack/heat-db-sync-9gcv2" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.430974 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c24ed25-f06f-494d-9fd5-2077c052db31-config-data\") pod \"heat-db-sync-9gcv2\" (UID: \"3c24ed25-f06f-494d-9fd5-2077c052db31\") " pod="openstack/heat-db-sync-9gcv2" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.431077 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hg828\" (UniqueName: \"kubernetes.io/projected/3c24ed25-f06f-494d-9fd5-2077c052db31-kube-api-access-hg828\") pod \"heat-db-sync-9gcv2\" (UID: \"3c24ed25-f06f-494d-9fd5-2077c052db31\") " pod="openstack/heat-db-sync-9gcv2" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.500874 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-ldhgd"] Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.502126 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-ldhgd" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.523960 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.526053 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.535445 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.536370 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.537058 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d98cb77-f784-431c-bd65-35261f546cd0-logs\") pod \"horizon-78bf8d4bc-dzt7l\" (UID: \"9d98cb77-f784-431c-bd65-35261f546cd0\") " pod="openstack/horizon-78bf8d4bc-dzt7l" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.537523 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfwv6\" (UniqueName: \"kubernetes.io/projected/9d98cb77-f784-431c-bd65-35261f546cd0-kube-api-access-pfwv6\") pod \"horizon-78bf8d4bc-dzt7l\" (UID: \"9d98cb77-f784-431c-bd65-35261f546cd0\") " pod="openstack/horizon-78bf8d4bc-dzt7l" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.537634 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d98cb77-f784-431c-bd65-35261f546cd0-scripts\") pod \"horizon-78bf8d4bc-dzt7l\" (UID: \"9d98cb77-f784-431c-bd65-35261f546cd0\") " pod="openstack/horizon-78bf8d4bc-dzt7l" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.537709 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hg828\" (UniqueName: \"kubernetes.io/projected/3c24ed25-f06f-494d-9fd5-2077c052db31-kube-api-access-hg828\") pod \"heat-db-sync-9gcv2\" (UID: \"3c24ed25-f06f-494d-9fd5-2077c052db31\") " pod="openstack/heat-db-sync-9gcv2" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.537855 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9d98cb77-f784-431c-bd65-35261f546cd0-config-data\") pod \"horizon-78bf8d4bc-dzt7l\" (UID: \"9d98cb77-f784-431c-bd65-35261f546cd0\") " pod="openstack/horizon-78bf8d4bc-dzt7l" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.537957 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c24ed25-f06f-494d-9fd5-2077c052db31-combined-ca-bundle\") pod \"heat-db-sync-9gcv2\" (UID: \"3c24ed25-f06f-494d-9fd5-2077c052db31\") " pod="openstack/heat-db-sync-9gcv2" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.538044 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c24ed25-f06f-494d-9fd5-2077c052db31-config-data\") pod \"heat-db-sync-9gcv2\" (UID: \"3c24ed25-f06f-494d-9fd5-2077c052db31\") " pod="openstack/heat-db-sync-9gcv2" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.538432 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9d98cb77-f784-431c-bd65-35261f546cd0-horizon-secret-key\") pod \"horizon-78bf8d4bc-dzt7l\" (UID: \"9d98cb77-f784-431c-bd65-35261f546cd0\") " pod="openstack/horizon-78bf8d4bc-dzt7l" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.537438 4712 kubelet.go:2428] 
"SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-ldhgd"] Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.537306 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.545313 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.545596 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-bld2f" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.548810 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c24ed25-f06f-494d-9fd5-2077c052db31-config-data\") pod \"heat-db-sync-9gcv2\" (UID: \"3c24ed25-f06f-494d-9fd5-2077c052db31\") " pod="openstack/heat-db-sync-9gcv2" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.550749 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c24ed25-f06f-494d-9fd5-2077c052db31-combined-ca-bundle\") pod \"heat-db-sync-9gcv2\" (UID: \"3c24ed25-f06f-494d-9fd5-2077c052db31\") " pod="openstack/heat-db-sync-9gcv2" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.591372 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-78jqx"] Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.592630 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-78jqx" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.609977 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.610317 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-d7tcp" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.610508 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.637812 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hg828\" (UniqueName: \"kubernetes.io/projected/3c24ed25-f06f-494d-9fd5-2077c052db31-kube-api-access-hg828\") pod \"heat-db-sync-9gcv2\" (UID: \"3c24ed25-f06f-494d-9fd5-2077c052db31\") " pod="openstack/heat-db-sync-9gcv2" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.645165 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9d98cb77-f784-431c-bd65-35261f546cd0-horizon-secret-key\") pod \"horizon-78bf8d4bc-dzt7l\" (UID: \"9d98cb77-f784-431c-bd65-35261f546cd0\") " pod="openstack/horizon-78bf8d4bc-dzt7l" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.645232 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62p8f\" (UniqueName: \"kubernetes.io/projected/3896ac30-4d2d-4bc2-bfc3-4352d7d586de-kube-api-access-62p8f\") pod \"ceilometer-0\" (UID: \"3896ac30-4d2d-4bc2-bfc3-4352d7d586de\") " pod="openstack/ceilometer-0" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.645259 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d98cb77-f784-431c-bd65-35261f546cd0-logs\") pod \"horizon-78bf8d4bc-dzt7l\" (UID: 
\"9d98cb77-f784-431c-bd65-35261f546cd0\") " pod="openstack/horizon-78bf8d4bc-dzt7l" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.645288 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3896ac30-4d2d-4bc2-bfc3-4352d7d586de-run-httpd\") pod \"ceilometer-0\" (UID: \"3896ac30-4d2d-4bc2-bfc3-4352d7d586de\") " pod="openstack/ceilometer-0" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.645312 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfwv6\" (UniqueName: \"kubernetes.io/projected/9d98cb77-f784-431c-bd65-35261f546cd0-kube-api-access-pfwv6\") pod \"horizon-78bf8d4bc-dzt7l\" (UID: \"9d98cb77-f784-431c-bd65-35261f546cd0\") " pod="openstack/horizon-78bf8d4bc-dzt7l" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.645343 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d98cb77-f784-431c-bd65-35261f546cd0-scripts\") pod \"horizon-78bf8d4bc-dzt7l\" (UID: \"9d98cb77-f784-431c-bd65-35261f546cd0\") " pod="openstack/horizon-78bf8d4bc-dzt7l" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.645371 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67221ffc-37c6-458b-b4b4-26ef6e628c0b-combined-ca-bundle\") pod \"neutron-db-sync-ldhgd\" (UID: \"67221ffc-37c6-458b-b4b4-26ef6e628c0b\") " pod="openstack/neutron-db-sync-ldhgd" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.645398 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/67221ffc-37c6-458b-b4b4-26ef6e628c0b-config\") pod \"neutron-db-sync-ldhgd\" (UID: \"67221ffc-37c6-458b-b4b4-26ef6e628c0b\") " pod="openstack/neutron-db-sync-ldhgd" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.645419 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3896ac30-4d2d-4bc2-bfc3-4352d7d586de-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3896ac30-4d2d-4bc2-bfc3-4352d7d586de\") " pod="openstack/ceilometer-0" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.645442 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3896ac30-4d2d-4bc2-bfc3-4352d7d586de-config-data\") pod \"ceilometer-0\" (UID: \"3896ac30-4d2d-4bc2-bfc3-4352d7d586de\") " pod="openstack/ceilometer-0" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.645470 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk2g8\" (UniqueName: \"kubernetes.io/projected/67221ffc-37c6-458b-b4b4-26ef6e628c0b-kube-api-access-nk2g8\") pod \"neutron-db-sync-ldhgd\" (UID: \"67221ffc-37c6-458b-b4b4-26ef6e628c0b\") " pod="openstack/neutron-db-sync-ldhgd" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.645499 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3896ac30-4d2d-4bc2-bfc3-4352d7d586de-log-httpd\") pod \"ceilometer-0\" (UID: \"3896ac30-4d2d-4bc2-bfc3-4352d7d586de\") " pod="openstack/ceilometer-0" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 
17:15:25.645530 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9d98cb77-f784-431c-bd65-35261f546cd0-config-data\") pod \"horizon-78bf8d4bc-dzt7l\" (UID: \"9d98cb77-f784-431c-bd65-35261f546cd0\") " pod="openstack/horizon-78bf8d4bc-dzt7l" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.645552 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3896ac30-4d2d-4bc2-bfc3-4352d7d586de-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3896ac30-4d2d-4bc2-bfc3-4352d7d586de\") " pod="openstack/ceilometer-0" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.645598 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3896ac30-4d2d-4bc2-bfc3-4352d7d586de-scripts\") pod \"ceilometer-0\" (UID: \"3896ac30-4d2d-4bc2-bfc3-4352d7d586de\") " pod="openstack/ceilometer-0" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.647106 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d98cb77-f784-431c-bd65-35261f546cd0-logs\") pod \"horizon-78bf8d4bc-dzt7l\" (UID: \"9d98cb77-f784-431c-bd65-35261f546cd0\") " pod="openstack/horizon-78bf8d4bc-dzt7l" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.647869 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9d98cb77-f784-431c-bd65-35261f546cd0-config-data\") pod \"horizon-78bf8d4bc-dzt7l\" (UID: \"9d98cb77-f784-431c-bd65-35261f546cd0\") " pod="openstack/horizon-78bf8d4bc-dzt7l" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.647914 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d98cb77-f784-431c-bd65-35261f546cd0-scripts\") pod \"horizon-78bf8d4bc-dzt7l\" (UID: \"9d98cb77-f784-431c-bd65-35261f546cd0\") " pod="openstack/horizon-78bf8d4bc-dzt7l" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.647947 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.703533 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9d98cb77-f784-431c-bd65-35261f546cd0-horizon-secret-key\") pod \"horizon-78bf8d4bc-dzt7l\" (UID: \"9d98cb77-f784-431c-bd65-35261f546cd0\") " pod="openstack/horizon-78bf8d4bc-dzt7l" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.721952 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-78jqx"] Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.727404 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfwv6\" (UniqueName: \"kubernetes.io/projected/9d98cb77-f784-431c-bd65-35261f546cd0-kube-api-access-pfwv6\") pod \"horizon-78bf8d4bc-dzt7l\" (UID: \"9d98cb77-f784-431c-bd65-35261f546cd0\") " pod="openstack/horizon-78bf8d4bc-dzt7l" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.766088 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62p8f\" (UniqueName: \"kubernetes.io/projected/3896ac30-4d2d-4bc2-bfc3-4352d7d586de-kube-api-access-62p8f\") pod \"ceilometer-0\" (UID: \"3896ac30-4d2d-4bc2-bfc3-4352d7d586de\") " 
pod="openstack/ceilometer-0" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.766972 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzf52\" (UniqueName: \"kubernetes.io/projected/2ef9729d-cbbc-4354-98e4-a9e07651518e-kube-api-access-jzf52\") pod \"cinder-db-sync-78jqx\" (UID: \"2ef9729d-cbbc-4354-98e4-a9e07651518e\") " pod="openstack/cinder-db-sync-78jqx" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.767107 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ef9729d-cbbc-4354-98e4-a9e07651518e-combined-ca-bundle\") pod \"cinder-db-sync-78jqx\" (UID: \"2ef9729d-cbbc-4354-98e4-a9e07651518e\") " pod="openstack/cinder-db-sync-78jqx" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.824220 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3896ac30-4d2d-4bc2-bfc3-4352d7d586de-run-httpd\") pod \"ceilometer-0\" (UID: \"3896ac30-4d2d-4bc2-bfc3-4352d7d586de\") " pod="openstack/ceilometer-0" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.824310 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ef9729d-cbbc-4354-98e4-a9e07651518e-scripts\") pod \"cinder-db-sync-78jqx\" (UID: \"2ef9729d-cbbc-4354-98e4-a9e07651518e\") " pod="openstack/cinder-db-sync-78jqx" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.829949 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67221ffc-37c6-458b-b4b4-26ef6e628c0b-combined-ca-bundle\") pod \"neutron-db-sync-ldhgd\" (UID: \"67221ffc-37c6-458b-b4b4-26ef6e628c0b\") " pod="openstack/neutron-db-sync-ldhgd" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.830049 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/67221ffc-37c6-458b-b4b4-26ef6e628c0b-config\") pod \"neutron-db-sync-ldhgd\" (UID: \"67221ffc-37c6-458b-b4b4-26ef6e628c0b\") " pod="openstack/neutron-db-sync-ldhgd" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.830090 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3896ac30-4d2d-4bc2-bfc3-4352d7d586de-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3896ac30-4d2d-4bc2-bfc3-4352d7d586de\") " pod="openstack/ceilometer-0" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.830137 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3896ac30-4d2d-4bc2-bfc3-4352d7d586de-config-data\") pod \"ceilometer-0\" (UID: \"3896ac30-4d2d-4bc2-bfc3-4352d7d586de\") " pod="openstack/ceilometer-0" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.830193 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk2g8\" (UniqueName: \"kubernetes.io/projected/67221ffc-37c6-458b-b4b4-26ef6e628c0b-kube-api-access-nk2g8\") pod \"neutron-db-sync-ldhgd\" (UID: \"67221ffc-37c6-458b-b4b4-26ef6e628c0b\") " pod="openstack/neutron-db-sync-ldhgd" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.830236 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/3896ac30-4d2d-4bc2-bfc3-4352d7d586de-log-httpd\") pod \"ceilometer-0\" (UID: \"3896ac30-4d2d-4bc2-bfc3-4352d7d586de\") " pod="openstack/ceilometer-0" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.830287 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3896ac30-4d2d-4bc2-bfc3-4352d7d586de-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3896ac30-4d2d-4bc2-bfc3-4352d7d586de\") " pod="openstack/ceilometer-0" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.830341 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2ef9729d-cbbc-4354-98e4-a9e07651518e-etc-machine-id\") pod \"cinder-db-sync-78jqx\" (UID: \"2ef9729d-cbbc-4354-98e4-a9e07651518e\") " pod="openstack/cinder-db-sync-78jqx" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.830398 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2ef9729d-cbbc-4354-98e4-a9e07651518e-db-sync-config-data\") pod \"cinder-db-sync-78jqx\" (UID: \"2ef9729d-cbbc-4354-98e4-a9e07651518e\") " pod="openstack/cinder-db-sync-78jqx" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.830452 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3896ac30-4d2d-4bc2-bfc3-4352d7d586de-scripts\") pod \"ceilometer-0\" (UID: \"3896ac30-4d2d-4bc2-bfc3-4352d7d586de\") " pod="openstack/ceilometer-0" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.830548 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ef9729d-cbbc-4354-98e4-a9e07651518e-config-data\") pod \"cinder-db-sync-78jqx\" (UID: \"2ef9729d-cbbc-4354-98e4-a9e07651518e\") " pod="openstack/cinder-db-sync-78jqx" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.848451 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3896ac30-4d2d-4bc2-bfc3-4352d7d586de-run-httpd\") pod \"ceilometer-0\" (UID: \"3896ac30-4d2d-4bc2-bfc3-4352d7d586de\") " pod="openstack/ceilometer-0" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.849442 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67221ffc-37c6-458b-b4b4-26ef6e628c0b-combined-ca-bundle\") pod \"neutron-db-sync-ldhgd\" (UID: \"67221ffc-37c6-458b-b4b4-26ef6e628c0b\") " pod="openstack/neutron-db-sync-ldhgd" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.859142 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3896ac30-4d2d-4bc2-bfc3-4352d7d586de-log-httpd\") pod \"ceilometer-0\" (UID: \"3896ac30-4d2d-4bc2-bfc3-4352d7d586de\") " pod="openstack/ceilometer-0" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.871953 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-78bf8d4bc-dzt7l" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.879234 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-9gcv2" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.879788 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3896ac30-4d2d-4bc2-bfc3-4352d7d586de-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3896ac30-4d2d-4bc2-bfc3-4352d7d586de\") " pod="openstack/ceilometer-0" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.939440 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3896ac30-4d2d-4bc2-bfc3-4352d7d586de-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3896ac30-4d2d-4bc2-bfc3-4352d7d586de\") " pod="openstack/ceilometer-0" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.940240 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3896ac30-4d2d-4bc2-bfc3-4352d7d586de-scripts\") pod \"ceilometer-0\" (UID: \"3896ac30-4d2d-4bc2-bfc3-4352d7d586de\") " pod="openstack/ceilometer-0" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.941503 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk2g8\" (UniqueName: \"kubernetes.io/projected/67221ffc-37c6-458b-b4b4-26ef6e628c0b-kube-api-access-nk2g8\") pod \"neutron-db-sync-ldhgd\" (UID: \"67221ffc-37c6-458b-b4b4-26ef6e628c0b\") " pod="openstack/neutron-db-sync-ldhgd" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.941696 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62p8f\" (UniqueName: \"kubernetes.io/projected/3896ac30-4d2d-4bc2-bfc3-4352d7d586de-kube-api-access-62p8f\") pod \"ceilometer-0\" (UID: \"3896ac30-4d2d-4bc2-bfc3-4352d7d586de\") " pod="openstack/ceilometer-0" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.941981 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3896ac30-4d2d-4bc2-bfc3-4352d7d586de-config-data\") pod \"ceilometer-0\" (UID: \"3896ac30-4d2d-4bc2-bfc3-4352d7d586de\") " pod="openstack/ceilometer-0" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.949232 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/67221ffc-37c6-458b-b4b4-26ef6e628c0b-config\") pod \"neutron-db-sync-ldhgd\" (UID: \"67221ffc-37c6-458b-b4b4-26ef6e628c0b\") " pod="openstack/neutron-db-sync-ldhgd" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.960596 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2ef9729d-cbbc-4354-98e4-a9e07651518e-etc-machine-id\") pod \"cinder-db-sync-78jqx\" (UID: \"2ef9729d-cbbc-4354-98e4-a9e07651518e\") " pod="openstack/cinder-db-sync-78jqx" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.960645 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2ef9729d-cbbc-4354-98e4-a9e07651518e-db-sync-config-data\") pod \"cinder-db-sync-78jqx\" (UID: \"2ef9729d-cbbc-4354-98e4-a9e07651518e\") " pod="openstack/cinder-db-sync-78jqx" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.960672 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ef9729d-cbbc-4354-98e4-a9e07651518e-config-data\") pod 
\"cinder-db-sync-78jqx\" (UID: \"2ef9729d-cbbc-4354-98e4-a9e07651518e\") " pod="openstack/cinder-db-sync-78jqx" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.960731 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzf52\" (UniqueName: \"kubernetes.io/projected/2ef9729d-cbbc-4354-98e4-a9e07651518e-kube-api-access-jzf52\") pod \"cinder-db-sync-78jqx\" (UID: \"2ef9729d-cbbc-4354-98e4-a9e07651518e\") " pod="openstack/cinder-db-sync-78jqx" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.960732 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2ef9729d-cbbc-4354-98e4-a9e07651518e-etc-machine-id\") pod \"cinder-db-sync-78jqx\" (UID: \"2ef9729d-cbbc-4354-98e4-a9e07651518e\") " pod="openstack/cinder-db-sync-78jqx" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.960760 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ef9729d-cbbc-4354-98e4-a9e07651518e-combined-ca-bundle\") pod \"cinder-db-sync-78jqx\" (UID: \"2ef9729d-cbbc-4354-98e4-a9e07651518e\") " pod="openstack/cinder-db-sync-78jqx" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.960823 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ef9729d-cbbc-4354-98e4-a9e07651518e-scripts\") pod \"cinder-db-sync-78jqx\" (UID: \"2ef9729d-cbbc-4354-98e4-a9e07651518e\") " pod="openstack/cinder-db-sync-78jqx" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.979952 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ef9729d-cbbc-4354-98e4-a9e07651518e-config-data\") pod \"cinder-db-sync-78jqx\" (UID: \"2ef9729d-cbbc-4354-98e4-a9e07651518e\") " pod="openstack/cinder-db-sync-78jqx" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.981101 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44533c75-12d1-496a-88b4-1a0c38c3c336" path="/var/lib/kubelet/pods/44533c75-12d1-496a-88b4-1a0c38c3c336/volumes" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.981750 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-kmcjp"] Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.990150 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-kmcjp"] Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.990193 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-4ktn9"] Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.990280 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-kmcjp" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.994845 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ef9729d-cbbc-4354-98e4-a9e07651518e-scripts\") pod \"cinder-db-sync-78jqx\" (UID: \"2ef9729d-cbbc-4354-98e4-a9e07651518e\") " pod="openstack/cinder-db-sync-78jqx" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.997315 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.997539 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 17:15:25 crc kubenswrapper[4712]: I0130 17:15:25.997738 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-fsdvc" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.002428 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ef9729d-cbbc-4354-98e4-a9e07651518e-combined-ca-bundle\") pod \"cinder-db-sync-78jqx\" (UID: \"2ef9729d-cbbc-4354-98e4-a9e07651518e\") " pod="openstack/cinder-db-sync-78jqx" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.003275 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2ef9729d-cbbc-4354-98e4-a9e07651518e-db-sync-config-data\") pod \"cinder-db-sync-78jqx\" (UID: \"2ef9729d-cbbc-4354-98e4-a9e07651518e\") " pod="openstack/cinder-db-sync-78jqx" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.010479 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzf52\" (UniqueName: \"kubernetes.io/projected/2ef9729d-cbbc-4354-98e4-a9e07651518e-kube-api-access-jzf52\") pod \"cinder-db-sync-78jqx\" (UID: \"2ef9729d-cbbc-4354-98e4-a9e07651518e\") " pod="openstack/cinder-db-sync-78jqx" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.013932 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-qkp54"] Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.015255 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-qkp54" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.086168 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-7krdw"] Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.087634 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-7krdw" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.090414 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.104548 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-nhqcz" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.109243 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-qkp54"] Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.123925 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-7krdw"] Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.138856 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.150738 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.154272 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.154673 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-q5zp4" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.154962 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.155187 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.160398 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.182898 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-ldhgd" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.192363 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d421f208-6974-48b9-9d8d-abe468e07c18-config\") pod \"dnsmasq-dns-785d8bcb8c-qkp54\" (UID: \"d421f208-6974-48b9-9d8d-abe468e07c18\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qkp54" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.192594 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d421f208-6974-48b9-9d8d-abe468e07c18-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-qkp54\" (UID: \"d421f208-6974-48b9-9d8d-abe468e07c18\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qkp54" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.192752 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/540ab89b-e7b1-4c3f-ad6d-535ecaa5870c-logs\") pod \"placement-db-sync-kmcjp\" (UID: \"540ab89b-e7b1-4c3f-ad6d-535ecaa5870c\") " pod="openstack/placement-db-sync-kmcjp" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.192943 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/540ab89b-e7b1-4c3f-ad6d-535ecaa5870c-combined-ca-bundle\") pod \"placement-db-sync-kmcjp\" (UID: \"540ab89b-e7b1-4c3f-ad6d-535ecaa5870c\") " pod="openstack/placement-db-sync-kmcjp" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.193066 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d421f208-6974-48b9-9d8d-abe468e07c18-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-qkp54\" (UID: \"d421f208-6974-48b9-9d8d-abe468e07c18\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qkp54" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.193190 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmkh8\" (UniqueName: \"kubernetes.io/projected/d421f208-6974-48b9-9d8d-abe468e07c18-kube-api-access-tmkh8\") pod \"dnsmasq-dns-785d8bcb8c-qkp54\" (UID: \"d421f208-6974-48b9-9d8d-abe468e07c18\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qkp54" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.193372 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/540ab89b-e7b1-4c3f-ad6d-535ecaa5870c-scripts\") pod \"placement-db-sync-kmcjp\" (UID: \"540ab89b-e7b1-4c3f-ad6d-535ecaa5870c\") " pod="openstack/placement-db-sync-kmcjp" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.193876 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d421f208-6974-48b9-9d8d-abe468e07c18-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-qkp54\" (UID: \"d421f208-6974-48b9-9d8d-abe468e07c18\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qkp54" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.193983 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d421f208-6974-48b9-9d8d-abe468e07c18-dns-svc\") pod 
\"dnsmasq-dns-785d8bcb8c-qkp54\" (UID: \"d421f208-6974-48b9-9d8d-abe468e07c18\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qkp54" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.194017 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/540ab89b-e7b1-4c3f-ad6d-535ecaa5870c-config-data\") pod \"placement-db-sync-kmcjp\" (UID: \"540ab89b-e7b1-4c3f-ad6d-535ecaa5870c\") " pod="openstack/placement-db-sync-kmcjp" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.194106 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkvtw\" (UniqueName: \"kubernetes.io/projected/540ab89b-e7b1-4c3f-ad6d-535ecaa5870c-kube-api-access-mkvtw\") pod \"placement-db-sync-kmcjp\" (UID: \"540ab89b-e7b1-4c3f-ad6d-535ecaa5870c\") " pod="openstack/placement-db-sync-kmcjp" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.203858 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.212308 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-5c996b5c45-mvl64"] Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.216596 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5c996b5c45-mvl64" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.242092 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5c996b5c45-mvl64"] Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.247954 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-78jqx" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.296570 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmkh8\" (UniqueName: \"kubernetes.io/projected/d421f208-6974-48b9-9d8d-abe468e07c18-kube-api-access-tmkh8\") pod \"dnsmasq-dns-785d8bcb8c-qkp54\" (UID: \"d421f208-6974-48b9-9d8d-abe468e07c18\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qkp54" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.296626 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b123ecaa-e5d2-4daf-b377-07056dd21f37-horizon-secret-key\") pod \"horizon-5c996b5c45-mvl64\" (UID: \"b123ecaa-e5d2-4daf-b377-07056dd21f37\") " pod="openstack/horizon-5c996b5c45-mvl64" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.296656 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9xzd\" (UniqueName: \"kubernetes.io/projected/fb119504-1453-45ce-9a6a-65df12e3e9f8-kube-api-access-d9xzd\") pod \"glance-default-external-api-0\" (UID: \"fb119504-1453-45ce-9a6a-65df12e3e9f8\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.296684 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b123ecaa-e5d2-4daf-b377-07056dd21f37-config-data\") pod \"horizon-5c996b5c45-mvl64\" (UID: \"b123ecaa-e5d2-4daf-b377-07056dd21f37\") " pod="openstack/horizon-5c996b5c45-mvl64" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.296703 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-qxbn4\" (UniqueName: \"kubernetes.io/projected/b123ecaa-e5d2-4daf-b377-07056dd21f37-kube-api-access-qxbn4\") pod \"horizon-5c996b5c45-mvl64\" (UID: \"b123ecaa-e5d2-4daf-b377-07056dd21f37\") " pod="openstack/horizon-5c996b5c45-mvl64" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.296717 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb119504-1453-45ce-9a6a-65df12e3e9f8-scripts\") pod \"glance-default-external-api-0\" (UID: \"fb119504-1453-45ce-9a6a-65df12e3e9f8\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.296736 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/540ab89b-e7b1-4c3f-ad6d-535ecaa5870c-scripts\") pod \"placement-db-sync-kmcjp\" (UID: \"540ab89b-e7b1-4c3f-ad6d-535ecaa5870c\") " pod="openstack/placement-db-sync-kmcjp" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.296762 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b123ecaa-e5d2-4daf-b377-07056dd21f37-scripts\") pod \"horizon-5c996b5c45-mvl64\" (UID: \"b123ecaa-e5d2-4daf-b377-07056dd21f37\") " pod="openstack/horizon-5c996b5c45-mvl64" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.296812 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d421f208-6974-48b9-9d8d-abe468e07c18-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-qkp54\" (UID: \"d421f208-6974-48b9-9d8d-abe468e07c18\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qkp54" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.296829 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fb119504-1453-45ce-9a6a-65df12e3e9f8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fb119504-1453-45ce-9a6a-65df12e3e9f8\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.296847 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb119504-1453-45ce-9a6a-65df12e3e9f8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fb119504-1453-45ce-9a6a-65df12e3e9f8\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.296863 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"fb119504-1453-45ce-9a6a-65df12e3e9f8\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.296896 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d421f208-6974-48b9-9d8d-abe468e07c18-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-qkp54\" (UID: \"d421f208-6974-48b9-9d8d-abe468e07c18\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qkp54" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.296929 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/540ab89b-e7b1-4c3f-ad6d-535ecaa5870c-config-data\") pod \"placement-db-sync-kmcjp\" (UID: \"540ab89b-e7b1-4c3f-ad6d-535ecaa5870c\") " pod="openstack/placement-db-sync-kmcjp" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.296957 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb119504-1453-45ce-9a6a-65df12e3e9f8-logs\") pod \"glance-default-external-api-0\" (UID: \"fb119504-1453-45ce-9a6a-65df12e3e9f8\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.296975 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c4a03a4-e80d-4605-990f-a242222558bb-combined-ca-bundle\") pod \"barbican-db-sync-7krdw\" (UID: \"6c4a03a4-e80d-4605-990f-a242222558bb\") " pod="openstack/barbican-db-sync-7krdw" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.297003 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mkvtw\" (UniqueName: \"kubernetes.io/projected/540ab89b-e7b1-4c3f-ad6d-535ecaa5870c-kube-api-access-mkvtw\") pod \"placement-db-sync-kmcjp\" (UID: \"540ab89b-e7b1-4c3f-ad6d-535ecaa5870c\") " pod="openstack/placement-db-sync-kmcjp" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.297026 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb119504-1453-45ce-9a6a-65df12e3e9f8-config-data\") pod \"glance-default-external-api-0\" (UID: \"fb119504-1453-45ce-9a6a-65df12e3e9f8\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.297047 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d421f208-6974-48b9-9d8d-abe468e07c18-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-qkp54\" (UID: \"d421f208-6974-48b9-9d8d-abe468e07c18\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qkp54" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.297064 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d421f208-6974-48b9-9d8d-abe468e07c18-config\") pod \"dnsmasq-dns-785d8bcb8c-qkp54\" (UID: \"d421f208-6974-48b9-9d8d-abe468e07c18\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qkp54" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.297083 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b123ecaa-e5d2-4daf-b377-07056dd21f37-logs\") pod \"horizon-5c996b5c45-mvl64\" (UID: \"b123ecaa-e5d2-4daf-b377-07056dd21f37\") " pod="openstack/horizon-5c996b5c45-mvl64" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.297103 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jjvf\" (UniqueName: \"kubernetes.io/projected/6c4a03a4-e80d-4605-990f-a242222558bb-kube-api-access-7jjvf\") pod \"barbican-db-sync-7krdw\" (UID: \"6c4a03a4-e80d-4605-990f-a242222558bb\") " pod="openstack/barbican-db-sync-7krdw" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.297119 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fb119504-1453-45ce-9a6a-65df12e3e9f8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fb119504-1453-45ce-9a6a-65df12e3e9f8\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.297138 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/540ab89b-e7b1-4c3f-ad6d-535ecaa5870c-logs\") pod \"placement-db-sync-kmcjp\" (UID: \"540ab89b-e7b1-4c3f-ad6d-535ecaa5870c\") " pod="openstack/placement-db-sync-kmcjp" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.297157 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/540ab89b-e7b1-4c3f-ad6d-535ecaa5870c-combined-ca-bundle\") pod \"placement-db-sync-kmcjp\" (UID: \"540ab89b-e7b1-4c3f-ad6d-535ecaa5870c\") " pod="openstack/placement-db-sync-kmcjp" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.297181 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d421f208-6974-48b9-9d8d-abe468e07c18-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-qkp54\" (UID: \"d421f208-6974-48b9-9d8d-abe468e07c18\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qkp54" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.297198 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6c4a03a4-e80d-4605-990f-a242222558bb-db-sync-config-data\") pod \"barbican-db-sync-7krdw\" (UID: \"6c4a03a4-e80d-4605-990f-a242222558bb\") " pod="openstack/barbican-db-sync-7krdw" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.299809 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d421f208-6974-48b9-9d8d-abe468e07c18-dns-svc\") pod \"dnsmasq-dns-785d8bcb8c-qkp54\" (UID: \"d421f208-6974-48b9-9d8d-abe468e07c18\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qkp54" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.299964 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d421f208-6974-48b9-9d8d-abe468e07c18-ovsdbserver-nb\") pod \"dnsmasq-dns-785d8bcb8c-qkp54\" (UID: \"d421f208-6974-48b9-9d8d-abe468e07c18\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qkp54" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.301447 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d421f208-6974-48b9-9d8d-abe468e07c18-config\") pod \"dnsmasq-dns-785d8bcb8c-qkp54\" (UID: \"d421f208-6974-48b9-9d8d-abe468e07c18\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qkp54" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.302084 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d421f208-6974-48b9-9d8d-abe468e07c18-ovsdbserver-sb\") pod \"dnsmasq-dns-785d8bcb8c-qkp54\" (UID: \"d421f208-6974-48b9-9d8d-abe468e07c18\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qkp54" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.302357 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/540ab89b-e7b1-4c3f-ad6d-535ecaa5870c-logs\") pod \"placement-db-sync-kmcjp\" (UID: \"540ab89b-e7b1-4c3f-ad6d-535ecaa5870c\") 
" pod="openstack/placement-db-sync-kmcjp" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.307394 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d421f208-6974-48b9-9d8d-abe468e07c18-dns-swift-storage-0\") pod \"dnsmasq-dns-785d8bcb8c-qkp54\" (UID: \"d421f208-6974-48b9-9d8d-abe468e07c18\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qkp54" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.319987 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/540ab89b-e7b1-4c3f-ad6d-535ecaa5870c-combined-ca-bundle\") pod \"placement-db-sync-kmcjp\" (UID: \"540ab89b-e7b1-4c3f-ad6d-535ecaa5870c\") " pod="openstack/placement-db-sync-kmcjp" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.325375 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/540ab89b-e7b1-4c3f-ad6d-535ecaa5870c-config-data\") pod \"placement-db-sync-kmcjp\" (UID: \"540ab89b-e7b1-4c3f-ad6d-535ecaa5870c\") " pod="openstack/placement-db-sync-kmcjp" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.329922 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/540ab89b-e7b1-4c3f-ad6d-535ecaa5870c-scripts\") pod \"placement-db-sync-kmcjp\" (UID: \"540ab89b-e7b1-4c3f-ad6d-535ecaa5870c\") " pod="openstack/placement-db-sync-kmcjp" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.345953 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmkh8\" (UniqueName: \"kubernetes.io/projected/d421f208-6974-48b9-9d8d-abe468e07c18-kube-api-access-tmkh8\") pod \"dnsmasq-dns-785d8bcb8c-qkp54\" (UID: \"d421f208-6974-48b9-9d8d-abe468e07c18\") " pod="openstack/dnsmasq-dns-785d8bcb8c-qkp54" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.357654 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mkvtw\" (UniqueName: \"kubernetes.io/projected/540ab89b-e7b1-4c3f-ad6d-535ecaa5870c-kube-api-access-mkvtw\") pod \"placement-db-sync-kmcjp\" (UID: \"540ab89b-e7b1-4c3f-ad6d-535ecaa5870c\") " pod="openstack/placement-db-sync-kmcjp" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.403383 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jjvf\" (UniqueName: \"kubernetes.io/projected/6c4a03a4-e80d-4605-990f-a242222558bb-kube-api-access-7jjvf\") pod \"barbican-db-sync-7krdw\" (UID: \"6c4a03a4-e80d-4605-990f-a242222558bb\") " pod="openstack/barbican-db-sync-7krdw" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.403422 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb119504-1453-45ce-9a6a-65df12e3e9f8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fb119504-1453-45ce-9a6a-65df12e3e9f8\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.403493 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6c4a03a4-e80d-4605-990f-a242222558bb-db-sync-config-data\") pod \"barbican-db-sync-7krdw\" (UID: \"6c4a03a4-e80d-4605-990f-a242222558bb\") " pod="openstack/barbican-db-sync-7krdw" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.403537 4712 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b123ecaa-e5d2-4daf-b377-07056dd21f37-horizon-secret-key\") pod \"horizon-5c996b5c45-mvl64\" (UID: \"b123ecaa-e5d2-4daf-b377-07056dd21f37\") " pod="openstack/horizon-5c996b5c45-mvl64" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.403554 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9xzd\" (UniqueName: \"kubernetes.io/projected/fb119504-1453-45ce-9a6a-65df12e3e9f8-kube-api-access-d9xzd\") pod \"glance-default-external-api-0\" (UID: \"fb119504-1453-45ce-9a6a-65df12e3e9f8\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.403589 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b123ecaa-e5d2-4daf-b377-07056dd21f37-config-data\") pod \"horizon-5c996b5c45-mvl64\" (UID: \"b123ecaa-e5d2-4daf-b377-07056dd21f37\") " pod="openstack/horizon-5c996b5c45-mvl64" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.403604 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxbn4\" (UniqueName: \"kubernetes.io/projected/b123ecaa-e5d2-4daf-b377-07056dd21f37-kube-api-access-qxbn4\") pod \"horizon-5c996b5c45-mvl64\" (UID: \"b123ecaa-e5d2-4daf-b377-07056dd21f37\") " pod="openstack/horizon-5c996b5c45-mvl64" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.403618 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb119504-1453-45ce-9a6a-65df12e3e9f8-scripts\") pod \"glance-default-external-api-0\" (UID: \"fb119504-1453-45ce-9a6a-65df12e3e9f8\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.403677 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b123ecaa-e5d2-4daf-b377-07056dd21f37-scripts\") pod \"horizon-5c996b5c45-mvl64\" (UID: \"b123ecaa-e5d2-4daf-b377-07056dd21f37\") " pod="openstack/horizon-5c996b5c45-mvl64" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.403701 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fb119504-1453-45ce-9a6a-65df12e3e9f8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fb119504-1453-45ce-9a6a-65df12e3e9f8\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.403721 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb119504-1453-45ce-9a6a-65df12e3e9f8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fb119504-1453-45ce-9a6a-65df12e3e9f8\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.403742 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"fb119504-1453-45ce-9a6a-65df12e3e9f8\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.403884 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/fb119504-1453-45ce-9a6a-65df12e3e9f8-logs\") pod \"glance-default-external-api-0\" (UID: \"fb119504-1453-45ce-9a6a-65df12e3e9f8\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.403904 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c4a03a4-e80d-4605-990f-a242222558bb-combined-ca-bundle\") pod \"barbican-db-sync-7krdw\" (UID: \"6c4a03a4-e80d-4605-990f-a242222558bb\") " pod="openstack/barbican-db-sync-7krdw" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.403957 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb119504-1453-45ce-9a6a-65df12e3e9f8-config-data\") pod \"glance-default-external-api-0\" (UID: \"fb119504-1453-45ce-9a6a-65df12e3e9f8\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.403996 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b123ecaa-e5d2-4daf-b377-07056dd21f37-logs\") pod \"horizon-5c996b5c45-mvl64\" (UID: \"b123ecaa-e5d2-4daf-b377-07056dd21f37\") " pod="openstack/horizon-5c996b5c45-mvl64" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.404331 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b123ecaa-e5d2-4daf-b377-07056dd21f37-logs\") pod \"horizon-5c996b5c45-mvl64\" (UID: \"b123ecaa-e5d2-4daf-b377-07056dd21f37\") " pod="openstack/horizon-5c996b5c45-mvl64" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.407743 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fb119504-1453-45ce-9a6a-65df12e3e9f8-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fb119504-1453-45ce-9a6a-65df12e3e9f8\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.409351 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b123ecaa-e5d2-4daf-b377-07056dd21f37-scripts\") pod \"horizon-5c996b5c45-mvl64\" (UID: \"b123ecaa-e5d2-4daf-b377-07056dd21f37\") " pod="openstack/horizon-5c996b5c45-mvl64" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.410431 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b123ecaa-e5d2-4daf-b377-07056dd21f37-config-data\") pod \"horizon-5c996b5c45-mvl64\" (UID: \"b123ecaa-e5d2-4daf-b377-07056dd21f37\") " pod="openstack/horizon-5c996b5c45-mvl64" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.410966 4712 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"fb119504-1453-45ce-9a6a-65df12e3e9f8\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.414395 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb119504-1453-45ce-9a6a-65df12e3e9f8-logs\") pod \"glance-default-external-api-0\" (UID: \"fb119504-1453-45ce-9a6a-65df12e3e9f8\") " pod="openstack/glance-default-external-api-0" Jan 
30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.414679 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6c4a03a4-e80d-4605-990f-a242222558bb-db-sync-config-data\") pod \"barbican-db-sync-7krdw\" (UID: \"6c4a03a4-e80d-4605-990f-a242222558bb\") " pod="openstack/barbican-db-sync-7krdw" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.420848 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb119504-1453-45ce-9a6a-65df12e3e9f8-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fb119504-1453-45ce-9a6a-65df12e3e9f8\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.424749 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb119504-1453-45ce-9a6a-65df12e3e9f8-scripts\") pod \"glance-default-external-api-0\" (UID: \"fb119504-1453-45ce-9a6a-65df12e3e9f8\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.430959 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb119504-1453-45ce-9a6a-65df12e3e9f8-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fb119504-1453-45ce-9a6a-65df12e3e9f8\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.431383 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-kmcjp" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.432457 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb119504-1453-45ce-9a6a-65df12e3e9f8-config-data\") pod \"glance-default-external-api-0\" (UID: \"fb119504-1453-45ce-9a6a-65df12e3e9f8\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.443409 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jjvf\" (UniqueName: \"kubernetes.io/projected/6c4a03a4-e80d-4605-990f-a242222558bb-kube-api-access-7jjvf\") pod \"barbican-db-sync-7krdw\" (UID: \"6c4a03a4-e80d-4605-990f-a242222558bb\") " pod="openstack/barbican-db-sync-7krdw" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.454471 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b123ecaa-e5d2-4daf-b377-07056dd21f37-horizon-secret-key\") pod \"horizon-5c996b5c45-mvl64\" (UID: \"b123ecaa-e5d2-4daf-b377-07056dd21f37\") " pod="openstack/horizon-5c996b5c45-mvl64" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.461953 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9xzd\" (UniqueName: \"kubernetes.io/projected/fb119504-1453-45ce-9a6a-65df12e3e9f8-kube-api-access-d9xzd\") pod \"glance-default-external-api-0\" (UID: \"fb119504-1453-45ce-9a6a-65df12e3e9f8\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.471734 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c4a03a4-e80d-4605-990f-a242222558bb-combined-ca-bundle\") pod \"barbican-db-sync-7krdw\" (UID: \"6c4a03a4-e80d-4605-990f-a242222558bb\") 
" pod="openstack/barbican-db-sync-7krdw" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.472135 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-qkp54" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.481339 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxbn4\" (UniqueName: \"kubernetes.io/projected/b123ecaa-e5d2-4daf-b377-07056dd21f37-kube-api-access-qxbn4\") pod \"horizon-5c996b5c45-mvl64\" (UID: \"b123ecaa-e5d2-4daf-b377-07056dd21f37\") " pod="openstack/horizon-5c996b5c45-mvl64" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.495322 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.496833 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.499607 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-7krdw" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.502730 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.502777 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.526040 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"fb119504-1453-45ce-9a6a-65df12e3e9f8\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.534555 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-4ktn9"] Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.559922 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.621183 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-5c996b5c45-mvl64" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.625327 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cfdc69e-1ea0-41a8-ba76-7f9942885fcb-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.625465 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cfdc69e-1ea0-41a8-ba76-7f9942885fcb-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.625542 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.625618 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2cfdc69e-1ea0-41a8-ba76-7f9942885fcb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.625764 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfdc69e-1ea0-41a8-ba76-7f9942885fcb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.625856 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cfdc69e-1ea0-41a8-ba76-7f9942885fcb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.625942 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2cfdc69e-1ea0-41a8-ba76-7f9942885fcb-logs\") pod \"glance-default-internal-api-0\" (UID: \"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.626035 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd29c\" (UniqueName: \"kubernetes.io/projected/2cfdc69e-1ea0-41a8-ba76-7f9942885fcb-kube-api-access-xd29c\") pod \"glance-default-internal-api-0\" (UID: \"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.727245 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cfdc69e-1ea0-41a8-ba76-7f9942885fcb-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.727297 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2cfdc69e-1ea0-41a8-ba76-7f9942885fcb-logs\") pod \"glance-default-internal-api-0\" (UID: \"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.727334 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd29c\" (UniqueName: \"kubernetes.io/projected/2cfdc69e-1ea0-41a8-ba76-7f9942885fcb-kube-api-access-xd29c\") pod \"glance-default-internal-api-0\" (UID: \"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.727394 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cfdc69e-1ea0-41a8-ba76-7f9942885fcb-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.727432 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cfdc69e-1ea0-41a8-ba76-7f9942885fcb-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.727461 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.727492 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2cfdc69e-1ea0-41a8-ba76-7f9942885fcb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.727564 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfdc69e-1ea0-41a8-ba76-7f9942885fcb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.729127 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2cfdc69e-1ea0-41a8-ba76-7f9942885fcb-logs\") pod \"glance-default-internal-api-0\" (UID: \"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.734105 4712 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb\") device mount path \"/mnt/openstack/pv04\"" 
pod="openstack/glance-default-internal-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.734443 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2cfdc69e-1ea0-41a8-ba76-7f9942885fcb-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.748789 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfdc69e-1ea0-41a8-ba76-7f9942885fcb-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.750596 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cfdc69e-1ea0-41a8-ba76-7f9942885fcb-config-data\") pod \"glance-default-internal-api-0\" (UID: \"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.760472 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd29c\" (UniqueName: \"kubernetes.io/projected/2cfdc69e-1ea0-41a8-ba76-7f9942885fcb-kube-api-access-xd29c\") pod \"glance-default-internal-api-0\" (UID: \"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.762684 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cfdc69e-1ea0-41a8-ba76-7f9942885fcb-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.775452 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cfdc69e-1ea0-41a8-ba76-7f9942885fcb-scripts\") pod \"glance-default-internal-api-0\" (UID: \"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.813315 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.873578 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.908081 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-4ktn9" event={"ID":"cdd10608-b72c-4025-a140-2934ba8bc27c","Type":"ContainerStarted","Data":"6abc1956fd2023907622dbbadef5adf953e4725969de29358470ec12870c25ba"} Jan 30 17:15:26 crc kubenswrapper[4712]: I0130 17:15:26.938716 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-fkk2v"] Jan 30 17:15:27 crc kubenswrapper[4712]: I0130 17:15:27.166204 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 17:15:27 crc kubenswrapper[4712]: I0130 17:15:27.186029 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-9gcv2"] Jan 30 17:15:27 crc kubenswrapper[4712]: I0130 17:15:27.210538 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-78bf8d4bc-dzt7l"] Jan 30 17:15:27 crc kubenswrapper[4712]: I0130 17:15:27.461315 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-ldhgd"] Jan 30 17:15:27 crc kubenswrapper[4712]: W0130 17:15:27.480246 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67221ffc_37c6_458b_b4b4_26ef6e628c0b.slice/crio-b0d2e9a2cb007779681efc1143ad174629fa5c75d0466a1989bf68319678383a WatchSource:0}: Error finding container b0d2e9a2cb007779681efc1143ad174629fa5c75d0466a1989bf68319678383a: Status 404 returned error can't find the container with id b0d2e9a2cb007779681efc1143ad174629fa5c75d0466a1989bf68319678383a Jan 30 17:15:27 crc kubenswrapper[4712]: I0130 17:15:27.817617 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-78jqx"] Jan 30 17:15:27 crc kubenswrapper[4712]: I0130 17:15:27.911197 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 17:15:27 crc kubenswrapper[4712]: I0130 17:15:27.928448 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-kmcjp"] Jan 30 17:15:27 crc kubenswrapper[4712]: I0130 17:15:27.937535 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-9gcv2" event={"ID":"3c24ed25-f06f-494d-9fd5-2077c052db31","Type":"ContainerStarted","Data":"95c75feffa0b4af4dfb52e8b01ffea788aebd92ddeda2a530b5206f5a299d645"} Jan 30 17:15:27 crc kubenswrapper[4712]: I0130 17:15:27.939184 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-78jqx" event={"ID":"2ef9729d-cbbc-4354-98e4-a9e07651518e","Type":"ContainerStarted","Data":"1a03036353bdf44f64cf0d2100bf8fe4b0197d3e8e8381e564311abc9da9aafa"} Jan 30 17:15:27 crc kubenswrapper[4712]: I0130 17:15:27.940399 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fkk2v" event={"ID":"89d325b5-bb94-4295-a169-465b4b0b73be","Type":"ContainerStarted","Data":"6b1338b18a4ad5f3e9405fd9439035f013b0591a978c5fddbbb84b304a0b47e1"} Jan 30 17:15:27 crc kubenswrapper[4712]: I0130 17:15:27.940425 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fkk2v" event={"ID":"89d325b5-bb94-4295-a169-465b4b0b73be","Type":"ContainerStarted","Data":"9cca286032902c048b876a5605837eadd931223fb3ba3724d86ecfe75b7e7dde"} Jan 30 17:15:27 crc kubenswrapper[4712]: I0130 17:15:27.955387 4712 generic.go:334] "Generic (PLEG): container finished" podID="cdd10608-b72c-4025-a140-2934ba8bc27c" containerID="bf9bba9b7032603b446f7797e645566a6279b45ae3384bf937b6f7767232133c" exitCode=0 Jan 30 17:15:27 crc kubenswrapper[4712]: I0130 17:15:27.955502 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-4ktn9" event={"ID":"cdd10608-b72c-4025-a140-2934ba8bc27c","Type":"ContainerDied","Data":"bf9bba9b7032603b446f7797e645566a6279b45ae3384bf937b6f7767232133c"} Jan 30 17:15:27 crc kubenswrapper[4712]: I0130 17:15:27.959447 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"3896ac30-4d2d-4bc2-bfc3-4352d7d586de","Type":"ContainerStarted","Data":"f87271a1a9744e3e125a9c67363dc26d375e70eed36f1643b82f7545ef74d1a4"} Jan 30 17:15:27 crc kubenswrapper[4712]: I0130 17:15:27.976702 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78bf8d4bc-dzt7l" event={"ID":"9d98cb77-f784-431c-bd65-35261f546cd0","Type":"ContainerStarted","Data":"bbae3b33176941b0a17f937aa1f7e9f9ff5f53be805ee0404d21e4d176e80780"} Jan 30 17:15:27 crc kubenswrapper[4712]: I0130 17:15:27.982427 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ldhgd" event={"ID":"67221ffc-37c6-458b-b4b4-26ef6e628c0b","Type":"ContainerStarted","Data":"aff21b13d905c3dcc1d105927345076671d1cf6986b7a1c1afe3b22e3961b9e2"} Jan 30 17:15:27 crc kubenswrapper[4712]: I0130 17:15:27.982458 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ldhgd" event={"ID":"67221ffc-37c6-458b-b4b4-26ef6e628c0b","Type":"ContainerStarted","Data":"b0d2e9a2cb007779681efc1143ad174629fa5c75d0466a1989bf68319678383a"} Jan 30 17:15:28 crc kubenswrapper[4712]: I0130 17:15:28.032147 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-fkk2v" podStartSLOduration=4.032129931 podStartE2EDuration="4.032129931s" podCreationTimestamp="2026-01-30 17:15:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:15:27.97635016 +0000 UTC m=+1264.883359629" watchObservedRunningTime="2026-01-30 17:15:28.032129931 +0000 UTC m=+1264.939139400" Jan 30 17:15:28 crc kubenswrapper[4712]: I0130 17:15:28.051622 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-ldhgd" podStartSLOduration=3.051601399 podStartE2EDuration="3.051601399s" podCreationTimestamp="2026-01-30 17:15:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:15:28.019998119 +0000 UTC m=+1264.927007578" watchObservedRunningTime="2026-01-30 17:15:28.051601399 +0000 UTC m=+1264.958610868" Jan 30 17:15:28 crc kubenswrapper[4712]: I0130 17:15:28.123884 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-5c996b5c45-mvl64"] Jan 30 17:15:28 crc kubenswrapper[4712]: I0130 17:15:28.139116 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-qkp54"] Jan 30 17:15:28 crc kubenswrapper[4712]: I0130 17:15:28.164471 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-7krdw"] Jan 30 17:15:28 crc kubenswrapper[4712]: I0130 17:15:28.282781 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 17:15:28 crc kubenswrapper[4712]: I0130 17:15:28.383174 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 17:15:28 crc kubenswrapper[4712]: I0130 17:15:28.826722 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-4ktn9" Jan 30 17:15:28 crc kubenswrapper[4712]: I0130 17:15:28.937539 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cdd10608-b72c-4025-a140-2934ba8bc27c-dns-swift-storage-0\") pod \"cdd10608-b72c-4025-a140-2934ba8bc27c\" (UID: \"cdd10608-b72c-4025-a140-2934ba8bc27c\") " Jan 30 17:15:28 crc kubenswrapper[4712]: I0130 17:15:28.937623 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdd10608-b72c-4025-a140-2934ba8bc27c-dns-svc\") pod \"cdd10608-b72c-4025-a140-2934ba8bc27c\" (UID: \"cdd10608-b72c-4025-a140-2934ba8bc27c\") " Jan 30 17:15:28 crc kubenswrapper[4712]: I0130 17:15:28.937645 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-znqmp\" (UniqueName: \"kubernetes.io/projected/cdd10608-b72c-4025-a140-2934ba8bc27c-kube-api-access-znqmp\") pod \"cdd10608-b72c-4025-a140-2934ba8bc27c\" (UID: \"cdd10608-b72c-4025-a140-2934ba8bc27c\") " Jan 30 17:15:28 crc kubenswrapper[4712]: I0130 17:15:28.937677 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdd10608-b72c-4025-a140-2934ba8bc27c-ovsdbserver-nb\") pod \"cdd10608-b72c-4025-a140-2934ba8bc27c\" (UID: \"cdd10608-b72c-4025-a140-2934ba8bc27c\") " Jan 30 17:15:28 crc kubenswrapper[4712]: I0130 17:15:28.937780 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdd10608-b72c-4025-a140-2934ba8bc27c-ovsdbserver-sb\") pod \"cdd10608-b72c-4025-a140-2934ba8bc27c\" (UID: \"cdd10608-b72c-4025-a140-2934ba8bc27c\") " Jan 30 17:15:28 crc kubenswrapper[4712]: I0130 17:15:28.937906 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdd10608-b72c-4025-a140-2934ba8bc27c-config\") pod \"cdd10608-b72c-4025-a140-2934ba8bc27c\" (UID: \"cdd10608-b72c-4025-a140-2934ba8bc27c\") " Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.000486 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdd10608-b72c-4025-a140-2934ba8bc27c-kube-api-access-znqmp" (OuterVolumeSpecName: "kube-api-access-znqmp") pod "cdd10608-b72c-4025-a140-2934ba8bc27c" (UID: "cdd10608-b72c-4025-a140-2934ba8bc27c"). InnerVolumeSpecName "kube-api-access-znqmp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.012059 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fb119504-1453-45ce-9a6a-65df12e3e9f8","Type":"ContainerStarted","Data":"f7507ae454c4439b2e43b54ee34ec531fb7beff71fef3b8d456f5715539ee6ef"} Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.016956 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-kmcjp" event={"ID":"540ab89b-e7b1-4c3f-ad6d-535ecaa5870c","Type":"ContainerStarted","Data":"122970a8f49c277106e2d38de8082f91a67f0ca52f3f0c91f6a6e1861a371c5d"} Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.020736 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-847c4cc679-4ktn9" event={"ID":"cdd10608-b72c-4025-a140-2934ba8bc27c","Type":"ContainerDied","Data":"6abc1956fd2023907622dbbadef5adf953e4725969de29358470ec12870c25ba"} Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.020782 4712 scope.go:117] "RemoveContainer" containerID="bf9bba9b7032603b446f7797e645566a6279b45ae3384bf937b6f7767232133c" Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.020912 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-847c4cc679-4ktn9" Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.042726 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-znqmp\" (UniqueName: \"kubernetes.io/projected/cdd10608-b72c-4025-a140-2934ba8bc27c-kube-api-access-znqmp\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.060619 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdd10608-b72c-4025-a140-2934ba8bc27c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cdd10608-b72c-4025-a140-2934ba8bc27c" (UID: "cdd10608-b72c-4025-a140-2934ba8bc27c"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.081445 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.087731 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c996b5c45-mvl64" event={"ID":"b123ecaa-e5d2-4daf-b377-07056dd21f37","Type":"ContainerStarted","Data":"055ac28b9995519391eab1c04aae4d79a8aa50b2414fc649a4f48856d27bd4a8"} Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.108240 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb","Type":"ContainerStarted","Data":"946935b09230014871f9cb542cff145c9a257ebb29ea9e031a415be9d2fd2414"} Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.140054 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.156941 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7krdw" event={"ID":"6c4a03a4-e80d-4605-990f-a242222558bb","Type":"ContainerStarted","Data":"a77a5067d918b2848bd999323d3cfee6d37f589bfa32b87139e2566f40ee54f8"} Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.157977 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdd10608-b72c-4025-a140-2934ba8bc27c-config" (OuterVolumeSpecName: "config") pod "cdd10608-b72c-4025-a140-2934ba8bc27c" (UID: "cdd10608-b72c-4025-a140-2934ba8bc27c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.158385 4712 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cdd10608-b72c-4025-a140-2934ba8bc27c-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.158411 4712 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cdd10608-b72c-4025-a140-2934ba8bc27c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.185483 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdd10608-b72c-4025-a140-2934ba8bc27c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cdd10608-b72c-4025-a140-2934ba8bc27c" (UID: "cdd10608-b72c-4025-a140-2934ba8bc27c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.189274 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-qkp54" event={"ID":"d421f208-6974-48b9-9d8d-abe468e07c18","Type":"ContainerStarted","Data":"85edf15f372952c6b3457892574fa91a206384e26f3013b471f848e2d4a89f01"} Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.192786 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5c996b5c45-mvl64"] Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.283168 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-66565467f5-6r87q"] Jan 30 17:15:29 crc kubenswrapper[4712]: E0130 17:15:29.283953 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdd10608-b72c-4025-a140-2934ba8bc27c" containerName="init" Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.283968 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdd10608-b72c-4025-a140-2934ba8bc27c" containerName="init" Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.284305 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdd10608-b72c-4025-a140-2934ba8bc27c" containerName="init" Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.294063 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66565467f5-6r87q" Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.301236 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66565467f5-6r87q"] Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.316598 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdd10608-b72c-4025-a140-2934ba8bc27c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cdd10608-b72c-4025-a140-2934ba8bc27c" (UID: "cdd10608-b72c-4025-a140-2934ba8bc27c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.320596 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdd10608-b72c-4025-a140-2934ba8bc27c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cdd10608-b72c-4025-a140-2934ba8bc27c" (UID: "cdd10608-b72c-4025-a140-2934ba8bc27c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.343440 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cdd10608-b72c-4025-a140-2934ba8bc27c-dns-swift-storage-0\") pod \"cdd10608-b72c-4025-a140-2934ba8bc27c\" (UID: \"cdd10608-b72c-4025-a140-2934ba8bc27c\") " Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.343519 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdd10608-b72c-4025-a140-2934ba8bc27c-ovsdbserver-nb\") pod \"cdd10608-b72c-4025-a140-2934ba8bc27c\" (UID: \"cdd10608-b72c-4025-a140-2934ba8bc27c\") " Jan 30 17:15:29 crc kubenswrapper[4712]: W0130 17:15:29.347948 4712 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/cdd10608-b72c-4025-a140-2934ba8bc27c/volumes/kubernetes.io~configmap/ovsdbserver-nb Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.347977 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdd10608-b72c-4025-a140-2934ba8bc27c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cdd10608-b72c-4025-a140-2934ba8bc27c" (UID: "cdd10608-b72c-4025-a140-2934ba8bc27c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.350135 4712 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cdd10608-b72c-4025-a140-2934ba8bc27c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.350178 4712 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cdd10608-b72c-4025-a140-2934ba8bc27c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:29 crc kubenswrapper[4712]: W0130 17:15:29.355997 4712 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/cdd10608-b72c-4025-a140-2934ba8bc27c/volumes/kubernetes.io~configmap/dns-swift-storage-0 Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.356020 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdd10608-b72c-4025-a140-2934ba8bc27c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cdd10608-b72c-4025-a140-2934ba8bc27c" (UID: "cdd10608-b72c-4025-a140-2934ba8bc27c"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.410138 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.452252 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55497986-760e-41ba-8835-7e0c7d80c1df-scripts\") pod \"horizon-66565467f5-6r87q\" (UID: \"55497986-760e-41ba-8835-7e0c7d80c1df\") " pod="openstack/horizon-66565467f5-6r87q" Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.452306 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55497986-760e-41ba-8835-7e0c7d80c1df-logs\") pod \"horizon-66565467f5-6r87q\" (UID: \"55497986-760e-41ba-8835-7e0c7d80c1df\") " pod="openstack/horizon-66565467f5-6r87q" Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.452353 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/55497986-760e-41ba-8835-7e0c7d80c1df-horizon-secret-key\") pod \"horizon-66565467f5-6r87q\" (UID: \"55497986-760e-41ba-8835-7e0c7d80c1df\") " pod="openstack/horizon-66565467f5-6r87q" Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.452425 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/55497986-760e-41ba-8835-7e0c7d80c1df-config-data\") pod \"horizon-66565467f5-6r87q\" (UID: \"55497986-760e-41ba-8835-7e0c7d80c1df\") " pod="openstack/horizon-66565467f5-6r87q" Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.452458 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5h9s\" (UniqueName: \"kubernetes.io/projected/55497986-760e-41ba-8835-7e0c7d80c1df-kube-api-access-d5h9s\") pod \"horizon-66565467f5-6r87q\" (UID: \"55497986-760e-41ba-8835-7e0c7d80c1df\") " pod="openstack/horizon-66565467f5-6r87q" Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.452541 4712 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cdd10608-b72c-4025-a140-2934ba8bc27c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.542940 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-4ktn9"] Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.557688 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5h9s\" (UniqueName: \"kubernetes.io/projected/55497986-760e-41ba-8835-7e0c7d80c1df-kube-api-access-d5h9s\") pod \"horizon-66565467f5-6r87q\" (UID: \"55497986-760e-41ba-8835-7e0c7d80c1df\") " pod="openstack/horizon-66565467f5-6r87q" Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.557968 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55497986-760e-41ba-8835-7e0c7d80c1df-scripts\") pod \"horizon-66565467f5-6r87q\" (UID: \"55497986-760e-41ba-8835-7e0c7d80c1df\") " pod="openstack/horizon-66565467f5-6r87q" Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.558053 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/55497986-760e-41ba-8835-7e0c7d80c1df-logs\") pod \"horizon-66565467f5-6r87q\" (UID: \"55497986-760e-41ba-8835-7e0c7d80c1df\") " pod="openstack/horizon-66565467f5-6r87q" Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.558158 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/55497986-760e-41ba-8835-7e0c7d80c1df-horizon-secret-key\") pod \"horizon-66565467f5-6r87q\" (UID: \"55497986-760e-41ba-8835-7e0c7d80c1df\") " pod="openstack/horizon-66565467f5-6r87q" Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.558283 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/55497986-760e-41ba-8835-7e0c7d80c1df-config-data\") pod \"horizon-66565467f5-6r87q\" (UID: \"55497986-760e-41ba-8835-7e0c7d80c1df\") " pod="openstack/horizon-66565467f5-6r87q" Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.558479 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55497986-760e-41ba-8835-7e0c7d80c1df-logs\") pod \"horizon-66565467f5-6r87q\" (UID: \"55497986-760e-41ba-8835-7e0c7d80c1df\") " pod="openstack/horizon-66565467f5-6r87q" Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.558983 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-847c4cc679-4ktn9"] Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.559469 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55497986-760e-41ba-8835-7e0c7d80c1df-scripts\") pod \"horizon-66565467f5-6r87q\" (UID: \"55497986-760e-41ba-8835-7e0c7d80c1df\") " pod="openstack/horizon-66565467f5-6r87q" Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.559754 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/55497986-760e-41ba-8835-7e0c7d80c1df-config-data\") pod \"horizon-66565467f5-6r87q\" (UID: \"55497986-760e-41ba-8835-7e0c7d80c1df\") " pod="openstack/horizon-66565467f5-6r87q" Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.564710 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/55497986-760e-41ba-8835-7e0c7d80c1df-horizon-secret-key\") pod \"horizon-66565467f5-6r87q\" (UID: \"55497986-760e-41ba-8835-7e0c7d80c1df\") " pod="openstack/horizon-66565467f5-6r87q" Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.580408 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5h9s\" (UniqueName: \"kubernetes.io/projected/55497986-760e-41ba-8835-7e0c7d80c1df-kube-api-access-d5h9s\") pod \"horizon-66565467f5-6r87q\" (UID: \"55497986-760e-41ba-8835-7e0c7d80c1df\") " pod="openstack/horizon-66565467f5-6r87q" Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.647165 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-66565467f5-6r87q" Jan 30 17:15:29 crc kubenswrapper[4712]: I0130 17:15:29.820636 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdd10608-b72c-4025-a140-2934ba8bc27c" path="/var/lib/kubelet/pods/cdd10608-b72c-4025-a140-2934ba8bc27c/volumes" Jan 30 17:15:30 crc kubenswrapper[4712]: I0130 17:15:30.300583 4712 generic.go:334] "Generic (PLEG): container finished" podID="d421f208-6974-48b9-9d8d-abe468e07c18" containerID="5a9677765a021b2ac0bb10f374fc8885b1893e6c3633071e44eb831583f8d8f5" exitCode=0 Jan 30 17:15:30 crc kubenswrapper[4712]: I0130 17:15:30.300682 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-qkp54" event={"ID":"d421f208-6974-48b9-9d8d-abe468e07c18","Type":"ContainerDied","Data":"5a9677765a021b2ac0bb10f374fc8885b1893e6c3633071e44eb831583f8d8f5"} Jan 30 17:15:30 crc kubenswrapper[4712]: I0130 17:15:30.485964 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-66565467f5-6r87q"] Jan 30 17:15:30 crc kubenswrapper[4712]: W0130 17:15:30.560090 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55497986_760e_41ba_8835_7e0c7d80c1df.slice/crio-30989a06a722f6a5ad0c9b395358781135e34b0d94f64e8493c8f5b8160baa89 WatchSource:0}: Error finding container 30989a06a722f6a5ad0c9b395358781135e34b0d94f64e8493c8f5b8160baa89: Status 404 returned error can't find the container with id 30989a06a722f6a5ad0c9b395358781135e34b0d94f64e8493c8f5b8160baa89 Jan 30 17:15:31 crc kubenswrapper[4712]: I0130 17:15:31.350638 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fb119504-1453-45ce-9a6a-65df12e3e9f8","Type":"ContainerStarted","Data":"99750ba9b4a8fd1624230b5f8052762856f3e910250a755c1ef474cc3eafdae8"} Jan 30 17:15:31 crc kubenswrapper[4712]: I0130 17:15:31.363273 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66565467f5-6r87q" event={"ID":"55497986-760e-41ba-8835-7e0c7d80c1df","Type":"ContainerStarted","Data":"30989a06a722f6a5ad0c9b395358781135e34b0d94f64e8493c8f5b8160baa89"} Jan 30 17:15:31 crc kubenswrapper[4712]: I0130 17:15:31.372053 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb","Type":"ContainerStarted","Data":"125b87dd38f81aa44459b96ce28fa9769fb328fdec556c37e5cb735fd0a41fcd"} Jan 30 17:15:31 crc kubenswrapper[4712]: I0130 17:15:31.381887 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-qkp54" event={"ID":"d421f208-6974-48b9-9d8d-abe468e07c18","Type":"ContainerStarted","Data":"1ec3b768e458d6b99a2c9cd178dc132b8c34ef0df5e04ba1182fb0f0843f9d07"} Jan 30 17:15:31 crc kubenswrapper[4712]: I0130 17:15:31.382084 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-785d8bcb8c-qkp54" Jan 30 17:15:31 crc kubenswrapper[4712]: I0130 17:15:31.411841 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-qkp54" podStartSLOduration=6.411761319 podStartE2EDuration="6.411761319s" podCreationTimestamp="2026-01-30 17:15:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:15:31.408536862 +0000 UTC m=+1268.315546351" watchObservedRunningTime="2026-01-30 
Jan 30 17:15:31 crc kubenswrapper[4712]: I0130 17:15:31.411841 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-785d8bcb8c-qkp54" podStartSLOduration=6.411761319 podStartE2EDuration="6.411761319s" podCreationTimestamp="2026-01-30 17:15:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:15:31.408536862 +0000 UTC m=+1268.315546351" watchObservedRunningTime="2026-01-30 17:15:31.411761319 +0000 UTC m=+1268.318770788"
Jan 30 17:15:32 crc kubenswrapper[4712]: I0130 17:15:32.392569 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb","Type":"ContainerStarted","Data":"af21ea543589007c01298c5245ab7238680d43d3a4303bba1174bb550c620524"}
Jan 30 17:15:32 crc kubenswrapper[4712]: I0130 17:15:32.392664 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="2cfdc69e-1ea0-41a8-ba76-7f9942885fcb" containerName="glance-log" containerID="cri-o://125b87dd38f81aa44459b96ce28fa9769fb328fdec556c37e5cb735fd0a41fcd" gracePeriod=30
Jan 30 17:15:32 crc kubenswrapper[4712]: I0130 17:15:32.392735 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="2cfdc69e-1ea0-41a8-ba76-7f9942885fcb" containerName="glance-httpd" containerID="cri-o://af21ea543589007c01298c5245ab7238680d43d3a4303bba1174bb550c620524" gracePeriod=30
Jan 30 17:15:32 crc kubenswrapper[4712]: I0130 17:15:32.397395 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fb119504-1453-45ce-9a6a-65df12e3e9f8","Type":"ContainerStarted","Data":"bcfa0ea23044a61a2365c9d9e5c2a0f1fcb13f83b30496c902312e5b96aa4a68"}
Jan 30 17:15:32 crc kubenswrapper[4712]: I0130 17:15:32.397510 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fb119504-1453-45ce-9a6a-65df12e3e9f8" containerName="glance-log" containerID="cri-o://99750ba9b4a8fd1624230b5f8052762856f3e910250a755c1ef474cc3eafdae8" gracePeriod=30
Jan 30 17:15:32 crc kubenswrapper[4712]: I0130 17:15:32.397538 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fb119504-1453-45ce-9a6a-65df12e3e9f8" containerName="glance-httpd" containerID="cri-o://bcfa0ea23044a61a2365c9d9e5c2a0f1fcb13f83b30496c902312e5b96aa4a68" gracePeriod=30
Jan 30 17:15:32 crc kubenswrapper[4712]: I0130 17:15:32.427971 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=7.427949347 podStartE2EDuration="7.427949347s" podCreationTimestamp="2026-01-30 17:15:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:15:32.425691992 +0000 UTC m=+1269.332701461" watchObservedRunningTime="2026-01-30 17:15:32.427949347 +0000 UTC m=+1269.334958816"
Jan 30 17:15:32 crc kubenswrapper[4712]: I0130 17:15:32.464535 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.464519345 podStartE2EDuration="7.464519345s" podCreationTimestamp="2026-01-30 17:15:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:15:32.463173103 +0000 UTC m=+1269.370182572" watchObservedRunningTime="2026-01-30 17:15:32.464519345 +0000 UTC m=+1269.371528814"
Jan 30 17:15:33 crc kubenswrapper[4712]: I0130 17:15:33.408907 4712 generic.go:334] "Generic (PLEG): container finished" podID="2cfdc69e-1ea0-41a8-ba76-7f9942885fcb" containerID="af21ea543589007c01298c5245ab7238680d43d3a4303bba1174bb550c620524" exitCode=0
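
In the "container finished" events around here, the glance-httpd containers report exitCode=0 (they shut down cleanly inside the 30-second grace period requested above), while the glance-log containers report exitCode=143, the conventional 128+signal encoding for a process that was still running when SIGTERM (signal 15) was delivered. Decoding that is plain POSIX arithmetic, nothing kubelet-specific; a tiny sketch:

import signal

def describe_exit(code: int) -> str:
    # Exit codes above 128 conventionally encode death by signal (128 + signo).
    if code > 128:
        return f"killed by {signal.Signals(code - 128).name}"
    return "clean exit" if code == 0 else f"error exit {code}"

print(describe_exit(143))  # killed by SIGTERM (the glance-log containers below)
print(describe_exit(0))    # clean exit (the glance-httpd containers)
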
Jan 30 17:15:33 crc kubenswrapper[4712]: I0130 17:15:33.409264 4712 generic.go:334] "Generic (PLEG): container finished" podID="2cfdc69e-1ea0-41a8-ba76-7f9942885fcb" containerID="125b87dd38f81aa44459b96ce28fa9769fb328fdec556c37e5cb735fd0a41fcd" exitCode=143
Jan 30 17:15:33 crc kubenswrapper[4712]: I0130 17:15:33.408959 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb","Type":"ContainerDied","Data":"af21ea543589007c01298c5245ab7238680d43d3a4303bba1174bb550c620524"}
Jan 30 17:15:33 crc kubenswrapper[4712]: I0130 17:15:33.409354 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb","Type":"ContainerDied","Data":"125b87dd38f81aa44459b96ce28fa9769fb328fdec556c37e5cb735fd0a41fcd"}
Jan 30 17:15:33 crc kubenswrapper[4712]: I0130 17:15:33.415355 4712 generic.go:334] "Generic (PLEG): container finished" podID="fb119504-1453-45ce-9a6a-65df12e3e9f8" containerID="bcfa0ea23044a61a2365c9d9e5c2a0f1fcb13f83b30496c902312e5b96aa4a68" exitCode=0
Jan 30 17:15:33 crc kubenswrapper[4712]: I0130 17:15:33.415388 4712 generic.go:334] "Generic (PLEG): container finished" podID="fb119504-1453-45ce-9a6a-65df12e3e9f8" containerID="99750ba9b4a8fd1624230b5f8052762856f3e910250a755c1ef474cc3eafdae8" exitCode=143
Jan 30 17:15:33 crc kubenswrapper[4712]: I0130 17:15:33.415417 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fb119504-1453-45ce-9a6a-65df12e3e9f8","Type":"ContainerDied","Data":"bcfa0ea23044a61a2365c9d9e5c2a0f1fcb13f83b30496c902312e5b96aa4a68"}
Jan 30 17:15:33 crc kubenswrapper[4712]: I0130 17:15:33.415449 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fb119504-1453-45ce-9a6a-65df12e3e9f8","Type":"ContainerDied","Data":"99750ba9b4a8fd1624230b5f8052762856f3e910250a755c1ef474cc3eafdae8"}
Jan 30 17:15:34 crc kubenswrapper[4712]: I0130 17:15:34.676062 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-78bf8d4bc-dzt7l"]
Jan 30 17:15:34 crc kubenswrapper[4712]: I0130 17:15:34.725199 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-56f8b66d48-7wr47"]
Need to start a new one" pod="openstack/horizon-56f8b66d48-7wr47" Jan 30 17:15:34 crc kubenswrapper[4712]: I0130 17:15:34.745570 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Jan 30 17:15:34 crc kubenswrapper[4712]: I0130 17:15:34.772475 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-56f8b66d48-7wr47"] Jan 30 17:15:34 crc kubenswrapper[4712]: I0130 17:15:34.816416 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/70154dd8-9d42-4a12-af9b-1be723ef892e-config-data\") pod \"horizon-56f8b66d48-7wr47\" (UID: \"70154dd8-9d42-4a12-af9b-1be723ef892e\") " pod="openstack/horizon-56f8b66d48-7wr47" Jan 30 17:15:34 crc kubenswrapper[4712]: I0130 17:15:34.816586 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/70154dd8-9d42-4a12-af9b-1be723ef892e-scripts\") pod \"horizon-56f8b66d48-7wr47\" (UID: \"70154dd8-9d42-4a12-af9b-1be723ef892e\") " pod="openstack/horizon-56f8b66d48-7wr47" Jan 30 17:15:34 crc kubenswrapper[4712]: I0130 17:15:34.816637 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/70154dd8-9d42-4a12-af9b-1be723ef892e-horizon-tls-certs\") pod \"horizon-56f8b66d48-7wr47\" (UID: \"70154dd8-9d42-4a12-af9b-1be723ef892e\") " pod="openstack/horizon-56f8b66d48-7wr47" Jan 30 17:15:34 crc kubenswrapper[4712]: I0130 17:15:34.816761 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/70154dd8-9d42-4a12-af9b-1be723ef892e-horizon-secret-key\") pod \"horizon-56f8b66d48-7wr47\" (UID: \"70154dd8-9d42-4a12-af9b-1be723ef892e\") " pod="openstack/horizon-56f8b66d48-7wr47" Jan 30 17:15:34 crc kubenswrapper[4712]: I0130 17:15:34.816859 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nz64\" (UniqueName: \"kubernetes.io/projected/70154dd8-9d42-4a12-af9b-1be723ef892e-kube-api-access-4nz64\") pod \"horizon-56f8b66d48-7wr47\" (UID: \"70154dd8-9d42-4a12-af9b-1be723ef892e\") " pod="openstack/horizon-56f8b66d48-7wr47" Jan 30 17:15:34 crc kubenswrapper[4712]: I0130 17:15:34.816945 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70154dd8-9d42-4a12-af9b-1be723ef892e-combined-ca-bundle\") pod \"horizon-56f8b66d48-7wr47\" (UID: \"70154dd8-9d42-4a12-af9b-1be723ef892e\") " pod="openstack/horizon-56f8b66d48-7wr47" Jan 30 17:15:34 crc kubenswrapper[4712]: I0130 17:15:34.816986 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70154dd8-9d42-4a12-af9b-1be723ef892e-logs\") pod \"horizon-56f8b66d48-7wr47\" (UID: \"70154dd8-9d42-4a12-af9b-1be723ef892e\") " pod="openstack/horizon-56f8b66d48-7wr47" Jan 30 17:15:34 crc kubenswrapper[4712]: I0130 17:15:34.888025 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-66565467f5-6r87q"] Jan 30 17:15:34 crc kubenswrapper[4712]: I0130 17:15:34.918428 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/70154dd8-9d42-4a12-af9b-1be723ef892e-config-data\") pod \"horizon-56f8b66d48-7wr47\" (UID: \"70154dd8-9d42-4a12-af9b-1be723ef892e\") " pod="openstack/horizon-56f8b66d48-7wr47" Jan 30 17:15:34 crc kubenswrapper[4712]: I0130 17:15:34.918615 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/70154dd8-9d42-4a12-af9b-1be723ef892e-scripts\") pod \"horizon-56f8b66d48-7wr47\" (UID: \"70154dd8-9d42-4a12-af9b-1be723ef892e\") " pod="openstack/horizon-56f8b66d48-7wr47" Jan 30 17:15:34 crc kubenswrapper[4712]: I0130 17:15:34.918680 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/70154dd8-9d42-4a12-af9b-1be723ef892e-horizon-tls-certs\") pod \"horizon-56f8b66d48-7wr47\" (UID: \"70154dd8-9d42-4a12-af9b-1be723ef892e\") " pod="openstack/horizon-56f8b66d48-7wr47" Jan 30 17:15:34 crc kubenswrapper[4712]: I0130 17:15:34.918918 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/70154dd8-9d42-4a12-af9b-1be723ef892e-horizon-secret-key\") pod \"horizon-56f8b66d48-7wr47\" (UID: \"70154dd8-9d42-4a12-af9b-1be723ef892e\") " pod="openstack/horizon-56f8b66d48-7wr47" Jan 30 17:15:34 crc kubenswrapper[4712]: I0130 17:15:34.918951 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nz64\" (UniqueName: \"kubernetes.io/projected/70154dd8-9d42-4a12-af9b-1be723ef892e-kube-api-access-4nz64\") pod \"horizon-56f8b66d48-7wr47\" (UID: \"70154dd8-9d42-4a12-af9b-1be723ef892e\") " pod="openstack/horizon-56f8b66d48-7wr47" Jan 30 17:15:34 crc kubenswrapper[4712]: I0130 17:15:34.919002 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70154dd8-9d42-4a12-af9b-1be723ef892e-combined-ca-bundle\") pod \"horizon-56f8b66d48-7wr47\" (UID: \"70154dd8-9d42-4a12-af9b-1be723ef892e\") " pod="openstack/horizon-56f8b66d48-7wr47" Jan 30 17:15:34 crc kubenswrapper[4712]: I0130 17:15:34.919047 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70154dd8-9d42-4a12-af9b-1be723ef892e-logs\") pod \"horizon-56f8b66d48-7wr47\" (UID: \"70154dd8-9d42-4a12-af9b-1be723ef892e\") " pod="openstack/horizon-56f8b66d48-7wr47" Jan 30 17:15:34 crc kubenswrapper[4712]: I0130 17:15:34.919606 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70154dd8-9d42-4a12-af9b-1be723ef892e-logs\") pod \"horizon-56f8b66d48-7wr47\" (UID: \"70154dd8-9d42-4a12-af9b-1be723ef892e\") " pod="openstack/horizon-56f8b66d48-7wr47" Jan 30 17:15:34 crc kubenswrapper[4712]: I0130 17:15:34.920264 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/70154dd8-9d42-4a12-af9b-1be723ef892e-scripts\") pod \"horizon-56f8b66d48-7wr47\" (UID: \"70154dd8-9d42-4a12-af9b-1be723ef892e\") " pod="openstack/horizon-56f8b66d48-7wr47" Jan 30 17:15:34 crc kubenswrapper[4712]: I0130 17:15:34.920737 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/70154dd8-9d42-4a12-af9b-1be723ef892e-config-data\") pod \"horizon-56f8b66d48-7wr47\" (UID: \"70154dd8-9d42-4a12-af9b-1be723ef892e\") " 
pod="openstack/horizon-56f8b66d48-7wr47" Jan 30 17:15:34 crc kubenswrapper[4712]: I0130 17:15:34.957622 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nz64\" (UniqueName: \"kubernetes.io/projected/70154dd8-9d42-4a12-af9b-1be723ef892e-kube-api-access-4nz64\") pod \"horizon-56f8b66d48-7wr47\" (UID: \"70154dd8-9d42-4a12-af9b-1be723ef892e\") " pod="openstack/horizon-56f8b66d48-7wr47" Jan 30 17:15:34 crc kubenswrapper[4712]: I0130 17:15:34.967260 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/70154dd8-9d42-4a12-af9b-1be723ef892e-horizon-tls-certs\") pod \"horizon-56f8b66d48-7wr47\" (UID: \"70154dd8-9d42-4a12-af9b-1be723ef892e\") " pod="openstack/horizon-56f8b66d48-7wr47" Jan 30 17:15:34 crc kubenswrapper[4712]: I0130 17:15:34.968174 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/70154dd8-9d42-4a12-af9b-1be723ef892e-horizon-secret-key\") pod \"horizon-56f8b66d48-7wr47\" (UID: \"70154dd8-9d42-4a12-af9b-1be723ef892e\") " pod="openstack/horizon-56f8b66d48-7wr47" Jan 30 17:15:34 crc kubenswrapper[4712]: I0130 17:15:34.974541 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70154dd8-9d42-4a12-af9b-1be723ef892e-combined-ca-bundle\") pod \"horizon-56f8b66d48-7wr47\" (UID: \"70154dd8-9d42-4a12-af9b-1be723ef892e\") " pod="openstack/horizon-56f8b66d48-7wr47" Jan 30 17:15:35 crc kubenswrapper[4712]: I0130 17:15:35.011862 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-64655dbc44-pvj2c"] Jan 30 17:15:35 crc kubenswrapper[4712]: I0130 17:15:35.013502 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-64655dbc44-pvj2c" Jan 30 17:15:35 crc kubenswrapper[4712]: I0130 17:15:35.028010 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6a28b495-ecf0-409e-9558-ee794a46dbd1-scripts\") pod \"horizon-64655dbc44-pvj2c\" (UID: \"6a28b495-ecf0-409e-9558-ee794a46dbd1\") " pod="openstack/horizon-64655dbc44-pvj2c" Jan 30 17:15:35 crc kubenswrapper[4712]: I0130 17:15:35.028083 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a28b495-ecf0-409e-9558-ee794a46dbd1-logs\") pod \"horizon-64655dbc44-pvj2c\" (UID: \"6a28b495-ecf0-409e-9558-ee794a46dbd1\") " pod="openstack/horizon-64655dbc44-pvj2c" Jan 30 17:15:35 crc kubenswrapper[4712]: I0130 17:15:35.028126 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6a28b495-ecf0-409e-9558-ee794a46dbd1-horizon-secret-key\") pod \"horizon-64655dbc44-pvj2c\" (UID: \"6a28b495-ecf0-409e-9558-ee794a46dbd1\") " pod="openstack/horizon-64655dbc44-pvj2c" Jan 30 17:15:35 crc kubenswrapper[4712]: I0130 17:15:35.028186 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a28b495-ecf0-409e-9558-ee794a46dbd1-horizon-tls-certs\") pod \"horizon-64655dbc44-pvj2c\" (UID: \"6a28b495-ecf0-409e-9558-ee794a46dbd1\") " pod="openstack/horizon-64655dbc44-pvj2c" Jan 30 17:15:35 crc kubenswrapper[4712]: I0130 17:15:35.028210 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6a28b495-ecf0-409e-9558-ee794a46dbd1-config-data\") pod \"horizon-64655dbc44-pvj2c\" (UID: \"6a28b495-ecf0-409e-9558-ee794a46dbd1\") " pod="openstack/horizon-64655dbc44-pvj2c" Jan 30 17:15:35 crc kubenswrapper[4712]: I0130 17:15:35.028265 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5tmk\" (UniqueName: \"kubernetes.io/projected/6a28b495-ecf0-409e-9558-ee794a46dbd1-kube-api-access-l5tmk\") pod \"horizon-64655dbc44-pvj2c\" (UID: \"6a28b495-ecf0-409e-9558-ee794a46dbd1\") " pod="openstack/horizon-64655dbc44-pvj2c" Jan 30 17:15:35 crc kubenswrapper[4712]: I0130 17:15:35.028401 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a28b495-ecf0-409e-9558-ee794a46dbd1-combined-ca-bundle\") pod \"horizon-64655dbc44-pvj2c\" (UID: \"6a28b495-ecf0-409e-9558-ee794a46dbd1\") " pod="openstack/horizon-64655dbc44-pvj2c" Jan 30 17:15:35 crc kubenswrapper[4712]: I0130 17:15:35.035364 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-64655dbc44-pvj2c"] Jan 30 17:15:35 crc kubenswrapper[4712]: I0130 17:15:35.071717 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-56f8b66d48-7wr47" Jan 30 17:15:35 crc kubenswrapper[4712]: I0130 17:15:35.131018 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a28b495-ecf0-409e-9558-ee794a46dbd1-combined-ca-bundle\") pod \"horizon-64655dbc44-pvj2c\" (UID: \"6a28b495-ecf0-409e-9558-ee794a46dbd1\") " pod="openstack/horizon-64655dbc44-pvj2c" Jan 30 17:15:35 crc kubenswrapper[4712]: I0130 17:15:35.131357 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6a28b495-ecf0-409e-9558-ee794a46dbd1-scripts\") pod \"horizon-64655dbc44-pvj2c\" (UID: \"6a28b495-ecf0-409e-9558-ee794a46dbd1\") " pod="openstack/horizon-64655dbc44-pvj2c" Jan 30 17:15:35 crc kubenswrapper[4712]: I0130 17:15:35.131483 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a28b495-ecf0-409e-9558-ee794a46dbd1-logs\") pod \"horizon-64655dbc44-pvj2c\" (UID: \"6a28b495-ecf0-409e-9558-ee794a46dbd1\") " pod="openstack/horizon-64655dbc44-pvj2c" Jan 30 17:15:35 crc kubenswrapper[4712]: I0130 17:15:35.132069 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a28b495-ecf0-409e-9558-ee794a46dbd1-logs\") pod \"horizon-64655dbc44-pvj2c\" (UID: \"6a28b495-ecf0-409e-9558-ee794a46dbd1\") " pod="openstack/horizon-64655dbc44-pvj2c" Jan 30 17:15:35 crc kubenswrapper[4712]: I0130 17:15:35.132112 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6a28b495-ecf0-409e-9558-ee794a46dbd1-horizon-secret-key\") pod \"horizon-64655dbc44-pvj2c\" (UID: \"6a28b495-ecf0-409e-9558-ee794a46dbd1\") " pod="openstack/horizon-64655dbc44-pvj2c" Jan 30 17:15:35 crc kubenswrapper[4712]: I0130 17:15:35.132187 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a28b495-ecf0-409e-9558-ee794a46dbd1-horizon-tls-certs\") pod \"horizon-64655dbc44-pvj2c\" (UID: \"6a28b495-ecf0-409e-9558-ee794a46dbd1\") " pod="openstack/horizon-64655dbc44-pvj2c" Jan 30 17:15:35 crc kubenswrapper[4712]: I0130 17:15:35.132211 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6a28b495-ecf0-409e-9558-ee794a46dbd1-config-data\") pod \"horizon-64655dbc44-pvj2c\" (UID: \"6a28b495-ecf0-409e-9558-ee794a46dbd1\") " pod="openstack/horizon-64655dbc44-pvj2c" Jan 30 17:15:35 crc kubenswrapper[4712]: I0130 17:15:35.132299 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6a28b495-ecf0-409e-9558-ee794a46dbd1-scripts\") pod \"horizon-64655dbc44-pvj2c\" (UID: \"6a28b495-ecf0-409e-9558-ee794a46dbd1\") " pod="openstack/horizon-64655dbc44-pvj2c" Jan 30 17:15:35 crc kubenswrapper[4712]: I0130 17:15:35.132675 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5tmk\" (UniqueName: \"kubernetes.io/projected/6a28b495-ecf0-409e-9558-ee794a46dbd1-kube-api-access-l5tmk\") pod \"horizon-64655dbc44-pvj2c\" (UID: \"6a28b495-ecf0-409e-9558-ee794a46dbd1\") " pod="openstack/horizon-64655dbc44-pvj2c" Jan 30 17:15:35 crc kubenswrapper[4712]: I0130 17:15:35.133161 4712 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/6a28b495-ecf0-409e-9558-ee794a46dbd1-config-data\") pod \"horizon-64655dbc44-pvj2c\" (UID: \"6a28b495-ecf0-409e-9558-ee794a46dbd1\") " pod="openstack/horizon-64655dbc44-pvj2c" Jan 30 17:15:35 crc kubenswrapper[4712]: I0130 17:15:35.137982 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/6a28b495-ecf0-409e-9558-ee794a46dbd1-horizon-secret-key\") pod \"horizon-64655dbc44-pvj2c\" (UID: \"6a28b495-ecf0-409e-9558-ee794a46dbd1\") " pod="openstack/horizon-64655dbc44-pvj2c" Jan 30 17:15:35 crc kubenswrapper[4712]: I0130 17:15:35.141432 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6a28b495-ecf0-409e-9558-ee794a46dbd1-combined-ca-bundle\") pod \"horizon-64655dbc44-pvj2c\" (UID: \"6a28b495-ecf0-409e-9558-ee794a46dbd1\") " pod="openstack/horizon-64655dbc44-pvj2c" Jan 30 17:15:35 crc kubenswrapper[4712]: I0130 17:15:35.172501 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5tmk\" (UniqueName: \"kubernetes.io/projected/6a28b495-ecf0-409e-9558-ee794a46dbd1-kube-api-access-l5tmk\") pod \"horizon-64655dbc44-pvj2c\" (UID: \"6a28b495-ecf0-409e-9558-ee794a46dbd1\") " pod="openstack/horizon-64655dbc44-pvj2c" Jan 30 17:15:35 crc kubenswrapper[4712]: I0130 17:15:35.175284 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/6a28b495-ecf0-409e-9558-ee794a46dbd1-horizon-tls-certs\") pod \"horizon-64655dbc44-pvj2c\" (UID: \"6a28b495-ecf0-409e-9558-ee794a46dbd1\") " pod="openstack/horizon-64655dbc44-pvj2c" Jan 30 17:15:35 crc kubenswrapper[4712]: I0130 17:15:35.352210 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-64655dbc44-pvj2c" Jan 30 17:15:36 crc kubenswrapper[4712]: I0130 17:15:36.459453 4712 generic.go:334] "Generic (PLEG): container finished" podID="89d325b5-bb94-4295-a169-465b4b0b73be" containerID="6b1338b18a4ad5f3e9405fd9439035f013b0591a978c5fddbbb84b304a0b47e1" exitCode=0 Jan 30 17:15:36 crc kubenswrapper[4712]: I0130 17:15:36.459518 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fkk2v" event={"ID":"89d325b5-bb94-4295-a169-465b4b0b73be","Type":"ContainerDied","Data":"6b1338b18a4ad5f3e9405fd9439035f013b0591a978c5fddbbb84b304a0b47e1"} Jan 30 17:15:36 crc kubenswrapper[4712]: I0130 17:15:36.484969 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-785d8bcb8c-qkp54" Jan 30 17:15:36 crc kubenswrapper[4712]: I0130 17:15:36.565891 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-99pvq"] Jan 30 17:15:36 crc kubenswrapper[4712]: I0130 17:15:36.566179 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-74f6bcbc87-99pvq" podUID="cd22ab3c-d638-45c9-b107-6c46494b1343" containerName="dnsmasq-dns" containerID="cri-o://f4760433c4d3e595ab4f1bbe427f619cf5766be16385fe803af0980ff8735001" gracePeriod=10 Jan 30 17:15:37 crc kubenswrapper[4712]: I0130 17:15:37.488680 4712 generic.go:334] "Generic (PLEG): container finished" podID="cd22ab3c-d638-45c9-b107-6c46494b1343" containerID="f4760433c4d3e595ab4f1bbe427f619cf5766be16385fe803af0980ff8735001" exitCode=0 Jan 30 17:15:37 crc kubenswrapper[4712]: I0130 17:15:37.488967 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-99pvq" event={"ID":"cd22ab3c-d638-45c9-b107-6c46494b1343","Type":"ContainerDied","Data":"f4760433c4d3e595ab4f1bbe427f619cf5766be16385fe803af0980ff8735001"} Jan 30 17:15:43 crc kubenswrapper[4712]: E0130 17:15:43.361249 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 30 17:15:43 crc kubenswrapper[4712]: E0130 17:15:43.361923 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c 
Jan 30 17:15:43 crc kubenswrapper[4712]: E0130 17:15:43.361249 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified"
Jan 30 17:15:43 crc kubenswrapper[4712]: E0130 17:15:43.361923 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n695hb6h5dfh685h54chbdh686hbh64h9bh567h59dh5fh8dh57fh5bh59chb7h664h5b4h565h8dhc8h548hb4h5dfhd7h577h585h688h684hb8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-62p8f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(3896ac30-4d2d-4bc2-bfc3-4352d7d586de): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 17:15:43 crc kubenswrapper[4712]: I0130 17:15:43.435281 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-99pvq" podUID="cd22ab3c-d638-45c9-b107-6c46494b1343" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: i/o timeout"
Jan 30 17:15:43 crc kubenswrapper[4712]: I0130 17:15:43.463450 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 17:15:43 crc kubenswrapper[4712]: I0130 17:15:43.552083 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb","Type":"ContainerDied","Data":"946935b09230014871f9cb542cff145c9a257ebb29ea9e031a415be9d2fd2414"}
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 17:15:43 crc kubenswrapper[4712]: I0130 17:15:43.552169 4712 scope.go:117] "RemoveContainer" containerID="af21ea543589007c01298c5245ab7238680d43d3a4303bba1174bb550c620524" Jan 30 17:15:43 crc kubenswrapper[4712]: I0130 17:15:43.602806 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb\" (UID: \"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb\") " Jan 30 17:15:43 crc kubenswrapper[4712]: I0130 17:15:43.602895 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2cfdc69e-1ea0-41a8-ba76-7f9942885fcb-httpd-run\") pod \"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb\" (UID: \"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb\") " Jan 30 17:15:43 crc kubenswrapper[4712]: I0130 17:15:43.602984 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xd29c\" (UniqueName: \"kubernetes.io/projected/2cfdc69e-1ea0-41a8-ba76-7f9942885fcb-kube-api-access-xd29c\") pod \"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb\" (UID: \"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb\") " Jan 30 17:15:43 crc kubenswrapper[4712]: I0130 17:15:43.603030 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cfdc69e-1ea0-41a8-ba76-7f9942885fcb-scripts\") pod \"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb\" (UID: \"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb\") " Jan 30 17:15:43 crc kubenswrapper[4712]: I0130 17:15:43.603059 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cfdc69e-1ea0-41a8-ba76-7f9942885fcb-config-data\") pod \"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb\" (UID: \"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb\") " Jan 30 17:15:43 crc kubenswrapper[4712]: I0130 17:15:43.603115 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cfdc69e-1ea0-41a8-ba76-7f9942885fcb-internal-tls-certs\") pod \"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb\" (UID: \"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb\") " Jan 30 17:15:43 crc kubenswrapper[4712]: I0130 17:15:43.603165 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfdc69e-1ea0-41a8-ba76-7f9942885fcb-combined-ca-bundle\") pod \"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb\" (UID: \"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb\") " Jan 30 17:15:43 crc kubenswrapper[4712]: I0130 17:15:43.603195 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cfdc69e-1ea0-41a8-ba76-7f9942885fcb-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2cfdc69e-1ea0-41a8-ba76-7f9942885fcb" (UID: "2cfdc69e-1ea0-41a8-ba76-7f9942885fcb"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:15:43 crc kubenswrapper[4712]: I0130 17:15:43.603229 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2cfdc69e-1ea0-41a8-ba76-7f9942885fcb-logs\") pod \"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb\" (UID: \"2cfdc69e-1ea0-41a8-ba76-7f9942885fcb\") " Jan 30 17:15:43 crc kubenswrapper[4712]: I0130 17:15:43.603633 4712 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2cfdc69e-1ea0-41a8-ba76-7f9942885fcb-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:43 crc kubenswrapper[4712]: I0130 17:15:43.604400 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2cfdc69e-1ea0-41a8-ba76-7f9942885fcb-logs" (OuterVolumeSpecName: "logs") pod "2cfdc69e-1ea0-41a8-ba76-7f9942885fcb" (UID: "2cfdc69e-1ea0-41a8-ba76-7f9942885fcb"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:15:43 crc kubenswrapper[4712]: I0130 17:15:43.609131 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "2cfdc69e-1ea0-41a8-ba76-7f9942885fcb" (UID: "2cfdc69e-1ea0-41a8-ba76-7f9942885fcb"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 17:15:43 crc kubenswrapper[4712]: I0130 17:15:43.627288 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2cfdc69e-1ea0-41a8-ba76-7f9942885fcb-kube-api-access-xd29c" (OuterVolumeSpecName: "kube-api-access-xd29c") pod "2cfdc69e-1ea0-41a8-ba76-7f9942885fcb" (UID: "2cfdc69e-1ea0-41a8-ba76-7f9942885fcb"). InnerVolumeSpecName "kube-api-access-xd29c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:15:43 crc kubenswrapper[4712]: I0130 17:15:43.627437 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cfdc69e-1ea0-41a8-ba76-7f9942885fcb-scripts" (OuterVolumeSpecName: "scripts") pod "2cfdc69e-1ea0-41a8-ba76-7f9942885fcb" (UID: "2cfdc69e-1ea0-41a8-ba76-7f9942885fcb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:15:43 crc kubenswrapper[4712]: I0130 17:15:43.655114 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cfdc69e-1ea0-41a8-ba76-7f9942885fcb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2cfdc69e-1ea0-41a8-ba76-7f9942885fcb" (UID: "2cfdc69e-1ea0-41a8-ba76-7f9942885fcb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:15:43 crc kubenswrapper[4712]: I0130 17:15:43.667485 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cfdc69e-1ea0-41a8-ba76-7f9942885fcb-config-data" (OuterVolumeSpecName: "config-data") pod "2cfdc69e-1ea0-41a8-ba76-7f9942885fcb" (UID: "2cfdc69e-1ea0-41a8-ba76-7f9942885fcb"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:15:43 crc kubenswrapper[4712]: I0130 17:15:43.672833 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2cfdc69e-1ea0-41a8-ba76-7f9942885fcb-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "2cfdc69e-1ea0-41a8-ba76-7f9942885fcb" (UID: "2cfdc69e-1ea0-41a8-ba76-7f9942885fcb"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:15:43 crc kubenswrapper[4712]: I0130 17:15:43.705401 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xd29c\" (UniqueName: \"kubernetes.io/projected/2cfdc69e-1ea0-41a8-ba76-7f9942885fcb-kube-api-access-xd29c\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:43 crc kubenswrapper[4712]: I0130 17:15:43.705433 4712 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cfdc69e-1ea0-41a8-ba76-7f9942885fcb-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:43 crc kubenswrapper[4712]: I0130 17:15:43.705443 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cfdc69e-1ea0-41a8-ba76-7f9942885fcb-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:43 crc kubenswrapper[4712]: I0130 17:15:43.705452 4712 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cfdc69e-1ea0-41a8-ba76-7f9942885fcb-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:43 crc kubenswrapper[4712]: I0130 17:15:43.705460 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cfdc69e-1ea0-41a8-ba76-7f9942885fcb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:43 crc kubenswrapper[4712]: I0130 17:15:43.705468 4712 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2cfdc69e-1ea0-41a8-ba76-7f9942885fcb-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:43 crc kubenswrapper[4712]: I0130 17:15:43.705501 4712 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 30 17:15:43 crc kubenswrapper[4712]: I0130 17:15:43.731491 4712 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 30 17:15:43 crc kubenswrapper[4712]: I0130 17:15:43.807376 4712 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:43 crc kubenswrapper[4712]: I0130 17:15:43.881533 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 17:15:43 crc kubenswrapper[4712]: I0130 17:15:43.909823 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 17:15:43 crc kubenswrapper[4712]: I0130 17:15:43.920604 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 17:15:43 crc kubenswrapper[4712]: E0130 17:15:43.921079 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cfdc69e-1ea0-41a8-ba76-7f9942885fcb" containerName="glance-log" Jan 30 17:15:43 crc kubenswrapper[4712]: I0130 
17:15:43.921095 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cfdc69e-1ea0-41a8-ba76-7f9942885fcb" containerName="glance-log" Jan 30 17:15:43 crc kubenswrapper[4712]: E0130 17:15:43.921126 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2cfdc69e-1ea0-41a8-ba76-7f9942885fcb" containerName="glance-httpd" Jan 30 17:15:43 crc kubenswrapper[4712]: I0130 17:15:43.921134 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="2cfdc69e-1ea0-41a8-ba76-7f9942885fcb" containerName="glance-httpd" Jan 30 17:15:43 crc kubenswrapper[4712]: I0130 17:15:43.921353 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cfdc69e-1ea0-41a8-ba76-7f9942885fcb" containerName="glance-log" Jan 30 17:15:43 crc kubenswrapper[4712]: I0130 17:15:43.921372 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="2cfdc69e-1ea0-41a8-ba76-7f9942885fcb" containerName="glance-httpd" Jan 30 17:15:43 crc kubenswrapper[4712]: I0130 17:15:43.922433 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 17:15:43 crc kubenswrapper[4712]: I0130 17:15:43.928880 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 30 17:15:43 crc kubenswrapper[4712]: I0130 17:15:43.929126 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 17:15:43 crc kubenswrapper[4712]: I0130 17:15:43.930300 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 17:15:44 crc kubenswrapper[4712]: I0130 17:15:44.113666 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"20ecbbdb-700e-4050-973f-bb7a19df3869\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:44 crc kubenswrapper[4712]: I0130 17:15:44.113750 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20ecbbdb-700e-4050-973f-bb7a19df3869-scripts\") pod \"glance-default-internal-api-0\" (UID: \"20ecbbdb-700e-4050-973f-bb7a19df3869\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:44 crc kubenswrapper[4712]: I0130 17:15:44.113806 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20ecbbdb-700e-4050-973f-bb7a19df3869-config-data\") pod \"glance-default-internal-api-0\" (UID: \"20ecbbdb-700e-4050-973f-bb7a19df3869\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:44 crc kubenswrapper[4712]: I0130 17:15:44.113841 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/20ecbbdb-700e-4050-973f-bb7a19df3869-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"20ecbbdb-700e-4050-973f-bb7a19df3869\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:44 crc kubenswrapper[4712]: I0130 17:15:44.113864 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20ecbbdb-700e-4050-973f-bb7a19df3869-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"20ecbbdb-700e-4050-973f-bb7a19df3869\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:44 crc kubenswrapper[4712]: I0130 17:15:44.113879 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20ecbbdb-700e-4050-973f-bb7a19df3869-logs\") pod \"glance-default-internal-api-0\" (UID: \"20ecbbdb-700e-4050-973f-bb7a19df3869\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:44 crc kubenswrapper[4712]: I0130 17:15:44.113897 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvbkf\" (UniqueName: \"kubernetes.io/projected/20ecbbdb-700e-4050-973f-bb7a19df3869-kube-api-access-lvbkf\") pod \"glance-default-internal-api-0\" (UID: \"20ecbbdb-700e-4050-973f-bb7a19df3869\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:44 crc kubenswrapper[4712]: I0130 17:15:44.113940 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/20ecbbdb-700e-4050-973f-bb7a19df3869-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"20ecbbdb-700e-4050-973f-bb7a19df3869\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:44 crc kubenswrapper[4712]: I0130 17:15:44.215438 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"20ecbbdb-700e-4050-973f-bb7a19df3869\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:44 crc kubenswrapper[4712]: I0130 17:15:44.215549 4712 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"20ecbbdb-700e-4050-973f-bb7a19df3869\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Jan 30 17:15:44 crc kubenswrapper[4712]: I0130 17:15:44.215709 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20ecbbdb-700e-4050-973f-bb7a19df3869-scripts\") pod \"glance-default-internal-api-0\" (UID: \"20ecbbdb-700e-4050-973f-bb7a19df3869\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:44 crc kubenswrapper[4712]: I0130 17:15:44.215755 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20ecbbdb-700e-4050-973f-bb7a19df3869-config-data\") pod \"glance-default-internal-api-0\" (UID: \"20ecbbdb-700e-4050-973f-bb7a19df3869\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:44 crc kubenswrapper[4712]: I0130 17:15:44.215835 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/20ecbbdb-700e-4050-973f-bb7a19df3869-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"20ecbbdb-700e-4050-973f-bb7a19df3869\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:44 crc kubenswrapper[4712]: I0130 17:15:44.215861 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20ecbbdb-700e-4050-973f-bb7a19df3869-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: 
\"20ecbbdb-700e-4050-973f-bb7a19df3869\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:44 crc kubenswrapper[4712]: I0130 17:15:44.216291 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/20ecbbdb-700e-4050-973f-bb7a19df3869-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"20ecbbdb-700e-4050-973f-bb7a19df3869\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:44 crc kubenswrapper[4712]: I0130 17:15:44.216350 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20ecbbdb-700e-4050-973f-bb7a19df3869-logs\") pod \"glance-default-internal-api-0\" (UID: \"20ecbbdb-700e-4050-973f-bb7a19df3869\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:44 crc kubenswrapper[4712]: I0130 17:15:44.216371 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvbkf\" (UniqueName: \"kubernetes.io/projected/20ecbbdb-700e-4050-973f-bb7a19df3869-kube-api-access-lvbkf\") pod \"glance-default-internal-api-0\" (UID: \"20ecbbdb-700e-4050-973f-bb7a19df3869\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:44 crc kubenswrapper[4712]: I0130 17:15:44.216587 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20ecbbdb-700e-4050-973f-bb7a19df3869-logs\") pod \"glance-default-internal-api-0\" (UID: \"20ecbbdb-700e-4050-973f-bb7a19df3869\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:44 crc kubenswrapper[4712]: I0130 17:15:44.216625 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/20ecbbdb-700e-4050-973f-bb7a19df3869-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"20ecbbdb-700e-4050-973f-bb7a19df3869\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:44 crc kubenswrapper[4712]: I0130 17:15:44.223506 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/20ecbbdb-700e-4050-973f-bb7a19df3869-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"20ecbbdb-700e-4050-973f-bb7a19df3869\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:44 crc kubenswrapper[4712]: I0130 17:15:44.225717 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20ecbbdb-700e-4050-973f-bb7a19df3869-config-data\") pod \"glance-default-internal-api-0\" (UID: \"20ecbbdb-700e-4050-973f-bb7a19df3869\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:44 crc kubenswrapper[4712]: I0130 17:15:44.226260 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20ecbbdb-700e-4050-973f-bb7a19df3869-scripts\") pod \"glance-default-internal-api-0\" (UID: \"20ecbbdb-700e-4050-973f-bb7a19df3869\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:44 crc kubenswrapper[4712]: I0130 17:15:44.230708 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20ecbbdb-700e-4050-973f-bb7a19df3869-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"20ecbbdb-700e-4050-973f-bb7a19df3869\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:44 crc kubenswrapper[4712]: 
I0130 17:15:44.235646 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvbkf\" (UniqueName: \"kubernetes.io/projected/20ecbbdb-700e-4050-973f-bb7a19df3869-kube-api-access-lvbkf\") pod \"glance-default-internal-api-0\" (UID: \"20ecbbdb-700e-4050-973f-bb7a19df3869\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:44 crc kubenswrapper[4712]: I0130 17:15:44.256450 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"20ecbbdb-700e-4050-973f-bb7a19df3869\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:15:44 crc kubenswrapper[4712]: I0130 17:15:44.549482 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 17:15:45 crc kubenswrapper[4712]: I0130 17:15:45.813723 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2cfdc69e-1ea0-41a8-ba76-7f9942885fcb" path="/var/lib/kubelet/pods/2cfdc69e-1ea0-41a8-ba76-7f9942885fcb/volumes" Jan 30 17:15:48 crc kubenswrapper[4712]: I0130 17:15:48.435649 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-99pvq" podUID="cd22ab3c-d638-45c9-b107-6c46494b1343" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: i/o timeout" Jan 30 17:15:48 crc kubenswrapper[4712]: E0130 17:15:48.908275 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 30 17:15:48 crc kubenswrapper[4712]: E0130 17:15:48.908773 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5bdh69h79h56ch66bh98hf4h9dhbch594h679hch5d8h648h66fhf9h5hf4h5dbh5f4h54bhf5h5dhc4h5cfhf8h58dh644h659hbfh5f6h657q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qxbn4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-5c996b5c45-mvl64_openstack(b123ecaa-e5d2-4daf-b377-07056dd21f37): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 17:15:48 crc kubenswrapper[4712]: E0130 17:15:48.914183 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-5c996b5c45-mvl64" podUID="b123ecaa-e5d2-4daf-b377-07056dd21f37" Jan 30 17:15:48 crc kubenswrapper[4712]: E0130 17:15:48.916838 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 30 17:15:48 crc kubenswrapper[4712]: E0130 17:15:48.917142 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n65dh5fbh64h57bh66ch688h7h5dch9bhd5h75hdfh65ch57chc8h695hcfh698hc4hdbh675h7ch69h54dh4h689h64fh66h5h5c7h5f4h58q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d5h9s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-66565467f5-6r87q_openstack(55497986-760e-41ba-8835-7e0c7d80c1df): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 17:15:48 crc kubenswrapper[4712]: E0130 17:15:48.921080 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-66565467f5-6r87q" podUID="55497986-760e-41ba-8835-7e0c7d80c1df" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.008765 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-fkk2v" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.021767 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.113656 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/89d325b5-bb94-4295-a169-465b4b0b73be-fernet-keys\") pod \"89d325b5-bb94-4295-a169-465b4b0b73be\" (UID: \"89d325b5-bb94-4295-a169-465b4b0b73be\") " Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.113828 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89d325b5-bb94-4295-a169-465b4b0b73be-scripts\") pod \"89d325b5-bb94-4295-a169-465b4b0b73be\" (UID: \"89d325b5-bb94-4295-a169-465b4b0b73be\") " Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.113874 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89d325b5-bb94-4295-a169-465b4b0b73be-combined-ca-bundle\") pod \"89d325b5-bb94-4295-a169-465b4b0b73be\" (UID: \"89d325b5-bb94-4295-a169-465b4b0b73be\") " Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.113932 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/89d325b5-bb94-4295-a169-465b4b0b73be-credential-keys\") pod \"89d325b5-bb94-4295-a169-465b4b0b73be\" (UID: \"89d325b5-bb94-4295-a169-465b4b0b73be\") " Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.113974 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5wp8\" (UniqueName: \"kubernetes.io/projected/89d325b5-bb94-4295-a169-465b4b0b73be-kube-api-access-g5wp8\") pod \"89d325b5-bb94-4295-a169-465b4b0b73be\" (UID: \"89d325b5-bb94-4295-a169-465b4b0b73be\") " Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.114020 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89d325b5-bb94-4295-a169-465b4b0b73be-config-data\") pod \"89d325b5-bb94-4295-a169-465b4b0b73be\" (UID: \"89d325b5-bb94-4295-a169-465b4b0b73be\") " Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.120493 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89d325b5-bb94-4295-a169-465b4b0b73be-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "89d325b5-bb94-4295-a169-465b4b0b73be" (UID: "89d325b5-bb94-4295-a169-465b4b0b73be"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.121005 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89d325b5-bb94-4295-a169-465b4b0b73be-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "89d325b5-bb94-4295-a169-465b4b0b73be" (UID: "89d325b5-bb94-4295-a169-465b4b0b73be"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.121602 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89d325b5-bb94-4295-a169-465b4b0b73be-kube-api-access-g5wp8" (OuterVolumeSpecName: "kube-api-access-g5wp8") pod "89d325b5-bb94-4295-a169-465b4b0b73be" (UID: "89d325b5-bb94-4295-a169-465b4b0b73be"). InnerVolumeSpecName "kube-api-access-g5wp8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.137604 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89d325b5-bb94-4295-a169-465b4b0b73be-scripts" (OuterVolumeSpecName: "scripts") pod "89d325b5-bb94-4295-a169-465b4b0b73be" (UID: "89d325b5-bb94-4295-a169-465b4b0b73be"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.147123 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89d325b5-bb94-4295-a169-465b4b0b73be-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89d325b5-bb94-4295-a169-465b4b0b73be" (UID: "89d325b5-bb94-4295-a169-465b4b0b73be"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.151357 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89d325b5-bb94-4295-a169-465b4b0b73be-config-data" (OuterVolumeSpecName: "config-data") pod "89d325b5-bb94-4295-a169-465b4b0b73be" (UID: "89d325b5-bb94-4295-a169-465b4b0b73be"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.216838 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb119504-1453-45ce-9a6a-65df12e3e9f8-combined-ca-bundle\") pod \"fb119504-1453-45ce-9a6a-65df12e3e9f8\" (UID: \"fb119504-1453-45ce-9a6a-65df12e3e9f8\") " Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.216938 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb119504-1453-45ce-9a6a-65df12e3e9f8-logs\") pod \"fb119504-1453-45ce-9a6a-65df12e3e9f8\" (UID: \"fb119504-1453-45ce-9a6a-65df12e3e9f8\") " Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.216971 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb119504-1453-45ce-9a6a-65df12e3e9f8-scripts\") pod \"fb119504-1453-45ce-9a6a-65df12e3e9f8\" (UID: \"fb119504-1453-45ce-9a6a-65df12e3e9f8\") " Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.216985 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb119504-1453-45ce-9a6a-65df12e3e9f8-config-data\") pod \"fb119504-1453-45ce-9a6a-65df12e3e9f8\" (UID: \"fb119504-1453-45ce-9a6a-65df12e3e9f8\") " Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.217009 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fb119504-1453-45ce-9a6a-65df12e3e9f8-httpd-run\") pod \"fb119504-1453-45ce-9a6a-65df12e3e9f8\" (UID: \"fb119504-1453-45ce-9a6a-65df12e3e9f8\") " Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.217025 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb119504-1453-45ce-9a6a-65df12e3e9f8-public-tls-certs\") pod \"fb119504-1453-45ce-9a6a-65df12e3e9f8\" (UID: \"fb119504-1453-45ce-9a6a-65df12e3e9f8\") " Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.217053 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"fb119504-1453-45ce-9a6a-65df12e3e9f8\" (UID: \"fb119504-1453-45ce-9a6a-65df12e3e9f8\") " Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.217094 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9xzd\" (UniqueName: \"kubernetes.io/projected/fb119504-1453-45ce-9a6a-65df12e3e9f8-kube-api-access-d9xzd\") pod \"fb119504-1453-45ce-9a6a-65df12e3e9f8\" (UID: \"fb119504-1453-45ce-9a6a-65df12e3e9f8\") " Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.217480 4712 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/89d325b5-bb94-4295-a169-465b4b0b73be-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.217501 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5wp8\" (UniqueName: \"kubernetes.io/projected/89d325b5-bb94-4295-a169-465b4b0b73be-kube-api-access-g5wp8\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.217511 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89d325b5-bb94-4295-a169-465b4b0b73be-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.217519 4712 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/89d325b5-bb94-4295-a169-465b4b0b73be-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.217527 4712 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/89d325b5-bb94-4295-a169-465b4b0b73be-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.217535 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89d325b5-bb94-4295-a169-465b4b0b73be-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.218081 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb119504-1453-45ce-9a6a-65df12e3e9f8-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fb119504-1453-45ce-9a6a-65df12e3e9f8" (UID: "fb119504-1453-45ce-9a6a-65df12e3e9f8"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.218169 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb119504-1453-45ce-9a6a-65df12e3e9f8-logs" (OuterVolumeSpecName: "logs") pod "fb119504-1453-45ce-9a6a-65df12e3e9f8" (UID: "fb119504-1453-45ce-9a6a-65df12e3e9f8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.222280 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb119504-1453-45ce-9a6a-65df12e3e9f8-kube-api-access-d9xzd" (OuterVolumeSpecName: "kube-api-access-d9xzd") pod "fb119504-1453-45ce-9a6a-65df12e3e9f8" (UID: "fb119504-1453-45ce-9a6a-65df12e3e9f8"). InnerVolumeSpecName "kube-api-access-d9xzd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.223341 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb119504-1453-45ce-9a6a-65df12e3e9f8-scripts" (OuterVolumeSpecName: "scripts") pod "fb119504-1453-45ce-9a6a-65df12e3e9f8" (UID: "fb119504-1453-45ce-9a6a-65df12e3e9f8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.224570 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "fb119504-1453-45ce-9a6a-65df12e3e9f8" (UID: "fb119504-1453-45ce-9a6a-65df12e3e9f8"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.244768 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb119504-1453-45ce-9a6a-65df12e3e9f8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb119504-1453-45ce-9a6a-65df12e3e9f8" (UID: "fb119504-1453-45ce-9a6a-65df12e3e9f8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.265187 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb119504-1453-45ce-9a6a-65df12e3e9f8-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fb119504-1453-45ce-9a6a-65df12e3e9f8" (UID: "fb119504-1453-45ce-9a6a-65df12e3e9f8"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.275277 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb119504-1453-45ce-9a6a-65df12e3e9f8-config-data" (OuterVolumeSpecName: "config-data") pod "fb119504-1453-45ce-9a6a-65df12e3e9f8" (UID: "fb119504-1453-45ce-9a6a-65df12e3e9f8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.318726 4712 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fb119504-1453-45ce-9a6a-65df12e3e9f8-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.318757 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb119504-1453-45ce-9a6a-65df12e3e9f8-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.318770 4712 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb119504-1453-45ce-9a6a-65df12e3e9f8-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.318780 4712 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fb119504-1453-45ce-9a6a-65df12e3e9f8-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.318809 4712 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fb119504-1453-45ce-9a6a-65df12e3e9f8-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.318844 4712 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.318854 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9xzd\" (UniqueName: \"kubernetes.io/projected/fb119504-1453-45ce-9a6a-65df12e3e9f8-kube-api-access-d9xzd\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.318864 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb119504-1453-45ce-9a6a-65df12e3e9f8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.337541 4712 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.420931 4712 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.609957 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fb119504-1453-45ce-9a6a-65df12e3e9f8","Type":"ContainerDied","Data":"f7507ae454c4439b2e43b54ee34ec531fb7beff71fef3b8d456f5715539ee6ef"} Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.610320 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.619067 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-fkk2v" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.620440 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-fkk2v" event={"ID":"89d325b5-bb94-4295-a169-465b4b0b73be","Type":"ContainerDied","Data":"9cca286032902c048b876a5605837eadd931223fb3ba3724d86ecfe75b7e7dde"} Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.620469 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cca286032902c048b876a5605837eadd931223fb3ba3724d86ecfe75b7e7dde" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.679224 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.692339 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.711409 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 17:15:49 crc kubenswrapper[4712]: E0130 17:15:49.715431 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb119504-1453-45ce-9a6a-65df12e3e9f8" containerName="glance-log" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.715450 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb119504-1453-45ce-9a6a-65df12e3e9f8" containerName="glance-log" Jan 30 17:15:49 crc kubenswrapper[4712]: E0130 17:15:49.715461 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb119504-1453-45ce-9a6a-65df12e3e9f8" containerName="glance-httpd" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.715468 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb119504-1453-45ce-9a6a-65df12e3e9f8" containerName="glance-httpd" Jan 30 17:15:49 crc kubenswrapper[4712]: E0130 17:15:49.715498 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89d325b5-bb94-4295-a169-465b4b0b73be" containerName="keystone-bootstrap" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.715505 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="89d325b5-bb94-4295-a169-465b4b0b73be" containerName="keystone-bootstrap" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.715676 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb119504-1453-45ce-9a6a-65df12e3e9f8" containerName="glance-httpd" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.715690 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb119504-1453-45ce-9a6a-65df12e3e9f8" containerName="glance-log" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.715705 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="89d325b5-bb94-4295-a169-465b4b0b73be" containerName="keystone-bootstrap" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.716535 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.720889 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.720987 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.728777 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.814409 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb119504-1453-45ce-9a6a-65df12e3e9f8" path="/var/lib/kubelet/pods/fb119504-1453-45ce-9a6a-65df12e3e9f8/volumes" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.835196 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2bc1f82-c383-4d0c-8346-3de0bb1a11d9-config-data\") pod \"glance-default-external-api-0\" (UID: \"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.835265 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2bc1f82-c383-4d0c-8346-3de0bb1a11d9-logs\") pod \"glance-default-external-api-0\" (UID: \"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.835322 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.835368 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvn5s\" (UniqueName: \"kubernetes.io/projected/e2bc1f82-c383-4d0c-8346-3de0bb1a11d9-kube-api-access-zvn5s\") pod \"glance-default-external-api-0\" (UID: \"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.835410 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2bc1f82-c383-4d0c-8346-3de0bb1a11d9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.835425 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e2bc1f82-c383-4d0c-8346-3de0bb1a11d9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.835443 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2bc1f82-c383-4d0c-8346-3de0bb1a11d9-scripts\") pod \"glance-default-external-api-0\" 
(UID: \"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.835463 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2bc1f82-c383-4d0c-8346-3de0bb1a11d9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.937013 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2bc1f82-c383-4d0c-8346-3de0bb1a11d9-logs\") pod \"glance-default-external-api-0\" (UID: \"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.937113 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.937182 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvn5s\" (UniqueName: \"kubernetes.io/projected/e2bc1f82-c383-4d0c-8346-3de0bb1a11d9-kube-api-access-zvn5s\") pod \"glance-default-external-api-0\" (UID: \"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.937265 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2bc1f82-c383-4d0c-8346-3de0bb1a11d9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.937287 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e2bc1f82-c383-4d0c-8346-3de0bb1a11d9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.937313 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2bc1f82-c383-4d0c-8346-3de0bb1a11d9-scripts\") pod \"glance-default-external-api-0\" (UID: \"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.937342 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2bc1f82-c383-4d0c-8346-3de0bb1a11d9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.937393 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2bc1f82-c383-4d0c-8346-3de0bb1a11d9-config-data\") pod \"glance-default-external-api-0\" (UID: \"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9\") " 
pod="openstack/glance-default-external-api-0" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.938853 4712 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.939766 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2bc1f82-c383-4d0c-8346-3de0bb1a11d9-logs\") pod \"glance-default-external-api-0\" (UID: \"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.940410 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e2bc1f82-c383-4d0c-8346-3de0bb1a11d9-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.957748 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2bc1f82-c383-4d0c-8346-3de0bb1a11d9-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.964655 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2bc1f82-c383-4d0c-8346-3de0bb1a11d9-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.965173 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2bc1f82-c383-4d0c-8346-3de0bb1a11d9-scripts\") pod \"glance-default-external-api-0\" (UID: \"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.967338 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvn5s\" (UniqueName: \"kubernetes.io/projected/e2bc1f82-c383-4d0c-8346-3de0bb1a11d9-kube-api-access-zvn5s\") pod \"glance-default-external-api-0\" (UID: \"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:49 crc kubenswrapper[4712]: I0130 17:15:49.971590 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2bc1f82-c383-4d0c-8346-3de0bb1a11d9-config-data\") pod \"glance-default-external-api-0\" (UID: \"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:50 crc kubenswrapper[4712]: I0130 17:15:50.011814 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9\") " pod="openstack/glance-default-external-api-0" Jan 30 17:15:50 crc kubenswrapper[4712]: I0130 17:15:50.055647 4712 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 30 17:15:50 crc kubenswrapper[4712]: I0130 17:15:50.152166 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-fkk2v"] Jan 30 17:15:50 crc kubenswrapper[4712]: I0130 17:15:50.163845 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-fkk2v"] Jan 30 17:15:50 crc kubenswrapper[4712]: I0130 17:15:50.204674 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-p8pht"] Jan 30 17:15:50 crc kubenswrapper[4712]: I0130 17:15:50.211282 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-p8pht" Jan 30 17:15:50 crc kubenswrapper[4712]: I0130 17:15:50.213507 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 30 17:15:50 crc kubenswrapper[4712]: I0130 17:15:50.214407 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 30 17:15:50 crc kubenswrapper[4712]: I0130 17:15:50.214545 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dxmtz" Jan 30 17:15:50 crc kubenswrapper[4712]: I0130 17:15:50.214684 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 30 17:15:50 crc kubenswrapper[4712]: I0130 17:15:50.216717 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 30 17:15:50 crc kubenswrapper[4712]: I0130 17:15:50.219413 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-p8pht"] Jan 30 17:15:50 crc kubenswrapper[4712]: I0130 17:15:50.351196 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef70cf25-e984-4397-b60e-78199d8f41bf-config-data\") pod \"keystone-bootstrap-p8pht\" (UID: \"ef70cf25-e984-4397-b60e-78199d8f41bf\") " pod="openstack/keystone-bootstrap-p8pht" Jan 30 17:15:50 crc kubenswrapper[4712]: I0130 17:15:50.351310 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef70cf25-e984-4397-b60e-78199d8f41bf-scripts\") pod \"keystone-bootstrap-p8pht\" (UID: \"ef70cf25-e984-4397-b60e-78199d8f41bf\") " pod="openstack/keystone-bootstrap-p8pht" Jan 30 17:15:50 crc kubenswrapper[4712]: I0130 17:15:50.351356 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ef70cf25-e984-4397-b60e-78199d8f41bf-fernet-keys\") pod \"keystone-bootstrap-p8pht\" (UID: \"ef70cf25-e984-4397-b60e-78199d8f41bf\") " pod="openstack/keystone-bootstrap-p8pht" Jan 30 17:15:50 crc kubenswrapper[4712]: I0130 17:15:50.351424 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef70cf25-e984-4397-b60e-78199d8f41bf-combined-ca-bundle\") pod \"keystone-bootstrap-p8pht\" (UID: \"ef70cf25-e984-4397-b60e-78199d8f41bf\") " pod="openstack/keystone-bootstrap-p8pht" Jan 30 17:15:50 crc kubenswrapper[4712]: I0130 17:15:50.351451 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: 
\"kubernetes.io/secret/ef70cf25-e984-4397-b60e-78199d8f41bf-credential-keys\") pod \"keystone-bootstrap-p8pht\" (UID: \"ef70cf25-e984-4397-b60e-78199d8f41bf\") " pod="openstack/keystone-bootstrap-p8pht" Jan 30 17:15:50 crc kubenswrapper[4712]: I0130 17:15:50.351530 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n58wf\" (UniqueName: \"kubernetes.io/projected/ef70cf25-e984-4397-b60e-78199d8f41bf-kube-api-access-n58wf\") pod \"keystone-bootstrap-p8pht\" (UID: \"ef70cf25-e984-4397-b60e-78199d8f41bf\") " pod="openstack/keystone-bootstrap-p8pht" Jan 30 17:15:50 crc kubenswrapper[4712]: I0130 17:15:50.452668 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef70cf25-e984-4397-b60e-78199d8f41bf-config-data\") pod \"keystone-bootstrap-p8pht\" (UID: \"ef70cf25-e984-4397-b60e-78199d8f41bf\") " pod="openstack/keystone-bootstrap-p8pht" Jan 30 17:15:50 crc kubenswrapper[4712]: I0130 17:15:50.452726 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef70cf25-e984-4397-b60e-78199d8f41bf-scripts\") pod \"keystone-bootstrap-p8pht\" (UID: \"ef70cf25-e984-4397-b60e-78199d8f41bf\") " pod="openstack/keystone-bootstrap-p8pht" Jan 30 17:15:50 crc kubenswrapper[4712]: I0130 17:15:50.452753 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ef70cf25-e984-4397-b60e-78199d8f41bf-fernet-keys\") pod \"keystone-bootstrap-p8pht\" (UID: \"ef70cf25-e984-4397-b60e-78199d8f41bf\") " pod="openstack/keystone-bootstrap-p8pht" Jan 30 17:15:50 crc kubenswrapper[4712]: I0130 17:15:50.452817 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef70cf25-e984-4397-b60e-78199d8f41bf-combined-ca-bundle\") pod \"keystone-bootstrap-p8pht\" (UID: \"ef70cf25-e984-4397-b60e-78199d8f41bf\") " pod="openstack/keystone-bootstrap-p8pht" Jan 30 17:15:50 crc kubenswrapper[4712]: I0130 17:15:50.452836 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ef70cf25-e984-4397-b60e-78199d8f41bf-credential-keys\") pod \"keystone-bootstrap-p8pht\" (UID: \"ef70cf25-e984-4397-b60e-78199d8f41bf\") " pod="openstack/keystone-bootstrap-p8pht" Jan 30 17:15:50 crc kubenswrapper[4712]: I0130 17:15:50.452888 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n58wf\" (UniqueName: \"kubernetes.io/projected/ef70cf25-e984-4397-b60e-78199d8f41bf-kube-api-access-n58wf\") pod \"keystone-bootstrap-p8pht\" (UID: \"ef70cf25-e984-4397-b60e-78199d8f41bf\") " pod="openstack/keystone-bootstrap-p8pht" Jan 30 17:15:50 crc kubenswrapper[4712]: I0130 17:15:50.457590 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ef70cf25-e984-4397-b60e-78199d8f41bf-fernet-keys\") pod \"keystone-bootstrap-p8pht\" (UID: \"ef70cf25-e984-4397-b60e-78199d8f41bf\") " pod="openstack/keystone-bootstrap-p8pht" Jan 30 17:15:50 crc kubenswrapper[4712]: I0130 17:15:50.458553 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef70cf25-e984-4397-b60e-78199d8f41bf-combined-ca-bundle\") pod \"keystone-bootstrap-p8pht\" (UID: 
\"ef70cf25-e984-4397-b60e-78199d8f41bf\") " pod="openstack/keystone-bootstrap-p8pht" Jan 30 17:15:50 crc kubenswrapper[4712]: I0130 17:15:50.460612 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef70cf25-e984-4397-b60e-78199d8f41bf-config-data\") pod \"keystone-bootstrap-p8pht\" (UID: \"ef70cf25-e984-4397-b60e-78199d8f41bf\") " pod="openstack/keystone-bootstrap-p8pht" Jan 30 17:15:50 crc kubenswrapper[4712]: I0130 17:15:50.465931 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ef70cf25-e984-4397-b60e-78199d8f41bf-credential-keys\") pod \"keystone-bootstrap-p8pht\" (UID: \"ef70cf25-e984-4397-b60e-78199d8f41bf\") " pod="openstack/keystone-bootstrap-p8pht" Jan 30 17:15:50 crc kubenswrapper[4712]: I0130 17:15:50.466292 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef70cf25-e984-4397-b60e-78199d8f41bf-scripts\") pod \"keystone-bootstrap-p8pht\" (UID: \"ef70cf25-e984-4397-b60e-78199d8f41bf\") " pod="openstack/keystone-bootstrap-p8pht" Jan 30 17:15:50 crc kubenswrapper[4712]: I0130 17:15:50.474399 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n58wf\" (UniqueName: \"kubernetes.io/projected/ef70cf25-e984-4397-b60e-78199d8f41bf-kube-api-access-n58wf\") pod \"keystone-bootstrap-p8pht\" (UID: \"ef70cf25-e984-4397-b60e-78199d8f41bf\") " pod="openstack/keystone-bootstrap-p8pht" Jan 30 17:15:50 crc kubenswrapper[4712]: I0130 17:15:50.537502 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-p8pht" Jan 30 17:15:51 crc kubenswrapper[4712]: E0130 17:15:51.469575 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-horizon:current-podified" Jan 30 17:15:51 crc kubenswrapper[4712]: E0130 17:15:51.470128 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:horizon-log,Image:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,Command:[/bin/bash],Args:[-c tail -n+1 -F 
/var/log/horizon/horizon.log],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n5c8h656hdfh8dh5d8h598h64ch68ch5fchddh5d7h6ch665h65h5ffh89h5f4h588h5c9h686h65fh98h5d6h79h5cchcfh579h5dfh55fh6h5d6h587q,ValueFrom:nil,},EnvVar{Name:ENABLE_DESIGNATE,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_HEAT,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_IRONIC,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_MANILA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_OCTAVIA,Value:yes,ValueFrom:nil,},EnvVar{Name:ENABLE_WATCHER,Value:no,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},EnvVar{Name:UNPACK_THEME,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:logs,ReadOnly:false,MountPath:/var/log/horizon,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pfwv6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*48,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*42400,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-78bf8d4bc-dzt7l_openstack(9d98cb77-f784-431c-bd65-35261f546cd0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 17:15:51 crc kubenswrapper[4712]: E0130 17:15:51.473159 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"horizon-log\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"horizon\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-horizon:current-podified\\\"\"]" pod="openstack/horizon-78bf8d4bc-dzt7l" podUID="9d98cb77-f784-431c-bd65-35261f546cd0"
Jan 30 17:15:51 crc kubenswrapper[4712]: I0130 17:15:51.833307 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89d325b5-bb94-4295-a169-465b4b0b73be" path="/var/lib/kubelet/pods/89d325b5-bb94-4295-a169-465b4b0b73be/volumes"
Jan 30 17:15:53 crc kubenswrapper[4712]: I0130 17:15:53.436290 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-99pvq" podUID="cd22ab3c-d638-45c9-b107-6c46494b1343" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: i/o timeout"
Jan 30 17:15:53 crc kubenswrapper[4712]: I0130 17:15:53.437006 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-74f6bcbc87-99pvq"
Jan 30 17:15:58 crc kubenswrapper[4712]: I0130 17:15:58.437483 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-99pvq" podUID="cd22ab3c-d638-45c9-b107-6c46494b1343" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: i/o timeout"
Jan 30 17:15:58 crc kubenswrapper[4712]: I0130 17:15:58.548199 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-99pvq"
Jan 30 17:15:58 crc kubenswrapper[4712]: I0130 17:15:58.727997 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd22ab3c-d638-45c9-b107-6c46494b1343-dns-swift-storage-0\") pod \"cd22ab3c-d638-45c9-b107-6c46494b1343\" (UID: \"cd22ab3c-d638-45c9-b107-6c46494b1343\") "
Jan 30 17:15:58 crc kubenswrapper[4712]: I0130 17:15:58.728080 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd22ab3c-d638-45c9-b107-6c46494b1343-config\") pod \"cd22ab3c-d638-45c9-b107-6c46494b1343\" (UID: \"cd22ab3c-d638-45c9-b107-6c46494b1343\") "
Jan 30 17:15:58 crc kubenswrapper[4712]: I0130 17:15:58.728139 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd22ab3c-d638-45c9-b107-6c46494b1343-ovsdbserver-sb\") pod \"cd22ab3c-d638-45c9-b107-6c46494b1343\" (UID: \"cd22ab3c-d638-45c9-b107-6c46494b1343\") "
Jan 30 17:15:58 crc kubenswrapper[4712]: I0130 17:15:58.728168 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqvg8\" (UniqueName: \"kubernetes.io/projected/cd22ab3c-d638-45c9-b107-6c46494b1343-kube-api-access-xqvg8\") pod \"cd22ab3c-d638-45c9-b107-6c46494b1343\" (UID: \"cd22ab3c-d638-45c9-b107-6c46494b1343\") "
Jan 30 17:15:58 crc kubenswrapper[4712]: I0130 17:15:58.728190 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd22ab3c-d638-45c9-b107-6c46494b1343-ovsdbserver-nb\") pod \"cd22ab3c-d638-45c9-b107-6c46494b1343\" (UID: \"cd22ab3c-d638-45c9-b107-6c46494b1343\") "
Jan 30 17:15:58 crc kubenswrapper[4712]: I0130 17:15:58.728236 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd22ab3c-d638-45c9-b107-6c46494b1343-dns-svc\") pod \"cd22ab3c-d638-45c9-b107-6c46494b1343\" (UID: \"cd22ab3c-d638-45c9-b107-6c46494b1343\") "
Jan 30 17:15:58 crc kubenswrapper[4712]: I0130 17:15:58.730149 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-74f6bcbc87-99pvq" event={"ID":"cd22ab3c-d638-45c9-b107-6c46494b1343","Type":"ContainerDied","Data":"7635793d9cb6c3e3527a0a8cea742fa89e935565ebfcf7e55a01e7df10b4f18d"}
Jan 30 17:15:58 crc kubenswrapper[4712]: I0130 17:15:58.730274 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6bcbc87-99pvq"
Jan 30 17:15:58 crc kubenswrapper[4712]: I0130 17:15:58.741870 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd22ab3c-d638-45c9-b107-6c46494b1343-kube-api-access-xqvg8" (OuterVolumeSpecName: "kube-api-access-xqvg8") pod "cd22ab3c-d638-45c9-b107-6c46494b1343" (UID: "cd22ab3c-d638-45c9-b107-6c46494b1343"). InnerVolumeSpecName "kube-api-access-xqvg8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:15:58 crc kubenswrapper[4712]: I0130 17:15:58.778319 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd22ab3c-d638-45c9-b107-6c46494b1343-config" (OuterVolumeSpecName: "config") pod "cd22ab3c-d638-45c9-b107-6c46494b1343" (UID: "cd22ab3c-d638-45c9-b107-6c46494b1343"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:15:58 crc kubenswrapper[4712]: I0130 17:15:58.786304 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd22ab3c-d638-45c9-b107-6c46494b1343-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "cd22ab3c-d638-45c9-b107-6c46494b1343" (UID: "cd22ab3c-d638-45c9-b107-6c46494b1343"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:15:58 crc kubenswrapper[4712]: I0130 17:15:58.786362 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd22ab3c-d638-45c9-b107-6c46494b1343-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "cd22ab3c-d638-45c9-b107-6c46494b1343" (UID: "cd22ab3c-d638-45c9-b107-6c46494b1343"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:15:58 crc kubenswrapper[4712]: I0130 17:15:58.789522 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd22ab3c-d638-45c9-b107-6c46494b1343-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "cd22ab3c-d638-45c9-b107-6c46494b1343" (UID: "cd22ab3c-d638-45c9-b107-6c46494b1343"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:15:58 crc kubenswrapper[4712]: I0130 17:15:58.793473 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd22ab3c-d638-45c9-b107-6c46494b1343-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "cd22ab3c-d638-45c9-b107-6c46494b1343" (UID: "cd22ab3c-d638-45c9-b107-6c46494b1343"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:15:58 crc kubenswrapper[4712]: I0130 17:15:58.830261 4712 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/cd22ab3c-d638-45c9-b107-6c46494b1343-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 30 17:15:58 crc kubenswrapper[4712]: I0130 17:15:58.830309 4712 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd22ab3c-d638-45c9-b107-6c46494b1343-config\") on node \"crc\" DevicePath \"\""
Jan 30 17:15:58 crc kubenswrapper[4712]: I0130 17:15:58.830319 4712 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/cd22ab3c-d638-45c9-b107-6c46494b1343-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 30 17:15:58 crc kubenswrapper[4712]: I0130 17:15:58.830329 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqvg8\" (UniqueName: \"kubernetes.io/projected/cd22ab3c-d638-45c9-b107-6c46494b1343-kube-api-access-xqvg8\") on node \"crc\" DevicePath \"\""
Jan 30 17:15:58 crc kubenswrapper[4712]: I0130 17:15:58.830339 4712 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/cd22ab3c-d638-45c9-b107-6c46494b1343-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 30 17:15:58 crc kubenswrapper[4712]: I0130 17:15:58.830365 4712 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/cd22ab3c-d638-45c9-b107-6c46494b1343-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 30 17:15:59 crc kubenswrapper[4712]: I0130 17:15:59.063655 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-99pvq"]
Jan 30 17:15:59 crc kubenswrapper[4712]: I0130 17:15:59.072477 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6bcbc87-99pvq"]
Jan 30 17:15:59 crc kubenswrapper[4712]: I0130 17:15:59.810039 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd22ab3c-d638-45c9-b107-6c46494b1343" path="/var/lib/kubelet/pods/cd22ab3c-d638-45c9-b107-6c46494b1343/volumes"
Jan 30 17:16:03 crc kubenswrapper[4712]: I0130 17:16:03.438397 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-74f6bcbc87-99pvq" podUID="cd22ab3c-d638-45c9-b107-6c46494b1343" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.140:5353: i/o timeout"
Jan 30 17:16:06 crc kubenswrapper[4712]: I0130 17:16:06.806229 4712 generic.go:334] "Generic (PLEG): container finished" podID="67221ffc-37c6-458b-b4b4-26ef6e628c0b" containerID="aff21b13d905c3dcc1d105927345076671d1cf6986b7a1c1afe3b22e3961b9e2" exitCode=0
Jan 30 17:16:06 crc kubenswrapper[4712]: I0130 17:16:06.806318 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ldhgd" event={"ID":"67221ffc-37c6-458b-b4b4-26ef6e628c0b","Type":"ContainerDied","Data":"aff21b13d905c3dcc1d105927345076671d1cf6986b7a1c1afe3b22e3961b9e2"}
Jan 30 17:16:12 crc kubenswrapper[4712]: I0130 17:16:12.834447 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5c996b5c45-mvl64"
Jan 30 17:16:12 crc kubenswrapper[4712]: I0130 17:16:12.841288 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66565467f5-6r87q"
Jan 30 17:16:12 crc kubenswrapper[4712]: I0130 17:16:12.872514 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-66565467f5-6r87q" event={"ID":"55497986-760e-41ba-8835-7e0c7d80c1df","Type":"ContainerDied","Data":"30989a06a722f6a5ad0c9b395358781135e34b0d94f64e8493c8f5b8160baa89"}
Jan 30 17:16:12 crc kubenswrapper[4712]: I0130 17:16:12.872622 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-66565467f5-6r87q"
Jan 30 17:16:12 crc kubenswrapper[4712]: I0130 17:16:12.878429 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-5c996b5c45-mvl64" event={"ID":"b123ecaa-e5d2-4daf-b377-07056dd21f37","Type":"ContainerDied","Data":"055ac28b9995519391eab1c04aae4d79a8aa50b2414fc649a4f48856d27bd4a8"}
Jan 30 17:16:12 crc kubenswrapper[4712]: I0130 17:16:12.878520 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-5c996b5c45-mvl64"
Jan 30 17:16:12 crc kubenswrapper[4712]: I0130 17:16:12.938949 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b123ecaa-e5d2-4daf-b377-07056dd21f37-logs\") pod \"b123ecaa-e5d2-4daf-b377-07056dd21f37\" (UID: \"b123ecaa-e5d2-4daf-b377-07056dd21f37\") "
Jan 30 17:16:12 crc kubenswrapper[4712]: I0130 17:16:12.939005 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b123ecaa-e5d2-4daf-b377-07056dd21f37-scripts\") pod \"b123ecaa-e5d2-4daf-b377-07056dd21f37\" (UID: \"b123ecaa-e5d2-4daf-b377-07056dd21f37\") "
Jan 30 17:16:12 crc kubenswrapper[4712]: I0130 17:16:12.939083 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/55497986-760e-41ba-8835-7e0c7d80c1df-config-data\") pod \"55497986-760e-41ba-8835-7e0c7d80c1df\" (UID: \"55497986-760e-41ba-8835-7e0c7d80c1df\") "
Jan 30 17:16:12 crc kubenswrapper[4712]: I0130 17:16:12.939115 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b123ecaa-e5d2-4daf-b377-07056dd21f37-config-data\") pod \"b123ecaa-e5d2-4daf-b377-07056dd21f37\" (UID: \"b123ecaa-e5d2-4daf-b377-07056dd21f37\") "
Jan 30 17:16:12 crc kubenswrapper[4712]: I0130 17:16:12.939207 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/55497986-760e-41ba-8835-7e0c7d80c1df-horizon-secret-key\") pod \"55497986-760e-41ba-8835-7e0c7d80c1df\" (UID: \"55497986-760e-41ba-8835-7e0c7d80c1df\") "
Jan 30 17:16:12 crc kubenswrapper[4712]: I0130 17:16:12.939229 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5h9s\" (UniqueName: \"kubernetes.io/projected/55497986-760e-41ba-8835-7e0c7d80c1df-kube-api-access-d5h9s\") pod \"55497986-760e-41ba-8835-7e0c7d80c1df\" (UID: \"55497986-760e-41ba-8835-7e0c7d80c1df\") "
Jan 30 17:16:12 crc kubenswrapper[4712]: I0130 17:16:12.939254 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55497986-760e-41ba-8835-7e0c7d80c1df-logs\") pod \"55497986-760e-41ba-8835-7e0c7d80c1df\" (UID: \"55497986-760e-41ba-8835-7e0c7d80c1df\") "
Jan 30 17:16:12 crc kubenswrapper[4712]: I0130 17:16:12.939304 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55497986-760e-41ba-8835-7e0c7d80c1df-scripts\") pod \"55497986-760e-41ba-8835-7e0c7d80c1df\" (UID: \"55497986-760e-41ba-8835-7e0c7d80c1df\") "
Jan 30 17:16:12 crc kubenswrapper[4712]: I0130 17:16:12.939334 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxbn4\" (UniqueName: \"kubernetes.io/projected/b123ecaa-e5d2-4daf-b377-07056dd21f37-kube-api-access-qxbn4\") pod \"b123ecaa-e5d2-4daf-b377-07056dd21f37\" (UID: \"b123ecaa-e5d2-4daf-b377-07056dd21f37\") "
Jan 30 17:16:12 crc kubenswrapper[4712]: I0130 17:16:12.939323 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b123ecaa-e5d2-4daf-b377-07056dd21f37-logs" (OuterVolumeSpecName: "logs") pod "b123ecaa-e5d2-4daf-b377-07056dd21f37" (UID: "b123ecaa-e5d2-4daf-b377-07056dd21f37"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:16:12 crc kubenswrapper[4712]: I0130 17:16:12.939368 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b123ecaa-e5d2-4daf-b377-07056dd21f37-horizon-secret-key\") pod \"b123ecaa-e5d2-4daf-b377-07056dd21f37\" (UID: \"b123ecaa-e5d2-4daf-b377-07056dd21f37\") "
Jan 30 17:16:12 crc kubenswrapper[4712]: I0130 17:16:12.939883 4712 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b123ecaa-e5d2-4daf-b377-07056dd21f37-logs\") on node \"crc\" DevicePath \"\""
Jan 30 17:16:12 crc kubenswrapper[4712]: I0130 17:16:12.940462 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55497986-760e-41ba-8835-7e0c7d80c1df-scripts" (OuterVolumeSpecName: "scripts") pod "55497986-760e-41ba-8835-7e0c7d80c1df" (UID: "55497986-760e-41ba-8835-7e0c7d80c1df"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:16:12 crc kubenswrapper[4712]: I0130 17:16:12.940757 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55497986-760e-41ba-8835-7e0c7d80c1df-logs" (OuterVolumeSpecName: "logs") pod "55497986-760e-41ba-8835-7e0c7d80c1df" (UID: "55497986-760e-41ba-8835-7e0c7d80c1df"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:16:12 crc kubenswrapper[4712]: I0130 17:16:12.941081 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b123ecaa-e5d2-4daf-b377-07056dd21f37-scripts" (OuterVolumeSpecName: "scripts") pod "b123ecaa-e5d2-4daf-b377-07056dd21f37" (UID: "b123ecaa-e5d2-4daf-b377-07056dd21f37"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:16:12 crc kubenswrapper[4712]: I0130 17:16:12.941219 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55497986-760e-41ba-8835-7e0c7d80c1df-config-data" (OuterVolumeSpecName: "config-data") pod "55497986-760e-41ba-8835-7e0c7d80c1df" (UID: "55497986-760e-41ba-8835-7e0c7d80c1df"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:16:12 crc kubenswrapper[4712]: I0130 17:16:12.941475 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b123ecaa-e5d2-4daf-b377-07056dd21f37-config-data" (OuterVolumeSpecName: "config-data") pod "b123ecaa-e5d2-4daf-b377-07056dd21f37" (UID: "b123ecaa-e5d2-4daf-b377-07056dd21f37"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:16:12 crc kubenswrapper[4712]: I0130 17:16:12.946764 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b123ecaa-e5d2-4daf-b377-07056dd21f37-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "b123ecaa-e5d2-4daf-b377-07056dd21f37" (UID: "b123ecaa-e5d2-4daf-b377-07056dd21f37"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:16:12 crc kubenswrapper[4712]: I0130 17:16:12.947017 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b123ecaa-e5d2-4daf-b377-07056dd21f37-kube-api-access-qxbn4" (OuterVolumeSpecName: "kube-api-access-qxbn4") pod "b123ecaa-e5d2-4daf-b377-07056dd21f37" (UID: "b123ecaa-e5d2-4daf-b377-07056dd21f37"). InnerVolumeSpecName "kube-api-access-qxbn4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:16:12 crc kubenswrapper[4712]: I0130 17:16:12.947083 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55497986-760e-41ba-8835-7e0c7d80c1df-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "55497986-760e-41ba-8835-7e0c7d80c1df" (UID: "55497986-760e-41ba-8835-7e0c7d80c1df"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:16:12 crc kubenswrapper[4712]: I0130 17:16:12.953953 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55497986-760e-41ba-8835-7e0c7d80c1df-kube-api-access-d5h9s" (OuterVolumeSpecName: "kube-api-access-d5h9s") pod "55497986-760e-41ba-8835-7e0c7d80c1df" (UID: "55497986-760e-41ba-8835-7e0c7d80c1df"). InnerVolumeSpecName "kube-api-access-d5h9s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:16:13 crc kubenswrapper[4712]: I0130 17:16:13.042069 4712 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b123ecaa-e5d2-4daf-b377-07056dd21f37-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 30 17:16:13 crc kubenswrapper[4712]: I0130 17:16:13.042108 4712 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b123ecaa-e5d2-4daf-b377-07056dd21f37-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 17:16:13 crc kubenswrapper[4712]: I0130 17:16:13.042121 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/55497986-760e-41ba-8835-7e0c7d80c1df-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 17:16:13 crc kubenswrapper[4712]: I0130 17:16:13.042132 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b123ecaa-e5d2-4daf-b377-07056dd21f37-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 17:16:13 crc kubenswrapper[4712]: I0130 17:16:13.042143 4712 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/55497986-760e-41ba-8835-7e0c7d80c1df-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 30 17:16:13 crc kubenswrapper[4712]: I0130 17:16:13.042153 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5h9s\" (UniqueName: \"kubernetes.io/projected/55497986-760e-41ba-8835-7e0c7d80c1df-kube-api-access-d5h9s\") on node \"crc\" DevicePath \"\""
Jan 30 17:16:13 crc kubenswrapper[4712]: I0130 17:16:13.042340 4712 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/55497986-760e-41ba-8835-7e0c7d80c1df-logs\") on node \"crc\" DevicePath \"\""
Jan 30 17:16:13 crc kubenswrapper[4712]: I0130 17:16:13.042350 4712 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/55497986-760e-41ba-8835-7e0c7d80c1df-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 17:16:13 crc kubenswrapper[4712]: I0130 17:16:13.042362 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxbn4\" (UniqueName: \"kubernetes.io/projected/b123ecaa-e5d2-4daf-b377-07056dd21f37-kube-api-access-qxbn4\") on node \"crc\" DevicePath \"\""
Jan 30 17:16:13 crc kubenswrapper[4712]: I0130 17:16:13.275013 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-66565467f5-6r87q"]
Jan 30 17:16:13 crc kubenswrapper[4712]: I0130 17:16:13.286170 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-66565467f5-6r87q"]
Jan 30 17:16:13 crc kubenswrapper[4712]: I0130 17:16:13.305017 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-5c996b5c45-mvl64"]
Jan 30 17:16:13 crc kubenswrapper[4712]: I0130 17:16:13.315767 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-5c996b5c45-mvl64"]
Jan 30 17:16:13 crc kubenswrapper[4712]: E0130 17:16:13.424380 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified"
Jan 30 17:16:13 crc kubenswrapper[4712]: E0130 17:16:13.424557 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7jjvf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-7krdw_openstack(6c4a03a4-e80d-4605-990f-a242222558bb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 17:16:13 crc kubenswrapper[4712]: E0130 17:16:13.425835 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-7krdw" podUID="6c4a03a4-e80d-4605-990f-a242222558bb"
Jan 30 17:16:13 crc kubenswrapper[4712]: I0130 17:16:13.463298 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-78bf8d4bc-dzt7l"
Jan 30 17:16:13 crc kubenswrapper[4712]: I0130 17:16:13.553189 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d98cb77-f784-431c-bd65-35261f546cd0-logs\") pod \"9d98cb77-f784-431c-bd65-35261f546cd0\" (UID: \"9d98cb77-f784-431c-bd65-35261f546cd0\") "
Jan 30 17:16:13 crc kubenswrapper[4712]: I0130 17:16:13.553517 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d98cb77-f784-431c-bd65-35261f546cd0-logs" (OuterVolumeSpecName: "logs") pod "9d98cb77-f784-431c-bd65-35261f546cd0" (UID: "9d98cb77-f784-431c-bd65-35261f546cd0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:16:13 crc kubenswrapper[4712]: I0130 17:16:13.553575 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9d98cb77-f784-431c-bd65-35261f546cd0-config-data\") pod \"9d98cb77-f784-431c-bd65-35261f546cd0\" (UID: \"9d98cb77-f784-431c-bd65-35261f546cd0\") "
Jan 30 17:16:13 crc kubenswrapper[4712]: I0130 17:16:13.553825 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfwv6\" (UniqueName: \"kubernetes.io/projected/9d98cb77-f784-431c-bd65-35261f546cd0-kube-api-access-pfwv6\") pod \"9d98cb77-f784-431c-bd65-35261f546cd0\" (UID: \"9d98cb77-f784-431c-bd65-35261f546cd0\") "
Jan 30 17:16:13 crc kubenswrapper[4712]: I0130 17:16:13.553863 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9d98cb77-f784-431c-bd65-35261f546cd0-horizon-secret-key\") pod \"9d98cb77-f784-431c-bd65-35261f546cd0\" (UID: \"9d98cb77-f784-431c-bd65-35261f546cd0\") "
Jan 30 17:16:13 crc kubenswrapper[4712]: I0130 17:16:13.553937 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d98cb77-f784-431c-bd65-35261f546cd0-scripts\") pod \"9d98cb77-f784-431c-bd65-35261f546cd0\" (UID: \"9d98cb77-f784-431c-bd65-35261f546cd0\") "
Jan 30 17:16:13 crc kubenswrapper[4712]: I0130 17:16:13.554129 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d98cb77-f784-431c-bd65-35261f546cd0-config-data" (OuterVolumeSpecName: "config-data") pod "9d98cb77-f784-431c-bd65-35261f546cd0" (UID: "9d98cb77-f784-431c-bd65-35261f546cd0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:16:13 crc kubenswrapper[4712]: I0130 17:16:13.554782 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d98cb77-f784-431c-bd65-35261f546cd0-scripts" (OuterVolumeSpecName: "scripts") pod "9d98cb77-f784-431c-bd65-35261f546cd0" (UID: "9d98cb77-f784-431c-bd65-35261f546cd0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:16:13 crc kubenswrapper[4712]: I0130 17:16:13.554997 4712 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d98cb77-f784-431c-bd65-35261f546cd0-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 17:16:13 crc kubenswrapper[4712]: I0130 17:16:13.555020 4712 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9d98cb77-f784-431c-bd65-35261f546cd0-logs\") on node \"crc\" DevicePath \"\""
Jan 30 17:16:13 crc kubenswrapper[4712]: I0130 17:16:13.555031 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9d98cb77-f784-431c-bd65-35261f546cd0-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 17:16:13 crc kubenswrapper[4712]: I0130 17:16:13.557720 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d98cb77-f784-431c-bd65-35261f546cd0-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "9d98cb77-f784-431c-bd65-35261f546cd0" (UID: "9d98cb77-f784-431c-bd65-35261f546cd0"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:16:13 crc kubenswrapper[4712]: I0130 17:16:13.558257 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d98cb77-f784-431c-bd65-35261f546cd0-kube-api-access-pfwv6" (OuterVolumeSpecName: "kube-api-access-pfwv6") pod "9d98cb77-f784-431c-bd65-35261f546cd0" (UID: "9d98cb77-f784-431c-bd65-35261f546cd0"). InnerVolumeSpecName "kube-api-access-pfwv6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:16:13 crc kubenswrapper[4712]: I0130 17:16:13.656966 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfwv6\" (UniqueName: \"kubernetes.io/projected/9d98cb77-f784-431c-bd65-35261f546cd0-kube-api-access-pfwv6\") on node \"crc\" DevicePath \"\""
Jan 30 17:16:13 crc kubenswrapper[4712]: I0130 17:16:13.657001 4712 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/9d98cb77-f784-431c-bd65-35261f546cd0-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Jan 30 17:16:13 crc kubenswrapper[4712]: E0130 17:16:13.777004 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified"
Jan 30 17:16:13 crc kubenswrapper[4712]: E0130 17:16:13.777181 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hg828,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-9gcv2_openstack(3c24ed25-f06f-494d-9fd5-2077c052db31): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 17:16:13 crc kubenswrapper[4712]: E0130 17:16:13.778336 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-9gcv2" podUID="3c24ed25-f06f-494d-9fd5-2077c052db31"
Jan 30 17:16:13 crc kubenswrapper[4712]: I0130 17:16:13.826338 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55497986-760e-41ba-8835-7e0c7d80c1df" path="/var/lib/kubelet/pods/55497986-760e-41ba-8835-7e0c7d80c1df/volumes"
Jan 30 17:16:13 crc kubenswrapper[4712]: I0130 17:16:13.826830 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b123ecaa-e5d2-4daf-b377-07056dd21f37" path="/var/lib/kubelet/pods/b123ecaa-e5d2-4daf-b377-07056dd21f37/volumes"
Jan 30 17:16:13 crc kubenswrapper[4712]: I0130 17:16:13.886309 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-78bf8d4bc-dzt7l" event={"ID":"9d98cb77-f784-431c-bd65-35261f546cd0","Type":"ContainerDied","Data":"bbae3b33176941b0a17f937aa1f7e9f9ff5f53be805ee0404d21e4d176e80780"}
Jan 30 17:16:13 crc kubenswrapper[4712]: I0130 17:16:13.886320 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-78bf8d4bc-dzt7l"
Jan 30 17:16:13 crc kubenswrapper[4712]: I0130 17:16:13.888492 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-ldhgd" event={"ID":"67221ffc-37c6-458b-b4b4-26ef6e628c0b","Type":"ContainerDied","Data":"b0d2e9a2cb007779681efc1143ad174629fa5c75d0466a1989bf68319678383a"}
Jan 30 17:16:13 crc kubenswrapper[4712]: I0130 17:16:13.888531 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0d2e9a2cb007779681efc1143ad174629fa5c75d0466a1989bf68319678383a"
Jan 30 17:16:13 crc kubenswrapper[4712]: E0130 17:16:13.889698 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-7krdw" podUID="6c4a03a4-e80d-4605-990f-a242222558bb"
Jan 30 17:16:13 crc kubenswrapper[4712]: E0130 17:16:13.890317 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-9gcv2" podUID="3c24ed25-f06f-494d-9fd5-2077c052db31"
Jan 30 17:16:13 crc kubenswrapper[4712]: I0130 17:16:13.919823 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-ldhgd"
Jan 30 17:16:14 crc kubenswrapper[4712]: I0130 17:16:14.009253 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-78bf8d4bc-dzt7l"]
Jan 30 17:16:14 crc kubenswrapper[4712]: I0130 17:16:14.016508 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-78bf8d4bc-dzt7l"]
Jan 30 17:16:14 crc kubenswrapper[4712]: I0130 17:16:14.075775 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/67221ffc-37c6-458b-b4b4-26ef6e628c0b-config\") pod \"67221ffc-37c6-458b-b4b4-26ef6e628c0b\" (UID: \"67221ffc-37c6-458b-b4b4-26ef6e628c0b\") "
Jan 30 17:16:14 crc kubenswrapper[4712]: I0130 17:16:14.075841 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67221ffc-37c6-458b-b4b4-26ef6e628c0b-combined-ca-bundle\") pod \"67221ffc-37c6-458b-b4b4-26ef6e628c0b\" (UID: \"67221ffc-37c6-458b-b4b4-26ef6e628c0b\") "
Jan 30 17:16:14 crc kubenswrapper[4712]: I0130 17:16:14.080398 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nk2g8\" (UniqueName: \"kubernetes.io/projected/67221ffc-37c6-458b-b4b4-26ef6e628c0b-kube-api-access-nk2g8\") pod \"67221ffc-37c6-458b-b4b4-26ef6e628c0b\" (UID: \"67221ffc-37c6-458b-b4b4-26ef6e628c0b\") "
Jan 30 17:16:14 crc kubenswrapper[4712]: I0130 17:16:14.107710 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67221ffc-37c6-458b-b4b4-26ef6e628c0b-config" (OuterVolumeSpecName: "config") pod "67221ffc-37c6-458b-b4b4-26ef6e628c0b" (UID: "67221ffc-37c6-458b-b4b4-26ef6e628c0b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:16:14 crc kubenswrapper[4712]: I0130 17:16:14.119473 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67221ffc-37c6-458b-b4b4-26ef6e628c0b-kube-api-access-nk2g8" (OuterVolumeSpecName: "kube-api-access-nk2g8") pod "67221ffc-37c6-458b-b4b4-26ef6e628c0b" (UID: "67221ffc-37c6-458b-b4b4-26ef6e628c0b"). InnerVolumeSpecName "kube-api-access-nk2g8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:16:14 crc kubenswrapper[4712]: I0130 17:16:14.134517 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67221ffc-37c6-458b-b4b4-26ef6e628c0b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "67221ffc-37c6-458b-b4b4-26ef6e628c0b" (UID: "67221ffc-37c6-458b-b4b4-26ef6e628c0b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:16:14 crc kubenswrapper[4712]: I0130 17:16:14.183100 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nk2g8\" (UniqueName: \"kubernetes.io/projected/67221ffc-37c6-458b-b4b4-26ef6e628c0b-kube-api-access-nk2g8\") on node \"crc\" DevicePath \"\""
Jan 30 17:16:14 crc kubenswrapper[4712]: I0130 17:16:14.183133 4712 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/67221ffc-37c6-458b-b4b4-26ef6e628c0b-config\") on node \"crc\" DevicePath \"\""
Jan 30 17:16:14 crc kubenswrapper[4712]: I0130 17:16:14.183144 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/67221ffc-37c6-458b-b4b4-26ef6e628c0b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 17:16:14 crc kubenswrapper[4712]: I0130 17:16:14.898283 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-ldhgd"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.120034 4712 scope.go:117] "RemoveContainer" containerID="125b87dd38f81aa44459b96ce28fa9769fb328fdec556c37e5cb735fd0a41fcd"
Jan 30 17:16:15 crc kubenswrapper[4712]: E0130 17:16:15.168186 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified"
Jan 30 17:16:15 crc kubenswrapper[4712]: E0130 17:16:15.168359 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jzf52,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-78jqx_openstack(2ef9729d-cbbc-4354-98e4-a9e07651518e): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Jan 30 17:16:15 crc kubenswrapper[4712]: E0130 17:16:15.169557 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-78jqx" podUID="2ef9729d-cbbc-4354-98e4-a9e07651518e"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.234377 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-kfjwh"]
Jan 30 17:16:15 crc kubenswrapper[4712]: E0130 17:16:15.234728 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd22ab3c-d638-45c9-b107-6c46494b1343" containerName="init"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.234739 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd22ab3c-d638-45c9-b107-6c46494b1343" containerName="init"
Jan 30 17:16:15 crc kubenswrapper[4712]: E0130 17:16:15.234750 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd22ab3c-d638-45c9-b107-6c46494b1343" containerName="dnsmasq-dns"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.234757 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd22ab3c-d638-45c9-b107-6c46494b1343" containerName="dnsmasq-dns"
Jan 30 17:16:15 crc kubenswrapper[4712]: E0130 17:16:15.234771 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67221ffc-37c6-458b-b4b4-26ef6e628c0b" containerName="neutron-db-sync"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.234777 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="67221ffc-37c6-458b-b4b4-26ef6e628c0b" containerName="neutron-db-sync"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.234953 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd22ab3c-d638-45c9-b107-6c46494b1343" containerName="dnsmasq-dns"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.234973 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="67221ffc-37c6-458b-b4b4-26ef6e628c0b" containerName="neutron-db-sync"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.235766 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-kfjwh"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.248042 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-kfjwh"]
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.316174 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5bcf445ccb-bcbn6"]
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.322451 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5bcf445ccb-bcbn6"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.327126 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.327314 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.327500 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.337524 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-bld2f"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.366492 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5bcf445ccb-bcbn6"]
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.406052 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27rfg\" (UniqueName: \"kubernetes.io/projected/c5c55ed2-b2de-42e8-865c-81436c478565-kube-api-access-27rfg\") pod \"dnsmasq-dns-55f844cf75-kfjwh\" (UID: \"c5c55ed2-b2de-42e8-865c-81436c478565\") " pod="openstack/dnsmasq-dns-55f844cf75-kfjwh"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.406112 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b-config\") pod \"neutron-5bcf445ccb-bcbn6\" (UID: \"c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b\") " pod="openstack/neutron-5bcf445ccb-bcbn6"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.406380 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5c55ed2-b2de-42e8-865c-81436c478565-config\") pod \"dnsmasq-dns-55f844cf75-kfjwh\" (UID: \"c5c55ed2-b2de-42e8-865c-81436c478565\") " pod="openstack/dnsmasq-dns-55f844cf75-kfjwh"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.406408 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b-ovndb-tls-certs\") pod \"neutron-5bcf445ccb-bcbn6\" (UID: \"c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b\") " pod="openstack/neutron-5bcf445ccb-bcbn6"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.406428 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtb8z\" (UniqueName: \"kubernetes.io/projected/c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b-kube-api-access-rtb8z\") pod \"neutron-5bcf445ccb-bcbn6\" (UID: \"c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b\") " pod="openstack/neutron-5bcf445ccb-bcbn6"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.406469 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5c55ed2-b2de-42e8-865c-81436c478565-dns-svc\") pod \"dnsmasq-dns-55f844cf75-kfjwh\" (UID: \"c5c55ed2-b2de-42e8-865c-81436c478565\") " pod="openstack/dnsmasq-dns-55f844cf75-kfjwh"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.406492 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5c55ed2-b2de-42e8-865c-81436c478565-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-kfjwh\" (UID: \"c5c55ed2-b2de-42e8-865c-81436c478565\") " pod="openstack/dnsmasq-dns-55f844cf75-kfjwh"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.406528 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b-httpd-config\") pod \"neutron-5bcf445ccb-bcbn6\" (UID: \"c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b\") " pod="openstack/neutron-5bcf445ccb-bcbn6"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.406553 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5c55ed2-b2de-42e8-865c-81436c478565-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-kfjwh\" (UID: \"c5c55ed2-b2de-42e8-865c-81436c478565\") " pod="openstack/dnsmasq-dns-55f844cf75-kfjwh"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.406585 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b-combined-ca-bundle\") pod \"neutron-5bcf445ccb-bcbn6\" (UID: \"c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b\") " pod="openstack/neutron-5bcf445ccb-bcbn6"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.406613 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5c55ed2-b2de-42e8-865c-81436c478565-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-kfjwh\" (UID: \"c5c55ed2-b2de-42e8-865c-81436c478565\") " pod="openstack/dnsmasq-dns-55f844cf75-kfjwh"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.509540 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5c55ed2-b2de-42e8-865c-81436c478565-dns-svc\") pod \"dnsmasq-dns-55f844cf75-kfjwh\" (UID: \"c5c55ed2-b2de-42e8-865c-81436c478565\") " pod="openstack/dnsmasq-dns-55f844cf75-kfjwh"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.509628 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5c55ed2-b2de-42e8-865c-81436c478565-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-kfjwh\" (UID: \"c5c55ed2-b2de-42e8-865c-81436c478565\") " pod="openstack/dnsmasq-dns-55f844cf75-kfjwh"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.509689 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5c55ed2-b2de-42e8-865c-81436c478565-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-kfjwh\" (UID: \"c5c55ed2-b2de-42e8-865c-81436c478565\") " pod="openstack/dnsmasq-dns-55f844cf75-kfjwh"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.509705 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b-httpd-config\") pod \"neutron-5bcf445ccb-bcbn6\" (UID: \"c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b\") " pod="openstack/neutron-5bcf445ccb-bcbn6"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.509740 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b-combined-ca-bundle\") pod \"neutron-5bcf445ccb-bcbn6\" (UID: \"c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b\") " pod="openstack/neutron-5bcf445ccb-bcbn6"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.509771 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5c55ed2-b2de-42e8-865c-81436c478565-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-kfjwh\" (UID: \"c5c55ed2-b2de-42e8-865c-81436c478565\") " pod="openstack/dnsmasq-dns-55f844cf75-kfjwh"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.509864 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27rfg\" (UniqueName: \"kubernetes.io/projected/c5c55ed2-b2de-42e8-865c-81436c478565-kube-api-access-27rfg\") pod \"dnsmasq-dns-55f844cf75-kfjwh\" (UID: \"c5c55ed2-b2de-42e8-865c-81436c478565\") " pod="openstack/dnsmasq-dns-55f844cf75-kfjwh"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.509896 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b-config\") pod \"neutron-5bcf445ccb-bcbn6\" (UID: \"c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b\") " pod="openstack/neutron-5bcf445ccb-bcbn6"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.509968 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5c55ed2-b2de-42e8-865c-81436c478565-config\") pod \"dnsmasq-dns-55f844cf75-kfjwh\" (UID: \"c5c55ed2-b2de-42e8-865c-81436c478565\") " pod="openstack/dnsmasq-dns-55f844cf75-kfjwh"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.509993 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b-ovndb-tls-certs\") pod \"neutron-5bcf445ccb-bcbn6\" (UID: \"c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b\") " pod="openstack/neutron-5bcf445ccb-bcbn6"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.510010 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rtb8z\" (UniqueName: \"kubernetes.io/projected/c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b-kube-api-access-rtb8z\") pod \"neutron-5bcf445ccb-bcbn6\" (UID: \"c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b\") " pod="openstack/neutron-5bcf445ccb-bcbn6"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.510439 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5c55ed2-b2de-42e8-865c-81436c478565-dns-svc\") pod \"dnsmasq-dns-55f844cf75-kfjwh\" (UID: \"c5c55ed2-b2de-42e8-865c-81436c478565\") " pod="openstack/dnsmasq-dns-55f844cf75-kfjwh"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.510898 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5c55ed2-b2de-42e8-865c-81436c478565-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-kfjwh\" (UID: \"c5c55ed2-b2de-42e8-865c-81436c478565\") " pod="openstack/dnsmasq-dns-55f844cf75-kfjwh"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.511042 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5c55ed2-b2de-42e8-865c-81436c478565-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-kfjwh\" (UID: \"c5c55ed2-b2de-42e8-865c-81436c478565\") " pod="openstack/dnsmasq-dns-55f844cf75-kfjwh"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.511920 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5c55ed2-b2de-42e8-865c-81436c478565-config\") pod \"dnsmasq-dns-55f844cf75-kfjwh\" (UID: \"c5c55ed2-b2de-42e8-865c-81436c478565\") " pod="openstack/dnsmasq-dns-55f844cf75-kfjwh"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.512099 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5c55ed2-b2de-42e8-865c-81436c478565-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-kfjwh\" (UID: \"c5c55ed2-b2de-42e8-865c-81436c478565\") " pod="openstack/dnsmasq-dns-55f844cf75-kfjwh"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.518011 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b-ovndb-tls-certs\") pod \"neutron-5bcf445ccb-bcbn6\" (UID: \"c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b\") " pod="openstack/neutron-5bcf445ccb-bcbn6"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.523369 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b-combined-ca-bundle\") pod \"neutron-5bcf445ccb-bcbn6\" (UID: \"c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b\") " pod="openstack/neutron-5bcf445ccb-bcbn6"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.523712 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b-httpd-config\") pod \"neutron-5bcf445ccb-bcbn6\" (UID: \"c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b\") " pod="openstack/neutron-5bcf445ccb-bcbn6"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.525242 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b-config\") pod \"neutron-5bcf445ccb-bcbn6\" (UID: \"c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b\") " pod="openstack/neutron-5bcf445ccb-bcbn6"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.533925 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rtb8z\" (UniqueName: \"kubernetes.io/projected/c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b-kube-api-access-rtb8z\") pod \"neutron-5bcf445ccb-bcbn6\" (UID: \"c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b\") " pod="openstack/neutron-5bcf445ccb-bcbn6"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.542504 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27rfg\" (UniqueName: \"kubernetes.io/projected/c5c55ed2-b2de-42e8-865c-81436c478565-kube-api-access-27rfg\") pod \"dnsmasq-dns-55f844cf75-kfjwh\" (UID: \"c5c55ed2-b2de-42e8-865c-81436c478565\") " pod="openstack/dnsmasq-dns-55f844cf75-kfjwh"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.580192 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-kfjwh"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.657186 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5bcf445ccb-bcbn6"
Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.812099 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d98cb77-f784-431c-bd65-35261f546cd0" path="/var/lib/kubelet/pods/9d98cb77-f784-431c-bd65-35261f546cd0/volumes"
Jan 30 17:16:15 crc kubenswrapper[4712]: E0130 17:16:15.823780 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified"
Jan 30 17:16:15 crc kubenswrapper[4712]: E0130 17:16:15.823951 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-notification-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n695hb6h5dfh685h54chbdh686hbh64h9bh567h59dh5fh8dh57fh5bh59chb7h664h5b4h565h8dhc8h548hb4h5dfhd7h577h585h688h684hb8q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-notification-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-62p8f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/notificationhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(3896ac30-4d2d-4bc2-bfc3-4352d7d586de): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 17:16:15 crc kubenswrapper[4712]: E0130 17:16:15.963017 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-78jqx" podUID="2ef9729d-cbbc-4354-98e4-a9e07651518e" Jan 30 17:16:15 crc kubenswrapper[4712]: I0130 17:16:15.963308 4712 scope.go:117] "RemoveContainer" containerID="bcfa0ea23044a61a2365c9d9e5c2a0f1fcb13f83b30496c902312e5b96aa4a68" Jan 30 17:16:16 crc kubenswrapper[4712]: I0130 17:16:16.105687 4712 scope.go:117] "RemoveContainer" containerID="99750ba9b4a8fd1624230b5f8052762856f3e910250a755c1ef474cc3eafdae8" Jan 30 17:16:16 crc kubenswrapper[4712]: I0130 17:16:16.249697 4712 scope.go:117] "RemoveContainer" containerID="f4760433c4d3e595ab4f1bbe427f619cf5766be16385fe803af0980ff8735001" Jan 30 17:16:16 crc kubenswrapper[4712]: I0130 17:16:16.302051 4712 scope.go:117] "RemoveContainer" containerID="fdd983c4f9b1c3eecfb7d3092b3771e39699da6f6a0e41d60aa0ced66fb42179" Jan 30 17:16:16 crc kubenswrapper[4712]: I0130 17:16:16.580951 4712 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-p8pht"] Jan 30 17:16:16 crc kubenswrapper[4712]: I0130 17:16:16.739387 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-56f8b66d48-7wr47"] Jan 30 17:16:16 crc kubenswrapper[4712]: W0130 17:16:16.782940 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod70154dd8_9d42_4a12_af9b_1be723ef892e.slice/crio-83e39abd704b4fd2a6badab202bb020c12313733ad1995a8eaa85b2d67860e22 WatchSource:0}: Error finding container 83e39abd704b4fd2a6badab202bb020c12313733ad1995a8eaa85b2d67860e22: Status 404 returned error can't find the container with id 83e39abd704b4fd2a6badab202bb020c12313733ad1995a8eaa85b2d67860e22 Jan 30 17:16:16 crc kubenswrapper[4712]: I0130 17:16:16.887214 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-64655dbc44-pvj2c"] Jan 30 17:16:16 crc kubenswrapper[4712]: I0130 17:16:16.957164 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64655dbc44-pvj2c" event={"ID":"6a28b495-ecf0-409e-9558-ee794a46dbd1","Type":"ContainerStarted","Data":"363a0f0e27bbe82bb0f65db800adefe878c8d2cbc6cca5716d011b37b1215a28"} Jan 30 17:16:16 crc kubenswrapper[4712]: I0130 17:16:16.958712 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-kmcjp" event={"ID":"540ab89b-e7b1-4c3f-ad6d-535ecaa5870c","Type":"ContainerStarted","Data":"f526d490a66a83ed7181076e7eb98322fd53568262094785b44fe65d4da82b1c"} Jan 30 17:16:16 crc kubenswrapper[4712]: I0130 17:16:16.991009 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56f8b66d48-7wr47" event={"ID":"70154dd8-9d42-4a12-af9b-1be723ef892e","Type":"ContainerStarted","Data":"83e39abd704b4fd2a6badab202bb020c12313733ad1995a8eaa85b2d67860e22"} Jan 30 17:16:16 crc kubenswrapper[4712]: I0130 17:16:16.996208 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p8pht" event={"ID":"ef70cf25-e984-4397-b60e-78199d8f41bf","Type":"ContainerStarted","Data":"af94bbecaafc1841a2b4b08248ab8b14db617c24e77a437a0d388b9ae23b35d5"} Jan 30 17:16:17 crc kubenswrapper[4712]: I0130 17:16:17.026632 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-kmcjp" podStartSLOduration=6.146249438 podStartE2EDuration="52.026610412s" podCreationTimestamp="2026-01-30 17:15:25 +0000 UTC" firstStartedPulling="2026-01-30 17:15:27.934910904 +0000 UTC m=+1264.841920373" lastFinishedPulling="2026-01-30 17:16:13.815271878 +0000 UTC m=+1310.722281347" observedRunningTime="2026-01-30 17:16:16.97909174 +0000 UTC m=+1313.886101209" watchObservedRunningTime="2026-01-30 17:16:17.026610412 +0000 UTC m=+1313.933619871" Jan 30 17:16:17 crc kubenswrapper[4712]: I0130 17:16:17.150385 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-kfjwh"] Jan 30 17:16:17 crc kubenswrapper[4712]: I0130 17:16:17.209269 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 17:16:17 crc kubenswrapper[4712]: I0130 17:16:17.248412 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5bcf445ccb-bcbn6"] Jan 30 17:16:17 crc kubenswrapper[4712]: I0130 17:16:17.781402 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 17:16:18 crc kubenswrapper[4712]: I0130 17:16:18.006458 4712 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64655dbc44-pvj2c" event={"ID":"6a28b495-ecf0-409e-9558-ee794a46dbd1","Type":"ContainerStarted","Data":"751f1acbcaf74d2cd4c5d7144ce60e8025852f21f1c040dd9e42d4aaad9e5cde"} Jan 30 17:16:18 crc kubenswrapper[4712]: I0130 17:16:18.013607 4712 generic.go:334] "Generic (PLEG): container finished" podID="c5c55ed2-b2de-42e8-865c-81436c478565" containerID="88508fc7b1c0195a291d5062ae8970729250329ea9a4ccb4af9a9a0d31cbd216" exitCode=0 Jan 30 17:16:18 crc kubenswrapper[4712]: I0130 17:16:18.013696 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-kfjwh" event={"ID":"c5c55ed2-b2de-42e8-865c-81436c478565","Type":"ContainerDied","Data":"88508fc7b1c0195a291d5062ae8970729250329ea9a4ccb4af9a9a0d31cbd216"} Jan 30 17:16:18 crc kubenswrapper[4712]: I0130 17:16:18.013728 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-kfjwh" event={"ID":"c5c55ed2-b2de-42e8-865c-81436c478565","Type":"ContainerStarted","Data":"f67907a3c00340a6eb29a28ca3946c7601c18d536492630e181adf60c842774b"} Jan 30 17:16:18 crc kubenswrapper[4712]: I0130 17:16:18.020260 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"20ecbbdb-700e-4050-973f-bb7a19df3869","Type":"ContainerStarted","Data":"4f89820a51c504af5397f85748d65dbc037279d4bdcd1c3dbfd07e1d0658e9b0"} Jan 30 17:16:18 crc kubenswrapper[4712]: I0130 17:16:18.023403 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56f8b66d48-7wr47" event={"ID":"70154dd8-9d42-4a12-af9b-1be723ef892e","Type":"ContainerStarted","Data":"e7f65e9725996b5430c165272394642af4b0191e34340a9577ad618356814e4b"} Jan 30 17:16:18 crc kubenswrapper[4712]: I0130 17:16:18.026259 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p8pht" event={"ID":"ef70cf25-e984-4397-b60e-78199d8f41bf","Type":"ContainerStarted","Data":"ca02bb819317c75624ea19803cd6304052cb736df006dc13789eab4dbce0eeed"} Jan 30 17:16:18 crc kubenswrapper[4712]: I0130 17:16:18.031348 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bcf445ccb-bcbn6" event={"ID":"c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b","Type":"ContainerStarted","Data":"5569d4136fe9d7d63fe0aa52a47ba16eaf29d4606753c61d2635db34b801a7e0"} Jan 30 17:16:18 crc kubenswrapper[4712]: I0130 17:16:18.031384 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bcf445ccb-bcbn6" event={"ID":"c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b","Type":"ContainerStarted","Data":"88aca9e4f92f59995481b2d64a1bb5e8750bec48d34eb3ac7b788b8fd9b8ffa3"} Jan 30 17:16:18 crc kubenswrapper[4712]: I0130 17:16:18.031394 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bcf445ccb-bcbn6" event={"ID":"c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b","Type":"ContainerStarted","Data":"ecc726bb15d350b09e9766eb9fee3af8422326ba365b84fcf1c99a22b6f1af61"} Jan 30 17:16:18 crc kubenswrapper[4712]: I0130 17:16:18.032091 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5bcf445ccb-bcbn6" Jan 30 17:16:18 crc kubenswrapper[4712]: I0130 17:16:18.040621 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9","Type":"ContainerStarted","Data":"0579252f9064d6cfdb6e5a88c4853124ac2989beb193c0eadbd0003c84c8e8c2"} Jan 30 17:16:18 crc kubenswrapper[4712]: I0130 17:16:18.070354 4712 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-p8pht" podStartSLOduration=28.07033743 podStartE2EDuration="28.07033743s" podCreationTimestamp="2026-01-30 17:15:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:16:18.066347314 +0000 UTC m=+1314.973356863" watchObservedRunningTime="2026-01-30 17:16:18.07033743 +0000 UTC m=+1314.977346899" Jan 30 17:16:18 crc kubenswrapper[4712]: I0130 17:16:18.132291 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5bcf445ccb-bcbn6" podStartSLOduration=3.132267399 podStartE2EDuration="3.132267399s" podCreationTimestamp="2026-01-30 17:16:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:16:18.088920147 +0000 UTC m=+1314.995929616" watchObservedRunningTime="2026-01-30 17:16:18.132267399 +0000 UTC m=+1315.039276868" Jan 30 17:16:18 crc kubenswrapper[4712]: I0130 17:16:18.310512 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-74d94b9977-8pbjf"] Jan 30 17:16:18 crc kubenswrapper[4712]: I0130 17:16:18.312593 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-74d94b9977-8pbjf" Jan 30 17:16:18 crc kubenswrapper[4712]: I0130 17:16:18.316048 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 30 17:16:18 crc kubenswrapper[4712]: I0130 17:16:18.317524 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 30 17:16:18 crc kubenswrapper[4712]: I0130 17:16:18.337380 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-74d94b9977-8pbjf"] Jan 30 17:16:18 crc kubenswrapper[4712]: I0130 17:16:18.425720 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcb48170-513b-48ad-a97b-0612fb16c386-ovndb-tls-certs\") pod \"neutron-74d94b9977-8pbjf\" (UID: \"dcb48170-513b-48ad-a97b-0612fb16c386\") " pod="openstack/neutron-74d94b9977-8pbjf" Jan 30 17:16:18 crc kubenswrapper[4712]: I0130 17:16:18.425769 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcb48170-513b-48ad-a97b-0612fb16c386-combined-ca-bundle\") pod \"neutron-74d94b9977-8pbjf\" (UID: \"dcb48170-513b-48ad-a97b-0612fb16c386\") " pod="openstack/neutron-74d94b9977-8pbjf" Jan 30 17:16:18 crc kubenswrapper[4712]: I0130 17:16:18.425808 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcb48170-513b-48ad-a97b-0612fb16c386-internal-tls-certs\") pod \"neutron-74d94b9977-8pbjf\" (UID: \"dcb48170-513b-48ad-a97b-0612fb16c386\") " pod="openstack/neutron-74d94b9977-8pbjf" Jan 30 17:16:18 crc kubenswrapper[4712]: I0130 17:16:18.425862 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dcb48170-513b-48ad-a97b-0612fb16c386-httpd-config\") pod \"neutron-74d94b9977-8pbjf\" (UID: \"dcb48170-513b-48ad-a97b-0612fb16c386\") " pod="openstack/neutron-74d94b9977-8pbjf" Jan 30 17:16:18 crc kubenswrapper[4712]: I0130 
17:16:18.425913 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5fsp\" (UniqueName: \"kubernetes.io/projected/dcb48170-513b-48ad-a97b-0612fb16c386-kube-api-access-j5fsp\") pod \"neutron-74d94b9977-8pbjf\" (UID: \"dcb48170-513b-48ad-a97b-0612fb16c386\") " pod="openstack/neutron-74d94b9977-8pbjf" Jan 30 17:16:18 crc kubenswrapper[4712]: I0130 17:16:18.425929 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcb48170-513b-48ad-a97b-0612fb16c386-public-tls-certs\") pod \"neutron-74d94b9977-8pbjf\" (UID: \"dcb48170-513b-48ad-a97b-0612fb16c386\") " pod="openstack/neutron-74d94b9977-8pbjf" Jan 30 17:16:18 crc kubenswrapper[4712]: I0130 17:16:18.425947 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dcb48170-513b-48ad-a97b-0612fb16c386-config\") pod \"neutron-74d94b9977-8pbjf\" (UID: \"dcb48170-513b-48ad-a97b-0612fb16c386\") " pod="openstack/neutron-74d94b9977-8pbjf" Jan 30 17:16:18 crc kubenswrapper[4712]: I0130 17:16:18.527528 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcb48170-513b-48ad-a97b-0612fb16c386-internal-tls-certs\") pod \"neutron-74d94b9977-8pbjf\" (UID: \"dcb48170-513b-48ad-a97b-0612fb16c386\") " pod="openstack/neutron-74d94b9977-8pbjf" Jan 30 17:16:18 crc kubenswrapper[4712]: I0130 17:16:18.527620 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dcb48170-513b-48ad-a97b-0612fb16c386-httpd-config\") pod \"neutron-74d94b9977-8pbjf\" (UID: \"dcb48170-513b-48ad-a97b-0612fb16c386\") " pod="openstack/neutron-74d94b9977-8pbjf" Jan 30 17:16:18 crc kubenswrapper[4712]: I0130 17:16:18.527682 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5fsp\" (UniqueName: \"kubernetes.io/projected/dcb48170-513b-48ad-a97b-0612fb16c386-kube-api-access-j5fsp\") pod \"neutron-74d94b9977-8pbjf\" (UID: \"dcb48170-513b-48ad-a97b-0612fb16c386\") " pod="openstack/neutron-74d94b9977-8pbjf" Jan 30 17:16:18 crc kubenswrapper[4712]: I0130 17:16:18.527704 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcb48170-513b-48ad-a97b-0612fb16c386-public-tls-certs\") pod \"neutron-74d94b9977-8pbjf\" (UID: \"dcb48170-513b-48ad-a97b-0612fb16c386\") " pod="openstack/neutron-74d94b9977-8pbjf" Jan 30 17:16:18 crc kubenswrapper[4712]: I0130 17:16:18.527723 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dcb48170-513b-48ad-a97b-0612fb16c386-config\") pod \"neutron-74d94b9977-8pbjf\" (UID: \"dcb48170-513b-48ad-a97b-0612fb16c386\") " pod="openstack/neutron-74d94b9977-8pbjf" Jan 30 17:16:18 crc kubenswrapper[4712]: I0130 17:16:18.527773 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcb48170-513b-48ad-a97b-0612fb16c386-ovndb-tls-certs\") pod \"neutron-74d94b9977-8pbjf\" (UID: \"dcb48170-513b-48ad-a97b-0612fb16c386\") " pod="openstack/neutron-74d94b9977-8pbjf" Jan 30 17:16:18 crc kubenswrapper[4712]: I0130 17:16:18.527812 4712 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcb48170-513b-48ad-a97b-0612fb16c386-combined-ca-bundle\") pod \"neutron-74d94b9977-8pbjf\" (UID: \"dcb48170-513b-48ad-a97b-0612fb16c386\") " pod="openstack/neutron-74d94b9977-8pbjf" Jan 30 17:16:18 crc kubenswrapper[4712]: I0130 17:16:18.547146 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcb48170-513b-48ad-a97b-0612fb16c386-ovndb-tls-certs\") pod \"neutron-74d94b9977-8pbjf\" (UID: \"dcb48170-513b-48ad-a97b-0612fb16c386\") " pod="openstack/neutron-74d94b9977-8pbjf" Jan 30 17:16:18 crc kubenswrapper[4712]: I0130 17:16:18.555704 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/dcb48170-513b-48ad-a97b-0612fb16c386-config\") pod \"neutron-74d94b9977-8pbjf\" (UID: \"dcb48170-513b-48ad-a97b-0612fb16c386\") " pod="openstack/neutron-74d94b9977-8pbjf" Jan 30 17:16:18 crc kubenswrapper[4712]: I0130 17:16:18.558351 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dcb48170-513b-48ad-a97b-0612fb16c386-httpd-config\") pod \"neutron-74d94b9977-8pbjf\" (UID: \"dcb48170-513b-48ad-a97b-0612fb16c386\") " pod="openstack/neutron-74d94b9977-8pbjf" Jan 30 17:16:18 crc kubenswrapper[4712]: I0130 17:16:18.558383 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcb48170-513b-48ad-a97b-0612fb16c386-public-tls-certs\") pod \"neutron-74d94b9977-8pbjf\" (UID: \"dcb48170-513b-48ad-a97b-0612fb16c386\") " pod="openstack/neutron-74d94b9977-8pbjf" Jan 30 17:16:18 crc kubenswrapper[4712]: I0130 17:16:18.558916 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcb48170-513b-48ad-a97b-0612fb16c386-internal-tls-certs\") pod \"neutron-74d94b9977-8pbjf\" (UID: \"dcb48170-513b-48ad-a97b-0612fb16c386\") " pod="openstack/neutron-74d94b9977-8pbjf" Jan 30 17:16:18 crc kubenswrapper[4712]: I0130 17:16:18.559412 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcb48170-513b-48ad-a97b-0612fb16c386-combined-ca-bundle\") pod \"neutron-74d94b9977-8pbjf\" (UID: \"dcb48170-513b-48ad-a97b-0612fb16c386\") " pod="openstack/neutron-74d94b9977-8pbjf" Jan 30 17:16:18 crc kubenswrapper[4712]: I0130 17:16:18.561839 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5fsp\" (UniqueName: \"kubernetes.io/projected/dcb48170-513b-48ad-a97b-0612fb16c386-kube-api-access-j5fsp\") pod \"neutron-74d94b9977-8pbjf\" (UID: \"dcb48170-513b-48ad-a97b-0612fb16c386\") " pod="openstack/neutron-74d94b9977-8pbjf" Jan 30 17:16:18 crc kubenswrapper[4712]: I0130 17:16:18.654779 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-74d94b9977-8pbjf" Jan 30 17:16:19 crc kubenswrapper[4712]: I0130 17:16:19.062397 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64655dbc44-pvj2c" event={"ID":"6a28b495-ecf0-409e-9558-ee794a46dbd1","Type":"ContainerStarted","Data":"0637c6cf8b9543ce9d09aa9b237dd18cd14c4de10f84d30d44b4a331a3589fa8"} Jan 30 17:16:19 crc kubenswrapper[4712]: I0130 17:16:19.073056 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-kfjwh" event={"ID":"c5c55ed2-b2de-42e8-865c-81436c478565","Type":"ContainerStarted","Data":"b3f40c61d2fcca590f3e4c1abed03bbdd2ff9b45a07dc15ed2dfbe2c214098f0"} Jan 30 17:16:19 crc kubenswrapper[4712]: I0130 17:16:19.073138 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-55f844cf75-kfjwh" Jan 30 17:16:19 crc kubenswrapper[4712]: I0130 17:16:19.105639 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-64655dbc44-pvj2c" podStartSLOduration=44.504627181000004 podStartE2EDuration="45.105623987s" podCreationTimestamp="2026-01-30 17:15:34 +0000 UTC" firstStartedPulling="2026-01-30 17:16:16.878185554 +0000 UTC m=+1313.785195023" lastFinishedPulling="2026-01-30 17:16:17.47918236 +0000 UTC m=+1314.386191829" observedRunningTime="2026-01-30 17:16:19.100601226 +0000 UTC m=+1316.007610695" watchObservedRunningTime="2026-01-30 17:16:19.105623987 +0000 UTC m=+1316.012633456" Jan 30 17:16:19 crc kubenswrapper[4712]: I0130 17:16:19.113013 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"20ecbbdb-700e-4050-973f-bb7a19df3869","Type":"ContainerStarted","Data":"0b65a919d5fd7848033183bdcaf4c9c29a02c8eb77e4d57633089c649a534089"} Jan 30 17:16:19 crc kubenswrapper[4712]: I0130 17:16:19.115774 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56f8b66d48-7wr47" event={"ID":"70154dd8-9d42-4a12-af9b-1be723ef892e","Type":"ContainerStarted","Data":"ca8d05a9668753b2823d10544b8f8bbf3f28554634a29614ced82a2e411f15e2"} Jan 30 17:16:19 crc kubenswrapper[4712]: I0130 17:16:19.122427 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-55f844cf75-kfjwh" podStartSLOduration=4.12241087 podStartE2EDuration="4.12241087s" podCreationTimestamp="2026-01-30 17:16:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:16:19.120817841 +0000 UTC m=+1316.027827310" watchObservedRunningTime="2026-01-30 17:16:19.12241087 +0000 UTC m=+1316.029420339" Jan 30 17:16:19 crc kubenswrapper[4712]: I0130 17:16:19.134091 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9","Type":"ContainerStarted","Data":"55c7779ed294aab7b328c07c7eb3bab66291697e1db3139b1953c930c941b9fa"} Jan 30 17:16:19 crc kubenswrapper[4712]: I0130 17:16:19.153566 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-56f8b66d48-7wr47" podStartSLOduration=44.555444862 podStartE2EDuration="45.153545518s" podCreationTimestamp="2026-01-30 17:15:34 +0000 UTC" firstStartedPulling="2026-01-30 17:16:16.878590334 +0000 UTC m=+1313.785599813" lastFinishedPulling="2026-01-30 17:16:17.47669101 +0000 UTC m=+1314.383700469" observedRunningTime="2026-01-30 17:16:19.145272709 +0000 UTC m=+1316.052282178" 
watchObservedRunningTime="2026-01-30 17:16:19.153545518 +0000 UTC m=+1316.060554997" Jan 30 17:16:19 crc kubenswrapper[4712]: I0130 17:16:19.444757 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-74d94b9977-8pbjf"] Jan 30 17:16:19 crc kubenswrapper[4712]: W0130 17:16:19.459744 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddcb48170_513b_48ad_a97b_0612fb16c386.slice/crio-5ed0b14a7ed25deaca84f26bbe94060f70bf322695a4c3c239675397b94feec2 WatchSource:0}: Error finding container 5ed0b14a7ed25deaca84f26bbe94060f70bf322695a4c3c239675397b94feec2: Status 404 returned error can't find the container with id 5ed0b14a7ed25deaca84f26bbe94060f70bf322695a4c3c239675397b94feec2 Jan 30 17:16:20 crc kubenswrapper[4712]: I0130 17:16:20.167246 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9","Type":"ContainerStarted","Data":"17d9748dfc29f0d93829a519d709a6dc54f713414c4b13f981fee1b67535dad9"} Jan 30 17:16:20 crc kubenswrapper[4712]: I0130 17:16:20.194459 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74d94b9977-8pbjf" event={"ID":"dcb48170-513b-48ad-a97b-0612fb16c386","Type":"ContainerStarted","Data":"f03aa97520b2a348873bb997ee14bb11f4f74b129d84afa15b6ffa5b2046f634"} Jan 30 17:16:20 crc kubenswrapper[4712]: I0130 17:16:20.194519 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74d94b9977-8pbjf" event={"ID":"dcb48170-513b-48ad-a97b-0612fb16c386","Type":"ContainerStarted","Data":"5ed0b14a7ed25deaca84f26bbe94060f70bf322695a4c3c239675397b94feec2"} Jan 30 17:16:21 crc kubenswrapper[4712]: I0130 17:16:21.206613 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"20ecbbdb-700e-4050-973f-bb7a19df3869","Type":"ContainerStarted","Data":"65a02972203ca016739170292f0a75267baec64abb325f576c51718e5475b326"} Jan 30 17:16:21 crc kubenswrapper[4712]: I0130 17:16:21.213276 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74d94b9977-8pbjf" event={"ID":"dcb48170-513b-48ad-a97b-0612fb16c386","Type":"ContainerStarted","Data":"715cc251bc6e08124e523fcd00030cab0baf4ab189117fa8fe39cb5b03275996"} Jan 30 17:16:21 crc kubenswrapper[4712]: I0130 17:16:21.240904 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=38.24078409 podStartE2EDuration="38.24078409s" podCreationTimestamp="2026-01-30 17:15:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:16:21.227947752 +0000 UTC m=+1318.134957241" watchObservedRunningTime="2026-01-30 17:16:21.24078409 +0000 UTC m=+1318.147793559" Jan 30 17:16:21 crc kubenswrapper[4712]: I0130 17:16:21.253992 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=32.253971408 podStartE2EDuration="32.253971408s" podCreationTimestamp="2026-01-30 17:15:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:16:21.251184961 +0000 UTC m=+1318.158194440" watchObservedRunningTime="2026-01-30 17:16:21.253971408 +0000 UTC m=+1318.160980877" Jan 30 17:16:22 crc kubenswrapper[4712]: I0130 
17:16:22.224283 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-74d94b9977-8pbjf" Jan 30 17:16:23 crc kubenswrapper[4712]: I0130 17:16:23.232404 4712 generic.go:334] "Generic (PLEG): container finished" podID="540ab89b-e7b1-4c3f-ad6d-535ecaa5870c" containerID="f526d490a66a83ed7181076e7eb98322fd53568262094785b44fe65d4da82b1c" exitCode=0 Jan 30 17:16:23 crc kubenswrapper[4712]: I0130 17:16:23.232477 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-kmcjp" event={"ID":"540ab89b-e7b1-4c3f-ad6d-535ecaa5870c","Type":"ContainerDied","Data":"f526d490a66a83ed7181076e7eb98322fd53568262094785b44fe65d4da82b1c"} Jan 30 17:16:23 crc kubenswrapper[4712]: I0130 17:16:23.254234 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-74d94b9977-8pbjf" podStartSLOduration=5.254218609 podStartE2EDuration="5.254218609s" podCreationTimestamp="2026-01-30 17:16:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:16:21.275211508 +0000 UTC m=+1318.182220977" watchObservedRunningTime="2026-01-30 17:16:23.254218609 +0000 UTC m=+1320.161228068" Jan 30 17:16:24 crc kubenswrapper[4712]: I0130 17:16:24.552160 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 17:16:24 crc kubenswrapper[4712]: I0130 17:16:24.552199 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 30 17:16:24 crc kubenswrapper[4712]: I0130 17:16:24.610705 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 17:16:24 crc kubenswrapper[4712]: I0130 17:16:24.626147 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 30 17:16:25 crc kubenswrapper[4712]: I0130 17:16:25.072846 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-56f8b66d48-7wr47" Jan 30 17:16:25 crc kubenswrapper[4712]: I0130 17:16:25.073958 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-56f8b66d48-7wr47" Jan 30 17:16:25 crc kubenswrapper[4712]: I0130 17:16:25.248585 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 17:16:25 crc kubenswrapper[4712]: I0130 17:16:25.248621 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 17:16:25 crc kubenswrapper[4712]: I0130 17:16:25.353286 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-64655dbc44-pvj2c" Jan 30 17:16:25 crc kubenswrapper[4712]: I0130 17:16:25.353333 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-64655dbc44-pvj2c" Jan 30 17:16:25 crc kubenswrapper[4712]: I0130 17:16:25.582037 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-55f844cf75-kfjwh" Jan 30 17:16:25 crc kubenswrapper[4712]: I0130 17:16:25.664866 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-qkp54"] Jan 30 17:16:25 crc kubenswrapper[4712]: I0130 17:16:25.665190 4712 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-785d8bcb8c-qkp54" podUID="d421f208-6974-48b9-9d8d-abe468e07c18" containerName="dnsmasq-dns" containerID="cri-o://1ec3b768e458d6b99a2c9cd178dc132b8c34ef0df5e04ba1182fb0f0843f9d07" gracePeriod=10 Jan 30 17:16:26 crc kubenswrapper[4712]: I0130 17:16:26.048298 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-kmcjp" Jan 30 17:16:26 crc kubenswrapper[4712]: I0130 17:16:26.165751 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/540ab89b-e7b1-4c3f-ad6d-535ecaa5870c-scripts\") pod \"540ab89b-e7b1-4c3f-ad6d-535ecaa5870c\" (UID: \"540ab89b-e7b1-4c3f-ad6d-535ecaa5870c\") " Jan 30 17:16:26 crc kubenswrapper[4712]: I0130 17:16:26.165822 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/540ab89b-e7b1-4c3f-ad6d-535ecaa5870c-combined-ca-bundle\") pod \"540ab89b-e7b1-4c3f-ad6d-535ecaa5870c\" (UID: \"540ab89b-e7b1-4c3f-ad6d-535ecaa5870c\") " Jan 30 17:16:26 crc kubenswrapper[4712]: I0130 17:16:26.165885 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/540ab89b-e7b1-4c3f-ad6d-535ecaa5870c-config-data\") pod \"540ab89b-e7b1-4c3f-ad6d-535ecaa5870c\" (UID: \"540ab89b-e7b1-4c3f-ad6d-535ecaa5870c\") " Jan 30 17:16:26 crc kubenswrapper[4712]: I0130 17:16:26.165965 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/540ab89b-e7b1-4c3f-ad6d-535ecaa5870c-logs\") pod \"540ab89b-e7b1-4c3f-ad6d-535ecaa5870c\" (UID: \"540ab89b-e7b1-4c3f-ad6d-535ecaa5870c\") " Jan 30 17:16:26 crc kubenswrapper[4712]: I0130 17:16:26.166004 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mkvtw\" (UniqueName: \"kubernetes.io/projected/540ab89b-e7b1-4c3f-ad6d-535ecaa5870c-kube-api-access-mkvtw\") pod \"540ab89b-e7b1-4c3f-ad6d-535ecaa5870c\" (UID: \"540ab89b-e7b1-4c3f-ad6d-535ecaa5870c\") " Jan 30 17:16:26 crc kubenswrapper[4712]: I0130 17:16:26.166400 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/540ab89b-e7b1-4c3f-ad6d-535ecaa5870c-logs" (OuterVolumeSpecName: "logs") pod "540ab89b-e7b1-4c3f-ad6d-535ecaa5870c" (UID: "540ab89b-e7b1-4c3f-ad6d-535ecaa5870c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:16:26 crc kubenswrapper[4712]: I0130 17:16:26.171405 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/540ab89b-e7b1-4c3f-ad6d-535ecaa5870c-scripts" (OuterVolumeSpecName: "scripts") pod "540ab89b-e7b1-4c3f-ad6d-535ecaa5870c" (UID: "540ab89b-e7b1-4c3f-ad6d-535ecaa5870c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:16:26 crc kubenswrapper[4712]: I0130 17:16:26.173153 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/540ab89b-e7b1-4c3f-ad6d-535ecaa5870c-kube-api-access-mkvtw" (OuterVolumeSpecName: "kube-api-access-mkvtw") pod "540ab89b-e7b1-4c3f-ad6d-535ecaa5870c" (UID: "540ab89b-e7b1-4c3f-ad6d-535ecaa5870c"). InnerVolumeSpecName "kube-api-access-mkvtw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:16:26 crc kubenswrapper[4712]: I0130 17:16:26.193979 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/540ab89b-e7b1-4c3f-ad6d-535ecaa5870c-config-data" (OuterVolumeSpecName: "config-data") pod "540ab89b-e7b1-4c3f-ad6d-535ecaa5870c" (UID: "540ab89b-e7b1-4c3f-ad6d-535ecaa5870c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:16:26 crc kubenswrapper[4712]: I0130 17:16:26.194463 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/540ab89b-e7b1-4c3f-ad6d-535ecaa5870c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "540ab89b-e7b1-4c3f-ad6d-535ecaa5870c" (UID: "540ab89b-e7b1-4c3f-ad6d-535ecaa5870c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:16:26 crc kubenswrapper[4712]: I0130 17:16:26.256722 4712 generic.go:334] "Generic (PLEG): container finished" podID="d421f208-6974-48b9-9d8d-abe468e07c18" containerID="1ec3b768e458d6b99a2c9cd178dc132b8c34ef0df5e04ba1182fb0f0843f9d07" exitCode=0 Jan 30 17:16:26 crc kubenswrapper[4712]: I0130 17:16:26.256821 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-qkp54" event={"ID":"d421f208-6974-48b9-9d8d-abe468e07c18","Type":"ContainerDied","Data":"1ec3b768e458d6b99a2c9cd178dc132b8c34ef0df5e04ba1182fb0f0843f9d07"} Jan 30 17:16:26 crc kubenswrapper[4712]: I0130 17:16:26.259495 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-kmcjp" Jan 30 17:16:26 crc kubenswrapper[4712]: I0130 17:16:26.259600 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-kmcjp" event={"ID":"540ab89b-e7b1-4c3f-ad6d-535ecaa5870c","Type":"ContainerDied","Data":"122970a8f49c277106e2d38de8082f91a67f0ca52f3f0c91f6a6e1861a371c5d"} Jan 30 17:16:26 crc kubenswrapper[4712]: I0130 17:16:26.260199 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="122970a8f49c277106e2d38de8082f91a67f0ca52f3f0c91f6a6e1861a371c5d" Jan 30 17:16:26 crc kubenswrapper[4712]: I0130 17:16:26.267582 4712 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/540ab89b-e7b1-4c3f-ad6d-535ecaa5870c-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:26 crc kubenswrapper[4712]: I0130 17:16:26.267604 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/540ab89b-e7b1-4c3f-ad6d-535ecaa5870c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:26 crc kubenswrapper[4712]: I0130 17:16:26.267616 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/540ab89b-e7b1-4c3f-ad6d-535ecaa5870c-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:26 crc kubenswrapper[4712]: I0130 17:16:26.267626 4712 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/540ab89b-e7b1-4c3f-ad6d-535ecaa5870c-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:26 crc kubenswrapper[4712]: I0130 17:16:26.267633 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mkvtw\" (UniqueName: \"kubernetes.io/projected/540ab89b-e7b1-4c3f-ad6d-535ecaa5870c-kube-api-access-mkvtw\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:26 crc 
kubenswrapper[4712]: I0130 17:16:26.685168 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-qkp54" Jan 30 17:16:26 crc kubenswrapper[4712]: I0130 17:16:26.789495 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d421f208-6974-48b9-9d8d-abe468e07c18-ovsdbserver-sb\") pod \"d421f208-6974-48b9-9d8d-abe468e07c18\" (UID: \"d421f208-6974-48b9-9d8d-abe468e07c18\") " Jan 30 17:16:26 crc kubenswrapper[4712]: I0130 17:16:26.789908 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d421f208-6974-48b9-9d8d-abe468e07c18-dns-swift-storage-0\") pod \"d421f208-6974-48b9-9d8d-abe468e07c18\" (UID: \"d421f208-6974-48b9-9d8d-abe468e07c18\") " Jan 30 17:16:26 crc kubenswrapper[4712]: I0130 17:16:26.790020 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d421f208-6974-48b9-9d8d-abe468e07c18-ovsdbserver-nb\") pod \"d421f208-6974-48b9-9d8d-abe468e07c18\" (UID: \"d421f208-6974-48b9-9d8d-abe468e07c18\") " Jan 30 17:16:26 crc kubenswrapper[4712]: I0130 17:16:26.790103 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d421f208-6974-48b9-9d8d-abe468e07c18-dns-svc\") pod \"d421f208-6974-48b9-9d8d-abe468e07c18\" (UID: \"d421f208-6974-48b9-9d8d-abe468e07c18\") " Jan 30 17:16:26 crc kubenswrapper[4712]: I0130 17:16:26.790326 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d421f208-6974-48b9-9d8d-abe468e07c18-config\") pod \"d421f208-6974-48b9-9d8d-abe468e07c18\" (UID: \"d421f208-6974-48b9-9d8d-abe468e07c18\") " Jan 30 17:16:26 crc kubenswrapper[4712]: I0130 17:16:26.790452 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tmkh8\" (UniqueName: \"kubernetes.io/projected/d421f208-6974-48b9-9d8d-abe468e07c18-kube-api-access-tmkh8\") pod \"d421f208-6974-48b9-9d8d-abe468e07c18\" (UID: \"d421f208-6974-48b9-9d8d-abe468e07c18\") " Jan 30 17:16:26 crc kubenswrapper[4712]: I0130 17:16:26.795041 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d421f208-6974-48b9-9d8d-abe468e07c18-kube-api-access-tmkh8" (OuterVolumeSpecName: "kube-api-access-tmkh8") pod "d421f208-6974-48b9-9d8d-abe468e07c18" (UID: "d421f208-6974-48b9-9d8d-abe468e07c18"). InnerVolumeSpecName "kube-api-access-tmkh8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:16:26 crc kubenswrapper[4712]: I0130 17:16:26.874128 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d421f208-6974-48b9-9d8d-abe468e07c18-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "d421f208-6974-48b9-9d8d-abe468e07c18" (UID: "d421f208-6974-48b9-9d8d-abe468e07c18"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:16:26 crc kubenswrapper[4712]: I0130 17:16:26.892758 4712 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/d421f208-6974-48b9-9d8d-abe468e07c18-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:26 crc kubenswrapper[4712]: I0130 17:16:26.892785 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tmkh8\" (UniqueName: \"kubernetes.io/projected/d421f208-6974-48b9-9d8d-abe468e07c18-kube-api-access-tmkh8\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:26 crc kubenswrapper[4712]: I0130 17:16:26.918339 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d421f208-6974-48b9-9d8d-abe468e07c18-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "d421f208-6974-48b9-9d8d-abe468e07c18" (UID: "d421f208-6974-48b9-9d8d-abe468e07c18"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:16:26 crc kubenswrapper[4712]: I0130 17:16:26.926979 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d421f208-6974-48b9-9d8d-abe468e07c18-config" (OuterVolumeSpecName: "config") pod "d421f208-6974-48b9-9d8d-abe468e07c18" (UID: "d421f208-6974-48b9-9d8d-abe468e07c18"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:16:26 crc kubenswrapper[4712]: I0130 17:16:26.930129 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d421f208-6974-48b9-9d8d-abe468e07c18-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d421f208-6974-48b9-9d8d-abe468e07c18" (UID: "d421f208-6974-48b9-9d8d-abe468e07c18"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:16:26 crc kubenswrapper[4712]: I0130 17:16:26.940922 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d421f208-6974-48b9-9d8d-abe468e07c18-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "d421f208-6974-48b9-9d8d-abe468e07c18" (UID: "d421f208-6974-48b9-9d8d-abe468e07c18"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:16:26 crc kubenswrapper[4712]: I0130 17:16:26.993859 4712 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/d421f208-6974-48b9-9d8d-abe468e07c18-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:26 crc kubenswrapper[4712]: I0130 17:16:26.994061 4712 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/d421f208-6974-48b9-9d8d-abe468e07c18-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:26 crc kubenswrapper[4712]: I0130 17:16:26.994163 4712 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d421f208-6974-48b9-9d8d-abe468e07c18-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:26 crc kubenswrapper[4712]: I0130 17:16:26.994239 4712 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d421f208-6974-48b9-9d8d-abe468e07c18-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.171407 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6dd7989794-hdn5g"] Jan 30 17:16:27 crc kubenswrapper[4712]: E0130 17:16:27.172086 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="540ab89b-e7b1-4c3f-ad6d-535ecaa5870c" containerName="placement-db-sync" Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.172203 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="540ab89b-e7b1-4c3f-ad6d-535ecaa5870c" containerName="placement-db-sync" Jan 30 17:16:27 crc kubenswrapper[4712]: E0130 17:16:27.172307 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d421f208-6974-48b9-9d8d-abe468e07c18" containerName="dnsmasq-dns" Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.172370 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="d421f208-6974-48b9-9d8d-abe468e07c18" containerName="dnsmasq-dns" Jan 30 17:16:27 crc kubenswrapper[4712]: E0130 17:16:27.172445 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d421f208-6974-48b9-9d8d-abe468e07c18" containerName="init" Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.172518 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="d421f208-6974-48b9-9d8d-abe468e07c18" containerName="init" Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.172817 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="d421f208-6974-48b9-9d8d-abe468e07c18" containerName="dnsmasq-dns" Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.172934 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="540ab89b-e7b1-4c3f-ad6d-535ecaa5870c" containerName="placement-db-sync" Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.174199 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6dd7989794-hdn5g" Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.180180 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6dd7989794-hdn5g"] Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.181112 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-fsdvc" Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.181343 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.181458 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.182854 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.185669 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.268222 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7krdw" event={"ID":"6c4a03a4-e80d-4605-990f-a242222558bb","Type":"ContainerStarted","Data":"781f6a5a40a5b3ee9028c8dbd3c9194eaa45a80c3b80beec710f1fe06b502320"} Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.271235 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-785d8bcb8c-qkp54" event={"ID":"d421f208-6974-48b9-9d8d-abe468e07c18","Type":"ContainerDied","Data":"85edf15f372952c6b3457892574fa91a206384e26f3013b471f848e2d4a89f01"} Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.271293 4712 scope.go:117] "RemoveContainer" containerID="1ec3b768e458d6b99a2c9cd178dc132b8c34ef0df5e04ba1182fb0f0843f9d07" Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.271516 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-785d8bcb8c-qkp54" Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.276467 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3896ac30-4d2d-4bc2-bfc3-4352d7d586de","Type":"ContainerStarted","Data":"447588efa01542b5e3b21541af532dd0c5c1eda26dca0c7dbd0a6efee2291c39"} Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.295982 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-7krdw" podStartSLOduration=3.824297492 podStartE2EDuration="1m2.295963633s" podCreationTimestamp="2026-01-30 17:15:25 +0000 UTC" firstStartedPulling="2026-01-30 17:15:28.185111698 +0000 UTC m=+1265.092121157" lastFinishedPulling="2026-01-30 17:16:26.656777829 +0000 UTC m=+1323.563787298" observedRunningTime="2026-01-30 17:16:27.294906067 +0000 UTC m=+1324.201915536" watchObservedRunningTime="2026-01-30 17:16:27.295963633 +0000 UTC m=+1324.202973102" Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.300336 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f107ebd6-3359-4995-9a79-70e9719bbbf2-internal-tls-certs\") pod \"placement-6dd7989794-hdn5g\" (UID: \"f107ebd6-3359-4995-9a79-70e9719bbbf2\") " pod="openstack/placement-6dd7989794-hdn5g" Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.300389 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5wk2\" (UniqueName: \"kubernetes.io/projected/f107ebd6-3359-4995-9a79-70e9719bbbf2-kube-api-access-h5wk2\") pod \"placement-6dd7989794-hdn5g\" (UID: \"f107ebd6-3359-4995-9a79-70e9719bbbf2\") " pod="openstack/placement-6dd7989794-hdn5g" Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.300431 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f107ebd6-3359-4995-9a79-70e9719bbbf2-combined-ca-bundle\") pod \"placement-6dd7989794-hdn5g\" (UID: \"f107ebd6-3359-4995-9a79-70e9719bbbf2\") " pod="openstack/placement-6dd7989794-hdn5g" Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.300461 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f107ebd6-3359-4995-9a79-70e9719bbbf2-public-tls-certs\") pod \"placement-6dd7989794-hdn5g\" (UID: \"f107ebd6-3359-4995-9a79-70e9719bbbf2\") " pod="openstack/placement-6dd7989794-hdn5g" Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.300487 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f107ebd6-3359-4995-9a79-70e9719bbbf2-scripts\") pod \"placement-6dd7989794-hdn5g\" (UID: \"f107ebd6-3359-4995-9a79-70e9719bbbf2\") " pod="openstack/placement-6dd7989794-hdn5g" Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.300505 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f107ebd6-3359-4995-9a79-70e9719bbbf2-config-data\") pod \"placement-6dd7989794-hdn5g\" (UID: \"f107ebd6-3359-4995-9a79-70e9719bbbf2\") " pod="openstack/placement-6dd7989794-hdn5g" Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.300528 4712 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f107ebd6-3359-4995-9a79-70e9719bbbf2-logs\") pod \"placement-6dd7989794-hdn5g\" (UID: \"f107ebd6-3359-4995-9a79-70e9719bbbf2\") " pod="openstack/placement-6dd7989794-hdn5g" Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.320434 4712 scope.go:117] "RemoveContainer" containerID="5a9677765a021b2ac0bb10f374fc8885b1893e6c3633071e44eb831583f8d8f5" Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.345048 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-qkp54"] Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.356747 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-785d8bcb8c-qkp54"] Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.401999 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f107ebd6-3359-4995-9a79-70e9719bbbf2-internal-tls-certs\") pod \"placement-6dd7989794-hdn5g\" (UID: \"f107ebd6-3359-4995-9a79-70e9719bbbf2\") " pod="openstack/placement-6dd7989794-hdn5g" Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.402075 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5wk2\" (UniqueName: \"kubernetes.io/projected/f107ebd6-3359-4995-9a79-70e9719bbbf2-kube-api-access-h5wk2\") pod \"placement-6dd7989794-hdn5g\" (UID: \"f107ebd6-3359-4995-9a79-70e9719bbbf2\") " pod="openstack/placement-6dd7989794-hdn5g" Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.402140 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f107ebd6-3359-4995-9a79-70e9719bbbf2-combined-ca-bundle\") pod \"placement-6dd7989794-hdn5g\" (UID: \"f107ebd6-3359-4995-9a79-70e9719bbbf2\") " pod="openstack/placement-6dd7989794-hdn5g" Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.402175 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f107ebd6-3359-4995-9a79-70e9719bbbf2-public-tls-certs\") pod \"placement-6dd7989794-hdn5g\" (UID: \"f107ebd6-3359-4995-9a79-70e9719bbbf2\") " pod="openstack/placement-6dd7989794-hdn5g" Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.402204 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f107ebd6-3359-4995-9a79-70e9719bbbf2-scripts\") pod \"placement-6dd7989794-hdn5g\" (UID: \"f107ebd6-3359-4995-9a79-70e9719bbbf2\") " pod="openstack/placement-6dd7989794-hdn5g" Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.402221 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f107ebd6-3359-4995-9a79-70e9719bbbf2-config-data\") pod \"placement-6dd7989794-hdn5g\" (UID: \"f107ebd6-3359-4995-9a79-70e9719bbbf2\") " pod="openstack/placement-6dd7989794-hdn5g" Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.402245 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f107ebd6-3359-4995-9a79-70e9719bbbf2-logs\") pod \"placement-6dd7989794-hdn5g\" (UID: \"f107ebd6-3359-4995-9a79-70e9719bbbf2\") " pod="openstack/placement-6dd7989794-hdn5g" Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.403833 4712 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f107ebd6-3359-4995-9a79-70e9719bbbf2-logs\") pod \"placement-6dd7989794-hdn5g\" (UID: \"f107ebd6-3359-4995-9a79-70e9719bbbf2\") " pod="openstack/placement-6dd7989794-hdn5g" Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.409636 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f107ebd6-3359-4995-9a79-70e9719bbbf2-internal-tls-certs\") pod \"placement-6dd7989794-hdn5g\" (UID: \"f107ebd6-3359-4995-9a79-70e9719bbbf2\") " pod="openstack/placement-6dd7989794-hdn5g" Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.411665 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f107ebd6-3359-4995-9a79-70e9719bbbf2-public-tls-certs\") pod \"placement-6dd7989794-hdn5g\" (UID: \"f107ebd6-3359-4995-9a79-70e9719bbbf2\") " pod="openstack/placement-6dd7989794-hdn5g" Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.414319 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f107ebd6-3359-4995-9a79-70e9719bbbf2-combined-ca-bundle\") pod \"placement-6dd7989794-hdn5g\" (UID: \"f107ebd6-3359-4995-9a79-70e9719bbbf2\") " pod="openstack/placement-6dd7989794-hdn5g" Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.414736 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f107ebd6-3359-4995-9a79-70e9719bbbf2-config-data\") pod \"placement-6dd7989794-hdn5g\" (UID: \"f107ebd6-3359-4995-9a79-70e9719bbbf2\") " pod="openstack/placement-6dd7989794-hdn5g" Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.425180 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5wk2\" (UniqueName: \"kubernetes.io/projected/f107ebd6-3359-4995-9a79-70e9719bbbf2-kube-api-access-h5wk2\") pod \"placement-6dd7989794-hdn5g\" (UID: \"f107ebd6-3359-4995-9a79-70e9719bbbf2\") " pod="openstack/placement-6dd7989794-hdn5g" Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.437033 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f107ebd6-3359-4995-9a79-70e9719bbbf2-scripts\") pod \"placement-6dd7989794-hdn5g\" (UID: \"f107ebd6-3359-4995-9a79-70e9719bbbf2\") " pod="openstack/placement-6dd7989794-hdn5g" Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.495099 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6dd7989794-hdn5g" Jan 30 17:16:27 crc kubenswrapper[4712]: I0130 17:16:27.816813 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d421f208-6974-48b9-9d8d-abe468e07c18" path="/var/lib/kubelet/pods/d421f208-6974-48b9-9d8d-abe468e07c18/volumes" Jan 30 17:16:27 crc kubenswrapper[4712]: W0130 17:16:27.997433 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf107ebd6_3359_4995_9a79_70e9719bbbf2.slice/crio-8dc502d4452777f284e6464a482bce3bccccb7a6ccce3c7d75700c4e3c9ca403 WatchSource:0}: Error finding container 8dc502d4452777f284e6464a482bce3bccccb7a6ccce3c7d75700c4e3c9ca403: Status 404 returned error can't find the container with id 8dc502d4452777f284e6464a482bce3bccccb7a6ccce3c7d75700c4e3c9ca403 Jan 30 17:16:28 crc kubenswrapper[4712]: I0130 17:16:28.004393 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6dd7989794-hdn5g"] Jan 30 17:16:28 crc kubenswrapper[4712]: I0130 17:16:28.287292 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6dd7989794-hdn5g" event={"ID":"f107ebd6-3359-4995-9a79-70e9719bbbf2","Type":"ContainerStarted","Data":"78212e671186fcb84a4b752b03ff3bc73dcb6fb6824a6286a704de2db7d8aac9"} Jan 30 17:16:28 crc kubenswrapper[4712]: I0130 17:16:28.287342 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6dd7989794-hdn5g" event={"ID":"f107ebd6-3359-4995-9a79-70e9719bbbf2","Type":"ContainerStarted","Data":"8dc502d4452777f284e6464a482bce3bccccb7a6ccce3c7d75700c4e3c9ca403"} Jan 30 17:16:28 crc kubenswrapper[4712]: I0130 17:16:28.289237 4712 generic.go:334] "Generic (PLEG): container finished" podID="ef70cf25-e984-4397-b60e-78199d8f41bf" containerID="ca02bb819317c75624ea19803cd6304052cb736df006dc13789eab4dbce0eeed" exitCode=0 Jan 30 17:16:28 crc kubenswrapper[4712]: I0130 17:16:28.289317 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p8pht" event={"ID":"ef70cf25-e984-4397-b60e-78199d8f41bf","Type":"ContainerDied","Data":"ca02bb819317c75624ea19803cd6304052cb736df006dc13789eab4dbce0eeed"} Jan 30 17:16:29 crc kubenswrapper[4712]: I0130 17:16:29.312915 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6dd7989794-hdn5g" event={"ID":"f107ebd6-3359-4995-9a79-70e9719bbbf2","Type":"ContainerStarted","Data":"403968f65a0457f51661c07813d09439c7aab407d9380e5c3fbf2d8f624467bf"} Jan 30 17:16:29 crc kubenswrapper[4712]: I0130 17:16:29.313253 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6dd7989794-hdn5g" Jan 30 17:16:29 crc kubenswrapper[4712]: I0130 17:16:29.313273 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6dd7989794-hdn5g" Jan 30 17:16:29 crc kubenswrapper[4712]: I0130 17:16:29.338399 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6dd7989794-hdn5g" podStartSLOduration=2.338383348 podStartE2EDuration="2.338383348s" podCreationTimestamp="2026-01-30 17:16:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:16:29.337526577 +0000 UTC m=+1326.244536056" watchObservedRunningTime="2026-01-30 17:16:29.338383348 +0000 UTC m=+1326.245392817" Jan 30 17:16:29 crc kubenswrapper[4712]: I0130 17:16:29.938502 4712 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openstack/keystone-bootstrap-p8pht" Jan 30 17:16:29 crc kubenswrapper[4712]: I0130 17:16:29.980315 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef70cf25-e984-4397-b60e-78199d8f41bf-combined-ca-bundle\") pod \"ef70cf25-e984-4397-b60e-78199d8f41bf\" (UID: \"ef70cf25-e984-4397-b60e-78199d8f41bf\") " Jan 30 17:16:29 crc kubenswrapper[4712]: I0130 17:16:29.980420 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n58wf\" (UniqueName: \"kubernetes.io/projected/ef70cf25-e984-4397-b60e-78199d8f41bf-kube-api-access-n58wf\") pod \"ef70cf25-e984-4397-b60e-78199d8f41bf\" (UID: \"ef70cf25-e984-4397-b60e-78199d8f41bf\") " Jan 30 17:16:29 crc kubenswrapper[4712]: I0130 17:16:29.980454 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef70cf25-e984-4397-b60e-78199d8f41bf-config-data\") pod \"ef70cf25-e984-4397-b60e-78199d8f41bf\" (UID: \"ef70cf25-e984-4397-b60e-78199d8f41bf\") " Jan 30 17:16:29 crc kubenswrapper[4712]: I0130 17:16:29.980472 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ef70cf25-e984-4397-b60e-78199d8f41bf-credential-keys\") pod \"ef70cf25-e984-4397-b60e-78199d8f41bf\" (UID: \"ef70cf25-e984-4397-b60e-78199d8f41bf\") " Jan 30 17:16:29 crc kubenswrapper[4712]: I0130 17:16:29.980542 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ef70cf25-e984-4397-b60e-78199d8f41bf-fernet-keys\") pod \"ef70cf25-e984-4397-b60e-78199d8f41bf\" (UID: \"ef70cf25-e984-4397-b60e-78199d8f41bf\") " Jan 30 17:16:29 crc kubenswrapper[4712]: I0130 17:16:29.980572 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef70cf25-e984-4397-b60e-78199d8f41bf-scripts\") pod \"ef70cf25-e984-4397-b60e-78199d8f41bf\" (UID: \"ef70cf25-e984-4397-b60e-78199d8f41bf\") " Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.033186 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef70cf25-e984-4397-b60e-78199d8f41bf-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "ef70cf25-e984-4397-b60e-78199d8f41bf" (UID: "ef70cf25-e984-4397-b60e-78199d8f41bf"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.033301 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef70cf25-e984-4397-b60e-78199d8f41bf-kube-api-access-n58wf" (OuterVolumeSpecName: "kube-api-access-n58wf") pod "ef70cf25-e984-4397-b60e-78199d8f41bf" (UID: "ef70cf25-e984-4397-b60e-78199d8f41bf"). InnerVolumeSpecName "kube-api-access-n58wf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.034978 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef70cf25-e984-4397-b60e-78199d8f41bf-scripts" (OuterVolumeSpecName: "scripts") pod "ef70cf25-e984-4397-b60e-78199d8f41bf" (UID: "ef70cf25-e984-4397-b60e-78199d8f41bf"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.035025 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef70cf25-e984-4397-b60e-78199d8f41bf-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "ef70cf25-e984-4397-b60e-78199d8f41bf" (UID: "ef70cf25-e984-4397-b60e-78199d8f41bf"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.039462 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef70cf25-e984-4397-b60e-78199d8f41bf-config-data" (OuterVolumeSpecName: "config-data") pod "ef70cf25-e984-4397-b60e-78199d8f41bf" (UID: "ef70cf25-e984-4397-b60e-78199d8f41bf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.043808 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef70cf25-e984-4397-b60e-78199d8f41bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef70cf25-e984-4397-b60e-78199d8f41bf" (UID: "ef70cf25-e984-4397-b60e-78199d8f41bf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.057987 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.058034 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.083992 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef70cf25-e984-4397-b60e-78199d8f41bf-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.084020 4712 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/ef70cf25-e984-4397-b60e-78199d8f41bf-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.084033 4712 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/ef70cf25-e984-4397-b60e-78199d8f41bf-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.084043 4712 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef70cf25-e984-4397-b60e-78199d8f41bf-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.084054 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef70cf25-e984-4397-b60e-78199d8f41bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.084064 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n58wf\" (UniqueName: \"kubernetes.io/projected/ef70cf25-e984-4397-b60e-78199d8f41bf-kube-api-access-n58wf\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.131829 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 17:16:30 crc kubenswrapper[4712]: 
Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.327377 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-p8pht" event={"ID":"ef70cf25-e984-4397-b60e-78199d8f41bf","Type":"ContainerDied","Data":"af94bbecaafc1841a2b4b08248ab8b14db617c24e77a437a0d388b9ae23b35d5"}
Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.327433 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af94bbecaafc1841a2b4b08248ab8b14db617c24e77a437a0d388b9ae23b35d5"
Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.328104 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.328150 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0"
Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.329695 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-p8pht"
Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.436336 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-7f4784f4d6-zvlhq"]
Jan 30 17:16:30 crc kubenswrapper[4712]: E0130 17:16:30.436943 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef70cf25-e984-4397-b60e-78199d8f41bf" containerName="keystone-bootstrap"
Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.436960 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef70cf25-e984-4397-b60e-78199d8f41bf" containerName="keystone-bootstrap"
Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.437184 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef70cf25-e984-4397-b60e-78199d8f41bf" containerName="keystone-bootstrap"
Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.437711 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-7f4784f4d6-zvlhq"
Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.443597 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.443691 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.443857 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-dxmtz"
Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.443968 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.446216 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.446396 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.474398 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7f4784f4d6-zvlhq"]
Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.596698 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/49aa464e-03ee-4970-bbf8-552e07904ea0-credential-keys\") pod \"keystone-7f4784f4d6-zvlhq\" (UID: \"49aa464e-03ee-4970-bbf8-552e07904ea0\") " pod="openstack/keystone-7f4784f4d6-zvlhq"
Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.596738 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49aa464e-03ee-4970-bbf8-552e07904ea0-internal-tls-certs\") pod \"keystone-7f4784f4d6-zvlhq\" (UID: \"49aa464e-03ee-4970-bbf8-552e07904ea0\") " pod="openstack/keystone-7f4784f4d6-zvlhq"
Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.596782 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/49aa464e-03ee-4970-bbf8-552e07904ea0-fernet-keys\") pod \"keystone-7f4784f4d6-zvlhq\" (UID: \"49aa464e-03ee-4970-bbf8-552e07904ea0\") " pod="openstack/keystone-7f4784f4d6-zvlhq"
Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.596821 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49aa464e-03ee-4970-bbf8-552e07904ea0-public-tls-certs\") pod \"keystone-7f4784f4d6-zvlhq\" (UID: \"49aa464e-03ee-4970-bbf8-552e07904ea0\") " pod="openstack/keystone-7f4784f4d6-zvlhq"
Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.596849 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49aa464e-03ee-4970-bbf8-552e07904ea0-config-data\") pod \"keystone-7f4784f4d6-zvlhq\" (UID: \"49aa464e-03ee-4970-bbf8-552e07904ea0\") " pod="openstack/keystone-7f4784f4d6-zvlhq"
Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.596886 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49aa464e-03ee-4970-bbf8-552e07904ea0-scripts\") pod \"keystone-7f4784f4d6-zvlhq\" (UID: \"49aa464e-03ee-4970-bbf8-552e07904ea0\") " pod="openstack/keystone-7f4784f4d6-zvlhq"
Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.596934 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l85dn\" (UniqueName: \"kubernetes.io/projected/49aa464e-03ee-4970-bbf8-552e07904ea0-kube-api-access-l85dn\") pod \"keystone-7f4784f4d6-zvlhq\" (UID: \"49aa464e-03ee-4970-bbf8-552e07904ea0\") " pod="openstack/keystone-7f4784f4d6-zvlhq"
Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.596966 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49aa464e-03ee-4970-bbf8-552e07904ea0-combined-ca-bundle\") pod \"keystone-7f4784f4d6-zvlhq\" (UID: \"49aa464e-03ee-4970-bbf8-552e07904ea0\") " pod="openstack/keystone-7f4784f4d6-zvlhq"
Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.698305 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/49aa464e-03ee-4970-bbf8-552e07904ea0-fernet-keys\") pod \"keystone-7f4784f4d6-zvlhq\" (UID: \"49aa464e-03ee-4970-bbf8-552e07904ea0\") " pod="openstack/keystone-7f4784f4d6-zvlhq"
Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.698377 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49aa464e-03ee-4970-bbf8-552e07904ea0-public-tls-certs\") pod \"keystone-7f4784f4d6-zvlhq\" (UID: \"49aa464e-03ee-4970-bbf8-552e07904ea0\") " pod="openstack/keystone-7f4784f4d6-zvlhq"
Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.699192 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49aa464e-03ee-4970-bbf8-552e07904ea0-config-data\") pod \"keystone-7f4784f4d6-zvlhq\" (UID: \"49aa464e-03ee-4970-bbf8-552e07904ea0\") " pod="openstack/keystone-7f4784f4d6-zvlhq"
Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.699258 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49aa464e-03ee-4970-bbf8-552e07904ea0-scripts\") pod \"keystone-7f4784f4d6-zvlhq\" (UID: \"49aa464e-03ee-4970-bbf8-552e07904ea0\") " pod="openstack/keystone-7f4784f4d6-zvlhq"
Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.699330 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l85dn\" (UniqueName: \"kubernetes.io/projected/49aa464e-03ee-4970-bbf8-552e07904ea0-kube-api-access-l85dn\") pod \"keystone-7f4784f4d6-zvlhq\" (UID: \"49aa464e-03ee-4970-bbf8-552e07904ea0\") " pod="openstack/keystone-7f4784f4d6-zvlhq"
Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.699366 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49aa464e-03ee-4970-bbf8-552e07904ea0-combined-ca-bundle\") pod \"keystone-7f4784f4d6-zvlhq\" (UID: \"49aa464e-03ee-4970-bbf8-552e07904ea0\") " pod="openstack/keystone-7f4784f4d6-zvlhq"
Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.699417 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/49aa464e-03ee-4970-bbf8-552e07904ea0-credential-keys\") pod \"keystone-7f4784f4d6-zvlhq\" (UID: \"49aa464e-03ee-4970-bbf8-552e07904ea0\") " pod="openstack/keystone-7f4784f4d6-zvlhq"
pod="openstack/keystone-7f4784f4d6-zvlhq" Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.699436 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49aa464e-03ee-4970-bbf8-552e07904ea0-internal-tls-certs\") pod \"keystone-7f4784f4d6-zvlhq\" (UID: \"49aa464e-03ee-4970-bbf8-552e07904ea0\") " pod="openstack/keystone-7f4784f4d6-zvlhq" Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.713913 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/49aa464e-03ee-4970-bbf8-552e07904ea0-credential-keys\") pod \"keystone-7f4784f4d6-zvlhq\" (UID: \"49aa464e-03ee-4970-bbf8-552e07904ea0\") " pod="openstack/keystone-7f4784f4d6-zvlhq" Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.714325 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/49aa464e-03ee-4970-bbf8-552e07904ea0-scripts\") pod \"keystone-7f4784f4d6-zvlhq\" (UID: \"49aa464e-03ee-4970-bbf8-552e07904ea0\") " pod="openstack/keystone-7f4784f4d6-zvlhq" Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.714834 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49aa464e-03ee-4970-bbf8-552e07904ea0-config-data\") pod \"keystone-7f4784f4d6-zvlhq\" (UID: \"49aa464e-03ee-4970-bbf8-552e07904ea0\") " pod="openstack/keystone-7f4784f4d6-zvlhq" Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.715533 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/49aa464e-03ee-4970-bbf8-552e07904ea0-public-tls-certs\") pod \"keystone-7f4784f4d6-zvlhq\" (UID: \"49aa464e-03ee-4970-bbf8-552e07904ea0\") " pod="openstack/keystone-7f4784f4d6-zvlhq" Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.716548 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/49aa464e-03ee-4970-bbf8-552e07904ea0-fernet-keys\") pod \"keystone-7f4784f4d6-zvlhq\" (UID: \"49aa464e-03ee-4970-bbf8-552e07904ea0\") " pod="openstack/keystone-7f4784f4d6-zvlhq" Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.717009 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/49aa464e-03ee-4970-bbf8-552e07904ea0-internal-tls-certs\") pod \"keystone-7f4784f4d6-zvlhq\" (UID: \"49aa464e-03ee-4970-bbf8-552e07904ea0\") " pod="openstack/keystone-7f4784f4d6-zvlhq" Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.724298 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l85dn\" (UniqueName: \"kubernetes.io/projected/49aa464e-03ee-4970-bbf8-552e07904ea0-kube-api-access-l85dn\") pod \"keystone-7f4784f4d6-zvlhq\" (UID: \"49aa464e-03ee-4970-bbf8-552e07904ea0\") " pod="openstack/keystone-7f4784f4d6-zvlhq" Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.724969 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49aa464e-03ee-4970-bbf8-552e07904ea0-combined-ca-bundle\") pod \"keystone-7f4784f4d6-zvlhq\" (UID: \"49aa464e-03ee-4970-bbf8-552e07904ea0\") " pod="openstack/keystone-7f4784f4d6-zvlhq" Jan 30 17:16:30 crc kubenswrapper[4712]: I0130 17:16:30.757655 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-7f4784f4d6-zvlhq" Jan 30 17:16:31 crc kubenswrapper[4712]: I0130 17:16:31.224419 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6ddfd55656-dc4w7"] Jan 30 17:16:31 crc kubenswrapper[4712]: I0130 17:16:31.226045 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6ddfd55656-dc4w7" Jan 30 17:16:31 crc kubenswrapper[4712]: I0130 17:16:31.268142 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6ddfd55656-dc4w7"] Jan 30 17:16:31 crc kubenswrapper[4712]: I0130 17:16:31.311376 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7hgl\" (UniqueName: \"kubernetes.io/projected/c8347a30-317c-4035-abc4-b03700578363-kube-api-access-j7hgl\") pod \"placement-6ddfd55656-dc4w7\" (UID: \"c8347a30-317c-4035-abc4-b03700578363\") " pod="openstack/placement-6ddfd55656-dc4w7" Jan 30 17:16:31 crc kubenswrapper[4712]: I0130 17:16:31.311476 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8347a30-317c-4035-abc4-b03700578363-combined-ca-bundle\") pod \"placement-6ddfd55656-dc4w7\" (UID: \"c8347a30-317c-4035-abc4-b03700578363\") " pod="openstack/placement-6ddfd55656-dc4w7" Jan 30 17:16:31 crc kubenswrapper[4712]: I0130 17:16:31.311497 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8347a30-317c-4035-abc4-b03700578363-logs\") pod \"placement-6ddfd55656-dc4w7\" (UID: \"c8347a30-317c-4035-abc4-b03700578363\") " pod="openstack/placement-6ddfd55656-dc4w7" Jan 30 17:16:31 crc kubenswrapper[4712]: I0130 17:16:31.311522 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8347a30-317c-4035-abc4-b03700578363-scripts\") pod \"placement-6ddfd55656-dc4w7\" (UID: \"c8347a30-317c-4035-abc4-b03700578363\") " pod="openstack/placement-6ddfd55656-dc4w7" Jan 30 17:16:31 crc kubenswrapper[4712]: I0130 17:16:31.311551 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8347a30-317c-4035-abc4-b03700578363-config-data\") pod \"placement-6ddfd55656-dc4w7\" (UID: \"c8347a30-317c-4035-abc4-b03700578363\") " pod="openstack/placement-6ddfd55656-dc4w7" Jan 30 17:16:31 crc kubenswrapper[4712]: I0130 17:16:31.311597 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8347a30-317c-4035-abc4-b03700578363-public-tls-certs\") pod \"placement-6ddfd55656-dc4w7\" (UID: \"c8347a30-317c-4035-abc4-b03700578363\") " pod="openstack/placement-6ddfd55656-dc4w7" Jan 30 17:16:31 crc kubenswrapper[4712]: I0130 17:16:31.311636 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8347a30-317c-4035-abc4-b03700578363-internal-tls-certs\") pod \"placement-6ddfd55656-dc4w7\" (UID: \"c8347a30-317c-4035-abc4-b03700578363\") " pod="openstack/placement-6ddfd55656-dc4w7" Jan 30 17:16:31 crc kubenswrapper[4712]: I0130 17:16:31.360682 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-78jqx" 
event={"ID":"2ef9729d-cbbc-4354-98e4-a9e07651518e","Type":"ContainerStarted","Data":"e264b53f3868c5a390c29891442008f13f5c8c52760ff372f2898b802d090802"} Jan 30 17:16:31 crc kubenswrapper[4712]: I0130 17:16:31.383346 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-9gcv2" event={"ID":"3c24ed25-f06f-494d-9fd5-2077c052db31","Type":"ContainerStarted","Data":"7044a23a75fa9d1cbe45ab912580d60ec45c452b219704a72a61230af590edd6"} Jan 30 17:16:31 crc kubenswrapper[4712]: I0130 17:16:31.399163 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-78jqx" podStartSLOduration=4.456745796 podStartE2EDuration="1m6.399147164s" podCreationTimestamp="2026-01-30 17:15:25 +0000 UTC" firstStartedPulling="2026-01-30 17:15:27.877054253 +0000 UTC m=+1264.784063712" lastFinishedPulling="2026-01-30 17:16:29.819455611 +0000 UTC m=+1326.726465080" observedRunningTime="2026-01-30 17:16:31.396316446 +0000 UTC m=+1328.303325915" watchObservedRunningTime="2026-01-30 17:16:31.399147164 +0000 UTC m=+1328.306156633" Jan 30 17:16:31 crc kubenswrapper[4712]: I0130 17:16:31.416284 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8347a30-317c-4035-abc4-b03700578363-internal-tls-certs\") pod \"placement-6ddfd55656-dc4w7\" (UID: \"c8347a30-317c-4035-abc4-b03700578363\") " pod="openstack/placement-6ddfd55656-dc4w7" Jan 30 17:16:31 crc kubenswrapper[4712]: I0130 17:16:31.416350 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7hgl\" (UniqueName: \"kubernetes.io/projected/c8347a30-317c-4035-abc4-b03700578363-kube-api-access-j7hgl\") pod \"placement-6ddfd55656-dc4w7\" (UID: \"c8347a30-317c-4035-abc4-b03700578363\") " pod="openstack/placement-6ddfd55656-dc4w7" Jan 30 17:16:31 crc kubenswrapper[4712]: I0130 17:16:31.416432 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8347a30-317c-4035-abc4-b03700578363-combined-ca-bundle\") pod \"placement-6ddfd55656-dc4w7\" (UID: \"c8347a30-317c-4035-abc4-b03700578363\") " pod="openstack/placement-6ddfd55656-dc4w7" Jan 30 17:16:31 crc kubenswrapper[4712]: I0130 17:16:31.416455 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8347a30-317c-4035-abc4-b03700578363-logs\") pod \"placement-6ddfd55656-dc4w7\" (UID: \"c8347a30-317c-4035-abc4-b03700578363\") " pod="openstack/placement-6ddfd55656-dc4w7" Jan 30 17:16:31 crc kubenswrapper[4712]: I0130 17:16:31.416476 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8347a30-317c-4035-abc4-b03700578363-scripts\") pod \"placement-6ddfd55656-dc4w7\" (UID: \"c8347a30-317c-4035-abc4-b03700578363\") " pod="openstack/placement-6ddfd55656-dc4w7" Jan 30 17:16:31 crc kubenswrapper[4712]: I0130 17:16:31.416523 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8347a30-317c-4035-abc4-b03700578363-config-data\") pod \"placement-6ddfd55656-dc4w7\" (UID: \"c8347a30-317c-4035-abc4-b03700578363\") " pod="openstack/placement-6ddfd55656-dc4w7" Jan 30 17:16:31 crc kubenswrapper[4712]: I0130 17:16:31.416582 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c8347a30-317c-4035-abc4-b03700578363-public-tls-certs\") pod \"placement-6ddfd55656-dc4w7\" (UID: \"c8347a30-317c-4035-abc4-b03700578363\") " pod="openstack/placement-6ddfd55656-dc4w7" Jan 30 17:16:31 crc kubenswrapper[4712]: I0130 17:16:31.423393 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-7f4784f4d6-zvlhq"] Jan 30 17:16:31 crc kubenswrapper[4712]: I0130 17:16:31.426726 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c8347a30-317c-4035-abc4-b03700578363-logs\") pod \"placement-6ddfd55656-dc4w7\" (UID: \"c8347a30-317c-4035-abc4-b03700578363\") " pod="openstack/placement-6ddfd55656-dc4w7" Jan 30 17:16:31 crc kubenswrapper[4712]: I0130 17:16:31.427766 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8347a30-317c-4035-abc4-b03700578363-public-tls-certs\") pod \"placement-6ddfd55656-dc4w7\" (UID: \"c8347a30-317c-4035-abc4-b03700578363\") " pod="openstack/placement-6ddfd55656-dc4w7" Jan 30 17:16:31 crc kubenswrapper[4712]: I0130 17:16:31.428550 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c8347a30-317c-4035-abc4-b03700578363-combined-ca-bundle\") pod \"placement-6ddfd55656-dc4w7\" (UID: \"c8347a30-317c-4035-abc4-b03700578363\") " pod="openstack/placement-6ddfd55656-dc4w7" Jan 30 17:16:31 crc kubenswrapper[4712]: I0130 17:16:31.449141 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c8347a30-317c-4035-abc4-b03700578363-internal-tls-certs\") pod \"placement-6ddfd55656-dc4w7\" (UID: \"c8347a30-317c-4035-abc4-b03700578363\") " pod="openstack/placement-6ddfd55656-dc4w7" Jan 30 17:16:31 crc kubenswrapper[4712]: I0130 17:16:31.450099 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c8347a30-317c-4035-abc4-b03700578363-config-data\") pod \"placement-6ddfd55656-dc4w7\" (UID: \"c8347a30-317c-4035-abc4-b03700578363\") " pod="openstack/placement-6ddfd55656-dc4w7" Jan 30 17:16:31 crc kubenswrapper[4712]: I0130 17:16:31.453399 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-9gcv2" podStartSLOduration=3.876062697 podStartE2EDuration="1m6.453389287s" podCreationTimestamp="2026-01-30 17:15:25 +0000 UTC" firstStartedPulling="2026-01-30 17:15:27.243497624 +0000 UTC m=+1264.150507093" lastFinishedPulling="2026-01-30 17:16:29.820824214 +0000 UTC m=+1326.727833683" observedRunningTime="2026-01-30 17:16:31.433193592 +0000 UTC m=+1328.340203211" watchObservedRunningTime="2026-01-30 17:16:31.453389287 +0000 UTC m=+1328.360398746" Jan 30 17:16:31 crc kubenswrapper[4712]: I0130 17:16:31.453786 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c8347a30-317c-4035-abc4-b03700578363-scripts\") pod \"placement-6ddfd55656-dc4w7\" (UID: \"c8347a30-317c-4035-abc4-b03700578363\") " pod="openstack/placement-6ddfd55656-dc4w7" Jan 30 17:16:31 crc kubenswrapper[4712]: I0130 17:16:31.466093 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7hgl\" (UniqueName: \"kubernetes.io/projected/c8347a30-317c-4035-abc4-b03700578363-kube-api-access-j7hgl\") pod \"placement-6ddfd55656-dc4w7\" (UID: 
\"c8347a30-317c-4035-abc4-b03700578363\") " pod="openstack/placement-6ddfd55656-dc4w7" Jan 30 17:16:31 crc kubenswrapper[4712]: I0130 17:16:31.477440 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-785d8bcb8c-qkp54" podUID="d421f208-6974-48b9-9d8d-abe468e07c18" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.149:5353: i/o timeout" Jan 30 17:16:31 crc kubenswrapper[4712]: W0130 17:16:31.534968 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49aa464e_03ee_4970_bbf8_552e07904ea0.slice/crio-86b788633d61e51d24ae2faa6549f8902227efa446b1ad57cad8e8ba174ddce7 WatchSource:0}: Error finding container 86b788633d61e51d24ae2faa6549f8902227efa446b1ad57cad8e8ba174ddce7: Status 404 returned error can't find the container with id 86b788633d61e51d24ae2faa6549f8902227efa446b1ad57cad8e8ba174ddce7 Jan 30 17:16:31 crc kubenswrapper[4712]: I0130 17:16:31.576177 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6ddfd55656-dc4w7" Jan 30 17:16:32 crc kubenswrapper[4712]: I0130 17:16:32.341902 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6ddfd55656-dc4w7"] Jan 30 17:16:32 crc kubenswrapper[4712]: I0130 17:16:32.416001 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6ddfd55656-dc4w7" event={"ID":"c8347a30-317c-4035-abc4-b03700578363","Type":"ContainerStarted","Data":"6622e2f346140ca808a62cfa877bc70a6ea8ead9d06dcb41d5e2ca2bb1c80da3"} Jan 30 17:16:32 crc kubenswrapper[4712]: I0130 17:16:32.419196 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7f4784f4d6-zvlhq" event={"ID":"49aa464e-03ee-4970-bbf8-552e07904ea0","Type":"ContainerStarted","Data":"1a51d6a522073aeeb987b776a6155d915628ad5c542fb84154e6a5cdec45b8a4"} Jan 30 17:16:32 crc kubenswrapper[4712]: I0130 17:16:32.419228 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-7f4784f4d6-zvlhq" event={"ID":"49aa464e-03ee-4970-bbf8-552e07904ea0","Type":"ContainerStarted","Data":"86b788633d61e51d24ae2faa6549f8902227efa446b1ad57cad8e8ba174ddce7"} Jan 30 17:16:32 crc kubenswrapper[4712]: I0130 17:16:32.420357 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-7f4784f4d6-zvlhq" Jan 30 17:16:32 crc kubenswrapper[4712]: I0130 17:16:32.460932 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-7f4784f4d6-zvlhq" podStartSLOduration=2.460912526 podStartE2EDuration="2.460912526s" podCreationTimestamp="2026-01-30 17:16:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:16:32.447097874 +0000 UTC m=+1329.354107343" watchObservedRunningTime="2026-01-30 17:16:32.460912526 +0000 UTC m=+1329.367921995" Jan 30 17:16:33 crc kubenswrapper[4712]: I0130 17:16:33.431105 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6ddfd55656-dc4w7" event={"ID":"c8347a30-317c-4035-abc4-b03700578363","Type":"ContainerStarted","Data":"fb21c966bef8fb4c05e8c7a6b6556b0a68dae86bb14d7284fce4751c1dde4794"} Jan 30 17:16:34 crc kubenswrapper[4712]: I0130 17:16:34.449071 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6ddfd55656-dc4w7" 
event={"ID":"c8347a30-317c-4035-abc4-b03700578363","Type":"ContainerStarted","Data":"261b509ba13aea83a226ed703836145471b60127b7357f7697f21b9f6714e8a2"} Jan 30 17:16:34 crc kubenswrapper[4712]: I0130 17:16:34.450074 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6ddfd55656-dc4w7" Jan 30 17:16:34 crc kubenswrapper[4712]: I0130 17:16:34.450158 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6ddfd55656-dc4w7" Jan 30 17:16:34 crc kubenswrapper[4712]: I0130 17:16:34.475209 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6ddfd55656-dc4w7" podStartSLOduration=3.4751689150000002 podStartE2EDuration="3.475168915s" podCreationTimestamp="2026-01-30 17:16:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:16:34.47288504 +0000 UTC m=+1331.379894509" watchObservedRunningTime="2026-01-30 17:16:34.475168915 +0000 UTC m=+1331.382178384" Jan 30 17:16:35 crc kubenswrapper[4712]: I0130 17:16:35.076221 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-56f8b66d48-7wr47" podUID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused" Jan 30 17:16:35 crc kubenswrapper[4712]: I0130 17:16:35.355304 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-64655dbc44-pvj2c" podUID="6a28b495-ecf0-409e-9558-ee794a46dbd1" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.156:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.156:8443: connect: connection refused" Jan 30 17:16:36 crc kubenswrapper[4712]: I0130 17:16:36.581721 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 17:16:36 crc kubenswrapper[4712]: I0130 17:16:36.581864 4712 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 17:16:36 crc kubenswrapper[4712]: I0130 17:16:36.582233 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 17:16:36 crc kubenswrapper[4712]: I0130 17:16:36.591984 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 17:16:36 crc kubenswrapper[4712]: I0130 17:16:36.592086 4712 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 17:16:36 crc kubenswrapper[4712]: I0130 17:16:36.599277 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 30 17:16:39 crc kubenswrapper[4712]: I0130 17:16:39.494518 4712 generic.go:334] "Generic (PLEG): container finished" podID="6c4a03a4-e80d-4605-990f-a242222558bb" containerID="781f6a5a40a5b3ee9028c8dbd3c9194eaa45a80c3b80beec710f1fe06b502320" exitCode=0 Jan 30 17:16:39 crc kubenswrapper[4712]: I0130 17:16:39.494607 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7krdw" event={"ID":"6c4a03a4-e80d-4605-990f-a242222558bb","Type":"ContainerDied","Data":"781f6a5a40a5b3ee9028c8dbd3c9194eaa45a80c3b80beec710f1fe06b502320"} Jan 30 17:16:43 crc kubenswrapper[4712]: E0130 17:16:43.357995 4712 log.go:32] "PullImage from image service failed" 
err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/ubi9/httpd-24:latest" Jan 30 17:16:43 crc kubenswrapper[4712]: E0130 17:16:43.359271 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:proxy-httpd,Image:registry.redhat.io/ubi9/httpd-24:latest,Command:[/usr/sbin/httpd],Args:[-DFOREGROUND],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:proxy-httpd,HostPort:0,ContainerPort:3000,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf/httpd.conf,SubPath:httpd.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/httpd/conf.d/ssl.conf,SubPath:ssl.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:run-httpd,ReadOnly:false,MountPath:/run/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:log-httpd,ReadOnly:false,MountPath:/var/log/httpd,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-62p8f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/,Port:{0 3000 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:30,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(3896ac30-4d2d-4bc2-bfc3-4352d7d586de): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 17:16:43 crc kubenswrapper[4712]: E0130 17:16:43.360814 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"ceilometer-notification-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"proxy-httpd\" with ErrImagePull: 
\"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="3896ac30-4d2d-4bc2-bfc3-4352d7d586de" Jan 30 17:16:43 crc kubenswrapper[4712]: I0130 17:16:43.458985 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-7krdw" Jan 30 17:16:43 crc kubenswrapper[4712]: I0130 17:16:43.542926 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3896ac30-4d2d-4bc2-bfc3-4352d7d586de" containerName="sg-core" containerID="cri-o://447588efa01542b5e3b21541af532dd0c5c1eda26dca0c7dbd0a6efee2291c39" gracePeriod=30 Jan 30 17:16:43 crc kubenswrapper[4712]: I0130 17:16:43.543287 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-7krdw" Jan 30 17:16:43 crc kubenswrapper[4712]: I0130 17:16:43.543465 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-7krdw" event={"ID":"6c4a03a4-e80d-4605-990f-a242222558bb","Type":"ContainerDied","Data":"a77a5067d918b2848bd999323d3cfee6d37f589bfa32b87139e2566f40ee54f8"} Jan 30 17:16:43 crc kubenswrapper[4712]: I0130 17:16:43.543526 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a77a5067d918b2848bd999323d3cfee6d37f589bfa32b87139e2566f40ee54f8" Jan 30 17:16:43 crc kubenswrapper[4712]: I0130 17:16:43.597973 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c4a03a4-e80d-4605-990f-a242222558bb-combined-ca-bundle\") pod \"6c4a03a4-e80d-4605-990f-a242222558bb\" (UID: \"6c4a03a4-e80d-4605-990f-a242222558bb\") " Jan 30 17:16:43 crc kubenswrapper[4712]: I0130 17:16:43.598167 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjvf\" (UniqueName: \"kubernetes.io/projected/6c4a03a4-e80d-4605-990f-a242222558bb-kube-api-access-7jjvf\") pod \"6c4a03a4-e80d-4605-990f-a242222558bb\" (UID: \"6c4a03a4-e80d-4605-990f-a242222558bb\") " Jan 30 17:16:43 crc kubenswrapper[4712]: I0130 17:16:43.598255 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6c4a03a4-e80d-4605-990f-a242222558bb-db-sync-config-data\") pod \"6c4a03a4-e80d-4605-990f-a242222558bb\" (UID: \"6c4a03a4-e80d-4605-990f-a242222558bb\") " Jan 30 17:16:43 crc kubenswrapper[4712]: I0130 17:16:43.622002 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c4a03a4-e80d-4605-990f-a242222558bb-kube-api-access-7jjvf" (OuterVolumeSpecName: "kube-api-access-7jjvf") pod "6c4a03a4-e80d-4605-990f-a242222558bb" (UID: "6c4a03a4-e80d-4605-990f-a242222558bb"). InnerVolumeSpecName "kube-api-access-7jjvf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:16:43 crc kubenswrapper[4712]: I0130 17:16:43.640999 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c4a03a4-e80d-4605-990f-a242222558bb-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "6c4a03a4-e80d-4605-990f-a242222558bb" (UID: "6c4a03a4-e80d-4605-990f-a242222558bb"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:16:43 crc kubenswrapper[4712]: I0130 17:16:43.700912 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jjvf\" (UniqueName: \"kubernetes.io/projected/6c4a03a4-e80d-4605-990f-a242222558bb-kube-api-access-7jjvf\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:43 crc kubenswrapper[4712]: I0130 17:16:43.700951 4712 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/6c4a03a4-e80d-4605-990f-a242222558bb-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:43 crc kubenswrapper[4712]: I0130 17:16:43.723946 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c4a03a4-e80d-4605-990f-a242222558bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c4a03a4-e80d-4605-990f-a242222558bb" (UID: "6c4a03a4-e80d-4605-990f-a242222558bb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:16:43 crc kubenswrapper[4712]: I0130 17:16:43.806118 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c4a03a4-e80d-4605-990f-a242222558bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:44 crc kubenswrapper[4712]: I0130 17:16:44.573104 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 17:16:44 crc kubenswrapper[4712]: I0130 17:16:44.579343 4712 generic.go:334] "Generic (PLEG): container finished" podID="3896ac30-4d2d-4bc2-bfc3-4352d7d586de" containerID="447588efa01542b5e3b21541af532dd0c5c1eda26dca0c7dbd0a6efee2291c39" exitCode=2 Jan 30 17:16:44 crc kubenswrapper[4712]: I0130 17:16:44.579392 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3896ac30-4d2d-4bc2-bfc3-4352d7d586de","Type":"ContainerDied","Data":"447588efa01542b5e3b21541af532dd0c5c1eda26dca0c7dbd0a6efee2291c39"} Jan 30 17:16:44 crc kubenswrapper[4712]: I0130 17:16:44.579423 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3896ac30-4d2d-4bc2-bfc3-4352d7d586de","Type":"ContainerDied","Data":"f87271a1a9744e3e125a9c67363dc26d375e70eed36f1643b82f7545ef74d1a4"} Jan 30 17:16:44 crc kubenswrapper[4712]: I0130 17:16:44.579440 4712 scope.go:117] "RemoveContainer" containerID="447588efa01542b5e3b21541af532dd0c5c1eda26dca0c7dbd0a6efee2291c39" Jan 30 17:16:44 crc kubenswrapper[4712]: I0130 17:16:44.606632 4712 scope.go:117] "RemoveContainer" containerID="447588efa01542b5e3b21541af532dd0c5c1eda26dca0c7dbd0a6efee2291c39" Jan 30 17:16:44 crc kubenswrapper[4712]: E0130 17:16:44.607555 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"447588efa01542b5e3b21541af532dd0c5c1eda26dca0c7dbd0a6efee2291c39\": container with ID starting with 447588efa01542b5e3b21541af532dd0c5c1eda26dca0c7dbd0a6efee2291c39 not found: ID does not exist" containerID="447588efa01542b5e3b21541af532dd0c5c1eda26dca0c7dbd0a6efee2291c39" Jan 30 17:16:44 crc kubenswrapper[4712]: I0130 17:16:44.607603 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"447588efa01542b5e3b21541af532dd0c5c1eda26dca0c7dbd0a6efee2291c39"} err="failed to get container status \"447588efa01542b5e3b21541af532dd0c5c1eda26dca0c7dbd0a6efee2291c39\": rpc error: code = NotFound desc = could 
Jan 30 17:16:44 crc kubenswrapper[4712]: I0130 17:16:44.628896 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3896ac30-4d2d-4bc2-bfc3-4352d7d586de-scripts\") pod \"3896ac30-4d2d-4bc2-bfc3-4352d7d586de\" (UID: \"3896ac30-4d2d-4bc2-bfc3-4352d7d586de\") "
Jan 30 17:16:44 crc kubenswrapper[4712]: I0130 17:16:44.628972 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62p8f\" (UniqueName: \"kubernetes.io/projected/3896ac30-4d2d-4bc2-bfc3-4352d7d586de-kube-api-access-62p8f\") pod \"3896ac30-4d2d-4bc2-bfc3-4352d7d586de\" (UID: \"3896ac30-4d2d-4bc2-bfc3-4352d7d586de\") "
Jan 30 17:16:44 crc kubenswrapper[4712]: I0130 17:16:44.629088 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3896ac30-4d2d-4bc2-bfc3-4352d7d586de-combined-ca-bundle\") pod \"3896ac30-4d2d-4bc2-bfc3-4352d7d586de\" (UID: \"3896ac30-4d2d-4bc2-bfc3-4352d7d586de\") "
Jan 30 17:16:44 crc kubenswrapper[4712]: I0130 17:16:44.629185 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3896ac30-4d2d-4bc2-bfc3-4352d7d586de-run-httpd\") pod \"3896ac30-4d2d-4bc2-bfc3-4352d7d586de\" (UID: \"3896ac30-4d2d-4bc2-bfc3-4352d7d586de\") "
Jan 30 17:16:44 crc kubenswrapper[4712]: I0130 17:16:44.629223 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3896ac30-4d2d-4bc2-bfc3-4352d7d586de-config-data\") pod \"3896ac30-4d2d-4bc2-bfc3-4352d7d586de\" (UID: \"3896ac30-4d2d-4bc2-bfc3-4352d7d586de\") "
Jan 30 17:16:44 crc kubenswrapper[4712]: I0130 17:16:44.629262 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3896ac30-4d2d-4bc2-bfc3-4352d7d586de-log-httpd\") pod \"3896ac30-4d2d-4bc2-bfc3-4352d7d586de\" (UID: \"3896ac30-4d2d-4bc2-bfc3-4352d7d586de\") "
Jan 30 17:16:44 crc kubenswrapper[4712]: I0130 17:16:44.629480 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3896ac30-4d2d-4bc2-bfc3-4352d7d586de-sg-core-conf-yaml\") pod \"3896ac30-4d2d-4bc2-bfc3-4352d7d586de\" (UID: \"3896ac30-4d2d-4bc2-bfc3-4352d7d586de\") "
Jan 30 17:16:44 crc kubenswrapper[4712]: I0130 17:16:44.630216 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3896ac30-4d2d-4bc2-bfc3-4352d7d586de-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3896ac30-4d2d-4bc2-bfc3-4352d7d586de" (UID: "3896ac30-4d2d-4bc2-bfc3-4352d7d586de"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:16:44 crc kubenswrapper[4712]: I0130 17:16:44.630553 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3896ac30-4d2d-4bc2-bfc3-4352d7d586de-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3896ac30-4d2d-4bc2-bfc3-4352d7d586de" (UID: "3896ac30-4d2d-4bc2-bfc3-4352d7d586de"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:16:44 crc kubenswrapper[4712]: I0130 17:16:44.647299 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3896ac30-4d2d-4bc2-bfc3-4352d7d586de-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3896ac30-4d2d-4bc2-bfc3-4352d7d586de" (UID: "3896ac30-4d2d-4bc2-bfc3-4352d7d586de"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:16:44 crc kubenswrapper[4712]: I0130 17:16:44.650088 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3896ac30-4d2d-4bc2-bfc3-4352d7d586de-kube-api-access-62p8f" (OuterVolumeSpecName: "kube-api-access-62p8f") pod "3896ac30-4d2d-4bc2-bfc3-4352d7d586de" (UID: "3896ac30-4d2d-4bc2-bfc3-4352d7d586de"). InnerVolumeSpecName "kube-api-access-62p8f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:16:44 crc kubenswrapper[4712]: I0130 17:16:44.651950 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3896ac30-4d2d-4bc2-bfc3-4352d7d586de-scripts" (OuterVolumeSpecName: "scripts") pod "3896ac30-4d2d-4bc2-bfc3-4352d7d586de" (UID: "3896ac30-4d2d-4bc2-bfc3-4352d7d586de"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:16:44 crc kubenswrapper[4712]: I0130 17:16:44.660963 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3896ac30-4d2d-4bc2-bfc3-4352d7d586de-config-data" (OuterVolumeSpecName: "config-data") pod "3896ac30-4d2d-4bc2-bfc3-4352d7d586de" (UID: "3896ac30-4d2d-4bc2-bfc3-4352d7d586de"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:16:44 crc kubenswrapper[4712]: I0130 17:16:44.683637 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3896ac30-4d2d-4bc2-bfc3-4352d7d586de-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3896ac30-4d2d-4bc2-bfc3-4352d7d586de" (UID: "3896ac30-4d2d-4bc2-bfc3-4352d7d586de"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:16:44 crc kubenswrapper[4712]: I0130 17:16:44.735625 4712 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3896ac30-4d2d-4bc2-bfc3-4352d7d586de-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 17:16:44 crc kubenswrapper[4712]: I0130 17:16:44.735660 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62p8f\" (UniqueName: \"kubernetes.io/projected/3896ac30-4d2d-4bc2-bfc3-4352d7d586de-kube-api-access-62p8f\") on node \"crc\" DevicePath \"\""
Jan 30 17:16:44 crc kubenswrapper[4712]: I0130 17:16:44.735673 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3896ac30-4d2d-4bc2-bfc3-4352d7d586de-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 17:16:44 crc kubenswrapper[4712]: I0130 17:16:44.735683 4712 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3896ac30-4d2d-4bc2-bfc3-4352d7d586de-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 30 17:16:44 crc kubenswrapper[4712]: I0130 17:16:44.735691 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3896ac30-4d2d-4bc2-bfc3-4352d7d586de-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 17:16:44 crc kubenswrapper[4712]: I0130 17:16:44.735701 4712 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3896ac30-4d2d-4bc2-bfc3-4352d7d586de-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 30 17:16:44 crc kubenswrapper[4712]: I0130 17:16:44.735709 4712 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3896ac30-4d2d-4bc2-bfc3-4352d7d586de-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 30 17:16:44 crc kubenswrapper[4712]: I0130 17:16:44.893589 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-75b8cdc675-hwkng"]
Jan 30 17:16:44 crc kubenswrapper[4712]: E0130 17:16:44.894306 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c4a03a4-e80d-4605-990f-a242222558bb" containerName="barbican-db-sync"
Jan 30 17:16:44 crc kubenswrapper[4712]: I0130 17:16:44.894324 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c4a03a4-e80d-4605-990f-a242222558bb" containerName="barbican-db-sync"
Jan 30 17:16:44 crc kubenswrapper[4712]: E0130 17:16:44.894354 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3896ac30-4d2d-4bc2-bfc3-4352d7d586de" containerName="sg-core"
Jan 30 17:16:44 crc kubenswrapper[4712]: I0130 17:16:44.894360 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="3896ac30-4d2d-4bc2-bfc3-4352d7d586de" containerName="sg-core"
Jan 30 17:16:44 crc kubenswrapper[4712]: I0130 17:16:44.894524 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c4a03a4-e80d-4605-990f-a242222558bb" containerName="barbican-db-sync"
Jan 30 17:16:44 crc kubenswrapper[4712]: I0130 17:16:44.894560 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="3896ac30-4d2d-4bc2-bfc3-4352d7d586de" containerName="sg-core"
Jan 30 17:16:44 crc kubenswrapper[4712]: I0130 17:16:44.895444 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-75b8cdc675-hwkng" Jan 30 17:16:44 crc kubenswrapper[4712]: I0130 17:16:44.904763 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 30 17:16:44 crc kubenswrapper[4712]: I0130 17:16:44.904970 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 30 17:16:44 crc kubenswrapper[4712]: I0130 17:16:44.905057 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-nhqcz" Jan 30 17:16:44 crc kubenswrapper[4712]: I0130 17:16:44.930408 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-75b8cdc675-hwkng"] Jan 30 17:16:44 crc kubenswrapper[4712]: I0130 17:16:44.958426 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-57c9fd48b-mnwmt"] Jan 30 17:16:44 crc kubenswrapper[4712]: I0130 17:16:44.960376 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-57c9fd48b-mnwmt" Jan 30 17:16:44 crc kubenswrapper[4712]: I0130 17:16:44.968668 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 30 17:16:44 crc kubenswrapper[4712]: I0130 17:16:44.979181 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-57c9fd48b-mnwmt"] Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.049374 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49bb97a8-9dba-4ebf-9196-812577411892-combined-ca-bundle\") pod \"barbican-keystone-listener-57c9fd48b-mnwmt\" (UID: \"49bb97a8-9dba-4ebf-9196-812577411892\") " pod="openstack/barbican-keystone-listener-57c9fd48b-mnwmt" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.049456 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlfmx\" (UniqueName: \"kubernetes.io/projected/7441ba42-3158-40d9-9a91-467fef6769cd-kube-api-access-mlfmx\") pod \"barbican-worker-75b8cdc675-hwkng\" (UID: \"7441ba42-3158-40d9-9a91-467fef6769cd\") " pod="openstack/barbican-worker-75b8cdc675-hwkng" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.049511 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzzgp\" (UniqueName: \"kubernetes.io/projected/49bb97a8-9dba-4ebf-9196-812577411892-kube-api-access-xzzgp\") pod \"barbican-keystone-listener-57c9fd48b-mnwmt\" (UID: \"49bb97a8-9dba-4ebf-9196-812577411892\") " pod="openstack/barbican-keystone-listener-57c9fd48b-mnwmt" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.049556 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7441ba42-3158-40d9-9a91-467fef6769cd-combined-ca-bundle\") pod \"barbican-worker-75b8cdc675-hwkng\" (UID: \"7441ba42-3158-40d9-9a91-467fef6769cd\") " pod="openstack/barbican-worker-75b8cdc675-hwkng" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.049577 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49bb97a8-9dba-4ebf-9196-812577411892-config-data-custom\") pod 
\"barbican-keystone-listener-57c9fd48b-mnwmt\" (UID: \"49bb97a8-9dba-4ebf-9196-812577411892\") " pod="openstack/barbican-keystone-listener-57c9fd48b-mnwmt" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.049605 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49bb97a8-9dba-4ebf-9196-812577411892-config-data\") pod \"barbican-keystone-listener-57c9fd48b-mnwmt\" (UID: \"49bb97a8-9dba-4ebf-9196-812577411892\") " pod="openstack/barbican-keystone-listener-57c9fd48b-mnwmt" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.049625 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49bb97a8-9dba-4ebf-9196-812577411892-logs\") pod \"barbican-keystone-listener-57c9fd48b-mnwmt\" (UID: \"49bb97a8-9dba-4ebf-9196-812577411892\") " pod="openstack/barbican-keystone-listener-57c9fd48b-mnwmt" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.049644 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7441ba42-3158-40d9-9a91-467fef6769cd-logs\") pod \"barbican-worker-75b8cdc675-hwkng\" (UID: \"7441ba42-3158-40d9-9a91-467fef6769cd\") " pod="openstack/barbican-worker-75b8cdc675-hwkng" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.049669 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7441ba42-3158-40d9-9a91-467fef6769cd-config-data-custom\") pod \"barbican-worker-75b8cdc675-hwkng\" (UID: \"7441ba42-3158-40d9-9a91-467fef6769cd\") " pod="openstack/barbican-worker-75b8cdc675-hwkng" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.049736 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7441ba42-3158-40d9-9a91-467fef6769cd-config-data\") pod \"barbican-worker-75b8cdc675-hwkng\" (UID: \"7441ba42-3158-40d9-9a91-467fef6769cd\") " pod="openstack/barbican-worker-75b8cdc675-hwkng" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.062518 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-vbbkf"] Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.063934 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-vbbkf" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.072688 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-56f8b66d48-7wr47" podUID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.124899 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-vbbkf"] Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.151608 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c944942-c975-4bd5-b6e5-8199b95609a7-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-vbbkf\" (UID: \"3c944942-c975-4bd5-b6e5-8199b95609a7\") " pod="openstack/dnsmasq-dns-85ff748b95-vbbkf" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.151883 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mlfmx\" (UniqueName: \"kubernetes.io/projected/7441ba42-3158-40d9-9a91-467fef6769cd-kube-api-access-mlfmx\") pod \"barbican-worker-75b8cdc675-hwkng\" (UID: \"7441ba42-3158-40d9-9a91-467fef6769cd\") " pod="openstack/barbican-worker-75b8cdc675-hwkng" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.151988 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzzgp\" (UniqueName: \"kubernetes.io/projected/49bb97a8-9dba-4ebf-9196-812577411892-kube-api-access-xzzgp\") pod \"barbican-keystone-listener-57c9fd48b-mnwmt\" (UID: \"49bb97a8-9dba-4ebf-9196-812577411892\") " pod="openstack/barbican-keystone-listener-57c9fd48b-mnwmt" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.152103 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c944942-c975-4bd5-b6e5-8199b95609a7-config\") pod \"dnsmasq-dns-85ff748b95-vbbkf\" (UID: \"3c944942-c975-4bd5-b6e5-8199b95609a7\") " pod="openstack/dnsmasq-dns-85ff748b95-vbbkf" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.152199 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c944942-c975-4bd5-b6e5-8199b95609a7-dns-svc\") pod \"dnsmasq-dns-85ff748b95-vbbkf\" (UID: \"3c944942-c975-4bd5-b6e5-8199b95609a7\") " pod="openstack/dnsmasq-dns-85ff748b95-vbbkf" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.152297 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7441ba42-3158-40d9-9a91-467fef6769cd-combined-ca-bundle\") pod \"barbican-worker-75b8cdc675-hwkng\" (UID: \"7441ba42-3158-40d9-9a91-467fef6769cd\") " pod="openstack/barbican-worker-75b8cdc675-hwkng" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.152370 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49bb97a8-9dba-4ebf-9196-812577411892-config-data-custom\") pod \"barbican-keystone-listener-57c9fd48b-mnwmt\" (UID: \"49bb97a8-9dba-4ebf-9196-812577411892\") " pod="openstack/barbican-keystone-listener-57c9fd48b-mnwmt" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 
17:16:45.152446 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c944942-c975-4bd5-b6e5-8199b95609a7-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-vbbkf\" (UID: \"3c944942-c975-4bd5-b6e5-8199b95609a7\") " pod="openstack/dnsmasq-dns-85ff748b95-vbbkf" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.152537 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49bb97a8-9dba-4ebf-9196-812577411892-config-data\") pod \"barbican-keystone-listener-57c9fd48b-mnwmt\" (UID: \"49bb97a8-9dba-4ebf-9196-812577411892\") " pod="openstack/barbican-keystone-listener-57c9fd48b-mnwmt" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.152608 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49bb97a8-9dba-4ebf-9196-812577411892-logs\") pod \"barbican-keystone-listener-57c9fd48b-mnwmt\" (UID: \"49bb97a8-9dba-4ebf-9196-812577411892\") " pod="openstack/barbican-keystone-listener-57c9fd48b-mnwmt" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.152676 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7441ba42-3158-40d9-9a91-467fef6769cd-logs\") pod \"barbican-worker-75b8cdc675-hwkng\" (UID: \"7441ba42-3158-40d9-9a91-467fef6769cd\") " pod="openstack/barbican-worker-75b8cdc675-hwkng" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.152762 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7441ba42-3158-40d9-9a91-467fef6769cd-config-data-custom\") pod \"barbican-worker-75b8cdc675-hwkng\" (UID: \"7441ba42-3158-40d9-9a91-467fef6769cd\") " pod="openstack/barbican-worker-75b8cdc675-hwkng" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.152925 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c944942-c975-4bd5-b6e5-8199b95609a7-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-vbbkf\" (UID: \"3c944942-c975-4bd5-b6e5-8199b95609a7\") " pod="openstack/dnsmasq-dns-85ff748b95-vbbkf" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.153037 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtlmn\" (UniqueName: \"kubernetes.io/projected/3c944942-c975-4bd5-b6e5-8199b95609a7-kube-api-access-mtlmn\") pod \"dnsmasq-dns-85ff748b95-vbbkf\" (UID: \"3c944942-c975-4bd5-b6e5-8199b95609a7\") " pod="openstack/dnsmasq-dns-85ff748b95-vbbkf" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.153125 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7441ba42-3158-40d9-9a91-467fef6769cd-config-data\") pod \"barbican-worker-75b8cdc675-hwkng\" (UID: \"7441ba42-3158-40d9-9a91-467fef6769cd\") " pod="openstack/barbican-worker-75b8cdc675-hwkng" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.153234 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49bb97a8-9dba-4ebf-9196-812577411892-combined-ca-bundle\") pod \"barbican-keystone-listener-57c9fd48b-mnwmt\" (UID: \"49bb97a8-9dba-4ebf-9196-812577411892\") " 
pod="openstack/barbican-keystone-listener-57c9fd48b-mnwmt" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.153849 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/49bb97a8-9dba-4ebf-9196-812577411892-logs\") pod \"barbican-keystone-listener-57c9fd48b-mnwmt\" (UID: \"49bb97a8-9dba-4ebf-9196-812577411892\") " pod="openstack/barbican-keystone-listener-57c9fd48b-mnwmt" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.154165 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7441ba42-3158-40d9-9a91-467fef6769cd-logs\") pod \"barbican-worker-75b8cdc675-hwkng\" (UID: \"7441ba42-3158-40d9-9a91-467fef6769cd\") " pod="openstack/barbican-worker-75b8cdc675-hwkng" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.160872 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/49bb97a8-9dba-4ebf-9196-812577411892-combined-ca-bundle\") pod \"barbican-keystone-listener-57c9fd48b-mnwmt\" (UID: \"49bb97a8-9dba-4ebf-9196-812577411892\") " pod="openstack/barbican-keystone-listener-57c9fd48b-mnwmt" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.160958 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7441ba42-3158-40d9-9a91-467fef6769cd-config-data-custom\") pod \"barbican-worker-75b8cdc675-hwkng\" (UID: \"7441ba42-3158-40d9-9a91-467fef6769cd\") " pod="openstack/barbican-worker-75b8cdc675-hwkng" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.161608 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/49bb97a8-9dba-4ebf-9196-812577411892-config-data-custom\") pod \"barbican-keystone-listener-57c9fd48b-mnwmt\" (UID: \"49bb97a8-9dba-4ebf-9196-812577411892\") " pod="openstack/barbican-keystone-listener-57c9fd48b-mnwmt" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.162976 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7441ba42-3158-40d9-9a91-467fef6769cd-config-data\") pod \"barbican-worker-75b8cdc675-hwkng\" (UID: \"7441ba42-3158-40d9-9a91-467fef6769cd\") " pod="openstack/barbican-worker-75b8cdc675-hwkng" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.165658 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/49bb97a8-9dba-4ebf-9196-812577411892-config-data\") pod \"barbican-keystone-listener-57c9fd48b-mnwmt\" (UID: \"49bb97a8-9dba-4ebf-9196-812577411892\") " pod="openstack/barbican-keystone-listener-57c9fd48b-mnwmt" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.170361 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7441ba42-3158-40d9-9a91-467fef6769cd-combined-ca-bundle\") pod \"barbican-worker-75b8cdc675-hwkng\" (UID: \"7441ba42-3158-40d9-9a91-467fef6769cd\") " pod="openstack/barbican-worker-75b8cdc675-hwkng" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.173622 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mlfmx\" (UniqueName: \"kubernetes.io/projected/7441ba42-3158-40d9-9a91-467fef6769cd-kube-api-access-mlfmx\") pod \"barbican-worker-75b8cdc675-hwkng\" (UID: 
\"7441ba42-3158-40d9-9a91-467fef6769cd\") " pod="openstack/barbican-worker-75b8cdc675-hwkng" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.176278 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzzgp\" (UniqueName: \"kubernetes.io/projected/49bb97a8-9dba-4ebf-9196-812577411892-kube-api-access-xzzgp\") pod \"barbican-keystone-listener-57c9fd48b-mnwmt\" (UID: \"49bb97a8-9dba-4ebf-9196-812577411892\") " pod="openstack/barbican-keystone-listener-57c9fd48b-mnwmt" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.246429 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-75b8cdc675-hwkng" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.255133 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c944942-c975-4bd5-b6e5-8199b95609a7-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-vbbkf\" (UID: \"3c944942-c975-4bd5-b6e5-8199b95609a7\") " pod="openstack/dnsmasq-dns-85ff748b95-vbbkf" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.255224 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtlmn\" (UniqueName: \"kubernetes.io/projected/3c944942-c975-4bd5-b6e5-8199b95609a7-kube-api-access-mtlmn\") pod \"dnsmasq-dns-85ff748b95-vbbkf\" (UID: \"3c944942-c975-4bd5-b6e5-8199b95609a7\") " pod="openstack/dnsmasq-dns-85ff748b95-vbbkf" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.255274 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c944942-c975-4bd5-b6e5-8199b95609a7-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-vbbkf\" (UID: \"3c944942-c975-4bd5-b6e5-8199b95609a7\") " pod="openstack/dnsmasq-dns-85ff748b95-vbbkf" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.255338 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c944942-c975-4bd5-b6e5-8199b95609a7-config\") pod \"dnsmasq-dns-85ff748b95-vbbkf\" (UID: \"3c944942-c975-4bd5-b6e5-8199b95609a7\") " pod="openstack/dnsmasq-dns-85ff748b95-vbbkf" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.255373 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c944942-c975-4bd5-b6e5-8199b95609a7-dns-svc\") pod \"dnsmasq-dns-85ff748b95-vbbkf\" (UID: \"3c944942-c975-4bd5-b6e5-8199b95609a7\") " pod="openstack/dnsmasq-dns-85ff748b95-vbbkf" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.255414 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c944942-c975-4bd5-b6e5-8199b95609a7-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-vbbkf\" (UID: \"3c944942-c975-4bd5-b6e5-8199b95609a7\") " pod="openstack/dnsmasq-dns-85ff748b95-vbbkf" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.256648 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c944942-c975-4bd5-b6e5-8199b95609a7-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-vbbkf\" (UID: \"3c944942-c975-4bd5-b6e5-8199b95609a7\") " pod="openstack/dnsmasq-dns-85ff748b95-vbbkf" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.257277 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c944942-c975-4bd5-b6e5-8199b95609a7-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-vbbkf\" (UID: \"3c944942-c975-4bd5-b6e5-8199b95609a7\") " pod="openstack/dnsmasq-dns-85ff748b95-vbbkf" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.258107 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c944942-c975-4bd5-b6e5-8199b95609a7-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-vbbkf\" (UID: \"3c944942-c975-4bd5-b6e5-8199b95609a7\") " pod="openstack/dnsmasq-dns-85ff748b95-vbbkf" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.258599 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c944942-c975-4bd5-b6e5-8199b95609a7-config\") pod \"dnsmasq-dns-85ff748b95-vbbkf\" (UID: \"3c944942-c975-4bd5-b6e5-8199b95609a7\") " pod="openstack/dnsmasq-dns-85ff748b95-vbbkf" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.259128 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c944942-c975-4bd5-b6e5-8199b95609a7-dns-svc\") pod \"dnsmasq-dns-85ff748b95-vbbkf\" (UID: \"3c944942-c975-4bd5-b6e5-8199b95609a7\") " pod="openstack/dnsmasq-dns-85ff748b95-vbbkf" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.268916 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-5cc65645c4-8p2m2"] Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.271147 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5cc65645c4-8p2m2" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.276918 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.280891 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5cc65645c4-8p2m2"] Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.309282 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtlmn\" (UniqueName: \"kubernetes.io/projected/3c944942-c975-4bd5-b6e5-8199b95609a7-kube-api-access-mtlmn\") pod \"dnsmasq-dns-85ff748b95-vbbkf\" (UID: \"3c944942-c975-4bd5-b6e5-8199b95609a7\") " pod="openstack/dnsmasq-dns-85ff748b95-vbbkf" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.316416 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-57c9fd48b-mnwmt" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.356718 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-64655dbc44-pvj2c" podUID="6a28b495-ecf0-409e-9558-ee794a46dbd1" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.156:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.156:8443: connect: connection refused" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.357043 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ad2cb18-dfc8-45eb-9d27-22df3af4e84e-config-data-custom\") pod \"barbican-api-5cc65645c4-8p2m2\" (UID: \"9ad2cb18-dfc8-45eb-9d27-22df3af4e84e\") " pod="openstack/barbican-api-5cc65645c4-8p2m2" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.357082 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z47c\" (UniqueName: \"kubernetes.io/projected/9ad2cb18-dfc8-45eb-9d27-22df3af4e84e-kube-api-access-9z47c\") pod \"barbican-api-5cc65645c4-8p2m2\" (UID: \"9ad2cb18-dfc8-45eb-9d27-22df3af4e84e\") " pod="openstack/barbican-api-5cc65645c4-8p2m2" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.357172 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ad2cb18-dfc8-45eb-9d27-22df3af4e84e-combined-ca-bundle\") pod \"barbican-api-5cc65645c4-8p2m2\" (UID: \"9ad2cb18-dfc8-45eb-9d27-22df3af4e84e\") " pod="openstack/barbican-api-5cc65645c4-8p2m2" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.357248 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ad2cb18-dfc8-45eb-9d27-22df3af4e84e-config-data\") pod \"barbican-api-5cc65645c4-8p2m2\" (UID: \"9ad2cb18-dfc8-45eb-9d27-22df3af4e84e\") " pod="openstack/barbican-api-5cc65645c4-8p2m2" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.357291 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ad2cb18-dfc8-45eb-9d27-22df3af4e84e-logs\") pod \"barbican-api-5cc65645c4-8p2m2\" (UID: \"9ad2cb18-dfc8-45eb-9d27-22df3af4e84e\") " pod="openstack/barbican-api-5cc65645c4-8p2m2" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.431221 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-vbbkf" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.459696 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ad2cb18-dfc8-45eb-9d27-22df3af4e84e-config-data-custom\") pod \"barbican-api-5cc65645c4-8p2m2\" (UID: \"9ad2cb18-dfc8-45eb-9d27-22df3af4e84e\") " pod="openstack/barbican-api-5cc65645c4-8p2m2" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.459923 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z47c\" (UniqueName: \"kubernetes.io/projected/9ad2cb18-dfc8-45eb-9d27-22df3af4e84e-kube-api-access-9z47c\") pod \"barbican-api-5cc65645c4-8p2m2\" (UID: \"9ad2cb18-dfc8-45eb-9d27-22df3af4e84e\") " pod="openstack/barbican-api-5cc65645c4-8p2m2" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.460108 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ad2cb18-dfc8-45eb-9d27-22df3af4e84e-combined-ca-bundle\") pod \"barbican-api-5cc65645c4-8p2m2\" (UID: \"9ad2cb18-dfc8-45eb-9d27-22df3af4e84e\") " pod="openstack/barbican-api-5cc65645c4-8p2m2" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.460406 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ad2cb18-dfc8-45eb-9d27-22df3af4e84e-config-data\") pod \"barbican-api-5cc65645c4-8p2m2\" (UID: \"9ad2cb18-dfc8-45eb-9d27-22df3af4e84e\") " pod="openstack/barbican-api-5cc65645c4-8p2m2" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.460497 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ad2cb18-dfc8-45eb-9d27-22df3af4e84e-logs\") pod \"barbican-api-5cc65645c4-8p2m2\" (UID: \"9ad2cb18-dfc8-45eb-9d27-22df3af4e84e\") " pod="openstack/barbican-api-5cc65645c4-8p2m2" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.461136 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ad2cb18-dfc8-45eb-9d27-22df3af4e84e-logs\") pod \"barbican-api-5cc65645c4-8p2m2\" (UID: \"9ad2cb18-dfc8-45eb-9d27-22df3af4e84e\") " pod="openstack/barbican-api-5cc65645c4-8p2m2" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.496028 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ad2cb18-dfc8-45eb-9d27-22df3af4e84e-combined-ca-bundle\") pod \"barbican-api-5cc65645c4-8p2m2\" (UID: \"9ad2cb18-dfc8-45eb-9d27-22df3af4e84e\") " pod="openstack/barbican-api-5cc65645c4-8p2m2" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.496806 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9z47c\" (UniqueName: \"kubernetes.io/projected/9ad2cb18-dfc8-45eb-9d27-22df3af4e84e-kube-api-access-9z47c\") pod \"barbican-api-5cc65645c4-8p2m2\" (UID: \"9ad2cb18-dfc8-45eb-9d27-22df3af4e84e\") " pod="openstack/barbican-api-5cc65645c4-8p2m2" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.498396 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ad2cb18-dfc8-45eb-9d27-22df3af4e84e-config-data-custom\") pod \"barbican-api-5cc65645c4-8p2m2\" (UID: \"9ad2cb18-dfc8-45eb-9d27-22df3af4e84e\") " 
pod="openstack/barbican-api-5cc65645c4-8p2m2" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.512521 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ad2cb18-dfc8-45eb-9d27-22df3af4e84e-config-data\") pod \"barbican-api-5cc65645c4-8p2m2\" (UID: \"9ad2cb18-dfc8-45eb-9d27-22df3af4e84e\") " pod="openstack/barbican-api-5cc65645c4-8p2m2" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.626362 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.639775 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-5cc65645c4-8p2m2" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.679964 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5bcf445ccb-bcbn6" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.885872 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.918751 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.956695 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.959973 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.963159 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.963648 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 17:16:45 crc kubenswrapper[4712]: I0130 17:16:45.986812 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.005147 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8vnb\" (UniqueName: \"kubernetes.io/projected/3770729e-1882-447d-bc3f-46413301437f-kube-api-access-m8vnb\") pod \"ceilometer-0\" (UID: \"3770729e-1882-447d-bc3f-46413301437f\") " pod="openstack/ceilometer-0" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.005251 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3770729e-1882-447d-bc3f-46413301437f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3770729e-1882-447d-bc3f-46413301437f\") " pod="openstack/ceilometer-0" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.005376 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3770729e-1882-447d-bc3f-46413301437f-config-data\") pod \"ceilometer-0\" (UID: \"3770729e-1882-447d-bc3f-46413301437f\") " pod="openstack/ceilometer-0" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.005531 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3770729e-1882-447d-bc3f-46413301437f-scripts\") pod \"ceilometer-0\" (UID: \"3770729e-1882-447d-bc3f-46413301437f\") " pod="openstack/ceilometer-0" Jan 30 
17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.005583 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3770729e-1882-447d-bc3f-46413301437f-log-httpd\") pod \"ceilometer-0\" (UID: \"3770729e-1882-447d-bc3f-46413301437f\") " pod="openstack/ceilometer-0" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.005707 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3770729e-1882-447d-bc3f-46413301437f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3770729e-1882-447d-bc3f-46413301437f\") " pod="openstack/ceilometer-0" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.005747 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3770729e-1882-447d-bc3f-46413301437f-run-httpd\") pod \"ceilometer-0\" (UID: \"3770729e-1882-447d-bc3f-46413301437f\") " pod="openstack/ceilometer-0" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.108190 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-75b8cdc675-hwkng"] Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.110620 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3770729e-1882-447d-bc3f-46413301437f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3770729e-1882-447d-bc3f-46413301437f\") " pod="openstack/ceilometer-0" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.110653 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3770729e-1882-447d-bc3f-46413301437f-run-httpd\") pod \"ceilometer-0\" (UID: \"3770729e-1882-447d-bc3f-46413301437f\") " pod="openstack/ceilometer-0" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.110681 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8vnb\" (UniqueName: \"kubernetes.io/projected/3770729e-1882-447d-bc3f-46413301437f-kube-api-access-m8vnb\") pod \"ceilometer-0\" (UID: \"3770729e-1882-447d-bc3f-46413301437f\") " pod="openstack/ceilometer-0" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.110703 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3770729e-1882-447d-bc3f-46413301437f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3770729e-1882-447d-bc3f-46413301437f\") " pod="openstack/ceilometer-0" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.110749 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3770729e-1882-447d-bc3f-46413301437f-config-data\") pod \"ceilometer-0\" (UID: \"3770729e-1882-447d-bc3f-46413301437f\") " pod="openstack/ceilometer-0" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.110812 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3770729e-1882-447d-bc3f-46413301437f-scripts\") pod \"ceilometer-0\" (UID: \"3770729e-1882-447d-bc3f-46413301437f\") " pod="openstack/ceilometer-0" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.110834 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3770729e-1882-447d-bc3f-46413301437f-log-httpd\") pod \"ceilometer-0\" (UID: \"3770729e-1882-447d-bc3f-46413301437f\") " pod="openstack/ceilometer-0" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.111330 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3770729e-1882-447d-bc3f-46413301437f-log-httpd\") pod \"ceilometer-0\" (UID: \"3770729e-1882-447d-bc3f-46413301437f\") " pod="openstack/ceilometer-0" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.119152 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3770729e-1882-447d-bc3f-46413301437f-run-httpd\") pod \"ceilometer-0\" (UID: \"3770729e-1882-447d-bc3f-46413301437f\") " pod="openstack/ceilometer-0" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.153943 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3770729e-1882-447d-bc3f-46413301437f-scripts\") pod \"ceilometer-0\" (UID: \"3770729e-1882-447d-bc3f-46413301437f\") " pod="openstack/ceilometer-0" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.154921 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8vnb\" (UniqueName: \"kubernetes.io/projected/3770729e-1882-447d-bc3f-46413301437f-kube-api-access-m8vnb\") pod \"ceilometer-0\" (UID: \"3770729e-1882-447d-bc3f-46413301437f\") " pod="openstack/ceilometer-0" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.159230 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3770729e-1882-447d-bc3f-46413301437f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3770729e-1882-447d-bc3f-46413301437f\") " pod="openstack/ceilometer-0" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.160373 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3770729e-1882-447d-bc3f-46413301437f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3770729e-1882-447d-bc3f-46413301437f\") " pod="openstack/ceilometer-0" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.161458 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3770729e-1882-447d-bc3f-46413301437f-config-data\") pod \"ceilometer-0\" (UID: \"3770729e-1882-447d-bc3f-46413301437f\") " pod="openstack/ceilometer-0" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.167957 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-74d94b9977-8pbjf"] Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.168240 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-74d94b9977-8pbjf" podUID="dcb48170-513b-48ad-a97b-0612fb16c386" containerName="neutron-api" containerID="cri-o://f03aa97520b2a348873bb997ee14bb11f4f74b129d84afa15b6ffa5b2046f634" gracePeriod=30 Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.168663 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-74d94b9977-8pbjf" podUID="dcb48170-513b-48ad-a97b-0612fb16c386" containerName="neutron-httpd" containerID="cri-o://715cc251bc6e08124e523fcd00030cab0baf4ab189117fa8fe39cb5b03275996" gracePeriod=30 Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 
17:16:46.192861 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7f6ddf59f7-2n5p6"] Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.194055 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-74d94b9977-8pbjf" podUID="dcb48170-513b-48ad-a97b-0612fb16c386" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.162:9696/\": EOF" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.194370 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7f6ddf59f7-2n5p6" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.239954 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-57c9fd48b-mnwmt"] Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.253317 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7f6ddf59f7-2n5p6"] Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.289845 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-vbbkf"] Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.315136 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e-httpd-config\") pod \"neutron-7f6ddf59f7-2n5p6\" (UID: \"1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e\") " pod="openstack/neutron-7f6ddf59f7-2n5p6" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.315165 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e-internal-tls-certs\") pod \"neutron-7f6ddf59f7-2n5p6\" (UID: \"1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e\") " pod="openstack/neutron-7f6ddf59f7-2n5p6" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.315219 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e-ovndb-tls-certs\") pod \"neutron-7f6ddf59f7-2n5p6\" (UID: \"1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e\") " pod="openstack/neutron-7f6ddf59f7-2n5p6" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.315250 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e-config\") pod \"neutron-7f6ddf59f7-2n5p6\" (UID: \"1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e\") " pod="openstack/neutron-7f6ddf59f7-2n5p6" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.315271 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnnpc\" (UniqueName: \"kubernetes.io/projected/1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e-kube-api-access-pnnpc\") pod \"neutron-7f6ddf59f7-2n5p6\" (UID: \"1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e\") " pod="openstack/neutron-7f6ddf59f7-2n5p6" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.315353 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e-public-tls-certs\") pod \"neutron-7f6ddf59f7-2n5p6\" (UID: \"1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e\") " pod="openstack/neutron-7f6ddf59f7-2n5p6" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 
17:16:46.315372 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e-combined-ca-bundle\") pod \"neutron-7f6ddf59f7-2n5p6\" (UID: \"1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e\") " pod="openstack/neutron-7f6ddf59f7-2n5p6" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.364046 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.425924 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e-public-tls-certs\") pod \"neutron-7f6ddf59f7-2n5p6\" (UID: \"1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e\") " pod="openstack/neutron-7f6ddf59f7-2n5p6" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.425981 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e-combined-ca-bundle\") pod \"neutron-7f6ddf59f7-2n5p6\" (UID: \"1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e\") " pod="openstack/neutron-7f6ddf59f7-2n5p6" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.426103 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e-httpd-config\") pod \"neutron-7f6ddf59f7-2n5p6\" (UID: \"1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e\") " pod="openstack/neutron-7f6ddf59f7-2n5p6" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.426128 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e-internal-tls-certs\") pod \"neutron-7f6ddf59f7-2n5p6\" (UID: \"1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e\") " pod="openstack/neutron-7f6ddf59f7-2n5p6" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.426186 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e-ovndb-tls-certs\") pod \"neutron-7f6ddf59f7-2n5p6\" (UID: \"1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e\") " pod="openstack/neutron-7f6ddf59f7-2n5p6" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.426226 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e-config\") pod \"neutron-7f6ddf59f7-2n5p6\" (UID: \"1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e\") " pod="openstack/neutron-7f6ddf59f7-2n5p6" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.426256 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnnpc\" (UniqueName: \"kubernetes.io/projected/1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e-kube-api-access-pnnpc\") pod \"neutron-7f6ddf59f7-2n5p6\" (UID: \"1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e\") " pod="openstack/neutron-7f6ddf59f7-2n5p6" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.432247 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e-internal-tls-certs\") pod \"neutron-7f6ddf59f7-2n5p6\" (UID: \"1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e\") " 
pod="openstack/neutron-7f6ddf59f7-2n5p6" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.442991 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e-combined-ca-bundle\") pod \"neutron-7f6ddf59f7-2n5p6\" (UID: \"1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e\") " pod="openstack/neutron-7f6ddf59f7-2n5p6" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.443003 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e-ovndb-tls-certs\") pod \"neutron-7f6ddf59f7-2n5p6\" (UID: \"1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e\") " pod="openstack/neutron-7f6ddf59f7-2n5p6" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.444248 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e-config\") pod \"neutron-7f6ddf59f7-2n5p6\" (UID: \"1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e\") " pod="openstack/neutron-7f6ddf59f7-2n5p6" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.444906 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e-httpd-config\") pod \"neutron-7f6ddf59f7-2n5p6\" (UID: \"1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e\") " pod="openstack/neutron-7f6ddf59f7-2n5p6" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.445438 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e-public-tls-certs\") pod \"neutron-7f6ddf59f7-2n5p6\" (UID: \"1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e\") " pod="openstack/neutron-7f6ddf59f7-2n5p6" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.472424 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnnpc\" (UniqueName: \"kubernetes.io/projected/1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e-kube-api-access-pnnpc\") pod \"neutron-7f6ddf59f7-2n5p6\" (UID: \"1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e\") " pod="openstack/neutron-7f6ddf59f7-2n5p6" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.552576 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7f6ddf59f7-2n5p6" Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.666131 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57c9fd48b-mnwmt" event={"ID":"49bb97a8-9dba-4ebf-9196-812577411892","Type":"ContainerStarted","Data":"8c460a3781ee8435adc7df229aedc1dccd54b22897c360a80b0158229e753fb3"} Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.666878 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-75b8cdc675-hwkng" event={"ID":"7441ba42-3158-40d9-9a91-467fef6769cd","Type":"ContainerStarted","Data":"6654bbb6a7e2d22c6f02caeb4a2213dd8c869a2ba958c0bb3ccdaf358c6e5d14"} Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.667550 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-vbbkf" event={"ID":"3c944942-c975-4bd5-b6e5-8199b95609a7","Type":"ContainerStarted","Data":"02977ad129694bfed5aa33e6837ba7723513eac9050c370c6f06220d814395d6"} Jan 30 17:16:46 crc kubenswrapper[4712]: I0130 17:16:46.890771 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-5cc65645c4-8p2m2"] Jan 30 17:16:46 crc kubenswrapper[4712]: W0130 17:16:46.935913 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9ad2cb18_dfc8_45eb_9d27_22df3af4e84e.slice/crio-0d1f8481245ab0cf13c86726ae9e13ad9bce9e5a12320f893f6c1f35bec39617 WatchSource:0}: Error finding container 0d1f8481245ab0cf13c86726ae9e13ad9bce9e5a12320f893f6c1f35bec39617: Status 404 returned error can't find the container with id 0d1f8481245ab0cf13c86726ae9e13ad9bce9e5a12320f893f6c1f35bec39617 Jan 30 17:16:47 crc kubenswrapper[4712]: I0130 17:16:47.268806 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 17:16:47 crc kubenswrapper[4712]: I0130 17:16:47.649921 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7f6ddf59f7-2n5p6"] Jan 30 17:16:47 crc kubenswrapper[4712]: I0130 17:16:47.697642 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3770729e-1882-447d-bc3f-46413301437f","Type":"ContainerStarted","Data":"7389f7c91301c14df17b6eb9ca04b48255ec5603180c540b768559fcaead26f8"} Jan 30 17:16:47 crc kubenswrapper[4712]: I0130 17:16:47.734859 4712 generic.go:334] "Generic (PLEG): container finished" podID="dcb48170-513b-48ad-a97b-0612fb16c386" containerID="715cc251bc6e08124e523fcd00030cab0baf4ab189117fa8fe39cb5b03275996" exitCode=0 Jan 30 17:16:47 crc kubenswrapper[4712]: I0130 17:16:47.735267 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74d94b9977-8pbjf" event={"ID":"dcb48170-513b-48ad-a97b-0612fb16c386","Type":"ContainerDied","Data":"715cc251bc6e08124e523fcd00030cab0baf4ab189117fa8fe39cb5b03275996"} Jan 30 17:16:47 crc kubenswrapper[4712]: I0130 17:16:47.740638 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cc65645c4-8p2m2" event={"ID":"9ad2cb18-dfc8-45eb-9d27-22df3af4e84e","Type":"ContainerStarted","Data":"ee7b3c100b56e6f5861ab1740fcbf2da2866e0589a8a90b38916b8dd8867d9e2"} Jan 30 17:16:47 crc kubenswrapper[4712]: I0130 17:16:47.740686 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cc65645c4-8p2m2" event={"ID":"9ad2cb18-dfc8-45eb-9d27-22df3af4e84e","Type":"ContainerStarted","Data":"0d1f8481245ab0cf13c86726ae9e13ad9bce9e5a12320f893f6c1f35bec39617"} Jan 30 17:16:47 
Jan 30 17:16:47 crc kubenswrapper[4712]: I0130 17:16:47.748342 4712 generic.go:334] "Generic (PLEG): container finished" podID="3c944942-c975-4bd5-b6e5-8199b95609a7" containerID="abdb1f2c9aae4430ee1f6c60b3b13c8c7a36a4aaf9aafee1a0d9903c23cf8cab" exitCode=0
Jan 30 17:16:47 crc kubenswrapper[4712]: I0130 17:16:47.748510 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-vbbkf" event={"ID":"3c944942-c975-4bd5-b6e5-8199b95609a7","Type":"ContainerDied","Data":"abdb1f2c9aae4430ee1f6c60b3b13c8c7a36a4aaf9aafee1a0d9903c23cf8cab"}
Jan 30 17:16:47 crc kubenswrapper[4712]: I0130 17:16:47.851909 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3896ac30-4d2d-4bc2-bfc3-4352d7d586de" path="/var/lib/kubelet/pods/3896ac30-4d2d-4bc2-bfc3-4352d7d586de/volumes"
Jan 30 17:16:48 crc kubenswrapper[4712]: I0130 17:16:48.606426 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-74d94b9977-8pbjf"
Jan 30 17:16:48 crc kubenswrapper[4712]: I0130 17:16:48.702347 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcb48170-513b-48ad-a97b-0612fb16c386-internal-tls-certs\") pod \"dcb48170-513b-48ad-a97b-0612fb16c386\" (UID: \"dcb48170-513b-48ad-a97b-0612fb16c386\") "
Jan 30 17:16:48 crc kubenswrapper[4712]: I0130 17:16:48.702916 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcb48170-513b-48ad-a97b-0612fb16c386-combined-ca-bundle\") pod \"dcb48170-513b-48ad-a97b-0612fb16c386\" (UID: \"dcb48170-513b-48ad-a97b-0612fb16c386\") "
Jan 30 17:16:48 crc kubenswrapper[4712]: I0130 17:16:48.703221 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcb48170-513b-48ad-a97b-0612fb16c386-ovndb-tls-certs\") pod \"dcb48170-513b-48ad-a97b-0612fb16c386\" (UID: \"dcb48170-513b-48ad-a97b-0612fb16c386\") "
Jan 30 17:16:48 crc kubenswrapper[4712]: I0130 17:16:48.703369 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcb48170-513b-48ad-a97b-0612fb16c386-public-tls-certs\") pod \"dcb48170-513b-48ad-a97b-0612fb16c386\" (UID: \"dcb48170-513b-48ad-a97b-0612fb16c386\") "
Jan 30 17:16:48 crc kubenswrapper[4712]: I0130 17:16:48.703405 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dcb48170-513b-48ad-a97b-0612fb16c386-httpd-config\") pod \"dcb48170-513b-48ad-a97b-0612fb16c386\" (UID: \"dcb48170-513b-48ad-a97b-0612fb16c386\") "
Jan 30 17:16:48 crc kubenswrapper[4712]: I0130 17:16:48.704045 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5fsp\" (UniqueName: \"kubernetes.io/projected/dcb48170-513b-48ad-a97b-0612fb16c386-kube-api-access-j5fsp\") pod \"dcb48170-513b-48ad-a97b-0612fb16c386\" (UID: \"dcb48170-513b-48ad-a97b-0612fb16c386\") "
Jan 30 17:16:48 crc kubenswrapper[4712]: I0130 17:16:48.704081 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/dcb48170-513b-48ad-a97b-0612fb16c386-config\") pod \"dcb48170-513b-48ad-a97b-0612fb16c386\" (UID: \"dcb48170-513b-48ad-a97b-0612fb16c386\") "
Jan 30 17:16:48 crc kubenswrapper[4712]: I0130 17:16:48.716825 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcb48170-513b-48ad-a97b-0612fb16c386-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "dcb48170-513b-48ad-a97b-0612fb16c386" (UID: "dcb48170-513b-48ad-a97b-0612fb16c386"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:16:48 crc kubenswrapper[4712]: I0130 17:16:48.721762 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcb48170-513b-48ad-a97b-0612fb16c386-kube-api-access-j5fsp" (OuterVolumeSpecName: "kube-api-access-j5fsp") pod "dcb48170-513b-48ad-a97b-0612fb16c386" (UID: "dcb48170-513b-48ad-a97b-0612fb16c386"). InnerVolumeSpecName "kube-api-access-j5fsp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:16:48 crc kubenswrapper[4712]: I0130 17:16:48.797876 4712 generic.go:334] "Generic (PLEG): container finished" podID="dcb48170-513b-48ad-a97b-0612fb16c386" containerID="f03aa97520b2a348873bb997ee14bb11f4f74b129d84afa15b6ffa5b2046f634" exitCode=0
Jan 30 17:16:48 crc kubenswrapper[4712]: I0130 17:16:48.798012 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74d94b9977-8pbjf" event={"ID":"dcb48170-513b-48ad-a97b-0612fb16c386","Type":"ContainerDied","Data":"f03aa97520b2a348873bb997ee14bb11f4f74b129d84afa15b6ffa5b2046f634"}
Jan 30 17:16:48 crc kubenswrapper[4712]: I0130 17:16:48.798066 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-74d94b9977-8pbjf" event={"ID":"dcb48170-513b-48ad-a97b-0612fb16c386","Type":"ContainerDied","Data":"5ed0b14a7ed25deaca84f26bbe94060f70bf322695a4c3c239675397b94feec2"}
Jan 30 17:16:48 crc kubenswrapper[4712]: I0130 17:16:48.798091 4712 scope.go:117] "RemoveContainer" containerID="715cc251bc6e08124e523fcd00030cab0baf4ab189117fa8fe39cb5b03275996"
Jan 30 17:16:48 crc kubenswrapper[4712]: I0130 17:16:48.798395 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-74d94b9977-8pbjf"
Jan 30 17:16:48 crc kubenswrapper[4712]: I0130 17:16:48.808408 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5fsp\" (UniqueName: \"kubernetes.io/projected/dcb48170-513b-48ad-a97b-0612fb16c386-kube-api-access-j5fsp\") on node \"crc\" DevicePath \"\""
Jan 30 17:16:48 crc kubenswrapper[4712]: I0130 17:16:48.808445 4712 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/dcb48170-513b-48ad-a97b-0612fb16c386-httpd-config\") on node \"crc\" DevicePath \"\""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:16:48 crc kubenswrapper[4712]: I0130 17:16:48.814678 4712 generic.go:334] "Generic (PLEG): container finished" podID="3c24ed25-f06f-494d-9fd5-2077c052db31" containerID="7044a23a75fa9d1cbe45ab912580d60ec45c452b219704a72a61230af590edd6" exitCode=0 Jan 30 17:16:48 crc kubenswrapper[4712]: I0130 17:16:48.814826 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-9gcv2" event={"ID":"3c24ed25-f06f-494d-9fd5-2077c052db31","Type":"ContainerDied","Data":"7044a23a75fa9d1cbe45ab912580d60ec45c452b219704a72a61230af590edd6"} Jan 30 17:16:48 crc kubenswrapper[4712]: I0130 17:16:48.833243 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcb48170-513b-48ad-a97b-0612fb16c386-config" (OuterVolumeSpecName: "config") pod "dcb48170-513b-48ad-a97b-0612fb16c386" (UID: "dcb48170-513b-48ad-a97b-0612fb16c386"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:16:48 crc kubenswrapper[4712]: I0130 17:16:48.833476 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cc65645c4-8p2m2" event={"ID":"9ad2cb18-dfc8-45eb-9d27-22df3af4e84e","Type":"ContainerStarted","Data":"92a634e61c6e87c7ca6cab19cb6cb0f636e8094e309369c1d2e6b244d0b6fd5b"} Jan 30 17:16:48 crc kubenswrapper[4712]: I0130 17:16:48.834502 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5cc65645c4-8p2m2" Jan 30 17:16:48 crc kubenswrapper[4712]: I0130 17:16:48.834550 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-5cc65645c4-8p2m2" Jan 30 17:16:48 crc kubenswrapper[4712]: I0130 17:16:48.853435 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f6ddf59f7-2n5p6" event={"ID":"1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e","Type":"ContainerStarted","Data":"993107844dafc7c19c2354f3296a6ce66c54d9072cb6422cce7daf9efbd86e90"} Jan 30 17:16:48 crc kubenswrapper[4712]: I0130 17:16:48.853517 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f6ddf59f7-2n5p6" event={"ID":"1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e","Type":"ContainerStarted","Data":"d8b36e20b091b5275f37954d946a46ed8f02362382a84cdd36391046b34f3c41"} Jan 30 17:16:48 crc kubenswrapper[4712]: I0130 17:16:48.892834 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcb48170-513b-48ad-a97b-0612fb16c386-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "dcb48170-513b-48ad-a97b-0612fb16c386" (UID: "dcb48170-513b-48ad-a97b-0612fb16c386"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:16:48 crc kubenswrapper[4712]: I0130 17:16:48.895398 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-5cc65645c4-8p2m2" podStartSLOduration=3.895376953 podStartE2EDuration="3.895376953s" podCreationTimestamp="2026-01-30 17:16:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:16:48.865869694 +0000 UTC m=+1345.772879223" watchObservedRunningTime="2026-01-30 17:16:48.895376953 +0000 UTC m=+1345.802386422" Jan 30 17:16:48 crc kubenswrapper[4712]: I0130 17:16:48.910208 4712 scope.go:117] "RemoveContainer" containerID="f03aa97520b2a348873bb997ee14bb11f4f74b129d84afa15b6ffa5b2046f634" Jan 30 17:16:48 crc kubenswrapper[4712]: I0130 17:16:48.911055 4712 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/dcb48170-513b-48ad-a97b-0612fb16c386-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:48 crc kubenswrapper[4712]: I0130 17:16:48.911054 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-vbbkf" event={"ID":"3c944942-c975-4bd5-b6e5-8199b95609a7","Type":"ContainerStarted","Data":"6dea66d7adcf9ee5244803ab89b51220fe1c7488a1df8b1f7f00a90fed5ce0cf"} Jan 30 17:16:48 crc kubenswrapper[4712]: I0130 17:16:48.911235 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85ff748b95-vbbkf" Jan 30 17:16:48 crc kubenswrapper[4712]: I0130 17:16:48.913373 4712 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcb48170-513b-48ad-a97b-0612fb16c386-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:48 crc kubenswrapper[4712]: I0130 17:16:48.913406 4712 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcb48170-513b-48ad-a97b-0612fb16c386-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:48 crc kubenswrapper[4712]: I0130 17:16:48.924170 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcb48170-513b-48ad-a97b-0612fb16c386-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dcb48170-513b-48ad-a97b-0612fb16c386" (UID: "dcb48170-513b-48ad-a97b-0612fb16c386"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:16:48 crc kubenswrapper[4712]: I0130 17:16:48.930322 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3770729e-1882-447d-bc3f-46413301437f","Type":"ContainerStarted","Data":"2abff2a39f69c92d6b6f1a7bd3de162fe1a94708d72b57a74c331880b4618230"} Jan 30 17:16:48 crc kubenswrapper[4712]: I0130 17:16:48.943968 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85ff748b95-vbbkf" podStartSLOduration=3.943921729 podStartE2EDuration="3.943921729s" podCreationTimestamp="2026-01-30 17:16:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:16:48.936200754 +0000 UTC m=+1345.843210223" watchObservedRunningTime="2026-01-30 17:16:48.943921729 +0000 UTC m=+1345.850931198" Jan 30 17:16:48 crc kubenswrapper[4712]: I0130 17:16:48.955390 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcb48170-513b-48ad-a97b-0612fb16c386-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "dcb48170-513b-48ad-a97b-0612fb16c386" (UID: "dcb48170-513b-48ad-a97b-0612fb16c386"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:16:48 crc kubenswrapper[4712]: I0130 17:16:48.997545 4712 scope.go:117] "RemoveContainer" containerID="715cc251bc6e08124e523fcd00030cab0baf4ab189117fa8fe39cb5b03275996" Jan 30 17:16:48 crc kubenswrapper[4712]: E0130 17:16:48.999120 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"715cc251bc6e08124e523fcd00030cab0baf4ab189117fa8fe39cb5b03275996\": container with ID starting with 715cc251bc6e08124e523fcd00030cab0baf4ab189117fa8fe39cb5b03275996 not found: ID does not exist" containerID="715cc251bc6e08124e523fcd00030cab0baf4ab189117fa8fe39cb5b03275996" Jan 30 17:16:48 crc kubenswrapper[4712]: I0130 17:16:48.999152 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"715cc251bc6e08124e523fcd00030cab0baf4ab189117fa8fe39cb5b03275996"} err="failed to get container status \"715cc251bc6e08124e523fcd00030cab0baf4ab189117fa8fe39cb5b03275996\": rpc error: code = NotFound desc = could not find container \"715cc251bc6e08124e523fcd00030cab0baf4ab189117fa8fe39cb5b03275996\": container with ID starting with 715cc251bc6e08124e523fcd00030cab0baf4ab189117fa8fe39cb5b03275996 not found: ID does not exist" Jan 30 17:16:48 crc kubenswrapper[4712]: I0130 17:16:48.999172 4712 scope.go:117] "RemoveContainer" containerID="f03aa97520b2a348873bb997ee14bb11f4f74b129d84afa15b6ffa5b2046f634" Jan 30 17:16:49 crc kubenswrapper[4712]: E0130 17:16:48.999501 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f03aa97520b2a348873bb997ee14bb11f4f74b129d84afa15b6ffa5b2046f634\": container with ID starting with f03aa97520b2a348873bb997ee14bb11f4f74b129d84afa15b6ffa5b2046f634 not found: ID does not exist" containerID="f03aa97520b2a348873bb997ee14bb11f4f74b129d84afa15b6ffa5b2046f634" Jan 30 17:16:49 crc kubenswrapper[4712]: I0130 17:16:48.999522 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f03aa97520b2a348873bb997ee14bb11f4f74b129d84afa15b6ffa5b2046f634"} err="failed to get container status 
\"f03aa97520b2a348873bb997ee14bb11f4f74b129d84afa15b6ffa5b2046f634\": rpc error: code = NotFound desc = could not find container \"f03aa97520b2a348873bb997ee14bb11f4f74b129d84afa15b6ffa5b2046f634\": container with ID starting with f03aa97520b2a348873bb997ee14bb11f4f74b129d84afa15b6ffa5b2046f634 not found: ID does not exist" Jan 30 17:16:49 crc kubenswrapper[4712]: I0130 17:16:49.019789 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcb48170-513b-48ad-a97b-0612fb16c386-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:49 crc kubenswrapper[4712]: I0130 17:16:49.019855 4712 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/dcb48170-513b-48ad-a97b-0612fb16c386-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:49 crc kubenswrapper[4712]: I0130 17:16:49.168881 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-74d94b9977-8pbjf"] Jan 30 17:16:49 crc kubenswrapper[4712]: I0130 17:16:49.193632 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-74d94b9977-8pbjf"] Jan 30 17:16:49 crc kubenswrapper[4712]: I0130 17:16:49.818048 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcb48170-513b-48ad-a97b-0612fb16c386" path="/var/lib/kubelet/pods/dcb48170-513b-48ad-a97b-0612fb16c386/volumes" Jan 30 17:16:49 crc kubenswrapper[4712]: I0130 17:16:49.948361 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3770729e-1882-447d-bc3f-46413301437f","Type":"ContainerStarted","Data":"4e80187a3b6c9283da731ffe5a293d4662eca7d098dad2dcd88a859869314be1"} Jan 30 17:16:49 crc kubenswrapper[4712]: I0130 17:16:49.955036 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f6ddf59f7-2n5p6" event={"ID":"1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e","Type":"ContainerStarted","Data":"02bf2aea54a017ee1cf4837d85762bd9d20a73eec94a402b4f7134bb8f244146"} Jan 30 17:16:50 crc kubenswrapper[4712]: I0130 17:16:50.004379 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7f6ddf59f7-2n5p6" podStartSLOduration=4.00436235 podStartE2EDuration="4.00436235s" podCreationTimestamp="2026-01-30 17:16:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:16:50.000790754 +0000 UTC m=+1346.907800233" watchObservedRunningTime="2026-01-30 17:16:50.00436235 +0000 UTC m=+1346.911371819" Jan 30 17:16:50 crc kubenswrapper[4712]: I0130 17:16:50.806906 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-9gcv2" Jan 30 17:16:50 crc kubenswrapper[4712]: I0130 17:16:50.859401 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c24ed25-f06f-494d-9fd5-2077c052db31-combined-ca-bundle\") pod \"3c24ed25-f06f-494d-9fd5-2077c052db31\" (UID: \"3c24ed25-f06f-494d-9fd5-2077c052db31\") " Jan 30 17:16:50 crc kubenswrapper[4712]: I0130 17:16:50.859515 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hg828\" (UniqueName: \"kubernetes.io/projected/3c24ed25-f06f-494d-9fd5-2077c052db31-kube-api-access-hg828\") pod \"3c24ed25-f06f-494d-9fd5-2077c052db31\" (UID: \"3c24ed25-f06f-494d-9fd5-2077c052db31\") " Jan 30 17:16:50 crc kubenswrapper[4712]: I0130 17:16:50.859622 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c24ed25-f06f-494d-9fd5-2077c052db31-config-data\") pod \"3c24ed25-f06f-494d-9fd5-2077c052db31\" (UID: \"3c24ed25-f06f-494d-9fd5-2077c052db31\") " Jan 30 17:16:50 crc kubenswrapper[4712]: I0130 17:16:50.905148 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c24ed25-f06f-494d-9fd5-2077c052db31-kube-api-access-hg828" (OuterVolumeSpecName: "kube-api-access-hg828") pod "3c24ed25-f06f-494d-9fd5-2077c052db31" (UID: "3c24ed25-f06f-494d-9fd5-2077c052db31"). InnerVolumeSpecName "kube-api-access-hg828". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:16:50 crc kubenswrapper[4712]: I0130 17:16:50.918256 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c24ed25-f06f-494d-9fd5-2077c052db31-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c24ed25-f06f-494d-9fd5-2077c052db31" (UID: "3c24ed25-f06f-494d-9fd5-2077c052db31"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:16:50 crc kubenswrapper[4712]: I0130 17:16:50.968877 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c24ed25-f06f-494d-9fd5-2077c052db31-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:50 crc kubenswrapper[4712]: I0130 17:16:50.968943 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hg828\" (UniqueName: \"kubernetes.io/projected/3c24ed25-f06f-494d-9fd5-2077c052db31-kube-api-access-hg828\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:50 crc kubenswrapper[4712]: I0130 17:16:50.989487 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-9gcv2" Jan 30 17:16:50 crc kubenswrapper[4712]: I0130 17:16:50.989677 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-9gcv2" event={"ID":"3c24ed25-f06f-494d-9fd5-2077c052db31","Type":"ContainerDied","Data":"95c75feffa0b4af4dfb52e8b01ffea788aebd92ddeda2a530b5206f5a299d645"} Jan 30 17:16:50 crc kubenswrapper[4712]: I0130 17:16:50.989731 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="95c75feffa0b4af4dfb52e8b01ffea788aebd92ddeda2a530b5206f5a299d645" Jan 30 17:16:51 crc kubenswrapper[4712]: I0130 17:16:51.005976 4712 generic.go:334] "Generic (PLEG): container finished" podID="2ef9729d-cbbc-4354-98e4-a9e07651518e" containerID="e264b53f3868c5a390c29891442008f13f5c8c52760ff372f2898b802d090802" exitCode=0 Jan 30 17:16:51 crc kubenswrapper[4712]: I0130 17:16:51.007346 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-78jqx" event={"ID":"2ef9729d-cbbc-4354-98e4-a9e07651518e","Type":"ContainerDied","Data":"e264b53f3868c5a390c29891442008f13f5c8c52760ff372f2898b802d090802"} Jan 30 17:16:51 crc kubenswrapper[4712]: I0130 17:16:51.007577 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7f6ddf59f7-2n5p6" Jan 30 17:16:51 crc kubenswrapper[4712]: I0130 17:16:51.025289 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c24ed25-f06f-494d-9fd5-2077c052db31-config-data" (OuterVolumeSpecName: "config-data") pod "3c24ed25-f06f-494d-9fd5-2077c052db31" (UID: "3c24ed25-f06f-494d-9fd5-2077c052db31"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:16:51 crc kubenswrapper[4712]: I0130 17:16:51.071123 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c24ed25-f06f-494d-9fd5-2077c052db31-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:51 crc kubenswrapper[4712]: I0130 17:16:51.297093 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-84958ddfbd-52vdv"] Jan 30 17:16:51 crc kubenswrapper[4712]: E0130 17:16:51.297484 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c24ed25-f06f-494d-9fd5-2077c052db31" containerName="heat-db-sync" Jan 30 17:16:51 crc kubenswrapper[4712]: I0130 17:16:51.297500 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c24ed25-f06f-494d-9fd5-2077c052db31" containerName="heat-db-sync" Jan 30 17:16:51 crc kubenswrapper[4712]: E0130 17:16:51.297513 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcb48170-513b-48ad-a97b-0612fb16c386" containerName="neutron-httpd" Jan 30 17:16:51 crc kubenswrapper[4712]: I0130 17:16:51.297522 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcb48170-513b-48ad-a97b-0612fb16c386" containerName="neutron-httpd" Jan 30 17:16:51 crc kubenswrapper[4712]: E0130 17:16:51.297541 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcb48170-513b-48ad-a97b-0612fb16c386" containerName="neutron-api" Jan 30 17:16:51 crc kubenswrapper[4712]: I0130 17:16:51.297547 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcb48170-513b-48ad-a97b-0612fb16c386" containerName="neutron-api" Jan 30 17:16:51 crc kubenswrapper[4712]: I0130 17:16:51.297719 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcb48170-513b-48ad-a97b-0612fb16c386" containerName="neutron-api" Jan 30 17:16:51 crc 
Jan 30 17:16:51 crc kubenswrapper[4712]: I0130 17:16:51.297732 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c24ed25-f06f-494d-9fd5-2077c052db31" containerName="heat-db-sync"
Jan 30 17:16:51 crc kubenswrapper[4712]: I0130 17:16:51.297745 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcb48170-513b-48ad-a97b-0612fb16c386" containerName="neutron-httpd"
Jan 30 17:16:51 crc kubenswrapper[4712]: I0130 17:16:51.298610 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-84958ddfbd-52vdv"
Jan 30 17:16:51 crc kubenswrapper[4712]: I0130 17:16:51.305180 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Jan 30 17:16:51 crc kubenswrapper[4712]: I0130 17:16:51.306199 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Jan 30 17:16:51 crc kubenswrapper[4712]: I0130 17:16:51.323810 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-84958ddfbd-52vdv"]
Jan 30 17:16:51 crc kubenswrapper[4712]: I0130 17:16:51.382225 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/777b9322-044d-4461-9d82-9854438205fc-internal-tls-certs\") pod \"barbican-api-84958ddfbd-52vdv\" (UID: \"777b9322-044d-4461-9d82-9854438205fc\") " pod="openstack/barbican-api-84958ddfbd-52vdv"
Jan 30 17:16:51 crc kubenswrapper[4712]: I0130 17:16:51.382614 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v9pn\" (UniqueName: \"kubernetes.io/projected/777b9322-044d-4461-9d82-9854438205fc-kube-api-access-7v9pn\") pod \"barbican-api-84958ddfbd-52vdv\" (UID: \"777b9322-044d-4461-9d82-9854438205fc\") " pod="openstack/barbican-api-84958ddfbd-52vdv"
Jan 30 17:16:51 crc kubenswrapper[4712]: I0130 17:16:51.382648 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/777b9322-044d-4461-9d82-9854438205fc-public-tls-certs\") pod \"barbican-api-84958ddfbd-52vdv\" (UID: \"777b9322-044d-4461-9d82-9854438205fc\") " pod="openstack/barbican-api-84958ddfbd-52vdv"
Jan 30 17:16:51 crc kubenswrapper[4712]: I0130 17:16:51.382694 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/777b9322-044d-4461-9d82-9854438205fc-combined-ca-bundle\") pod \"barbican-api-84958ddfbd-52vdv\" (UID: \"777b9322-044d-4461-9d82-9854438205fc\") " pod="openstack/barbican-api-84958ddfbd-52vdv"
Jan 30 17:16:51 crc kubenswrapper[4712]: I0130 17:16:51.382713 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/777b9322-044d-4461-9d82-9854438205fc-config-data-custom\") pod \"barbican-api-84958ddfbd-52vdv\" (UID: \"777b9322-044d-4461-9d82-9854438205fc\") " pod="openstack/barbican-api-84958ddfbd-52vdv"
Jan 30 17:16:51 crc kubenswrapper[4712]: I0130 17:16:51.382730 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/777b9322-044d-4461-9d82-9854438205fc-config-data\") pod \"barbican-api-84958ddfbd-52vdv\" (UID: \"777b9322-044d-4461-9d82-9854438205fc\") " pod="openstack/barbican-api-84958ddfbd-52vdv"
Jan 30 17:16:51 crc kubenswrapper[4712]: I0130 17:16:51.382782 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/777b9322-044d-4461-9d82-9854438205fc-logs\") pod \"barbican-api-84958ddfbd-52vdv\" (UID: \"777b9322-044d-4461-9d82-9854438205fc\") " pod="openstack/barbican-api-84958ddfbd-52vdv"
Jan 30 17:16:51 crc kubenswrapper[4712]: I0130 17:16:51.491372 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/777b9322-044d-4461-9d82-9854438205fc-combined-ca-bundle\") pod \"barbican-api-84958ddfbd-52vdv\" (UID: \"777b9322-044d-4461-9d82-9854438205fc\") " pod="openstack/barbican-api-84958ddfbd-52vdv"
Jan 30 17:16:51 crc kubenswrapper[4712]: I0130 17:16:51.491647 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/777b9322-044d-4461-9d82-9854438205fc-config-data-custom\") pod \"barbican-api-84958ddfbd-52vdv\" (UID: \"777b9322-044d-4461-9d82-9854438205fc\") " pod="openstack/barbican-api-84958ddfbd-52vdv"
Jan 30 17:16:51 crc kubenswrapper[4712]: I0130 17:16:51.491784 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/777b9322-044d-4461-9d82-9854438205fc-config-data\") pod \"barbican-api-84958ddfbd-52vdv\" (UID: \"777b9322-044d-4461-9d82-9854438205fc\") " pod="openstack/barbican-api-84958ddfbd-52vdv"
Jan 30 17:16:51 crc kubenswrapper[4712]: I0130 17:16:51.493640 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/777b9322-044d-4461-9d82-9854438205fc-logs\") pod \"barbican-api-84958ddfbd-52vdv\" (UID: \"777b9322-044d-4461-9d82-9854438205fc\") " pod="openstack/barbican-api-84958ddfbd-52vdv"
Jan 30 17:16:51 crc kubenswrapper[4712]: I0130 17:16:51.494952 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/777b9322-044d-4461-9d82-9854438205fc-internal-tls-certs\") pod \"barbican-api-84958ddfbd-52vdv\" (UID: \"777b9322-044d-4461-9d82-9854438205fc\") " pod="openstack/barbican-api-84958ddfbd-52vdv"
Jan 30 17:16:51 crc kubenswrapper[4712]: I0130 17:16:51.495052 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v9pn\" (UniqueName: \"kubernetes.io/projected/777b9322-044d-4461-9d82-9854438205fc-kube-api-access-7v9pn\") pod \"barbican-api-84958ddfbd-52vdv\" (UID: \"777b9322-044d-4461-9d82-9854438205fc\") " pod="openstack/barbican-api-84958ddfbd-52vdv"
Jan 30 17:16:51 crc kubenswrapper[4712]: I0130 17:16:51.495117 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/777b9322-044d-4461-9d82-9854438205fc-public-tls-certs\") pod \"barbican-api-84958ddfbd-52vdv\" (UID: \"777b9322-044d-4461-9d82-9854438205fc\") " pod="openstack/barbican-api-84958ddfbd-52vdv"
Jan 30 17:16:51 crc kubenswrapper[4712]: I0130 17:16:51.495701 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/777b9322-044d-4461-9d82-9854438205fc-logs\") pod \"barbican-api-84958ddfbd-52vdv\" (UID: \"777b9322-044d-4461-9d82-9854438205fc\") " pod="openstack/barbican-api-84958ddfbd-52vdv"
Jan 30 17:16:51 crc kubenswrapper[4712]: I0130 17:16:51.500349 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/777b9322-044d-4461-9d82-9854438205fc-config-data\") pod \"barbican-api-84958ddfbd-52vdv\" (UID: \"777b9322-044d-4461-9d82-9854438205fc\") " pod="openstack/barbican-api-84958ddfbd-52vdv"
Jan 30 17:16:51 crc kubenswrapper[4712]: I0130 17:16:51.500363 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/777b9322-044d-4461-9d82-9854438205fc-config-data-custom\") pod \"barbican-api-84958ddfbd-52vdv\" (UID: \"777b9322-044d-4461-9d82-9854438205fc\") " pod="openstack/barbican-api-84958ddfbd-52vdv"
Jan 30 17:16:51 crc kubenswrapper[4712]: I0130 17:16:51.504467 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/777b9322-044d-4461-9d82-9854438205fc-combined-ca-bundle\") pod \"barbican-api-84958ddfbd-52vdv\" (UID: \"777b9322-044d-4461-9d82-9854438205fc\") " pod="openstack/barbican-api-84958ddfbd-52vdv"
Jan 30 17:16:51 crc kubenswrapper[4712]: I0130 17:16:51.509488 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/777b9322-044d-4461-9d82-9854438205fc-internal-tls-certs\") pod \"barbican-api-84958ddfbd-52vdv\" (UID: \"777b9322-044d-4461-9d82-9854438205fc\") " pod="openstack/barbican-api-84958ddfbd-52vdv"
Jan 30 17:16:51 crc kubenswrapper[4712]: I0130 17:16:51.516044 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/777b9322-044d-4461-9d82-9854438205fc-public-tls-certs\") pod \"barbican-api-84958ddfbd-52vdv\" (UID: \"777b9322-044d-4461-9d82-9854438205fc\") " pod="openstack/barbican-api-84958ddfbd-52vdv"
Jan 30 17:16:51 crc kubenswrapper[4712]: I0130 17:16:51.523546 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7v9pn\" (UniqueName: \"kubernetes.io/projected/777b9322-044d-4461-9d82-9854438205fc-kube-api-access-7v9pn\") pod \"barbican-api-84958ddfbd-52vdv\" (UID: \"777b9322-044d-4461-9d82-9854438205fc\") " pod="openstack/barbican-api-84958ddfbd-52vdv"
Jan 30 17:16:51 crc kubenswrapper[4712]: I0130 17:16:51.635624 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-84958ddfbd-52vdv"
Need to start a new one" pod="openstack/cinder-db-sync-78jqx" Jan 30 17:16:52 crc kubenswrapper[4712]: I0130 17:16:52.732209 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2ef9729d-cbbc-4354-98e4-a9e07651518e-etc-machine-id\") pod \"2ef9729d-cbbc-4354-98e4-a9e07651518e\" (UID: \"2ef9729d-cbbc-4354-98e4-a9e07651518e\") " Jan 30 17:16:52 crc kubenswrapper[4712]: I0130 17:16:52.732288 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzf52\" (UniqueName: \"kubernetes.io/projected/2ef9729d-cbbc-4354-98e4-a9e07651518e-kube-api-access-jzf52\") pod \"2ef9729d-cbbc-4354-98e4-a9e07651518e\" (UID: \"2ef9729d-cbbc-4354-98e4-a9e07651518e\") " Jan 30 17:16:52 crc kubenswrapper[4712]: I0130 17:16:52.732285 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ef9729d-cbbc-4354-98e4-a9e07651518e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "2ef9729d-cbbc-4354-98e4-a9e07651518e" (UID: "2ef9729d-cbbc-4354-98e4-a9e07651518e"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:16:52 crc kubenswrapper[4712]: I0130 17:16:52.732371 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ef9729d-cbbc-4354-98e4-a9e07651518e-scripts\") pod \"2ef9729d-cbbc-4354-98e4-a9e07651518e\" (UID: \"2ef9729d-cbbc-4354-98e4-a9e07651518e\") " Jan 30 17:16:52 crc kubenswrapper[4712]: I0130 17:16:52.732388 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ef9729d-cbbc-4354-98e4-a9e07651518e-config-data\") pod \"2ef9729d-cbbc-4354-98e4-a9e07651518e\" (UID: \"2ef9729d-cbbc-4354-98e4-a9e07651518e\") " Jan 30 17:16:52 crc kubenswrapper[4712]: I0130 17:16:52.732468 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ef9729d-cbbc-4354-98e4-a9e07651518e-combined-ca-bundle\") pod \"2ef9729d-cbbc-4354-98e4-a9e07651518e\" (UID: \"2ef9729d-cbbc-4354-98e4-a9e07651518e\") " Jan 30 17:16:52 crc kubenswrapper[4712]: I0130 17:16:52.732500 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2ef9729d-cbbc-4354-98e4-a9e07651518e-db-sync-config-data\") pod \"2ef9729d-cbbc-4354-98e4-a9e07651518e\" (UID: \"2ef9729d-cbbc-4354-98e4-a9e07651518e\") " Jan 30 17:16:52 crc kubenswrapper[4712]: I0130 17:16:52.732879 4712 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2ef9729d-cbbc-4354-98e4-a9e07651518e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:52 crc kubenswrapper[4712]: I0130 17:16:52.756580 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ef9729d-cbbc-4354-98e4-a9e07651518e-scripts" (OuterVolumeSpecName: "scripts") pod "2ef9729d-cbbc-4354-98e4-a9e07651518e" (UID: "2ef9729d-cbbc-4354-98e4-a9e07651518e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:16:52 crc kubenswrapper[4712]: I0130 17:16:52.775139 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ef9729d-cbbc-4354-98e4-a9e07651518e-kube-api-access-jzf52" (OuterVolumeSpecName: "kube-api-access-jzf52") pod "2ef9729d-cbbc-4354-98e4-a9e07651518e" (UID: "2ef9729d-cbbc-4354-98e4-a9e07651518e"). InnerVolumeSpecName "kube-api-access-jzf52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:16:52 crc kubenswrapper[4712]: I0130 17:16:52.818909 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ef9729d-cbbc-4354-98e4-a9e07651518e-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2ef9729d-cbbc-4354-98e4-a9e07651518e" (UID: "2ef9729d-cbbc-4354-98e4-a9e07651518e"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:16:52 crc kubenswrapper[4712]: I0130 17:16:52.872878 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-84958ddfbd-52vdv"] Jan 30 17:16:52 crc kubenswrapper[4712]: I0130 17:16:52.876327 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzf52\" (UniqueName: \"kubernetes.io/projected/2ef9729d-cbbc-4354-98e4-a9e07651518e-kube-api-access-jzf52\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:52 crc kubenswrapper[4712]: I0130 17:16:52.876354 4712 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ef9729d-cbbc-4354-98e4-a9e07651518e-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:52 crc kubenswrapper[4712]: I0130 17:16:52.876364 4712 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2ef9729d-cbbc-4354-98e4-a9e07651518e-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:52 crc kubenswrapper[4712]: W0130 17:16:52.919484 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod777b9322_044d_4461_9d82_9854438205fc.slice/crio-871aa52699abd2d98a3739ea9af4bb6ed95f8d3989e963d57b1aa404fefbbf91 WatchSource:0}: Error finding container 871aa52699abd2d98a3739ea9af4bb6ed95f8d3989e963d57b1aa404fefbbf91: Status 404 returned error can't find the container with id 871aa52699abd2d98a3739ea9af4bb6ed95f8d3989e963d57b1aa404fefbbf91 Jan 30 17:16:52 crc kubenswrapper[4712]: I0130 17:16:52.932045 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ef9729d-cbbc-4354-98e4-a9e07651518e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ef9729d-cbbc-4354-98e4-a9e07651518e" (UID: "2ef9729d-cbbc-4354-98e4-a9e07651518e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:16:52 crc kubenswrapper[4712]: I0130 17:16:52.974222 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ef9729d-cbbc-4354-98e4-a9e07651518e-config-data" (OuterVolumeSpecName: "config-data") pod "2ef9729d-cbbc-4354-98e4-a9e07651518e" (UID: "2ef9729d-cbbc-4354-98e4-a9e07651518e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:16:52 crc kubenswrapper[4712]: I0130 17:16:52.982103 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ef9729d-cbbc-4354-98e4-a9e07651518e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:52 crc kubenswrapper[4712]: I0130 17:16:52.982140 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ef9729d-cbbc-4354-98e4-a9e07651518e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:53 crc kubenswrapper[4712]: I0130 17:16:53.038489 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3770729e-1882-447d-bc3f-46413301437f","Type":"ContainerStarted","Data":"7747f5be190ec75eb1e9bd4b2e5287e50b0b7f3283a8928f3616bcdef7e41c73"} Jan 30 17:16:53 crc kubenswrapper[4712]: I0130 17:16:53.039774 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84958ddfbd-52vdv" event={"ID":"777b9322-044d-4461-9d82-9854438205fc","Type":"ContainerStarted","Data":"871aa52699abd2d98a3739ea9af4bb6ed95f8d3989e963d57b1aa404fefbbf91"} Jan 30 17:16:53 crc kubenswrapper[4712]: I0130 17:16:53.043544 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57c9fd48b-mnwmt" event={"ID":"49bb97a8-9dba-4ebf-9196-812577411892","Type":"ContainerStarted","Data":"5e241aa599cafd82da41d8cc6327d72ca42bbb3d8938665a68fbc31830a4b06e"} Jan 30 17:16:53 crc kubenswrapper[4712]: I0130 17:16:53.045853 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-75b8cdc675-hwkng" event={"ID":"7441ba42-3158-40d9-9a91-467fef6769cd","Type":"ContainerStarted","Data":"cd3e948a14b533c65890b12124598b4ae3d3c0b9272930a400f1321bd37be66e"} Jan 30 17:16:53 crc kubenswrapper[4712]: I0130 17:16:53.046848 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-78jqx" event={"ID":"2ef9729d-cbbc-4354-98e4-a9e07651518e","Type":"ContainerDied","Data":"1a03036353bdf44f64cf0d2100bf8fe4b0197d3e8e8381e564311abc9da9aafa"} Jan 30 17:16:53 crc kubenswrapper[4712]: I0130 17:16:53.046870 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a03036353bdf44f64cf0d2100bf8fe4b0197d3e8e8381e564311abc9da9aafa" Jan 30 17:16:53 crc kubenswrapper[4712]: I0130 17:16:53.046914 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-78jqx" Jan 30 17:16:53 crc kubenswrapper[4712]: I0130 17:16:53.518077 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 17:16:53 crc kubenswrapper[4712]: E0130 17:16:53.518743 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ef9729d-cbbc-4354-98e4-a9e07651518e" containerName="cinder-db-sync" Jan 30 17:16:53 crc kubenswrapper[4712]: I0130 17:16:53.518758 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ef9729d-cbbc-4354-98e4-a9e07651518e" containerName="cinder-db-sync" Jan 30 17:16:53 crc kubenswrapper[4712]: I0130 17:16:53.518974 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ef9729d-cbbc-4354-98e4-a9e07651518e" containerName="cinder-db-sync" Jan 30 17:16:53 crc kubenswrapper[4712]: I0130 17:16:53.526486 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 17:16:53 crc kubenswrapper[4712]: I0130 17:16:53.541242 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-d7tcp" Jan 30 17:16:53 crc kubenswrapper[4712]: I0130 17:16:53.543212 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 30 17:16:53 crc kubenswrapper[4712]: I0130 17:16:53.543355 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 30 17:16:53 crc kubenswrapper[4712]: I0130 17:16:53.550090 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 30 17:16:53 crc kubenswrapper[4712]: I0130 17:16:53.573614 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 17:16:53 crc kubenswrapper[4712]: I0130 17:16:53.596641 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9fa2300d-0b2c-4e30-afb5-882b5e38841f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9fa2300d-0b2c-4e30-afb5-882b5e38841f\") " pod="openstack/cinder-scheduler-0" Jan 30 17:16:53 crc kubenswrapper[4712]: I0130 17:16:53.596757 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fa2300d-0b2c-4e30-afb5-882b5e38841f-config-data\") pod \"cinder-scheduler-0\" (UID: \"9fa2300d-0b2c-4e30-afb5-882b5e38841f\") " pod="openstack/cinder-scheduler-0" Jan 30 17:16:53 crc kubenswrapper[4712]: I0130 17:16:53.596854 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4fgx\" (UniqueName: \"kubernetes.io/projected/9fa2300d-0b2c-4e30-afb5-882b5e38841f-kube-api-access-v4fgx\") pod \"cinder-scheduler-0\" (UID: \"9fa2300d-0b2c-4e30-afb5-882b5e38841f\") " pod="openstack/cinder-scheduler-0" Jan 30 17:16:53 crc kubenswrapper[4712]: I0130 17:16:53.596939 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fa2300d-0b2c-4e30-afb5-882b5e38841f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9fa2300d-0b2c-4e30-afb5-882b5e38841f\") " pod="openstack/cinder-scheduler-0" Jan 30 17:16:53 crc kubenswrapper[4712]: I0130 17:16:53.597003 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9fa2300d-0b2c-4e30-afb5-882b5e38841f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9fa2300d-0b2c-4e30-afb5-882b5e38841f\") " pod="openstack/cinder-scheduler-0" Jan 30 17:16:53 crc kubenswrapper[4712]: I0130 17:16:53.597025 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fa2300d-0b2c-4e30-afb5-882b5e38841f-scripts\") pod \"cinder-scheduler-0\" (UID: \"9fa2300d-0b2c-4e30-afb5-882b5e38841f\") " pod="openstack/cinder-scheduler-0" Jan 30 17:16:53 crc kubenswrapper[4712]: I0130 17:16:53.699815 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9fa2300d-0b2c-4e30-afb5-882b5e38841f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9fa2300d-0b2c-4e30-afb5-882b5e38841f\") " 
pod="openstack/cinder-scheduler-0" Jan 30 17:16:53 crc kubenswrapper[4712]: I0130 17:16:53.699867 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fa2300d-0b2c-4e30-afb5-882b5e38841f-config-data\") pod \"cinder-scheduler-0\" (UID: \"9fa2300d-0b2c-4e30-afb5-882b5e38841f\") " pod="openstack/cinder-scheduler-0" Jan 30 17:16:53 crc kubenswrapper[4712]: I0130 17:16:53.699899 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4fgx\" (UniqueName: \"kubernetes.io/projected/9fa2300d-0b2c-4e30-afb5-882b5e38841f-kube-api-access-v4fgx\") pod \"cinder-scheduler-0\" (UID: \"9fa2300d-0b2c-4e30-afb5-882b5e38841f\") " pod="openstack/cinder-scheduler-0" Jan 30 17:16:53 crc kubenswrapper[4712]: I0130 17:16:53.699936 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fa2300d-0b2c-4e30-afb5-882b5e38841f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9fa2300d-0b2c-4e30-afb5-882b5e38841f\") " pod="openstack/cinder-scheduler-0" Jan 30 17:16:53 crc kubenswrapper[4712]: I0130 17:16:53.699973 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9fa2300d-0b2c-4e30-afb5-882b5e38841f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9fa2300d-0b2c-4e30-afb5-882b5e38841f\") " pod="openstack/cinder-scheduler-0" Jan 30 17:16:53 crc kubenswrapper[4712]: I0130 17:16:53.699989 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fa2300d-0b2c-4e30-afb5-882b5e38841f-scripts\") pod \"cinder-scheduler-0\" (UID: \"9fa2300d-0b2c-4e30-afb5-882b5e38841f\") " pod="openstack/cinder-scheduler-0" Jan 30 17:16:53 crc kubenswrapper[4712]: I0130 17:16:53.706992 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9fa2300d-0b2c-4e30-afb5-882b5e38841f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9fa2300d-0b2c-4e30-afb5-882b5e38841f\") " pod="openstack/cinder-scheduler-0" Jan 30 17:16:53 crc kubenswrapper[4712]: I0130 17:16:53.711121 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9fa2300d-0b2c-4e30-afb5-882b5e38841f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9fa2300d-0b2c-4e30-afb5-882b5e38841f\") " pod="openstack/cinder-scheduler-0" Jan 30 17:16:53 crc kubenswrapper[4712]: I0130 17:16:53.712546 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fa2300d-0b2c-4e30-afb5-882b5e38841f-scripts\") pod \"cinder-scheduler-0\" (UID: \"9fa2300d-0b2c-4e30-afb5-882b5e38841f\") " pod="openstack/cinder-scheduler-0" Jan 30 17:16:53 crc kubenswrapper[4712]: I0130 17:16:53.712547 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fa2300d-0b2c-4e30-afb5-882b5e38841f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9fa2300d-0b2c-4e30-afb5-882b5e38841f\") " pod="openstack/cinder-scheduler-0" Jan 30 17:16:53 crc kubenswrapper[4712]: I0130 17:16:53.723684 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fa2300d-0b2c-4e30-afb5-882b5e38841f-config-data\") pod 
\"cinder-scheduler-0\" (UID: \"9fa2300d-0b2c-4e30-afb5-882b5e38841f\") " pod="openstack/cinder-scheduler-0" Jan 30 17:16:53 crc kubenswrapper[4712]: I0130 17:16:53.772904 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-vbbkf"] Jan 30 17:16:53 crc kubenswrapper[4712]: I0130 17:16:53.789121 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-85ff748b95-vbbkf" podUID="3c944942-c975-4bd5-b6e5-8199b95609a7" containerName="dnsmasq-dns" containerID="cri-o://6dea66d7adcf9ee5244803ab89b51220fe1c7488a1df8b1f7f00a90fed5ce0cf" gracePeriod=10 Jan 30 17:16:53 crc kubenswrapper[4712]: I0130 17:16:53.819512 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4fgx\" (UniqueName: \"kubernetes.io/projected/9fa2300d-0b2c-4e30-afb5-882b5e38841f-kube-api-access-v4fgx\") pod \"cinder-scheduler-0\" (UID: \"9fa2300d-0b2c-4e30-afb5-882b5e38841f\") " pod="openstack/cinder-scheduler-0" Jan 30 17:16:53 crc kubenswrapper[4712]: I0130 17:16:53.867036 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85ff748b95-vbbkf" Jan 30 17:16:53 crc kubenswrapper[4712]: I0130 17:16:53.884216 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 17:16:53 crc kubenswrapper[4712]: I0130 17:16:53.941952 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-t4jhw"] Jan 30 17:16:53 crc kubenswrapper[4712]: I0130 17:16:53.943842 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-t4jhw" Jan 30 17:16:53 crc kubenswrapper[4712]: I0130 17:16:53.982031 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-t4jhw"] Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.019102 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/128af9ea-eb98-4631-9e61-af1a9d26e246-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-t4jhw\" (UID: \"128af9ea-eb98-4631-9e61-af1a9d26e246\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4jhw" Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.019167 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/128af9ea-eb98-4631-9e61-af1a9d26e246-config\") pod \"dnsmasq-dns-5c9776ccc5-t4jhw\" (UID: \"128af9ea-eb98-4631-9e61-af1a9d26e246\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4jhw" Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.019227 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/128af9ea-eb98-4631-9e61-af1a9d26e246-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-t4jhw\" (UID: \"128af9ea-eb98-4631-9e61-af1a9d26e246\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4jhw" Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.021513 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/128af9ea-eb98-4631-9e61-af1a9d26e246-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-t4jhw\" (UID: \"128af9ea-eb98-4631-9e61-af1a9d26e246\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4jhw" Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.021624 4712 
Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.021624 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/128af9ea-eb98-4631-9e61-af1a9d26e246-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-t4jhw\" (UID: \"128af9ea-eb98-4631-9e61-af1a9d26e246\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4jhw"
Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.021683 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cdc8\" (UniqueName: \"kubernetes.io/projected/128af9ea-eb98-4631-9e61-af1a9d26e246-kube-api-access-7cdc8\") pod \"dnsmasq-dns-5c9776ccc5-t4jhw\" (UID: \"128af9ea-eb98-4631-9e61-af1a9d26e246\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4jhw"
Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.139418 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/128af9ea-eb98-4631-9e61-af1a9d26e246-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-t4jhw\" (UID: \"128af9ea-eb98-4631-9e61-af1a9d26e246\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4jhw"
Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.139694 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/128af9ea-eb98-4631-9e61-af1a9d26e246-config\") pod \"dnsmasq-dns-5c9776ccc5-t4jhw\" (UID: \"128af9ea-eb98-4631-9e61-af1a9d26e246\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4jhw"
Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.139788 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/128af9ea-eb98-4631-9e61-af1a9d26e246-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-t4jhw\" (UID: \"128af9ea-eb98-4631-9e61-af1a9d26e246\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4jhw"
Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.139826 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/128af9ea-eb98-4631-9e61-af1a9d26e246-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-t4jhw\" (UID: \"128af9ea-eb98-4631-9e61-af1a9d26e246\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4jhw"
Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.139852 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/128af9ea-eb98-4631-9e61-af1a9d26e246-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-t4jhw\" (UID: \"128af9ea-eb98-4631-9e61-af1a9d26e246\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4jhw"
Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.139899 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cdc8\" (UniqueName: \"kubernetes.io/projected/128af9ea-eb98-4631-9e61-af1a9d26e246-kube-api-access-7cdc8\") pod \"dnsmasq-dns-5c9776ccc5-t4jhw\" (UID: \"128af9ea-eb98-4631-9e61-af1a9d26e246\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4jhw"
Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.153114 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/128af9ea-eb98-4631-9e61-af1a9d26e246-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-t4jhw\" (UID: \"128af9ea-eb98-4631-9e61-af1a9d26e246\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4jhw"
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/128af9ea-eb98-4631-9e61-af1a9d26e246-config\") pod \"dnsmasq-dns-5c9776ccc5-t4jhw\" (UID: \"128af9ea-eb98-4631-9e61-af1a9d26e246\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4jhw" Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.154141 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/128af9ea-eb98-4631-9e61-af1a9d26e246-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-t4jhw\" (UID: \"128af9ea-eb98-4631-9e61-af1a9d26e246\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4jhw" Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.165872 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/128af9ea-eb98-4631-9e61-af1a9d26e246-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-t4jhw\" (UID: \"128af9ea-eb98-4631-9e61-af1a9d26e246\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4jhw" Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.166034 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/128af9ea-eb98-4631-9e61-af1a9d26e246-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-t4jhw\" (UID: \"128af9ea-eb98-4631-9e61-af1a9d26e246\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4jhw" Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.176700 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-57c9fd48b-mnwmt" event={"ID":"49bb97a8-9dba-4ebf-9196-812577411892","Type":"ContainerStarted","Data":"555f78117ac9826be46d8fa0773c35ebfedea2ea5c443cb2c031fb39b732ad3a"} Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.179336 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cdc8\" (UniqueName: \"kubernetes.io/projected/128af9ea-eb98-4631-9e61-af1a9d26e246-kube-api-access-7cdc8\") pod \"dnsmasq-dns-5c9776ccc5-t4jhw\" (UID: \"128af9ea-eb98-4631-9e61-af1a9d26e246\") " pod="openstack/dnsmasq-dns-5c9776ccc5-t4jhw" Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.200397 4712 generic.go:334] "Generic (PLEG): container finished" podID="3c944942-c975-4bd5-b6e5-8199b95609a7" containerID="6dea66d7adcf9ee5244803ab89b51220fe1c7488a1df8b1f7f00a90fed5ce0cf" exitCode=0 Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.200514 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-vbbkf" event={"ID":"3c944942-c975-4bd5-b6e5-8199b95609a7","Type":"ContainerDied","Data":"6dea66d7adcf9ee5244803ab89b51220fe1c7488a1df8b1f7f00a90fed5ce0cf"} Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.211162 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84958ddfbd-52vdv" event={"ID":"777b9322-044d-4461-9d82-9854438205fc","Type":"ContainerStarted","Data":"60fc5670986b141693478121f29c7157e415e3db9a4f1bea8b38b7c19fdd1ff7"} Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.273625 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.275641 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.283342 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.301400 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.310740 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-57c9fd48b-mnwmt" podStartSLOduration=4.373316614 podStartE2EDuration="10.310720314s" podCreationTimestamp="2026-01-30 17:16:44 +0000 UTC" firstStartedPulling="2026-01-30 17:16:46.17368714 +0000 UTC m=+1343.080696609" lastFinishedPulling="2026-01-30 17:16:52.11109084 +0000 UTC m=+1349.018100309" observedRunningTime="2026-01-30 17:16:54.223358275 +0000 UTC m=+1351.130367744" watchObservedRunningTime="2026-01-30 17:16:54.310720314 +0000 UTC m=+1351.217729783" Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.355844 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ee0a8fb-a77e-4786-9ba2-93805c9cb272-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1ee0a8fb-a77e-4786-9ba2-93805c9cb272\") " pod="openstack/cinder-api-0" Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.355933 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ee0a8fb-a77e-4786-9ba2-93805c9cb272-config-data-custom\") pod \"cinder-api-0\" (UID: \"1ee0a8fb-a77e-4786-9ba2-93805c9cb272\") " pod="openstack/cinder-api-0" Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.355955 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ee0a8fb-a77e-4786-9ba2-93805c9cb272-config-data\") pod \"cinder-api-0\" (UID: \"1ee0a8fb-a77e-4786-9ba2-93805c9cb272\") " pod="openstack/cinder-api-0" Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.355990 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1ee0a8fb-a77e-4786-9ba2-93805c9cb272-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1ee0a8fb-a77e-4786-9ba2-93805c9cb272\") " pod="openstack/cinder-api-0" Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.356043 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbb8r\" (UniqueName: \"kubernetes.io/projected/1ee0a8fb-a77e-4786-9ba2-93805c9cb272-kube-api-access-vbb8r\") pod \"cinder-api-0\" (UID: \"1ee0a8fb-a77e-4786-9ba2-93805c9cb272\") " pod="openstack/cinder-api-0" Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.356083 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ee0a8fb-a77e-4786-9ba2-93805c9cb272-logs\") pod \"cinder-api-0\" (UID: \"1ee0a8fb-a77e-4786-9ba2-93805c9cb272\") " pod="openstack/cinder-api-0" Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.356138 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ee0a8fb-a77e-4786-9ba2-93805c9cb272-scripts\") pod \"cinder-api-0\" (UID: 
\"1ee0a8fb-a77e-4786-9ba2-93805c9cb272\") " pod="openstack/cinder-api-0" Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.405788 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-t4jhw" Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.458031 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ee0a8fb-a77e-4786-9ba2-93805c9cb272-scripts\") pod \"cinder-api-0\" (UID: \"1ee0a8fb-a77e-4786-9ba2-93805c9cb272\") " pod="openstack/cinder-api-0" Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.458335 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ee0a8fb-a77e-4786-9ba2-93805c9cb272-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1ee0a8fb-a77e-4786-9ba2-93805c9cb272\") " pod="openstack/cinder-api-0" Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.458435 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ee0a8fb-a77e-4786-9ba2-93805c9cb272-config-data-custom\") pod \"cinder-api-0\" (UID: \"1ee0a8fb-a77e-4786-9ba2-93805c9cb272\") " pod="openstack/cinder-api-0" Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.458503 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ee0a8fb-a77e-4786-9ba2-93805c9cb272-config-data\") pod \"cinder-api-0\" (UID: \"1ee0a8fb-a77e-4786-9ba2-93805c9cb272\") " pod="openstack/cinder-api-0" Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.458579 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1ee0a8fb-a77e-4786-9ba2-93805c9cb272-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1ee0a8fb-a77e-4786-9ba2-93805c9cb272\") " pod="openstack/cinder-api-0" Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.458671 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbb8r\" (UniqueName: \"kubernetes.io/projected/1ee0a8fb-a77e-4786-9ba2-93805c9cb272-kube-api-access-vbb8r\") pod \"cinder-api-0\" (UID: \"1ee0a8fb-a77e-4786-9ba2-93805c9cb272\") " pod="openstack/cinder-api-0" Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.458753 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ee0a8fb-a77e-4786-9ba2-93805c9cb272-logs\") pod \"cinder-api-0\" (UID: \"1ee0a8fb-a77e-4786-9ba2-93805c9cb272\") " pod="openstack/cinder-api-0" Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.462259 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ee0a8fb-a77e-4786-9ba2-93805c9cb272-logs\") pod \"cinder-api-0\" (UID: \"1ee0a8fb-a77e-4786-9ba2-93805c9cb272\") " pod="openstack/cinder-api-0" Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.468281 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1ee0a8fb-a77e-4786-9ba2-93805c9cb272-etc-machine-id\") pod \"cinder-api-0\" (UID: \"1ee0a8fb-a77e-4786-9ba2-93805c9cb272\") " pod="openstack/cinder-api-0" Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.471401 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/1ee0a8fb-a77e-4786-9ba2-93805c9cb272-scripts\") pod \"cinder-api-0\" (UID: \"1ee0a8fb-a77e-4786-9ba2-93805c9cb272\") " pod="openstack/cinder-api-0" Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.473788 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ee0a8fb-a77e-4786-9ba2-93805c9cb272-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"1ee0a8fb-a77e-4786-9ba2-93805c9cb272\") " pod="openstack/cinder-api-0" Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.484300 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ee0a8fb-a77e-4786-9ba2-93805c9cb272-config-data\") pod \"cinder-api-0\" (UID: \"1ee0a8fb-a77e-4786-9ba2-93805c9cb272\") " pod="openstack/cinder-api-0" Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.493942 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ee0a8fb-a77e-4786-9ba2-93805c9cb272-config-data-custom\") pod \"cinder-api-0\" (UID: \"1ee0a8fb-a77e-4786-9ba2-93805c9cb272\") " pod="openstack/cinder-api-0" Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.526707 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbb8r\" (UniqueName: \"kubernetes.io/projected/1ee0a8fb-a77e-4786-9ba2-93805c9cb272-kube-api-access-vbb8r\") pod \"cinder-api-0\" (UID: \"1ee0a8fb-a77e-4786-9ba2-93805c9cb272\") " pod="openstack/cinder-api-0" Jan 30 17:16:54 crc kubenswrapper[4712]: I0130 17:16:54.601191 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 17:16:54 crc kubenswrapper[4712]: E0130 17:16:54.864216 4712 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3c944942_c975_4bd5_b6e5_8199b95609a7.slice/crio-conmon-6dea66d7adcf9ee5244803ab89b51220fe1c7488a1df8b1f7f00a90fed5ce0cf.scope\": RecentStats: unable to find data in memory cache]" Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.059377 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-vbbkf"
Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.081681 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-56f8b66d48-7wr47" podUID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused"
Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.081750 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-56f8b66d48-7wr47"
Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.082479 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"ca8d05a9668753b2823d10544b8f8bbf3f28554634a29614ced82a2e411f15e2"} pod="openstack/horizon-56f8b66d48-7wr47" containerMessage="Container horizon failed startup probe, will be restarted"
Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.082506 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-56f8b66d48-7wr47" podUID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerName="horizon" containerID="cri-o://ca8d05a9668753b2823d10544b8f8bbf3f28554634a29614ced82a2e411f15e2" gracePeriod=30
Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.148544 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.210093 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c944942-c975-4bd5-b6e5-8199b95609a7-dns-svc\") pod \"3c944942-c975-4bd5-b6e5-8199b95609a7\" (UID: \"3c944942-c975-4bd5-b6e5-8199b95609a7\") "
Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.210180 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c944942-c975-4bd5-b6e5-8199b95609a7-ovsdbserver-nb\") pod \"3c944942-c975-4bd5-b6e5-8199b95609a7\" (UID: \"3c944942-c975-4bd5-b6e5-8199b95609a7\") "
Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.210235 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c944942-c975-4bd5-b6e5-8199b95609a7-ovsdbserver-sb\") pod \"3c944942-c975-4bd5-b6e5-8199b95609a7\" (UID: \"3c944942-c975-4bd5-b6e5-8199b95609a7\") "
Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.210297 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c944942-c975-4bd5-b6e5-8199b95609a7-dns-swift-storage-0\") pod \"3c944942-c975-4bd5-b6e5-8199b95609a7\" (UID: \"3c944942-c975-4bd5-b6e5-8199b95609a7\") "
Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.210358 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtlmn\" (UniqueName: \"kubernetes.io/projected/3c944942-c975-4bd5-b6e5-8199b95609a7-kube-api-access-mtlmn\") pod \"3c944942-c975-4bd5-b6e5-8199b95609a7\" (UID: \"3c944942-c975-4bd5-b6e5-8199b95609a7\") "
Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.210456 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c944942-c975-4bd5-b6e5-8199b95609a7-config\") pod \"3c944942-c975-4bd5-b6e5-8199b95609a7\" (UID: \"3c944942-c975-4bd5-b6e5-8199b95609a7\") "
Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.235106 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c944942-c975-4bd5-b6e5-8199b95609a7-kube-api-access-mtlmn" (OuterVolumeSpecName: "kube-api-access-mtlmn") pod "3c944942-c975-4bd5-b6e5-8199b95609a7" (UID: "3c944942-c975-4bd5-b6e5-8199b95609a7"). InnerVolumeSpecName "kube-api-access-mtlmn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.246154 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-84958ddfbd-52vdv" event={"ID":"777b9322-044d-4461-9d82-9854438205fc","Type":"ContainerStarted","Data":"47f0d7b165e7d5fa2adbd3a983c3ee676834e66b6a106e3a23fc9e740283be8b"}
Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.246834 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-84958ddfbd-52vdv"
Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.247316 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-84958ddfbd-52vdv"
Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.247982 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9fa2300d-0b2c-4e30-afb5-882b5e38841f","Type":"ContainerStarted","Data":"894786e514c28a114f9ca72e9ef45f8779fedbbeb70799681e2bc00affa1ed85"}
Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.260096 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-75b8cdc675-hwkng" event={"ID":"7441ba42-3158-40d9-9a91-467fef6769cd","Type":"ContainerStarted","Data":"ec92c52d3351ef45965fead6671c2e4dc447d5c5a1c36e9d244bf188f0053313"}
Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.296956 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-vbbkf"
Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.297188 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-vbbkf" event={"ID":"3c944942-c975-4bd5-b6e5-8199b95609a7","Type":"ContainerDied","Data":"02977ad129694bfed5aa33e6837ba7723513eac9050c370c6f06220d814395d6"}
Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.297242 4712 scope.go:117] "RemoveContainer" containerID="6dea66d7adcf9ee5244803ab89b51220fe1c7488a1df8b1f7f00a90fed5ce0cf"
Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.314029 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtlmn\" (UniqueName: \"kubernetes.io/projected/3c944942-c975-4bd5-b6e5-8199b95609a7-kube-api-access-mtlmn\") on node \"crc\" DevicePath \"\""
Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.325878 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-84958ddfbd-52vdv" podStartSLOduration=4.325857587 podStartE2EDuration="4.325857587s" podCreationTimestamp="2026-01-30 17:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:16:55.292469694 +0000 UTC m=+1352.199479163" watchObservedRunningTime="2026-01-30 17:16:55.325857587 +0000 UTC m=+1352.232867056"
Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.355392 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-64655dbc44-pvj2c" podUID="6a28b495-ecf0-409e-9558-ee794a46dbd1" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.156:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.156:8443: connect: connection refused"
Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.355476 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-64655dbc44-pvj2c"
Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.357924 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"0637c6cf8b9543ce9d09aa9b237dd18cd14c4de10f84d30d44b4a331a3589fa8"} pod="openstack/horizon-64655dbc44-pvj2c" containerMessage="Container horizon failed startup probe, will be restarted"
Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.357953 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-64655dbc44-pvj2c" podUID="6a28b495-ecf0-409e-9558-ee794a46dbd1" containerName="horizon" containerID="cri-o://0637c6cf8b9543ce9d09aa9b237dd18cd14c4de10f84d30d44b4a331a3589fa8" gracePeriod=30
Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.394107 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-75b8cdc675-hwkng" podStartSLOduration=5.28591281 podStartE2EDuration="11.394080386s" podCreationTimestamp="2026-01-30 17:16:44 +0000 UTC" firstStartedPulling="2026-01-30 17:16:46.006096051 +0000 UTC m=+1342.913105530" lastFinishedPulling="2026-01-30 17:16:52.114263637 +0000 UTC m=+1349.021273106" observedRunningTime="2026-01-30 17:16:55.336322608 +0000 UTC m=+1352.243332077" watchObservedRunningTime="2026-01-30 17:16:55.394080386 +0000 UTC m=+1352.301089855"
Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.433053 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c944942-c975-4bd5-b6e5-8199b95609a7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3c944942-c975-4bd5-b6e5-8199b95609a7" (UID: "3c944942-c975-4bd5-b6e5-8199b95609a7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.437162 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c944942-c975-4bd5-b6e5-8199b95609a7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3c944942-c975-4bd5-b6e5-8199b95609a7" (UID: "3c944942-c975-4bd5-b6e5-8199b95609a7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.461693 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-t4jhw"]
Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.480386 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c944942-c975-4bd5-b6e5-8199b95609a7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "3c944942-c975-4bd5-b6e5-8199b95609a7" (UID: "3c944942-c975-4bd5-b6e5-8199b95609a7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.488719 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c944942-c975-4bd5-b6e5-8199b95609a7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3c944942-c975-4bd5-b6e5-8199b95609a7" (UID: "3c944942-c975-4bd5-b6e5-8199b95609a7"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.495326 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c944942-c975-4bd5-b6e5-8199b95609a7-config" (OuterVolumeSpecName: "config") pod "3c944942-c975-4bd5-b6e5-8199b95609a7" (UID: "3c944942-c975-4bd5-b6e5-8199b95609a7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.524332 4712 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3c944942-c975-4bd5-b6e5-8199b95609a7-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.524609 4712 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3c944942-c975-4bd5-b6e5-8199b95609a7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.524708 4712 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3c944942-c975-4bd5-b6e5-8199b95609a7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.524811 4712 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3c944942-c975-4bd5-b6e5-8199b95609a7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.524892 4712 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/3c944942-c975-4bd5-b6e5-8199b95609a7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.576251 4712 scope.go:117] "RemoveContainer" containerID="abdb1f2c9aae4430ee1f6c60b3b13c8c7a36a4aaf9aafee1a0d9903c23cf8cab" Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.658299 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.687923 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-vbbkf"] Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.719632 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-vbbkf"] Jan 30 17:16:55 crc kubenswrapper[4712]: I0130 17:16:55.867226 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c944942-c975-4bd5-b6e5-8199b95609a7" path="/var/lib/kubelet/pods/3c944942-c975-4bd5-b6e5-8199b95609a7/volumes" Jan 30 17:16:56 crc kubenswrapper[4712]: I0130 17:16:56.343122 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-t4jhw" event={"ID":"128af9ea-eb98-4631-9e61-af1a9d26e246","Type":"ContainerStarted","Data":"26d61a699d389e54320d94d2d64164245740ced48e08223cb1dca68b0ccd55a0"} Jan 30 17:16:56 crc kubenswrapper[4712]: I0130 17:16:56.373058 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1ee0a8fb-a77e-4786-9ba2-93805c9cb272","Type":"ContainerStarted","Data":"23d62501cd188a2342cde77fb5952d029a66d62c2537913e65832cd50e35c010"} Jan 30 17:16:57 crc kubenswrapper[4712]: I0130 17:16:57.391819 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3770729e-1882-447d-bc3f-46413301437f","Type":"ContainerStarted","Data":"41bb890082e2894c9e3d503a74b8fafda69c11b38b44180f090ea29485338140"} Jan 30 17:16:57 crc kubenswrapper[4712]: I0130 17:16:57.392124 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 17:16:57 crc kubenswrapper[4712]: I0130 17:16:57.398133 4712 generic.go:334] "Generic (PLEG): container finished" podID="128af9ea-eb98-4631-9e61-af1a9d26e246" 
containerID="13ae25ae0ef25990774e239cac23a8823334e861a162e3c9700b7555ca6e960c" exitCode=0 Jan 30 17:16:57 crc kubenswrapper[4712]: I0130 17:16:57.399862 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-t4jhw" event={"ID":"128af9ea-eb98-4631-9e61-af1a9d26e246","Type":"ContainerDied","Data":"13ae25ae0ef25990774e239cac23a8823334e861a162e3c9700b7555ca6e960c"} Jan 30 17:16:57 crc kubenswrapper[4712]: I0130 17:16:57.450745 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=4.19230008 podStartE2EDuration="12.450728633s" podCreationTimestamp="2026-01-30 17:16:45 +0000 UTC" firstStartedPulling="2026-01-30 17:16:47.30944097 +0000 UTC m=+1344.216450439" lastFinishedPulling="2026-01-30 17:16:55.567869523 +0000 UTC m=+1352.474878992" observedRunningTime="2026-01-30 17:16:57.439479103 +0000 UTC m=+1354.346488572" watchObservedRunningTime="2026-01-30 17:16:57.450728633 +0000 UTC m=+1354.357738102" Jan 30 17:16:57 crc kubenswrapper[4712]: I0130 17:16:57.825514 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 17:16:58 crc kubenswrapper[4712]: I0130 17:16:58.429851 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9fa2300d-0b2c-4e30-afb5-882b5e38841f","Type":"ContainerStarted","Data":"810467ba8b1220aefae522c943dae8124ff7f6181b7f0bb70f36c8a010508e72"} Jan 30 17:16:58 crc kubenswrapper[4712]: I0130 17:16:58.442585 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-t4jhw" event={"ID":"128af9ea-eb98-4631-9e61-af1a9d26e246","Type":"ContainerStarted","Data":"dece00b57cd9d38e59bc722d45813a4556c4f0da6b0b84120f39417b0893c56c"} Jan 30 17:16:58 crc kubenswrapper[4712]: I0130 17:16:58.444039 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-t4jhw" Jan 30 17:16:58 crc kubenswrapper[4712]: I0130 17:16:58.472101 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-t4jhw" podStartSLOduration=5.472082715 podStartE2EDuration="5.472082715s" podCreationTimestamp="2026-01-30 17:16:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:16:58.471367627 +0000 UTC m=+1355.378377106" watchObservedRunningTime="2026-01-30 17:16:58.472082715 +0000 UTC m=+1355.379092184" Jan 30 17:16:58 crc kubenswrapper[4712]: I0130 17:16:58.477400 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1ee0a8fb-a77e-4786-9ba2-93805c9cb272","Type":"ContainerStarted","Data":"2f3c54c9fb87787b4768830d81e564dabc1e305c8bc10f75eae515275e66a603"} Jan 30 17:16:59 crc kubenswrapper[4712]: I0130 17:16:59.486504 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9fa2300d-0b2c-4e30-afb5-882b5e38841f","Type":"ContainerStarted","Data":"98e43717ee2a14345efea7fcf142c12433087f80ad2c6664ba7ccaea5bd44ff4"} Jan 30 17:16:59 crc kubenswrapper[4712]: I0130 17:16:59.490640 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="1ee0a8fb-a77e-4786-9ba2-93805c9cb272" containerName="cinder-api-log" containerID="cri-o://2f3c54c9fb87787b4768830d81e564dabc1e305c8bc10f75eae515275e66a603" gracePeriod=30 Jan 30 17:16:59 crc kubenswrapper[4712]: I0130 17:16:59.491003 4712 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1ee0a8fb-a77e-4786-9ba2-93805c9cb272","Type":"ContainerStarted","Data":"8cdc8ab31dc840103fc05752760fda44401efb085f84ebbc15d25e880264c843"} Jan 30 17:16:59 crc kubenswrapper[4712]: I0130 17:16:59.491122 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 30 17:16:59 crc kubenswrapper[4712]: I0130 17:16:59.491208 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="1ee0a8fb-a77e-4786-9ba2-93805c9cb272" containerName="cinder-api" containerID="cri-o://8cdc8ab31dc840103fc05752760fda44401efb085f84ebbc15d25e880264c843" gracePeriod=30 Jan 30 17:16:59 crc kubenswrapper[4712]: I0130 17:16:59.525976 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.5288224889999995 podStartE2EDuration="6.525958807s" podCreationTimestamp="2026-01-30 17:16:53 +0000 UTC" firstStartedPulling="2026-01-30 17:16:55.165203145 +0000 UTC m=+1352.072212614" lastFinishedPulling="2026-01-30 17:16:56.162339463 +0000 UTC m=+1353.069348932" observedRunningTime="2026-01-30 17:16:59.523906178 +0000 UTC m=+1356.430915667" watchObservedRunningTime="2026-01-30 17:16:59.525958807 +0000 UTC m=+1356.432968276" Jan 30 17:16:59 crc kubenswrapper[4712]: I0130 17:16:59.580194 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.58017501 podStartE2EDuration="5.58017501s" podCreationTimestamp="2026-01-30 17:16:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:16:59.578138962 +0000 UTC m=+1356.485148431" watchObservedRunningTime="2026-01-30 17:16:59.58017501 +0000 UTC m=+1356.487184479" Jan 30 17:16:59 crc kubenswrapper[4712]: I0130 17:16:59.730544 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5cc65645c4-8p2m2" podUID="9ad2cb18-dfc8-45eb-9d27-22df3af4e84e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.169:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:16:59 crc kubenswrapper[4712]: I0130 17:16:59.731314 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5cc65645c4-8p2m2" podUID="9ad2cb18-dfc8-45eb-9d27-22df3af4e84e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.169:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:17:00 crc kubenswrapper[4712]: I0130 17:17:00.499495 4712 generic.go:334] "Generic (PLEG): container finished" podID="1ee0a8fb-a77e-4786-9ba2-93805c9cb272" containerID="2f3c54c9fb87787b4768830d81e564dabc1e305c8bc10f75eae515275e66a603" exitCode=143 Jan 30 17:17:00 crc kubenswrapper[4712]: I0130 17:17:00.499593 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1ee0a8fb-a77e-4786-9ba2-93805c9cb272","Type":"ContainerDied","Data":"2f3c54c9fb87787b4768830d81e564dabc1e305c8bc10f75eae515275e66a603"} Jan 30 17:17:00 crc kubenswrapper[4712]: I0130 17:17:00.725174 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5cc65645c4-8p2m2" podUID="9ad2cb18-dfc8-45eb-9d27-22df3af4e84e" containerName="barbican-api" probeResult="failure" output="Get 
\"http://10.217.0.169:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:17:00 crc kubenswrapper[4712]: I0130 17:17:00.725713 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5cc65645c4-8p2m2" podUID="9ad2cb18-dfc8-45eb-9d27-22df3af4e84e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.169:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:17:03 crc kubenswrapper[4712]: I0130 17:17:03.490333 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6ddfd55656-dc4w7" Jan 30 17:17:03 crc kubenswrapper[4712]: I0130 17:17:03.509439 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6dd7989794-hdn5g" Jan 30 17:17:03 crc kubenswrapper[4712]: I0130 17:17:03.885659 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 30 17:17:03 crc kubenswrapper[4712]: I0130 17:17:03.887378 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="9fa2300d-0b2c-4e30-afb5-882b5e38841f" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.173:8080/\": dial tcp 10.217.0.173:8080: connect: connection refused" Jan 30 17:17:04 crc kubenswrapper[4712]: I0130 17:17:04.251986 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6dd7989794-hdn5g" Jan 30 17:17:04 crc kubenswrapper[4712]: I0130 17:17:04.410104 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-t4jhw" Jan 30 17:17:04 crc kubenswrapper[4712]: I0130 17:17:04.511389 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-kfjwh"] Jan 30 17:17:04 crc kubenswrapper[4712]: I0130 17:17:04.511775 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-55f844cf75-kfjwh" podUID="c5c55ed2-b2de-42e8-865c-81436c478565" containerName="dnsmasq-dns" containerID="cri-o://b3f40c61d2fcca590f3e4c1abed03bbdd2ff9b45a07dc15ed2dfbe2c214098f0" gracePeriod=10 Jan 30 17:17:04 crc kubenswrapper[4712]: I0130 17:17:04.823934 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5cc65645c4-8p2m2" podUID="9ad2cb18-dfc8-45eb-9d27-22df3af4e84e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.169:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:17:04 crc kubenswrapper[4712]: I0130 17:17:04.823993 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-5cc65645c4-8p2m2" podUID="9ad2cb18-dfc8-45eb-9d27-22df3af4e84e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.169:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:17:05 crc kubenswrapper[4712]: I0130 17:17:05.276053 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-kfjwh" Jan 30 17:17:05 crc kubenswrapper[4712]: I0130 17:17:05.377407 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5c55ed2-b2de-42e8-865c-81436c478565-config\") pod \"c5c55ed2-b2de-42e8-865c-81436c478565\" (UID: \"c5c55ed2-b2de-42e8-865c-81436c478565\") " Jan 30 17:17:05 crc kubenswrapper[4712]: I0130 17:17:05.377927 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27rfg\" (UniqueName: \"kubernetes.io/projected/c5c55ed2-b2de-42e8-865c-81436c478565-kube-api-access-27rfg\") pod \"c5c55ed2-b2de-42e8-865c-81436c478565\" (UID: \"c5c55ed2-b2de-42e8-865c-81436c478565\") " Jan 30 17:17:05 crc kubenswrapper[4712]: I0130 17:17:05.378054 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5c55ed2-b2de-42e8-865c-81436c478565-dns-svc\") pod \"c5c55ed2-b2de-42e8-865c-81436c478565\" (UID: \"c5c55ed2-b2de-42e8-865c-81436c478565\") " Jan 30 17:17:05 crc kubenswrapper[4712]: I0130 17:17:05.378171 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5c55ed2-b2de-42e8-865c-81436c478565-ovsdbserver-sb\") pod \"c5c55ed2-b2de-42e8-865c-81436c478565\" (UID: \"c5c55ed2-b2de-42e8-865c-81436c478565\") " Jan 30 17:17:05 crc kubenswrapper[4712]: I0130 17:17:05.378294 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5c55ed2-b2de-42e8-865c-81436c478565-ovsdbserver-nb\") pod \"c5c55ed2-b2de-42e8-865c-81436c478565\" (UID: \"c5c55ed2-b2de-42e8-865c-81436c478565\") " Jan 30 17:17:05 crc kubenswrapper[4712]: I0130 17:17:05.378445 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5c55ed2-b2de-42e8-865c-81436c478565-dns-swift-storage-0\") pod \"c5c55ed2-b2de-42e8-865c-81436c478565\" (UID: \"c5c55ed2-b2de-42e8-865c-81436c478565\") " Jan 30 17:17:05 crc kubenswrapper[4712]: I0130 17:17:05.391116 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5c55ed2-b2de-42e8-865c-81436c478565-kube-api-access-27rfg" (OuterVolumeSpecName: "kube-api-access-27rfg") pod "c5c55ed2-b2de-42e8-865c-81436c478565" (UID: "c5c55ed2-b2de-42e8-865c-81436c478565"). InnerVolumeSpecName "kube-api-access-27rfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:17:05 crc kubenswrapper[4712]: I0130 17:17:05.480443 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27rfg\" (UniqueName: \"kubernetes.io/projected/c5c55ed2-b2de-42e8-865c-81436c478565-kube-api-access-27rfg\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:05 crc kubenswrapper[4712]: I0130 17:17:05.483756 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5c55ed2-b2de-42e8-865c-81436c478565-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c5c55ed2-b2de-42e8-865c-81436c478565" (UID: "c5c55ed2-b2de-42e8-865c-81436c478565"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:17:05 crc kubenswrapper[4712]: I0130 17:17:05.495065 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5c55ed2-b2de-42e8-865c-81436c478565-config" (OuterVolumeSpecName: "config") pod "c5c55ed2-b2de-42e8-865c-81436c478565" (UID: "c5c55ed2-b2de-42e8-865c-81436c478565"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:17:05 crc kubenswrapper[4712]: I0130 17:17:05.522629 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5c55ed2-b2de-42e8-865c-81436c478565-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c5c55ed2-b2de-42e8-865c-81436c478565" (UID: "c5c55ed2-b2de-42e8-865c-81436c478565"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:17:05 crc kubenswrapper[4712]: I0130 17:17:05.544257 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5c55ed2-b2de-42e8-865c-81436c478565-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c5c55ed2-b2de-42e8-865c-81436c478565" (UID: "c5c55ed2-b2de-42e8-865c-81436c478565"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:17:05 crc kubenswrapper[4712]: I0130 17:17:05.551160 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5c55ed2-b2de-42e8-865c-81436c478565-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c5c55ed2-b2de-42e8-865c-81436c478565" (UID: "c5c55ed2-b2de-42e8-865c-81436c478565"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:17:05 crc kubenswrapper[4712]: I0130 17:17:05.574851 4712 generic.go:334] "Generic (PLEG): container finished" podID="c5c55ed2-b2de-42e8-865c-81436c478565" containerID="b3f40c61d2fcca590f3e4c1abed03bbdd2ff9b45a07dc15ed2dfbe2c214098f0" exitCode=0 Jan 30 17:17:05 crc kubenswrapper[4712]: I0130 17:17:05.574897 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-kfjwh" event={"ID":"c5c55ed2-b2de-42e8-865c-81436c478565","Type":"ContainerDied","Data":"b3f40c61d2fcca590f3e4c1abed03bbdd2ff9b45a07dc15ed2dfbe2c214098f0"} Jan 30 17:17:05 crc kubenswrapper[4712]: I0130 17:17:05.574931 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-kfjwh" event={"ID":"c5c55ed2-b2de-42e8-865c-81436c478565","Type":"ContainerDied","Data":"f67907a3c00340a6eb29a28ca3946c7601c18d536492630e181adf60c842774b"} Jan 30 17:17:05 crc kubenswrapper[4712]: I0130 17:17:05.574949 4712 scope.go:117] "RemoveContainer" containerID="b3f40c61d2fcca590f3e4c1abed03bbdd2ff9b45a07dc15ed2dfbe2c214098f0" Jan 30 17:17:05 crc kubenswrapper[4712]: I0130 17:17:05.575718 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-kfjwh" Jan 30 17:17:05 crc kubenswrapper[4712]: I0130 17:17:05.583295 4712 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c5c55ed2-b2de-42e8-865c-81436c478565-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:05 crc kubenswrapper[4712]: I0130 17:17:05.583331 4712 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5c55ed2-b2de-42e8-865c-81436c478565-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:05 crc kubenswrapper[4712]: I0130 17:17:05.583340 4712 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c5c55ed2-b2de-42e8-865c-81436c478565-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:05 crc kubenswrapper[4712]: I0130 17:17:05.583348 4712 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c5c55ed2-b2de-42e8-865c-81436c478565-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:05 crc kubenswrapper[4712]: I0130 17:17:05.583357 4712 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c5c55ed2-b2de-42e8-865c-81436c478565-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:05 crc kubenswrapper[4712]: I0130 17:17:05.615834 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-kfjwh"] Jan 30 17:17:05 crc kubenswrapper[4712]: I0130 17:17:05.644145 4712 scope.go:117] "RemoveContainer" containerID="88508fc7b1c0195a291d5062ae8970729250329ea9a4ccb4af9a9a0d31cbd216" Jan 30 17:17:05 crc kubenswrapper[4712]: I0130 17:17:05.665830 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-kfjwh"] Jan 30 17:17:05 crc kubenswrapper[4712]: I0130 17:17:05.722546 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-84958ddfbd-52vdv" podUID="777b9322-044d-4461-9d82-9854438205fc" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.172:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 17:17:05 crc kubenswrapper[4712]: I0130 17:17:05.723132 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-84958ddfbd-52vdv" podUID="777b9322-044d-4461-9d82-9854438205fc" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.172:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 17:17:05 crc kubenswrapper[4712]: I0130 17:17:05.751395 4712 scope.go:117] "RemoveContainer" containerID="b3f40c61d2fcca590f3e4c1abed03bbdd2ff9b45a07dc15ed2dfbe2c214098f0" Jan 30 17:17:05 crc kubenswrapper[4712]: E0130 17:17:05.755271 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b3f40c61d2fcca590f3e4c1abed03bbdd2ff9b45a07dc15ed2dfbe2c214098f0\": container with ID starting with b3f40c61d2fcca590f3e4c1abed03bbdd2ff9b45a07dc15ed2dfbe2c214098f0 not found: ID does not exist" containerID="b3f40c61d2fcca590f3e4c1abed03bbdd2ff9b45a07dc15ed2dfbe2c214098f0" Jan 30 17:17:05 crc kubenswrapper[4712]: I0130 17:17:05.755306 4712 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b3f40c61d2fcca590f3e4c1abed03bbdd2ff9b45a07dc15ed2dfbe2c214098f0"} err="failed to get container status \"b3f40c61d2fcca590f3e4c1abed03bbdd2ff9b45a07dc15ed2dfbe2c214098f0\": rpc error: code = NotFound desc = could not find container \"b3f40c61d2fcca590f3e4c1abed03bbdd2ff9b45a07dc15ed2dfbe2c214098f0\": container with ID starting with b3f40c61d2fcca590f3e4c1abed03bbdd2ff9b45a07dc15ed2dfbe2c214098f0 not found: ID does not exist" Jan 30 17:17:05 crc kubenswrapper[4712]: I0130 17:17:05.755327 4712 scope.go:117] "RemoveContainer" containerID="88508fc7b1c0195a291d5062ae8970729250329ea9a4ccb4af9a9a0d31cbd216" Jan 30 17:17:05 crc kubenswrapper[4712]: E0130 17:17:05.755885 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88508fc7b1c0195a291d5062ae8970729250329ea9a4ccb4af9a9a0d31cbd216\": container with ID starting with 88508fc7b1c0195a291d5062ae8970729250329ea9a4ccb4af9a9a0d31cbd216 not found: ID does not exist" containerID="88508fc7b1c0195a291d5062ae8970729250329ea9a4ccb4af9a9a0d31cbd216" Jan 30 17:17:05 crc kubenswrapper[4712]: I0130 17:17:05.755903 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88508fc7b1c0195a291d5062ae8970729250329ea9a4ccb4af9a9a0d31cbd216"} err="failed to get container status \"88508fc7b1c0195a291d5062ae8970729250329ea9a4ccb4af9a9a0d31cbd216\": rpc error: code = NotFound desc = could not find container \"88508fc7b1c0195a291d5062ae8970729250329ea9a4ccb4af9a9a0d31cbd216\": container with ID starting with 88508fc7b1c0195a291d5062ae8970729250329ea9a4ccb4af9a9a0d31cbd216 not found: ID does not exist" Jan 30 17:17:05 crc kubenswrapper[4712]: I0130 17:17:05.809070 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5cc65645c4-8p2m2" podUID="9ad2cb18-dfc8-45eb-9d27-22df3af4e84e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.169:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:17:05 crc kubenswrapper[4712]: I0130 17:17:05.809877 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5cc65645c4-8p2m2" podUID="9ad2cb18-dfc8-45eb-9d27-22df3af4e84e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.169:9311/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:17:05 crc kubenswrapper[4712]: I0130 17:17:05.817757 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5c55ed2-b2de-42e8-865c-81436c478565" path="/var/lib/kubelet/pods/c5c55ed2-b2de-42e8-865c-81436c478565/volumes" Jan 30 17:17:05 crc kubenswrapper[4712]: I0130 17:17:05.832154 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5cc65645c4-8p2m2" Jan 30 17:17:05 crc kubenswrapper[4712]: I0130 17:17:05.881491 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-5cc65645c4-8p2m2" Jan 30 17:17:05 crc kubenswrapper[4712]: I0130 17:17:05.923214 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6ddfd55656-dc4w7" Jan 30 17:17:06 crc kubenswrapper[4712]: I0130 17:17:06.011525 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6dd7989794-hdn5g"] Jan 30 17:17:06 crc kubenswrapper[4712]: I0130 17:17:06.011773 4712 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openstack/placement-6dd7989794-hdn5g" podUID="f107ebd6-3359-4995-9a79-70e9719bbbf2" containerName="placement-log" containerID="cri-o://78212e671186fcb84a4b752b03ff3bc73dcb6fb6824a6286a704de2db7d8aac9" gracePeriod=30
Jan 30 17:17:06 crc kubenswrapper[4712]: I0130 17:17:06.012153 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6dd7989794-hdn5g" podUID="f107ebd6-3359-4995-9a79-70e9719bbbf2" containerName="placement-api" containerID="cri-o://403968f65a0457f51661c07813d09439c7aab407d9380e5c3fbf2d8f624467bf" gracePeriod=30
Jan 30 17:17:06 crc kubenswrapper[4712]: I0130 17:17:06.270943 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 17:17:06 crc kubenswrapper[4712]: I0130 17:17:06.271020 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 17:17:06 crc kubenswrapper[4712]: I0130 17:17:06.583545 4712 generic.go:334] "Generic (PLEG): container finished" podID="f107ebd6-3359-4995-9a79-70e9719bbbf2" containerID="78212e671186fcb84a4b752b03ff3bc73dcb6fb6824a6286a704de2db7d8aac9" exitCode=143
Jan 30 17:17:06 crc kubenswrapper[4712]: I0130 17:17:06.583598 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6dd7989794-hdn5g" event={"ID":"f107ebd6-3359-4995-9a79-70e9719bbbf2","Type":"ContainerDied","Data":"78212e671186fcb84a4b752b03ff3bc73dcb6fb6824a6286a704de2db7d8aac9"}
Jan 30 17:17:06 crc kubenswrapper[4712]: I0130 17:17:06.648314 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-84958ddfbd-52vdv" podUID="777b9322-044d-4461-9d82-9854438205fc" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.172:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 30 17:17:06 crc kubenswrapper[4712]: I0130 17:17:06.648377 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-84958ddfbd-52vdv" podUID="777b9322-044d-4461-9d82-9854438205fc" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.172:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 30 17:17:09 crc kubenswrapper[4712]: I0130 17:17:09.459375 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-84958ddfbd-52vdv"
Jan 30 17:17:09 crc kubenswrapper[4712]: I0130 17:17:09.599860 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Jan 30 17:17:09 crc kubenswrapper[4712]: I0130 17:17:09.650150 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="1ee0a8fb-a77e-4786-9ba2-93805c9cb272" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.175:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 17:17:09 crc kubenswrapper[4712]: I0130 17:17:09.652587 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 17:17:10 crc kubenswrapper[4712]: I0130 17:17:10.614603 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="9fa2300d-0b2c-4e30-afb5-882b5e38841f" containerName="cinder-scheduler" containerID="cri-o://810467ba8b1220aefae522c943dae8124ff7f6181b7f0bb70f36c8a010508e72" gracePeriod=30
Jan 30 17:17:10 crc kubenswrapper[4712]: I0130 17:17:10.614971 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="9fa2300d-0b2c-4e30-afb5-882b5e38841f" containerName="probe" containerID="cri-o://98e43717ee2a14345efea7fcf142c12433087f80ad2c6664ba7ccaea5bd44ff4" gracePeriod=30
Jan 30 17:17:10 crc kubenswrapper[4712]: I0130 17:17:10.806098 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-84958ddfbd-52vdv" podUID="777b9322-044d-4461-9d82-9854438205fc" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.172:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 30 17:17:10 crc kubenswrapper[4712]: I0130 17:17:10.806561 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/barbican-api-84958ddfbd-52vdv" podUID="777b9322-044d-4461-9d82-9854438205fc" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.172:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 30 17:17:10 crc kubenswrapper[4712]: I0130 17:17:10.911655 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-7f4784f4d6-zvlhq"
Jan 30 17:17:11 crc kubenswrapper[4712]: I0130 17:17:11.160033 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6dd7989794-hdn5g" Jan 30 17:17:11 crc kubenswrapper[4712]: I0130 17:17:11.303998 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f107ebd6-3359-4995-9a79-70e9719bbbf2-internal-tls-certs\") pod \"f107ebd6-3359-4995-9a79-70e9719bbbf2\" (UID: \"f107ebd6-3359-4995-9a79-70e9719bbbf2\") " Jan 30 17:17:11 crc kubenswrapper[4712]: I0130 17:17:11.304056 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f107ebd6-3359-4995-9a79-70e9719bbbf2-public-tls-certs\") pod \"f107ebd6-3359-4995-9a79-70e9719bbbf2\" (UID: \"f107ebd6-3359-4995-9a79-70e9719bbbf2\") " Jan 30 17:17:11 crc kubenswrapper[4712]: I0130 17:17:11.304079 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f107ebd6-3359-4995-9a79-70e9719bbbf2-scripts\") pod \"f107ebd6-3359-4995-9a79-70e9719bbbf2\" (UID: \"f107ebd6-3359-4995-9a79-70e9719bbbf2\") " Jan 30 17:17:11 crc kubenswrapper[4712]: I0130 17:17:11.304186 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f107ebd6-3359-4995-9a79-70e9719bbbf2-logs\") pod \"f107ebd6-3359-4995-9a79-70e9719bbbf2\" (UID: \"f107ebd6-3359-4995-9a79-70e9719bbbf2\") " Jan 30 17:17:11 crc kubenswrapper[4712]: I0130 17:17:11.304215 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f107ebd6-3359-4995-9a79-70e9719bbbf2-combined-ca-bundle\") pod \"f107ebd6-3359-4995-9a79-70e9719bbbf2\" (UID: \"f107ebd6-3359-4995-9a79-70e9719bbbf2\") " Jan 30 17:17:11 crc kubenswrapper[4712]: I0130 17:17:11.304322 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5wk2\" (UniqueName: \"kubernetes.io/projected/f107ebd6-3359-4995-9a79-70e9719bbbf2-kube-api-access-h5wk2\") pod \"f107ebd6-3359-4995-9a79-70e9719bbbf2\" (UID: \"f107ebd6-3359-4995-9a79-70e9719bbbf2\") " Jan 30 17:17:11 crc kubenswrapper[4712]: I0130 17:17:11.304463 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f107ebd6-3359-4995-9a79-70e9719bbbf2-config-data\") pod \"f107ebd6-3359-4995-9a79-70e9719bbbf2\" (UID: \"f107ebd6-3359-4995-9a79-70e9719bbbf2\") " Jan 30 17:17:11 crc kubenswrapper[4712]: I0130 17:17:11.312965 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f107ebd6-3359-4995-9a79-70e9719bbbf2-logs" (OuterVolumeSpecName: "logs") pod "f107ebd6-3359-4995-9a79-70e9719bbbf2" (UID: "f107ebd6-3359-4995-9a79-70e9719bbbf2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:17:11 crc kubenswrapper[4712]: I0130 17:17:11.360347 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f107ebd6-3359-4995-9a79-70e9719bbbf2-scripts" (OuterVolumeSpecName: "scripts") pod "f107ebd6-3359-4995-9a79-70e9719bbbf2" (UID: "f107ebd6-3359-4995-9a79-70e9719bbbf2"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:11 crc kubenswrapper[4712]: I0130 17:17:11.360514 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f107ebd6-3359-4995-9a79-70e9719bbbf2-kube-api-access-h5wk2" (OuterVolumeSpecName: "kube-api-access-h5wk2") pod "f107ebd6-3359-4995-9a79-70e9719bbbf2" (UID: "f107ebd6-3359-4995-9a79-70e9719bbbf2"). InnerVolumeSpecName "kube-api-access-h5wk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:17:11 crc kubenswrapper[4712]: I0130 17:17:11.409763 4712 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f107ebd6-3359-4995-9a79-70e9719bbbf2-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:11 crc kubenswrapper[4712]: I0130 17:17:11.409813 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h5wk2\" (UniqueName: \"kubernetes.io/projected/f107ebd6-3359-4995-9a79-70e9719bbbf2-kube-api-access-h5wk2\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:11 crc kubenswrapper[4712]: I0130 17:17:11.409824 4712 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f107ebd6-3359-4995-9a79-70e9719bbbf2-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:11 crc kubenswrapper[4712]: I0130 17:17:11.476074 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f107ebd6-3359-4995-9a79-70e9719bbbf2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f107ebd6-3359-4995-9a79-70e9719bbbf2" (UID: "f107ebd6-3359-4995-9a79-70e9719bbbf2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:11 crc kubenswrapper[4712]: I0130 17:17:11.511186 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f107ebd6-3359-4995-9a79-70e9719bbbf2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:11 crc kubenswrapper[4712]: I0130 17:17:11.527246 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f107ebd6-3359-4995-9a79-70e9719bbbf2-config-data" (OuterVolumeSpecName: "config-data") pod "f107ebd6-3359-4995-9a79-70e9719bbbf2" (UID: "f107ebd6-3359-4995-9a79-70e9719bbbf2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:11 crc kubenswrapper[4712]: I0130 17:17:11.557262 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f107ebd6-3359-4995-9a79-70e9719bbbf2-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f107ebd6-3359-4995-9a79-70e9719bbbf2" (UID: "f107ebd6-3359-4995-9a79-70e9719bbbf2"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:11 crc kubenswrapper[4712]: I0130 17:17:11.588131 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f107ebd6-3359-4995-9a79-70e9719bbbf2-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f107ebd6-3359-4995-9a79-70e9719bbbf2" (UID: "f107ebd6-3359-4995-9a79-70e9719bbbf2"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:11 crc kubenswrapper[4712]: I0130 17:17:11.612702 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f107ebd6-3359-4995-9a79-70e9719bbbf2-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:11 crc kubenswrapper[4712]: I0130 17:17:11.612735 4712 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f107ebd6-3359-4995-9a79-70e9719bbbf2-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:11 crc kubenswrapper[4712]: I0130 17:17:11.612747 4712 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f107ebd6-3359-4995-9a79-70e9719bbbf2-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:11 crc kubenswrapper[4712]: I0130 17:17:11.631888 4712 generic.go:334] "Generic (PLEG): container finished" podID="f107ebd6-3359-4995-9a79-70e9719bbbf2" containerID="403968f65a0457f51661c07813d09439c7aab407d9380e5c3fbf2d8f624467bf" exitCode=0 Jan 30 17:17:11 crc kubenswrapper[4712]: I0130 17:17:11.631945 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6dd7989794-hdn5g" event={"ID":"f107ebd6-3359-4995-9a79-70e9719bbbf2","Type":"ContainerDied","Data":"403968f65a0457f51661c07813d09439c7aab407d9380e5c3fbf2d8f624467bf"} Jan 30 17:17:11 crc kubenswrapper[4712]: I0130 17:17:11.631972 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6dd7989794-hdn5g" event={"ID":"f107ebd6-3359-4995-9a79-70e9719bbbf2","Type":"ContainerDied","Data":"8dc502d4452777f284e6464a482bce3bccccb7a6ccce3c7d75700c4e3c9ca403"} Jan 30 17:17:11 crc kubenswrapper[4712]: I0130 17:17:11.631987 4712 scope.go:117] "RemoveContainer" containerID="403968f65a0457f51661c07813d09439c7aab407d9380e5c3fbf2d8f624467bf" Jan 30 17:17:11 crc kubenswrapper[4712]: I0130 17:17:11.632105 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6dd7989794-hdn5g" Jan 30 17:17:11 crc kubenswrapper[4712]: I0130 17:17:11.695087 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-84958ddfbd-52vdv" podUID="777b9322-044d-4461-9d82-9854438205fc" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.172:9311/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 17:17:11 crc kubenswrapper[4712]: I0130 17:17:11.726719 4712 scope.go:117] "RemoveContainer" containerID="78212e671186fcb84a4b752b03ff3bc73dcb6fb6824a6286a704de2db7d8aac9" Jan 30 17:17:11 crc kubenswrapper[4712]: I0130 17:17:11.733686 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6dd7989794-hdn5g"] Jan 30 17:17:11 crc kubenswrapper[4712]: I0130 17:17:11.745820 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6dd7989794-hdn5g"] Jan 30 17:17:11 crc kubenswrapper[4712]: I0130 17:17:11.758721 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-84958ddfbd-52vdv" Jan 30 17:17:11 crc kubenswrapper[4712]: I0130 17:17:11.794057 4712 scope.go:117] "RemoveContainer" containerID="403968f65a0457f51661c07813d09439c7aab407d9380e5c3fbf2d8f624467bf" Jan 30 17:17:11 crc kubenswrapper[4712]: E0130 17:17:11.794606 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"403968f65a0457f51661c07813d09439c7aab407d9380e5c3fbf2d8f624467bf\": container with ID starting with 403968f65a0457f51661c07813d09439c7aab407d9380e5c3fbf2d8f624467bf not found: ID does not exist" containerID="403968f65a0457f51661c07813d09439c7aab407d9380e5c3fbf2d8f624467bf" Jan 30 17:17:11 crc kubenswrapper[4712]: I0130 17:17:11.794655 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"403968f65a0457f51661c07813d09439c7aab407d9380e5c3fbf2d8f624467bf"} err="failed to get container status \"403968f65a0457f51661c07813d09439c7aab407d9380e5c3fbf2d8f624467bf\": rpc error: code = NotFound desc = could not find container \"403968f65a0457f51661c07813d09439c7aab407d9380e5c3fbf2d8f624467bf\": container with ID starting with 403968f65a0457f51661c07813d09439c7aab407d9380e5c3fbf2d8f624467bf not found: ID does not exist" Jan 30 17:17:11 crc kubenswrapper[4712]: I0130 17:17:11.794681 4712 scope.go:117] "RemoveContainer" containerID="78212e671186fcb84a4b752b03ff3bc73dcb6fb6824a6286a704de2db7d8aac9" Jan 30 17:17:11 crc kubenswrapper[4712]: E0130 17:17:11.797930 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78212e671186fcb84a4b752b03ff3bc73dcb6fb6824a6286a704de2db7d8aac9\": container with ID starting with 78212e671186fcb84a4b752b03ff3bc73dcb6fb6824a6286a704de2db7d8aac9 not found: ID does not exist" containerID="78212e671186fcb84a4b752b03ff3bc73dcb6fb6824a6286a704de2db7d8aac9" Jan 30 17:17:11 crc kubenswrapper[4712]: I0130 17:17:11.797975 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78212e671186fcb84a4b752b03ff3bc73dcb6fb6824a6286a704de2db7d8aac9"} err="failed to get container status \"78212e671186fcb84a4b752b03ff3bc73dcb6fb6824a6286a704de2db7d8aac9\": rpc error: code = NotFound desc = could not find container \"78212e671186fcb84a4b752b03ff3bc73dcb6fb6824a6286a704de2db7d8aac9\": container with ID starting with 
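The RemoveContainer / NotFound sequence above is benign: the container was already gone by the time the kubelet asked the runtime for its status, so the CRI call returns gRPC code NotFound and the kubelet merely logs it. A hedged Go sketch of treating NotFound as "already deleted"; the remove callback is an illustrative stand-in for a CRI client call, while the grpc status/codes usage is real API:

// Treat gRPC NotFound from a delete as success: the object is already gone.
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

func deleteIdempotent(remove func(id string) error, id string) error {
	err := remove(id)
	if status.Code(err) == codes.NotFound {
		// Mirrors the log: "could not find container ... ID does not exist".
		// Deletion is idempotent, so this is not a real failure.
		return nil
	}
	return err
}

func main() {
	gone := func(id string) error {
		return status.Errorf(codes.NotFound, "could not find container %q", id)
	}
	fmt.Println(deleteIdempotent(gone, "403968f65a04")) // prints <nil>
}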
Jan 30 17:17:11 crc kubenswrapper[4712]: I0130 17:17:11.819981 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f107ebd6-3359-4995-9a79-70e9719bbbf2" path="/var/lib/kubelet/pods/f107ebd6-3359-4995-9a79-70e9719bbbf2/volumes"
Jan 30 17:17:11 crc kubenswrapper[4712]: I0130 17:17:11.865045 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5cc65645c4-8p2m2"]
Jan 30 17:17:11 crc kubenswrapper[4712]: I0130 17:17:11.865306 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5cc65645c4-8p2m2" podUID="9ad2cb18-dfc8-45eb-9d27-22df3af4e84e" containerName="barbican-api-log" containerID="cri-o://ee7b3c100b56e6f5861ab1740fcbf2da2866e0589a8a90b38916b8dd8867d9e2" gracePeriod=30
Jan 30 17:17:11 crc kubenswrapper[4712]: I0130 17:17:11.865757 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-5cc65645c4-8p2m2" podUID="9ad2cb18-dfc8-45eb-9d27-22df3af4e84e" containerName="barbican-api" containerID="cri-o://92a634e61c6e87c7ca6cab19cb6cb0f636e8094e309369c1d2e6b244d0b6fd5b" gracePeriod=30
Jan 30 17:17:12 crc kubenswrapper[4712]: I0130 17:17:12.642768 4712 generic.go:334] "Generic (PLEG): container finished" podID="9fa2300d-0b2c-4e30-afb5-882b5e38841f" containerID="98e43717ee2a14345efea7fcf142c12433087f80ad2c6664ba7ccaea5bd44ff4" exitCode=0
Jan 30 17:17:12 crc kubenswrapper[4712]: I0130 17:17:12.642818 4712 generic.go:334] "Generic (PLEG): container finished" podID="9fa2300d-0b2c-4e30-afb5-882b5e38841f" containerID="810467ba8b1220aefae522c943dae8124ff7f6181b7f0bb70f36c8a010508e72" exitCode=0
Jan 30 17:17:12 crc kubenswrapper[4712]: I0130 17:17:12.642858 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9fa2300d-0b2c-4e30-afb5-882b5e38841f","Type":"ContainerDied","Data":"98e43717ee2a14345efea7fcf142c12433087f80ad2c6664ba7ccaea5bd44ff4"}
Jan 30 17:17:12 crc kubenswrapper[4712]: I0130 17:17:12.642905 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9fa2300d-0b2c-4e30-afb5-882b5e38841f","Type":"ContainerDied","Data":"810467ba8b1220aefae522c943dae8124ff7f6181b7f0bb70f36c8a010508e72"}
Jan 30 17:17:12 crc kubenswrapper[4712]: I0130 17:17:12.646463 4712 generic.go:334] "Generic (PLEG): container finished" podID="9ad2cb18-dfc8-45eb-9d27-22df3af4e84e" containerID="ee7b3c100b56e6f5861ab1740fcbf2da2866e0589a8a90b38916b8dd8867d9e2" exitCode=143
Jan 30 17:17:12 crc kubenswrapper[4712]: I0130 17:17:12.646505 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cc65645c4-8p2m2" event={"ID":"9ad2cb18-dfc8-45eb-9d27-22df3af4e84e","Type":"ContainerDied","Data":"ee7b3c100b56e6f5861ab1740fcbf2da2866e0589a8a90b38916b8dd8867d9e2"}
Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.044424 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
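Note the exit codes in the entries above: the cinder containers exit with code 0 (clean shutdown on SIGTERM), while barbican-api-log exits with 143, the conventional 128+signal encoding for SIGTERM (signal 15). A one-line Go check of that decoding, purely illustrative arithmetic:

// Decode the 128+signal convention behind exitCode=143 seen above.
package main

import "fmt"

func main() {
	const exitCode = 143
	if exitCode > 128 {
		fmt.Printf("exit %d = terminated by signal %d (SIGTERM is 15)\n", exitCode, exitCode-128)
	}
}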
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.157343 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4fgx\" (UniqueName: \"kubernetes.io/projected/9fa2300d-0b2c-4e30-afb5-882b5e38841f-kube-api-access-v4fgx\") pod \"9fa2300d-0b2c-4e30-afb5-882b5e38841f\" (UID: \"9fa2300d-0b2c-4e30-afb5-882b5e38841f\") " Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.157418 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9fa2300d-0b2c-4e30-afb5-882b5e38841f-etc-machine-id\") pod \"9fa2300d-0b2c-4e30-afb5-882b5e38841f\" (UID: \"9fa2300d-0b2c-4e30-afb5-882b5e38841f\") " Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.157457 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fa2300d-0b2c-4e30-afb5-882b5e38841f-scripts\") pod \"9fa2300d-0b2c-4e30-afb5-882b5e38841f\" (UID: \"9fa2300d-0b2c-4e30-afb5-882b5e38841f\") " Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.157492 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fa2300d-0b2c-4e30-afb5-882b5e38841f-combined-ca-bundle\") pod \"9fa2300d-0b2c-4e30-afb5-882b5e38841f\" (UID: \"9fa2300d-0b2c-4e30-afb5-882b5e38841f\") " Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.157513 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fa2300d-0b2c-4e30-afb5-882b5e38841f-config-data\") pod \"9fa2300d-0b2c-4e30-afb5-882b5e38841f\" (UID: \"9fa2300d-0b2c-4e30-afb5-882b5e38841f\") " Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.157519 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9fa2300d-0b2c-4e30-afb5-882b5e38841f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9fa2300d-0b2c-4e30-afb5-882b5e38841f" (UID: "9fa2300d-0b2c-4e30-afb5-882b5e38841f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.157678 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9fa2300d-0b2c-4e30-afb5-882b5e38841f-config-data-custom\") pod \"9fa2300d-0b2c-4e30-afb5-882b5e38841f\" (UID: \"9fa2300d-0b2c-4e30-afb5-882b5e38841f\") " Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.158049 4712 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9fa2300d-0b2c-4e30-afb5-882b5e38841f-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.168468 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa2300d-0b2c-4e30-afb5-882b5e38841f-scripts" (OuterVolumeSpecName: "scripts") pod "9fa2300d-0b2c-4e30-afb5-882b5e38841f" (UID: "9fa2300d-0b2c-4e30-afb5-882b5e38841f"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.169028 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa2300d-0b2c-4e30-afb5-882b5e38841f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9fa2300d-0b2c-4e30-afb5-882b5e38841f" (UID: "9fa2300d-0b2c-4e30-afb5-882b5e38841f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.170749 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9fa2300d-0b2c-4e30-afb5-882b5e38841f-kube-api-access-v4fgx" (OuterVolumeSpecName: "kube-api-access-v4fgx") pod "9fa2300d-0b2c-4e30-afb5-882b5e38841f" (UID: "9fa2300d-0b2c-4e30-afb5-882b5e38841f"). InnerVolumeSpecName "kube-api-access-v4fgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.236789 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa2300d-0b2c-4e30-afb5-882b5e38841f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9fa2300d-0b2c-4e30-afb5-882b5e38841f" (UID: "9fa2300d-0b2c-4e30-afb5-882b5e38841f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.260706 4712 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9fa2300d-0b2c-4e30-afb5-882b5e38841f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.260747 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4fgx\" (UniqueName: \"kubernetes.io/projected/9fa2300d-0b2c-4e30-afb5-882b5e38841f-kube-api-access-v4fgx\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.260757 4712 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9fa2300d-0b2c-4e30-afb5-882b5e38841f-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.260765 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fa2300d-0b2c-4e30-afb5-882b5e38841f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.280101 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9fa2300d-0b2c-4e30-afb5-882b5e38841f-config-data" (OuterVolumeSpecName: "config-data") pod "9fa2300d-0b2c-4e30-afb5-882b5e38841f" (UID: "9fa2300d-0b2c-4e30-afb5-882b5e38841f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.363077 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fa2300d-0b2c-4e30-afb5-882b5e38841f-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.657807 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9fa2300d-0b2c-4e30-afb5-882b5e38841f","Type":"ContainerDied","Data":"894786e514c28a114f9ca72e9ef45f8779fedbbeb70799681e2bc00affa1ed85"} Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.657878 4712 scope.go:117] "RemoveContainer" containerID="98e43717ee2a14345efea7fcf142c12433087f80ad2c6664ba7ccaea5bd44ff4" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.657897 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.726333 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.736918 4712 scope.go:117] "RemoveContainer" containerID="810467ba8b1220aefae522c943dae8124ff7f6181b7f0bb70f36c8a010508e72" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.738106 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.767195 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 17:17:13 crc kubenswrapper[4712]: E0130 17:17:13.767674 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c944942-c975-4bd5-b6e5-8199b95609a7" containerName="dnsmasq-dns" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.767697 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c944942-c975-4bd5-b6e5-8199b95609a7" containerName="dnsmasq-dns" Jan 30 17:17:13 crc kubenswrapper[4712]: E0130 17:17:13.767720 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f107ebd6-3359-4995-9a79-70e9719bbbf2" containerName="placement-log" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.767728 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="f107ebd6-3359-4995-9a79-70e9719bbbf2" containerName="placement-log" Jan 30 17:17:13 crc kubenswrapper[4712]: E0130 17:17:13.767781 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fa2300d-0b2c-4e30-afb5-882b5e38841f" containerName="probe" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.767790 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fa2300d-0b2c-4e30-afb5-882b5e38841f" containerName="probe" Jan 30 17:17:13 crc kubenswrapper[4712]: E0130 17:17:13.767824 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9fa2300d-0b2c-4e30-afb5-882b5e38841f" containerName="cinder-scheduler" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.767832 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="9fa2300d-0b2c-4e30-afb5-882b5e38841f" containerName="cinder-scheduler" Jan 30 17:17:13 crc kubenswrapper[4712]: E0130 17:17:13.767857 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5c55ed2-b2de-42e8-865c-81436c478565" containerName="init" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.767865 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5c55ed2-b2de-42e8-865c-81436c478565" containerName="init" Jan 30 
17:17:13 crc kubenswrapper[4712]: E0130 17:17:13.767875 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c944942-c975-4bd5-b6e5-8199b95609a7" containerName="init" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.767883 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c944942-c975-4bd5-b6e5-8199b95609a7" containerName="init" Jan 30 17:17:13 crc kubenswrapper[4712]: E0130 17:17:13.767899 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c5c55ed2-b2de-42e8-865c-81436c478565" containerName="dnsmasq-dns" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.767909 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="c5c55ed2-b2de-42e8-865c-81436c478565" containerName="dnsmasq-dns" Jan 30 17:17:13 crc kubenswrapper[4712]: E0130 17:17:13.767921 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f107ebd6-3359-4995-9a79-70e9719bbbf2" containerName="placement-api" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.767928 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="f107ebd6-3359-4995-9a79-70e9719bbbf2" containerName="placement-api" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.768357 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="c5c55ed2-b2de-42e8-865c-81436c478565" containerName="dnsmasq-dns" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.768382 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="f107ebd6-3359-4995-9a79-70e9719bbbf2" containerName="placement-log" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.768398 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c944942-c975-4bd5-b6e5-8199b95609a7" containerName="dnsmasq-dns" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.768421 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fa2300d-0b2c-4e30-afb5-882b5e38841f" containerName="probe" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.768439 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="f107ebd6-3359-4995-9a79-70e9719bbbf2" containerName="placement-api" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.768448 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="9fa2300d-0b2c-4e30-afb5-882b5e38841f" containerName="cinder-scheduler" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.769660 4712 util.go:30] "No sandbox for pod can be found. 
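The cpu_manager/memory_manager RemoveStaleState bursts above fire when a new pod is admitted: both managers drop per-container resource assignments belonging to pods the kubelet no longer tracks. A minimal Go sketch of that stale-state sweep; the key shape, assignments map, and active-pod set are illustrative assumptions, not the real manager state:

// Sweep per-container assignments whose pod UID is no longer active,
// in the spirit of cpu_manager/memory_manager RemoveStaleState.
package main

import "fmt"

type key struct{ podUID, container string }

func removeStaleState(assignments map[key]string, activePods map[string]bool) {
	for k := range assignments {
		if !activePods[k.podUID] {
			fmt.Printf("RemoveStaleState: removing container podUID=%q containerName=%q\n",
				k.podUID, k.container)
			delete(assignments, k) // mirrors "Deleted CPUSet assignment"
		}
	}
}

func main() {
	assignments := map[key]string{
		{"9fa2300d", "cinder-scheduler"}: "cpuset 0-1", // old, deleted pod
		{"6e0d9187", "cinder-scheduler"}: "cpuset 2-3", // replacement pod
	}
	removeStaleState(assignments, map[string]bool{"6e0d9187": true})
	fmt.Println(len(assignments), "assignment(s) left")
}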
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.774027 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.831126 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fa2300d-0b2c-4e30-afb5-882b5e38841f" path="/var/lib/kubelet/pods/9fa2300d-0b2c-4e30-afb5-882b5e38841f/volumes" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.839552 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.873897 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e0d9187-34f3-4d93-a189-264ff4cc933d-scripts\") pod \"cinder-scheduler-0\" (UID: \"6e0d9187-34f3-4d93-a189-264ff4cc933d\") " pod="openstack/cinder-scheduler-0" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.873978 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spmp8\" (UniqueName: \"kubernetes.io/projected/6e0d9187-34f3-4d93-a189-264ff4cc933d-kube-api-access-spmp8\") pod \"cinder-scheduler-0\" (UID: \"6e0d9187-34f3-4d93-a189-264ff4cc933d\") " pod="openstack/cinder-scheduler-0" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.874028 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e0d9187-34f3-4d93-a189-264ff4cc933d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6e0d9187-34f3-4d93-a189-264ff4cc933d\") " pod="openstack/cinder-scheduler-0" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.874049 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e0d9187-34f3-4d93-a189-264ff4cc933d-config-data\") pod \"cinder-scheduler-0\" (UID: \"6e0d9187-34f3-4d93-a189-264ff4cc933d\") " pod="openstack/cinder-scheduler-0" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.874078 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e0d9187-34f3-4d93-a189-264ff4cc933d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6e0d9187-34f3-4d93-a189-264ff4cc933d\") " pod="openstack/cinder-scheduler-0" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.874123 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e0d9187-34f3-4d93-a189-264ff4cc933d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6e0d9187-34f3-4d93-a189-264ff4cc933d\") " pod="openstack/cinder-scheduler-0" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.975982 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e0d9187-34f3-4d93-a189-264ff4cc933d-scripts\") pod \"cinder-scheduler-0\" (UID: \"6e0d9187-34f3-4d93-a189-264ff4cc933d\") " pod="openstack/cinder-scheduler-0" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.976076 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spmp8\" (UniqueName: 
\"kubernetes.io/projected/6e0d9187-34f3-4d93-a189-264ff4cc933d-kube-api-access-spmp8\") pod \"cinder-scheduler-0\" (UID: \"6e0d9187-34f3-4d93-a189-264ff4cc933d\") " pod="openstack/cinder-scheduler-0" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.976133 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e0d9187-34f3-4d93-a189-264ff4cc933d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6e0d9187-34f3-4d93-a189-264ff4cc933d\") " pod="openstack/cinder-scheduler-0" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.976156 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e0d9187-34f3-4d93-a189-264ff4cc933d-config-data\") pod \"cinder-scheduler-0\" (UID: \"6e0d9187-34f3-4d93-a189-264ff4cc933d\") " pod="openstack/cinder-scheduler-0" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.976191 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e0d9187-34f3-4d93-a189-264ff4cc933d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6e0d9187-34f3-4d93-a189-264ff4cc933d\") " pod="openstack/cinder-scheduler-0" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.976246 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e0d9187-34f3-4d93-a189-264ff4cc933d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6e0d9187-34f3-4d93-a189-264ff4cc933d\") " pod="openstack/cinder-scheduler-0" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.979535 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6e0d9187-34f3-4d93-a189-264ff4cc933d-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"6e0d9187-34f3-4d93-a189-264ff4cc933d\") " pod="openstack/cinder-scheduler-0" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.979629 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/6e0d9187-34f3-4d93-a189-264ff4cc933d-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"6e0d9187-34f3-4d93-a189-264ff4cc933d\") " pod="openstack/cinder-scheduler-0" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.981430 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e0d9187-34f3-4d93-a189-264ff4cc933d-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"6e0d9187-34f3-4d93-a189-264ff4cc933d\") " pod="openstack/cinder-scheduler-0" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.983329 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6e0d9187-34f3-4d93-a189-264ff4cc933d-scripts\") pod \"cinder-scheduler-0\" (UID: \"6e0d9187-34f3-4d93-a189-264ff4cc933d\") " pod="openstack/cinder-scheduler-0" Jan 30 17:17:13 crc kubenswrapper[4712]: I0130 17:17:13.983681 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e0d9187-34f3-4d93-a189-264ff4cc933d-config-data\") pod \"cinder-scheduler-0\" (UID: \"6e0d9187-34f3-4d93-a189-264ff4cc933d\") " pod="openstack/cinder-scheduler-0" Jan 30 17:17:14 crc kubenswrapper[4712]: I0130 17:17:14.025028 4712 
Jan 30 17:17:14 crc kubenswrapper[4712]: I0130 17:17:14.089042 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 30 17:17:14 crc kubenswrapper[4712]: W0130 17:17:14.674443 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e0d9187_34f3_4d93_a189_264ff4cc933d.slice/crio-c8ba49eefef9a2b43c8edfe064af3754353edb15a3ae498a1a398735aca4c811 WatchSource:0}: Error finding container c8ba49eefef9a2b43c8edfe064af3754353edb15a3ae498a1a398735aca4c811: Status 404 returned error can't find the container with id c8ba49eefef9a2b43c8edfe064af3754353edb15a3ae498a1a398735aca4c811
Jan 30 17:17:14 crc kubenswrapper[4712]: I0130 17:17:14.697140 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 30 17:17:14 crc kubenswrapper[4712]: I0130 17:17:14.699989 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="1ee0a8fb-a77e-4786-9ba2-93805c9cb272" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.175:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 17:17:15 crc kubenswrapper[4712]: I0130 17:17:15.182569 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"]
Jan 30 17:17:15 crc kubenswrapper[4712]: I0130 17:17:15.183665 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
Jan 30 17:17:15 crc kubenswrapper[4712]: I0130 17:17:15.192042 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-5m5x9"
Jan 30 17:17:15 crc kubenswrapper[4712]: I0130 17:17:15.192555 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret"
Jan 30 17:17:15 crc kubenswrapper[4712]: I0130 17:17:15.192755 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config"
Jan 30 17:17:15 crc kubenswrapper[4712]: I0130 17:17:15.212213 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"]
Jan 30 17:17:15 crc kubenswrapper[4712]: I0130 17:17:15.313135 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmbvp\" (UniqueName: \"kubernetes.io/projected/ca2a20bb-6a1a-4d8e-8f87-6478ac901d09-kube-api-access-vmbvp\") pod \"openstackclient\" (UID: \"ca2a20bb-6a1a-4d8e-8f87-6478ac901d09\") " pod="openstack/openstackclient"
Jan 30 17:17:15 crc kubenswrapper[4712]: I0130 17:17:15.313214 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca2a20bb-6a1a-4d8e-8f87-6478ac901d09-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ca2a20bb-6a1a-4d8e-8f87-6478ac901d09\") " pod="openstack/openstackclient"
Jan 30 17:17:15 crc kubenswrapper[4712]: I0130 17:17:15.313269 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ca2a20bb-6a1a-4d8e-8f87-6478ac901d09-openstack-config\") pod \"openstackclient\" (UID: \"ca2a20bb-6a1a-4d8e-8f87-6478ac901d09\") " pod="openstack/openstackclient"
Jan 30 17:17:15 crc kubenswrapper[4712]: I0130 17:17:15.313299 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ca2a20bb-6a1a-4d8e-8f87-6478ac901d09-openstack-config-secret\") pod \"openstackclient\" (UID: \"ca2a20bb-6a1a-4d8e-8f87-6478ac901d09\") " pod="openstack/openstackclient"
Jan 30 17:17:15 crc kubenswrapper[4712]: I0130 17:17:15.415377 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ca2a20bb-6a1a-4d8e-8f87-6478ac901d09-openstack-config\") pod \"openstackclient\" (UID: \"ca2a20bb-6a1a-4d8e-8f87-6478ac901d09\") " pod="openstack/openstackclient"
Jan 30 17:17:15 crc kubenswrapper[4712]: I0130 17:17:15.415468 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ca2a20bb-6a1a-4d8e-8f87-6478ac901d09-openstack-config-secret\") pod \"openstackclient\" (UID: \"ca2a20bb-6a1a-4d8e-8f87-6478ac901d09\") " pod="openstack/openstackclient"
Jan 30 17:17:15 crc kubenswrapper[4712]: I0130 17:17:15.415650 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vmbvp\" (UniqueName: \"kubernetes.io/projected/ca2a20bb-6a1a-4d8e-8f87-6478ac901d09-kube-api-access-vmbvp\") pod \"openstackclient\" (UID: \"ca2a20bb-6a1a-4d8e-8f87-6478ac901d09\") " pod="openstack/openstackclient"
Jan 30 17:17:15 crc kubenswrapper[4712]: I0130 17:17:15.415712 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca2a20bb-6a1a-4d8e-8f87-6478ac901d09-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ca2a20bb-6a1a-4d8e-8f87-6478ac901d09\") " pod="openstack/openstackclient"
Jan 30 17:17:15 crc kubenswrapper[4712]: I0130 17:17:15.417766 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/ca2a20bb-6a1a-4d8e-8f87-6478ac901d09-openstack-config\") pod \"openstackclient\" (UID: \"ca2a20bb-6a1a-4d8e-8f87-6478ac901d09\") " pod="openstack/openstackclient"
Jan 30 17:17:15 crc kubenswrapper[4712]: I0130 17:17:15.446643 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ca2a20bb-6a1a-4d8e-8f87-6478ac901d09-combined-ca-bundle\") pod \"openstackclient\" (UID: \"ca2a20bb-6a1a-4d8e-8f87-6478ac901d09\") " pod="openstack/openstackclient"
Jan 30 17:17:15 crc kubenswrapper[4712]: I0130 17:17:15.450473 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/ca2a20bb-6a1a-4d8e-8f87-6478ac901d09-openstack-config-secret\") pod \"openstackclient\" (UID: \"ca2a20bb-6a1a-4d8e-8f87-6478ac901d09\") " pod="openstack/openstackclient"
Jan 30 17:17:15 crc kubenswrapper[4712]: I0130 17:17:15.460412 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vmbvp\" (UniqueName: \"kubernetes.io/projected/ca2a20bb-6a1a-4d8e-8f87-6478ac901d09-kube-api-access-vmbvp\") pod \"openstackclient\" (UID: \"ca2a20bb-6a1a-4d8e-8f87-6478ac901d09\") " pod="openstack/openstackclient"
Jan 30 17:17:15 crc kubenswrapper[4712]: I0130 17:17:15.511682 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient"
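Both probe failure outputs in this section — "Client.Timeout exceeded while awaiting headers" and "context deadline exceeded" — come from the probe's HTTP GET hitting its deadline before the service answers, as distinct from the "connection refused" seen below once a container is actually down. A minimal Go reproduction of a timed-out healthcheck probe; the URL and the 1-second timeout are illustrative, not the kubelet's prober configuration:

// Reproduce the probe failure modes seen in this log: a deadline expiring
// ("Client.Timeout exceeded") vs. a dead endpoint ("connect: connection refused").
package main

import (
	"fmt"
	"net/http"
	"time"
)

func probe(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout} // the probe's deadline
	resp, err := client.Get(url)
	if err != nil {
		return err // a timeout or a refused connection surfaces here
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 400 {
		return fmt.Errorf("unhealthy: HTTP %d", resp.StatusCode)
	}
	return nil
}

func main() {
	// A non-routable address makes the deadline trip, mirroring
	// "net/http: request canceled (Client.Timeout exceeded while awaiting headers)".
	fmt.Println(probe("http://10.255.255.1:9311/healthcheck", 1*time.Second))
}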
Need to start a new one" pod="openstack/openstackclient" Jan 30 17:17:15 crc kubenswrapper[4712]: I0130 17:17:15.644266 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5cc65645c4-8p2m2" podUID="9ad2cb18-dfc8-45eb-9d27-22df3af4e84e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.169:9311/healthcheck\": dial tcp 10.217.0.169:9311: connect: connection refused" Jan 30 17:17:15 crc kubenswrapper[4712]: I0130 17:17:15.644478 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-5cc65645c4-8p2m2" podUID="9ad2cb18-dfc8-45eb-9d27-22df3af4e84e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.169:9311/healthcheck\": dial tcp 10.217.0.169:9311: connect: connection refused" Jan 30 17:17:15 crc kubenswrapper[4712]: I0130 17:17:15.740246 4712 generic.go:334] "Generic (PLEG): container finished" podID="9ad2cb18-dfc8-45eb-9d27-22df3af4e84e" containerID="92a634e61c6e87c7ca6cab19cb6cb0f636e8094e309369c1d2e6b244d0b6fd5b" exitCode=0 Jan 30 17:17:15 crc kubenswrapper[4712]: I0130 17:17:15.740335 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cc65645c4-8p2m2" event={"ID":"9ad2cb18-dfc8-45eb-9d27-22df3af4e84e","Type":"ContainerDied","Data":"92a634e61c6e87c7ca6cab19cb6cb0f636e8094e309369c1d2e6b244d0b6fd5b"} Jan 30 17:17:15 crc kubenswrapper[4712]: I0130 17:17:15.752608 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6e0d9187-34f3-4d93-a189-264ff4cc933d","Type":"ContainerStarted","Data":"48766373bca0fcf88feabac1d8e74a83dc6fa5e41bb6cf3b2dca237131c2c4bb"} Jan 30 17:17:15 crc kubenswrapper[4712]: I0130 17:17:15.752670 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6e0d9187-34f3-4d93-a189-264ff4cc933d","Type":"ContainerStarted","Data":"c8ba49eefef9a2b43c8edfe064af3754353edb15a3ae498a1a398735aca4c811"} Jan 30 17:17:16 crc kubenswrapper[4712]: I0130 17:17:16.311603 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 30 17:17:16 crc kubenswrapper[4712]: I0130 17:17:16.312339 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5cc65645c4-8p2m2" Jan 30 17:17:16 crc kubenswrapper[4712]: W0130 17:17:16.340135 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podca2a20bb_6a1a_4d8e_8f87_6478ac901d09.slice/crio-04cb2aa85408b6ffa176668632432ca81166d5c9895c9d3e7600adc9eb70a448 WatchSource:0}: Error finding container 04cb2aa85408b6ffa176668632432ca81166d5c9895c9d3e7600adc9eb70a448: Status 404 returned error can't find the container with id 04cb2aa85408b6ffa176668632432ca81166d5c9895c9d3e7600adc9eb70a448 Jan 30 17:17:16 crc kubenswrapper[4712]: I0130 17:17:16.441585 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 17:17:16 crc kubenswrapper[4712]: I0130 17:17:16.444907 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z47c\" (UniqueName: \"kubernetes.io/projected/9ad2cb18-dfc8-45eb-9d27-22df3af4e84e-kube-api-access-9z47c\") pod \"9ad2cb18-dfc8-45eb-9d27-22df3af4e84e\" (UID: \"9ad2cb18-dfc8-45eb-9d27-22df3af4e84e\") " Jan 30 17:17:16 crc kubenswrapper[4712]: I0130 17:17:16.445114 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ad2cb18-dfc8-45eb-9d27-22df3af4e84e-config-data\") pod \"9ad2cb18-dfc8-45eb-9d27-22df3af4e84e\" (UID: \"9ad2cb18-dfc8-45eb-9d27-22df3af4e84e\") " Jan 30 17:17:16 crc kubenswrapper[4712]: I0130 17:17:16.445242 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ad2cb18-dfc8-45eb-9d27-22df3af4e84e-logs\") pod \"9ad2cb18-dfc8-45eb-9d27-22df3af4e84e\" (UID: \"9ad2cb18-dfc8-45eb-9d27-22df3af4e84e\") " Jan 30 17:17:16 crc kubenswrapper[4712]: I0130 17:17:16.445336 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ad2cb18-dfc8-45eb-9d27-22df3af4e84e-config-data-custom\") pod \"9ad2cb18-dfc8-45eb-9d27-22df3af4e84e\" (UID: \"9ad2cb18-dfc8-45eb-9d27-22df3af4e84e\") " Jan 30 17:17:16 crc kubenswrapper[4712]: I0130 17:17:16.445464 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ad2cb18-dfc8-45eb-9d27-22df3af4e84e-combined-ca-bundle\") pod \"9ad2cb18-dfc8-45eb-9d27-22df3af4e84e\" (UID: \"9ad2cb18-dfc8-45eb-9d27-22df3af4e84e\") " Jan 30 17:17:16 crc kubenswrapper[4712]: I0130 17:17:16.448221 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ad2cb18-dfc8-45eb-9d27-22df3af4e84e-logs" (OuterVolumeSpecName: "logs") pod "9ad2cb18-dfc8-45eb-9d27-22df3af4e84e" (UID: "9ad2cb18-dfc8-45eb-9d27-22df3af4e84e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:17:16 crc kubenswrapper[4712]: I0130 17:17:16.477225 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ad2cb18-dfc8-45eb-9d27-22df3af4e84e-kube-api-access-9z47c" (OuterVolumeSpecName: "kube-api-access-9z47c") pod "9ad2cb18-dfc8-45eb-9d27-22df3af4e84e" (UID: "9ad2cb18-dfc8-45eb-9d27-22df3af4e84e"). InnerVolumeSpecName "kube-api-access-9z47c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:17:16 crc kubenswrapper[4712]: I0130 17:17:16.478940 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ad2cb18-dfc8-45eb-9d27-22df3af4e84e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9ad2cb18-dfc8-45eb-9d27-22df3af4e84e" (UID: "9ad2cb18-dfc8-45eb-9d27-22df3af4e84e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:16 crc kubenswrapper[4712]: I0130 17:17:16.527458 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ad2cb18-dfc8-45eb-9d27-22df3af4e84e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ad2cb18-dfc8-45eb-9d27-22df3af4e84e" (UID: "9ad2cb18-dfc8-45eb-9d27-22df3af4e84e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:16 crc kubenswrapper[4712]: I0130 17:17:16.557013 4712 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ad2cb18-dfc8-45eb-9d27-22df3af4e84e-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:16 crc kubenswrapper[4712]: I0130 17:17:16.557053 4712 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ad2cb18-dfc8-45eb-9d27-22df3af4e84e-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:16 crc kubenswrapper[4712]: I0130 17:17:16.557064 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ad2cb18-dfc8-45eb-9d27-22df3af4e84e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:16 crc kubenswrapper[4712]: I0130 17:17:16.557072 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9z47c\" (UniqueName: \"kubernetes.io/projected/9ad2cb18-dfc8-45eb-9d27-22df3af4e84e-kube-api-access-9z47c\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:16 crc kubenswrapper[4712]: I0130 17:17:16.606512 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ad2cb18-dfc8-45eb-9d27-22df3af4e84e-config-data" (OuterVolumeSpecName: "config-data") pod "9ad2cb18-dfc8-45eb-9d27-22df3af4e84e" (UID: "9ad2cb18-dfc8-45eb-9d27-22df3af4e84e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:16 crc kubenswrapper[4712]: I0130 17:17:16.660948 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ad2cb18-dfc8-45eb-9d27-22df3af4e84e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:16 crc kubenswrapper[4712]: I0130 17:17:16.661021 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7f6ddf59f7-2n5p6" Jan 30 17:17:16 crc kubenswrapper[4712]: I0130 17:17:16.745327 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5bcf445ccb-bcbn6"] Jan 30 17:17:16 crc kubenswrapper[4712]: I0130 17:17:16.745819 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5bcf445ccb-bcbn6" podUID="c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b" containerName="neutron-api" containerID="cri-o://88aca9e4f92f59995481b2d64a1bb5e8750bec48d34eb3ac7b788b8fd9b8ffa3" gracePeriod=30 Jan 30 17:17:16 crc kubenswrapper[4712]: I0130 17:17:16.745946 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5bcf445ccb-bcbn6" podUID="c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b" containerName="neutron-httpd" containerID="cri-o://5569d4136fe9d7d63fe0aa52a47ba16eaf29d4606753c61d2635db34b801a7e0" gracePeriod=30 Jan 30 17:17:16 crc kubenswrapper[4712]: I0130 17:17:16.788357 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"ca2a20bb-6a1a-4d8e-8f87-6478ac901d09","Type":"ContainerStarted","Data":"04cb2aa85408b6ffa176668632432ca81166d5c9895c9d3e7600adc9eb70a448"} Jan 30 17:17:16 crc kubenswrapper[4712]: I0130 17:17:16.796969 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-5cc65645c4-8p2m2" event={"ID":"9ad2cb18-dfc8-45eb-9d27-22df3af4e84e","Type":"ContainerDied","Data":"0d1f8481245ab0cf13c86726ae9e13ad9bce9e5a12320f893f6c1f35bec39617"} Jan 30 17:17:16 crc kubenswrapper[4712]: I0130 17:17:16.797019 4712 scope.go:117] "RemoveContainer" containerID="92a634e61c6e87c7ca6cab19cb6cb0f636e8094e309369c1d2e6b244d0b6fd5b" Jan 30 17:17:16 crc kubenswrapper[4712]: I0130 17:17:16.797067 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-5cc65645c4-8p2m2" Jan 30 17:17:16 crc kubenswrapper[4712]: I0130 17:17:16.828161 4712 scope.go:117] "RemoveContainer" containerID="ee7b3c100b56e6f5861ab1740fcbf2da2866e0589a8a90b38916b8dd8867d9e2" Jan 30 17:17:16 crc kubenswrapper[4712]: I0130 17:17:16.849894 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-5cc65645c4-8p2m2"] Jan 30 17:17:16 crc kubenswrapper[4712]: I0130 17:17:16.867616 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-5cc65645c4-8p2m2"] Jan 30 17:17:17 crc kubenswrapper[4712]: I0130 17:17:17.819578 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ad2cb18-dfc8-45eb-9d27-22df3af4e84e" path="/var/lib/kubelet/pods/9ad2cb18-dfc8-45eb-9d27-22df3af4e84e/volumes" Jan 30 17:17:17 crc kubenswrapper[4712]: I0130 17:17:17.826946 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6e0d9187-34f3-4d93-a189-264ff4cc933d","Type":"ContainerStarted","Data":"1819b46e0b160976c5478df4b624954c40f59bef760d46c92298232bdcf9d96d"} Jan 30 17:17:17 crc kubenswrapper[4712]: I0130 17:17:17.835809 4712 generic.go:334] "Generic (PLEG): container finished" podID="c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b" containerID="5569d4136fe9d7d63fe0aa52a47ba16eaf29d4606753c61d2635db34b801a7e0" exitCode=0 Jan 30 17:17:17 crc kubenswrapper[4712]: I0130 17:17:17.835976 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bcf445ccb-bcbn6" event={"ID":"c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b","Type":"ContainerDied","Data":"5569d4136fe9d7d63fe0aa52a47ba16eaf29d4606753c61d2635db34b801a7e0"} Jan 30 17:17:18 crc kubenswrapper[4712]: I0130 17:17:18.966401 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 30 17:17:18 crc kubenswrapper[4712]: I0130 17:17:18.990893 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.990871188 podStartE2EDuration="5.990871188s" podCreationTimestamp="2026-01-30 17:17:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:17:17.853347915 +0000 UTC m=+1374.760357384" watchObservedRunningTime="2026-01-30 17:17:18.990871188 +0000 UTC m=+1375.897880647" Jan 30 17:17:19 crc kubenswrapper[4712]: I0130 17:17:19.089964 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 30 17:17:22 crc kubenswrapper[4712]: I0130 17:17:22.820626 4712 util.go:48] "No ready sandbox for pod can be found. 
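The startup-latency entry above reports podStartE2EDuration="5.990871188s", which is simply the watch-observed running timestamp (17:17:18.990871188) minus podCreationTimestamp (17:17:13). A quick Go check of that arithmetic, with both timestamps copied from the entry:

// Verify podStartE2EDuration from the pod_startup_latency_tracker entry:
// watchObservedRunningTime - podCreationTimestamp.
package main

import (
	"fmt"
	"time"
)

func main() {
	layout := "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2026-01-30 17:17:13 +0000 UTC")
	running, _ := time.Parse(layout, "2026-01-30 17:17:18.990871188 +0000 UTC")
	fmt.Println(running.Sub(created)) // 5.990871188s, matching podStartE2EDuration
}

The zero-valued firstStartedPulling/lastFinishedPulling fields ("0001-01-01 00:00:00") just mean no image pull was needed for this pod.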
Need to start a new one" pod="openstack/neutron-5bcf445ccb-bcbn6" Jan 30 17:17:22 crc kubenswrapper[4712]: I0130 17:17:22.908606 4712 generic.go:334] "Generic (PLEG): container finished" podID="c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b" containerID="88aca9e4f92f59995481b2d64a1bb5e8750bec48d34eb3ac7b788b8fd9b8ffa3" exitCode=0 Jan 30 17:17:22 crc kubenswrapper[4712]: I0130 17:17:22.909095 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bcf445ccb-bcbn6" event={"ID":"c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b","Type":"ContainerDied","Data":"88aca9e4f92f59995481b2d64a1bb5e8750bec48d34eb3ac7b788b8fd9b8ffa3"} Jan 30 17:17:22 crc kubenswrapper[4712]: I0130 17:17:22.909252 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5bcf445ccb-bcbn6" event={"ID":"c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b","Type":"ContainerDied","Data":"ecc726bb15d350b09e9766eb9fee3af8422326ba365b84fcf1c99a22b6f1af61"} Jan 30 17:17:22 crc kubenswrapper[4712]: I0130 17:17:22.909348 4712 scope.go:117] "RemoveContainer" containerID="5569d4136fe9d7d63fe0aa52a47ba16eaf29d4606753c61d2635db34b801a7e0" Jan 30 17:17:22 crc kubenswrapper[4712]: I0130 17:17:22.909658 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5bcf445ccb-bcbn6" Jan 30 17:17:22 crc kubenswrapper[4712]: I0130 17:17:22.947747 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b-ovndb-tls-certs\") pod \"c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b\" (UID: \"c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b\") " Jan 30 17:17:22 crc kubenswrapper[4712]: I0130 17:17:22.948079 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rtb8z\" (UniqueName: \"kubernetes.io/projected/c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b-kube-api-access-rtb8z\") pod \"c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b\" (UID: \"c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b\") " Jan 30 17:17:22 crc kubenswrapper[4712]: I0130 17:17:22.948193 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b-config\") pod \"c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b\" (UID: \"c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b\") " Jan 30 17:17:22 crc kubenswrapper[4712]: I0130 17:17:22.948242 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b-combined-ca-bundle\") pod \"c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b\" (UID: \"c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b\") " Jan 30 17:17:22 crc kubenswrapper[4712]: I0130 17:17:22.948296 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b-httpd-config\") pod \"c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b\" (UID: \"c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b\") " Jan 30 17:17:22 crc kubenswrapper[4712]: I0130 17:17:22.977969 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b-kube-api-access-rtb8z" (OuterVolumeSpecName: "kube-api-access-rtb8z") pod "c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b" (UID: "c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b"). InnerVolumeSpecName "kube-api-access-rtb8z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:17:22 crc kubenswrapper[4712]: I0130 17:17:22.981192 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b" (UID: "c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:23 crc kubenswrapper[4712]: I0130 17:17:23.001442 4712 scope.go:117] "RemoveContainer" containerID="88aca9e4f92f59995481b2d64a1bb5e8750bec48d34eb3ac7b788b8fd9b8ffa3" Jan 30 17:17:23 crc kubenswrapper[4712]: I0130 17:17:23.063186 4712 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:23 crc kubenswrapper[4712]: I0130 17:17:23.063561 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rtb8z\" (UniqueName: \"kubernetes.io/projected/c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b-kube-api-access-rtb8z\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:23 crc kubenswrapper[4712]: I0130 17:17:23.074553 4712 scope.go:117] "RemoveContainer" containerID="5569d4136fe9d7d63fe0aa52a47ba16eaf29d4606753c61d2635db34b801a7e0" Jan 30 17:17:23 crc kubenswrapper[4712]: E0130 17:17:23.075907 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5569d4136fe9d7d63fe0aa52a47ba16eaf29d4606753c61d2635db34b801a7e0\": container with ID starting with 5569d4136fe9d7d63fe0aa52a47ba16eaf29d4606753c61d2635db34b801a7e0 not found: ID does not exist" containerID="5569d4136fe9d7d63fe0aa52a47ba16eaf29d4606753c61d2635db34b801a7e0" Jan 30 17:17:23 crc kubenswrapper[4712]: I0130 17:17:23.075942 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5569d4136fe9d7d63fe0aa52a47ba16eaf29d4606753c61d2635db34b801a7e0"} err="failed to get container status \"5569d4136fe9d7d63fe0aa52a47ba16eaf29d4606753c61d2635db34b801a7e0\": rpc error: code = NotFound desc = could not find container \"5569d4136fe9d7d63fe0aa52a47ba16eaf29d4606753c61d2635db34b801a7e0\": container with ID starting with 5569d4136fe9d7d63fe0aa52a47ba16eaf29d4606753c61d2635db34b801a7e0 not found: ID does not exist" Jan 30 17:17:23 crc kubenswrapper[4712]: I0130 17:17:23.075965 4712 scope.go:117] "RemoveContainer" containerID="88aca9e4f92f59995481b2d64a1bb5e8750bec48d34eb3ac7b788b8fd9b8ffa3" Jan 30 17:17:23 crc kubenswrapper[4712]: E0130 17:17:23.076434 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88aca9e4f92f59995481b2d64a1bb5e8750bec48d34eb3ac7b788b8fd9b8ffa3\": container with ID starting with 88aca9e4f92f59995481b2d64a1bb5e8750bec48d34eb3ac7b788b8fd9b8ffa3 not found: ID does not exist" containerID="88aca9e4f92f59995481b2d64a1bb5e8750bec48d34eb3ac7b788b8fd9b8ffa3" Jan 30 17:17:23 crc kubenswrapper[4712]: I0130 17:17:23.076461 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88aca9e4f92f59995481b2d64a1bb5e8750bec48d34eb3ac7b788b8fd9b8ffa3"} err="failed to get container status \"88aca9e4f92f59995481b2d64a1bb5e8750bec48d34eb3ac7b788b8fd9b8ffa3\": rpc error: code = NotFound desc = could not find container \"88aca9e4f92f59995481b2d64a1bb5e8750bec48d34eb3ac7b788b8fd9b8ffa3\": 
container with ID starting with 88aca9e4f92f59995481b2d64a1bb5e8750bec48d34eb3ac7b788b8fd9b8ffa3 not found: ID does not exist" Jan 30 17:17:23 crc kubenswrapper[4712]: I0130 17:17:23.096875 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b" (UID: "c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:23 crc kubenswrapper[4712]: I0130 17:17:23.096967 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b-config" (OuterVolumeSpecName: "config") pod "c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b" (UID: "c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:23 crc kubenswrapper[4712]: I0130 17:17:23.108557 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b" (UID: "c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:23 crc kubenswrapper[4712]: I0130 17:17:23.165300 4712 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:23 crc kubenswrapper[4712]: I0130 17:17:23.165351 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:23 crc kubenswrapper[4712]: I0130 17:17:23.165364 4712 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:23 crc kubenswrapper[4712]: I0130 17:17:23.255118 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5bcf445ccb-bcbn6"] Jan 30 17:17:23 crc kubenswrapper[4712]: I0130 17:17:23.271527 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5bcf445ccb-bcbn6"] Jan 30 17:17:23 crc kubenswrapper[4712]: I0130 17:17:23.823290 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b" path="/var/lib/kubelet/pods/c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b/volumes" Jan 30 17:17:23 crc kubenswrapper[4712]: I0130 17:17:23.951490 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-75595bd865-rqm2l"] Jan 30 17:17:23 crc kubenswrapper[4712]: E0130 17:17:23.951821 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b" containerName="neutron-httpd" Jan 30 17:17:23 crc kubenswrapper[4712]: I0130 17:17:23.951837 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b" containerName="neutron-httpd" Jan 30 17:17:23 crc kubenswrapper[4712]: E0130 17:17:23.951871 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ad2cb18-dfc8-45eb-9d27-22df3af4e84e" containerName="barbican-api" Jan 30 
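
A minimal, self-contained Go sketch of the idempotent-delete pattern the entries at 17:17:23.074-.076 illustrate: the kubelet asks the runtime to remove a container that CRI-O has already pruned, gets "ID does not exist" back, logs the error, and otherwise treats the delete as complete. Every name below is invented for illustration; this is not kubelet source.

package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("ID does not exist")

// removeContainer stands in for a CRI RemoveContainer call: removing an
// ID that is no longer in the store fails with a NotFound-style error.
func removeContainer(store map[string]bool, id string) error {
	if !store[id] {
		return fmt.Errorf("could not find container %q: %w", id, errNotFound)
	}
	delete(store, id)
	return nil
}

func main() {
	store := map[string]bool{"88aca9e4": true}
	for i := 0; i < 2; i++ {
		switch err := removeContainer(store, "88aca9e4"); {
		case errors.Is(err, errNotFound):
			// Mirrors the log: the error is recorded but treated as
			// "already gone", so the second delete is a no-op success.
			fmt.Println("DeleteContainer returned error (ignored):", err)
		case err != nil:
			fmt.Println("real failure:", err)
		default:
			fmt.Println("removed")
		}
	}
}
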
17:17:23 crc kubenswrapper[4712]: I0130 17:17:23.951879 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ad2cb18-dfc8-45eb-9d27-22df3af4e84e" containerName="barbican-api" Jan 30 17:17:23 crc kubenswrapper[4712]: E0130 17:17:23.951891 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b" containerName="neutron-api" Jan 30 17:17:23 crc kubenswrapper[4712]: I0130 17:17:23.951896 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b" containerName="neutron-api" Jan 30 17:17:23 crc kubenswrapper[4712]: E0130 17:17:23.951906 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ad2cb18-dfc8-45eb-9d27-22df3af4e84e" containerName="barbican-api-log" Jan 30 17:17:23 crc kubenswrapper[4712]: I0130 17:17:23.951911 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ad2cb18-dfc8-45eb-9d27-22df3af4e84e" containerName="barbican-api-log" Jan 30 17:17:23 crc kubenswrapper[4712]: I0130 17:17:23.952090 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ad2cb18-dfc8-45eb-9d27-22df3af4e84e" containerName="barbican-api" Jan 30 17:17:23 crc kubenswrapper[4712]: I0130 17:17:23.952116 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b" containerName="neutron-api" Jan 30 17:17:23 crc kubenswrapper[4712]: I0130 17:17:23.952125 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ad2cb18-dfc8-45eb-9d27-22df3af4e84e" containerName="barbican-api-log" Jan 30 17:17:23 crc kubenswrapper[4712]: I0130 17:17:23.952135 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0c2de05-6f17-4e4d-8d9f-98fe68a96f3b" containerName="neutron-httpd" Jan 30 17:17:23 crc kubenswrapper[4712]: I0130 17:17:23.952667 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-75595bd865-rqm2l" Jan 30 17:17:23 crc kubenswrapper[4712]: I0130 17:17:23.956105 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 30 17:17:23 crc kubenswrapper[4712]: I0130 17:17:23.958234 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Jan 30 17:17:23 crc kubenswrapper[4712]: I0130 17:17:23.962178 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-stb44" Jan 30 17:17:23 crc kubenswrapper[4712]: I0130 17:17:23.970530 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-75595bd865-rqm2l"] Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.088821 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjzvd\" (UniqueName: \"kubernetes.io/projected/e4ca8cf3-8ef1-4170-815e-15c4ce5826f9-kube-api-access-cjzvd\") pod \"heat-engine-75595bd865-rqm2l\" (UID: \"e4ca8cf3-8ef1-4170-815e-15c4ce5826f9\") " pod="openstack/heat-engine-75595bd865-rqm2l" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.088970 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4ca8cf3-8ef1-4170-815e-15c4ce5826f9-config-data\") pod \"heat-engine-75595bd865-rqm2l\" (UID: \"e4ca8cf3-8ef1-4170-815e-15c4ce5826f9\") " pod="openstack/heat-engine-75595bd865-rqm2l" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.089020 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ca8cf3-8ef1-4170-815e-15c4ce5826f9-combined-ca-bundle\") pod \"heat-engine-75595bd865-rqm2l\" (UID: \"e4ca8cf3-8ef1-4170-815e-15c4ce5826f9\") " pod="openstack/heat-engine-75595bd865-rqm2l" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.089082 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e4ca8cf3-8ef1-4170-815e-15c4ce5826f9-config-data-custom\") pod \"heat-engine-75595bd865-rqm2l\" (UID: \"e4ca8cf3-8ef1-4170-815e-15c4ce5826f9\") " pod="openstack/heat-engine-75595bd865-rqm2l" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.181479 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-2fsl2"] Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.184731 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-2fsl2" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.192752 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ca8cf3-8ef1-4170-815e-15c4ce5826f9-combined-ca-bundle\") pod \"heat-engine-75595bd865-rqm2l\" (UID: \"e4ca8cf3-8ef1-4170-815e-15c4ce5826f9\") " pod="openstack/heat-engine-75595bd865-rqm2l" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.192849 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e4ca8cf3-8ef1-4170-815e-15c4ce5826f9-config-data-custom\") pod \"heat-engine-75595bd865-rqm2l\" (UID: \"e4ca8cf3-8ef1-4170-815e-15c4ce5826f9\") " pod="openstack/heat-engine-75595bd865-rqm2l" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.192917 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjzvd\" (UniqueName: \"kubernetes.io/projected/e4ca8cf3-8ef1-4170-815e-15c4ce5826f9-kube-api-access-cjzvd\") pod \"heat-engine-75595bd865-rqm2l\" (UID: \"e4ca8cf3-8ef1-4170-815e-15c4ce5826f9\") " pod="openstack/heat-engine-75595bd865-rqm2l" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.192978 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4ca8cf3-8ef1-4170-815e-15c4ce5826f9-config-data\") pod \"heat-engine-75595bd865-rqm2l\" (UID: \"e4ca8cf3-8ef1-4170-815e-15c4ce5826f9\") " pod="openstack/heat-engine-75595bd865-rqm2l" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.200580 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ca8cf3-8ef1-4170-815e-15c4ce5826f9-combined-ca-bundle\") pod \"heat-engine-75595bd865-rqm2l\" (UID: \"e4ca8cf3-8ef1-4170-815e-15c4ce5826f9\") " pod="openstack/heat-engine-75595bd865-rqm2l" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.203232 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4ca8cf3-8ef1-4170-815e-15c4ce5826f9-config-data\") pod \"heat-engine-75595bd865-rqm2l\" (UID: \"e4ca8cf3-8ef1-4170-815e-15c4ce5826f9\") " pod="openstack/heat-engine-75595bd865-rqm2l" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.212954 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e4ca8cf3-8ef1-4170-815e-15c4ce5826f9-config-data-custom\") pod \"heat-engine-75595bd865-rqm2l\" (UID: \"e4ca8cf3-8ef1-4170-815e-15c4ce5826f9\") " pod="openstack/heat-engine-75595bd865-rqm2l" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.225548 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-2fsl2"] Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.235146 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjzvd\" (UniqueName: \"kubernetes.io/projected/e4ca8cf3-8ef1-4170-815e-15c4ce5826f9-kube-api-access-cjzvd\") pod \"heat-engine-75595bd865-rqm2l\" (UID: \"e4ca8cf3-8ef1-4170-815e-15c4ce5826f9\") " pod="openstack/heat-engine-75595bd865-rqm2l" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.274308 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-75595bd865-rqm2l" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.295846 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e92eef8-fc7a-4b92-8a68-95d37b647aa4-config\") pod \"dnsmasq-dns-7756b9d78c-2fsl2\" (UID: \"7e92eef8-fc7a-4b92-8a68-95d37b647aa4\") " pod="openstack/dnsmasq-dns-7756b9d78c-2fsl2" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.296003 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e92eef8-fc7a-4b92-8a68-95d37b647aa4-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-2fsl2\" (UID: \"7e92eef8-fc7a-4b92-8a68-95d37b647aa4\") " pod="openstack/dnsmasq-dns-7756b9d78c-2fsl2" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.296063 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqsqd\" (UniqueName: \"kubernetes.io/projected/7e92eef8-fc7a-4b92-8a68-95d37b647aa4-kube-api-access-pqsqd\") pod \"dnsmasq-dns-7756b9d78c-2fsl2\" (UID: \"7e92eef8-fc7a-4b92-8a68-95d37b647aa4\") " pod="openstack/dnsmasq-dns-7756b9d78c-2fsl2" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.296171 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e92eef8-fc7a-4b92-8a68-95d37b647aa4-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-2fsl2\" (UID: \"7e92eef8-fc7a-4b92-8a68-95d37b647aa4\") " pod="openstack/dnsmasq-dns-7756b9d78c-2fsl2" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.296213 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e92eef8-fc7a-4b92-8a68-95d37b647aa4-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-2fsl2\" (UID: \"7e92eef8-fc7a-4b92-8a68-95d37b647aa4\") " pod="openstack/dnsmasq-dns-7756b9d78c-2fsl2" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.296243 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7e92eef8-fc7a-4b92-8a68-95d37b647aa4-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-2fsl2\" (UID: \"7e92eef8-fc7a-4b92-8a68-95d37b647aa4\") " pod="openstack/dnsmasq-dns-7756b9d78c-2fsl2" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.305070 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-5cfd5b7746-whcck"] Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.306378 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5cfd5b7746-whcck" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.312146 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.367605 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-7ff85c4bb5-kfdkk"] Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.368728 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-7ff85c4bb5-kfdkk" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.375839 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.403378 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t9j6f\" (UniqueName: \"kubernetes.io/projected/0f8a0938-d2f2-47bc-b923-fdcba236851f-kube-api-access-t9j6f\") pod \"heat-cfnapi-5cfd5b7746-whcck\" (UID: \"0f8a0938-d2f2-47bc-b923-fdcba236851f\") " pod="openstack/heat-cfnapi-5cfd5b7746-whcck" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.403438 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqsqd\" (UniqueName: \"kubernetes.io/projected/7e92eef8-fc7a-4b92-8a68-95d37b647aa4-kube-api-access-pqsqd\") pod \"dnsmasq-dns-7756b9d78c-2fsl2\" (UID: \"7e92eef8-fc7a-4b92-8a68-95d37b647aa4\") " pod="openstack/dnsmasq-dns-7756b9d78c-2fsl2" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.403476 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e92eef8-fc7a-4b92-8a68-95d37b647aa4-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-2fsl2\" (UID: \"7e92eef8-fc7a-4b92-8a68-95d37b647aa4\") " pod="openstack/dnsmasq-dns-7756b9d78c-2fsl2" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.403516 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e92eef8-fc7a-4b92-8a68-95d37b647aa4-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-2fsl2\" (UID: \"7e92eef8-fc7a-4b92-8a68-95d37b647aa4\") " pod="openstack/dnsmasq-dns-7756b9d78c-2fsl2" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.403544 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7e92eef8-fc7a-4b92-8a68-95d37b647aa4-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-2fsl2\" (UID: \"7e92eef8-fc7a-4b92-8a68-95d37b647aa4\") " pod="openstack/dnsmasq-dns-7756b9d78c-2fsl2" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.403564 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f8a0938-d2f2-47bc-b923-fdcba236851f-config-data-custom\") pod \"heat-cfnapi-5cfd5b7746-whcck\" (UID: \"0f8a0938-d2f2-47bc-b923-fdcba236851f\") " pod="openstack/heat-cfnapi-5cfd5b7746-whcck" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.403583 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f8a0938-d2f2-47bc-b923-fdcba236851f-combined-ca-bundle\") pod \"heat-cfnapi-5cfd5b7746-whcck\" (UID: \"0f8a0938-d2f2-47bc-b923-fdcba236851f\") " pod="openstack/heat-cfnapi-5cfd5b7746-whcck" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.403601 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e92eef8-fc7a-4b92-8a68-95d37b647aa4-config\") pod \"dnsmasq-dns-7756b9d78c-2fsl2\" (UID: \"7e92eef8-fc7a-4b92-8a68-95d37b647aa4\") " pod="openstack/dnsmasq-dns-7756b9d78c-2fsl2" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.403640 4712 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f8a0938-d2f2-47bc-b923-fdcba236851f-config-data\") pod \"heat-cfnapi-5cfd5b7746-whcck\" (UID: \"0f8a0938-d2f2-47bc-b923-fdcba236851f\") " pod="openstack/heat-cfnapi-5cfd5b7746-whcck" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.403686 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e92eef8-fc7a-4b92-8a68-95d37b647aa4-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-2fsl2\" (UID: \"7e92eef8-fc7a-4b92-8a68-95d37b647aa4\") " pod="openstack/dnsmasq-dns-7756b9d78c-2fsl2" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.404455 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e92eef8-fc7a-4b92-8a68-95d37b647aa4-dns-svc\") pod \"dnsmasq-dns-7756b9d78c-2fsl2\" (UID: \"7e92eef8-fc7a-4b92-8a68-95d37b647aa4\") " pod="openstack/dnsmasq-dns-7756b9d78c-2fsl2" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.406401 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e92eef8-fc7a-4b92-8a68-95d37b647aa4-ovsdbserver-nb\") pod \"dnsmasq-dns-7756b9d78c-2fsl2\" (UID: \"7e92eef8-fc7a-4b92-8a68-95d37b647aa4\") " pod="openstack/dnsmasq-dns-7756b9d78c-2fsl2" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.406433 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e92eef8-fc7a-4b92-8a68-95d37b647aa4-ovsdbserver-sb\") pod \"dnsmasq-dns-7756b9d78c-2fsl2\" (UID: \"7e92eef8-fc7a-4b92-8a68-95d37b647aa4\") " pod="openstack/dnsmasq-dns-7756b9d78c-2fsl2" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.407446 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7e92eef8-fc7a-4b92-8a68-95d37b647aa4-dns-swift-storage-0\") pod \"dnsmasq-dns-7756b9d78c-2fsl2\" (UID: \"7e92eef8-fc7a-4b92-8a68-95d37b647aa4\") " pod="openstack/dnsmasq-dns-7756b9d78c-2fsl2" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.415572 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e92eef8-fc7a-4b92-8a68-95d37b647aa4-config\") pod \"dnsmasq-dns-7756b9d78c-2fsl2\" (UID: \"7e92eef8-fc7a-4b92-8a68-95d37b647aa4\") " pod="openstack/dnsmasq-dns-7756b9d78c-2fsl2" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.422789 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-7ff85c4bb5-kfdkk"] Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.457235 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqsqd\" (UniqueName: \"kubernetes.io/projected/7e92eef8-fc7a-4b92-8a68-95d37b647aa4-kube-api-access-pqsqd\") pod \"dnsmasq-dns-7756b9d78c-2fsl2\" (UID: \"7e92eef8-fc7a-4b92-8a68-95d37b647aa4\") " pod="openstack/dnsmasq-dns-7756b9d78c-2fsl2" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.496342 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-5cfd5b7746-whcck"] Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.508127 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/0f8a0938-d2f2-47bc-b923-fdcba236851f-config-data-custom\") pod \"heat-cfnapi-5cfd5b7746-whcck\" (UID: \"0f8a0938-d2f2-47bc-b923-fdcba236851f\") " pod="openstack/heat-cfnapi-5cfd5b7746-whcck" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.508182 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f8a0938-d2f2-47bc-b923-fdcba236851f-combined-ca-bundle\") pod \"heat-cfnapi-5cfd5b7746-whcck\" (UID: \"0f8a0938-d2f2-47bc-b923-fdcba236851f\") " pod="openstack/heat-cfnapi-5cfd5b7746-whcck" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.508205 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3199e2b6-4450-48fb-9809-3467dce0d5bd-combined-ca-bundle\") pod \"heat-api-7ff85c4bb5-kfdkk\" (UID: \"3199e2b6-4450-48fb-9809-3467dce0d5bd\") " pod="openstack/heat-api-7ff85c4bb5-kfdkk" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.508244 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f8a0938-d2f2-47bc-b923-fdcba236851f-config-data\") pod \"heat-cfnapi-5cfd5b7746-whcck\" (UID: \"0f8a0938-d2f2-47bc-b923-fdcba236851f\") " pod="openstack/heat-cfnapi-5cfd5b7746-whcck" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.508294 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2r6k\" (UniqueName: \"kubernetes.io/projected/3199e2b6-4450-48fb-9809-3467dce0d5bd-kube-api-access-s2r6k\") pod \"heat-api-7ff85c4bb5-kfdkk\" (UID: \"3199e2b6-4450-48fb-9809-3467dce0d5bd\") " pod="openstack/heat-api-7ff85c4bb5-kfdkk" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.508331 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t9j6f\" (UniqueName: \"kubernetes.io/projected/0f8a0938-d2f2-47bc-b923-fdcba236851f-kube-api-access-t9j6f\") pod \"heat-cfnapi-5cfd5b7746-whcck\" (UID: \"0f8a0938-d2f2-47bc-b923-fdcba236851f\") " pod="openstack/heat-cfnapi-5cfd5b7746-whcck" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.508369 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3199e2b6-4450-48fb-9809-3467dce0d5bd-config-data-custom\") pod \"heat-api-7ff85c4bb5-kfdkk\" (UID: \"3199e2b6-4450-48fb-9809-3467dce0d5bd\") " pod="openstack/heat-api-7ff85c4bb5-kfdkk" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.508430 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3199e2b6-4450-48fb-9809-3467dce0d5bd-config-data\") pod \"heat-api-7ff85c4bb5-kfdkk\" (UID: \"3199e2b6-4450-48fb-9809-3467dce0d5bd\") " pod="openstack/heat-api-7ff85c4bb5-kfdkk" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.522173 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f8a0938-d2f2-47bc-b923-fdcba236851f-config-data-custom\") pod \"heat-cfnapi-5cfd5b7746-whcck\" (UID: \"0f8a0938-d2f2-47bc-b923-fdcba236851f\") " pod="openstack/heat-cfnapi-5cfd5b7746-whcck" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.524038 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f8a0938-d2f2-47bc-b923-fdcba236851f-combined-ca-bundle\") pod \"heat-cfnapi-5cfd5b7746-whcck\" (UID: \"0f8a0938-d2f2-47bc-b923-fdcba236851f\") " pod="openstack/heat-cfnapi-5cfd5b7746-whcck" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.548514 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t9j6f\" (UniqueName: \"kubernetes.io/projected/0f8a0938-d2f2-47bc-b923-fdcba236851f-kube-api-access-t9j6f\") pod \"heat-cfnapi-5cfd5b7746-whcck\" (UID: \"0f8a0938-d2f2-47bc-b923-fdcba236851f\") " pod="openstack/heat-cfnapi-5cfd5b7746-whcck" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.548976 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f8a0938-d2f2-47bc-b923-fdcba236851f-config-data\") pod \"heat-cfnapi-5cfd5b7746-whcck\" (UID: \"0f8a0938-d2f2-47bc-b923-fdcba236851f\") " pod="openstack/heat-cfnapi-5cfd5b7746-whcck" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.612465 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3199e2b6-4450-48fb-9809-3467dce0d5bd-config-data-custom\") pod \"heat-api-7ff85c4bb5-kfdkk\" (UID: \"3199e2b6-4450-48fb-9809-3467dce0d5bd\") " pod="openstack/heat-api-7ff85c4bb5-kfdkk" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.612554 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3199e2b6-4450-48fb-9809-3467dce0d5bd-config-data\") pod \"heat-api-7ff85c4bb5-kfdkk\" (UID: \"3199e2b6-4450-48fb-9809-3467dce0d5bd\") " pod="openstack/heat-api-7ff85c4bb5-kfdkk" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.612604 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3199e2b6-4450-48fb-9809-3467dce0d5bd-combined-ca-bundle\") pod \"heat-api-7ff85c4bb5-kfdkk\" (UID: \"3199e2b6-4450-48fb-9809-3467dce0d5bd\") " pod="openstack/heat-api-7ff85c4bb5-kfdkk" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.612673 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2r6k\" (UniqueName: \"kubernetes.io/projected/3199e2b6-4450-48fb-9809-3467dce0d5bd-kube-api-access-s2r6k\") pod \"heat-api-7ff85c4bb5-kfdkk\" (UID: \"3199e2b6-4450-48fb-9809-3467dce0d5bd\") " pod="openstack/heat-api-7ff85c4bb5-kfdkk" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.616962 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3199e2b6-4450-48fb-9809-3467dce0d5bd-config-data-custom\") pod \"heat-api-7ff85c4bb5-kfdkk\" (UID: \"3199e2b6-4450-48fb-9809-3467dce0d5bd\") " pod="openstack/heat-api-7ff85c4bb5-kfdkk" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.622030 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3199e2b6-4450-48fb-9809-3467dce0d5bd-combined-ca-bundle\") pod \"heat-api-7ff85c4bb5-kfdkk\" (UID: \"3199e2b6-4450-48fb-9809-3467dce0d5bd\") " pod="openstack/heat-api-7ff85c4bb5-kfdkk" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.629806 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2r6k\" (UniqueName: 
\"kubernetes.io/projected/3199e2b6-4450-48fb-9809-3467dce0d5bd-kube-api-access-s2r6k\") pod \"heat-api-7ff85c4bb5-kfdkk\" (UID: \"3199e2b6-4450-48fb-9809-3467dce0d5bd\") " pod="openstack/heat-api-7ff85c4bb5-kfdkk" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.630702 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3199e2b6-4450-48fb-9809-3467dce0d5bd-config-data\") pod \"heat-api-7ff85c4bb5-kfdkk\" (UID: \"3199e2b6-4450-48fb-9809-3467dce0d5bd\") " pod="openstack/heat-api-7ff85c4bb5-kfdkk" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.635952 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-2fsl2" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.667595 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5cfd5b7746-whcck" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.715335 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-7ff85c4bb5-kfdkk" Jan 30 17:17:24 crc kubenswrapper[4712]: I0130 17:17:24.847336 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 30 17:17:25 crc kubenswrapper[4712]: I0130 17:17:25.970418 4712 generic.go:334] "Generic (PLEG): container finished" podID="6a28b495-ecf0-409e-9558-ee794a46dbd1" containerID="0637c6cf8b9543ce9d09aa9b237dd18cd14c4de10f84d30d44b4a331a3589fa8" exitCode=137 Jan 30 17:17:25 crc kubenswrapper[4712]: I0130 17:17:25.970497 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64655dbc44-pvj2c" event={"ID":"6a28b495-ecf0-409e-9558-ee794a46dbd1","Type":"ContainerDied","Data":"0637c6cf8b9543ce9d09aa9b237dd18cd14c4de10f84d30d44b4a331a3589fa8"} Jan 30 17:17:25 crc kubenswrapper[4712]: I0130 17:17:25.980483 4712 generic.go:334] "Generic (PLEG): container finished" podID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerID="ca8d05a9668753b2823d10544b8f8bbf3f28554634a29614ced82a2e411f15e2" exitCode=137 Jan 30 17:17:25 crc kubenswrapper[4712]: I0130 17:17:25.980548 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56f8b66d48-7wr47" event={"ID":"70154dd8-9d42-4a12-af9b-1be723ef892e","Type":"ContainerDied","Data":"ca8d05a9668753b2823d10544b8f8bbf3f28554634a29614ced82a2e411f15e2"} Jan 30 17:17:26 crc kubenswrapper[4712]: I0130 17:17:26.492083 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 17:17:26 crc kubenswrapper[4712]: I0130 17:17:26.494093 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3770729e-1882-447d-bc3f-46413301437f" containerName="ceilometer-central-agent" containerID="cri-o://2abff2a39f69c92d6b6f1a7bd3de162fe1a94708d72b57a74c331880b4618230" gracePeriod=30 Jan 30 17:17:26 crc kubenswrapper[4712]: I0130 17:17:26.494453 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3770729e-1882-447d-bc3f-46413301437f" containerName="proxy-httpd" containerID="cri-o://41bb890082e2894c9e3d503a74b8fafda69c11b38b44180f090ea29485338140" gracePeriod=30 Jan 30 17:17:26 crc kubenswrapper[4712]: I0130 17:17:26.494508 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3770729e-1882-447d-bc3f-46413301437f" containerName="sg-core" 
containerID="cri-o://7747f5be190ec75eb1e9bd4b2e5287e50b0b7f3283a8928f3616bcdef7e41c73" gracePeriod=30 Jan 30 17:17:26 crc kubenswrapper[4712]: I0130 17:17:26.494539 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3770729e-1882-447d-bc3f-46413301437f" containerName="ceilometer-notification-agent" containerID="cri-o://4e80187a3b6c9283da731ffe5a293d4662eca7d098dad2dcd88a859869314be1" gracePeriod=30 Jan 30 17:17:26 crc kubenswrapper[4712]: I0130 17:17:26.995913 4712 generic.go:334] "Generic (PLEG): container finished" podID="3770729e-1882-447d-bc3f-46413301437f" containerID="41bb890082e2894c9e3d503a74b8fafda69c11b38b44180f090ea29485338140" exitCode=0 Jan 30 17:17:26 crc kubenswrapper[4712]: I0130 17:17:26.995945 4712 generic.go:334] "Generic (PLEG): container finished" podID="3770729e-1882-447d-bc3f-46413301437f" containerID="7747f5be190ec75eb1e9bd4b2e5287e50b0b7f3283a8928f3616bcdef7e41c73" exitCode=2 Jan 30 17:17:26 crc kubenswrapper[4712]: I0130 17:17:26.995964 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3770729e-1882-447d-bc3f-46413301437f","Type":"ContainerDied","Data":"41bb890082e2894c9e3d503a74b8fafda69c11b38b44180f090ea29485338140"} Jan 30 17:17:26 crc kubenswrapper[4712]: I0130 17:17:26.995990 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3770729e-1882-447d-bc3f-46413301437f","Type":"ContainerDied","Data":"7747f5be190ec75eb1e9bd4b2e5287e50b0b7f3283a8928f3616bcdef7e41c73"} Jan 30 17:17:27 crc kubenswrapper[4712]: I0130 17:17:27.710581 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-7f9b7fd987-g2xkh"] Jan 30 17:17:27 crc kubenswrapper[4712]: I0130 17:17:27.713216 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-7f9b7fd987-g2xkh" Jan 30 17:17:27 crc kubenswrapper[4712]: I0130 17:17:27.721137 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 30 17:17:27 crc kubenswrapper[4712]: I0130 17:17:27.721383 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 30 17:17:27 crc kubenswrapper[4712]: I0130 17:17:27.736383 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7f9b7fd987-g2xkh"] Jan 30 17:17:27 crc kubenswrapper[4712]: I0130 17:17:27.741996 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 30 17:17:27 crc kubenswrapper[4712]: I0130 17:17:27.788779 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cad21e9-9d68-4f77-820b-0c1641e81e72-public-tls-certs\") pod \"swift-proxy-7f9b7fd987-g2xkh\" (UID: \"0cad21e9-9d68-4f77-820b-0c1641e81e72\") " pod="openstack/swift-proxy-7f9b7fd987-g2xkh" Jan 30 17:17:27 crc kubenswrapper[4712]: I0130 17:17:27.788855 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0cad21e9-9d68-4f77-820b-0c1641e81e72-run-httpd\") pod \"swift-proxy-7f9b7fd987-g2xkh\" (UID: \"0cad21e9-9d68-4f77-820b-0c1641e81e72\") " pod="openstack/swift-proxy-7f9b7fd987-g2xkh" Jan 30 17:17:27 crc kubenswrapper[4712]: I0130 17:17:27.788878 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cad21e9-9d68-4f77-820b-0c1641e81e72-config-data\") pod \"swift-proxy-7f9b7fd987-g2xkh\" (UID: \"0cad21e9-9d68-4f77-820b-0c1641e81e72\") " pod="openstack/swift-proxy-7f9b7fd987-g2xkh" Jan 30 17:17:27 crc kubenswrapper[4712]: I0130 17:17:27.789028 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cad21e9-9d68-4f77-820b-0c1641e81e72-combined-ca-bundle\") pod \"swift-proxy-7f9b7fd987-g2xkh\" (UID: \"0cad21e9-9d68-4f77-820b-0c1641e81e72\") " pod="openstack/swift-proxy-7f9b7fd987-g2xkh" Jan 30 17:17:27 crc kubenswrapper[4712]: I0130 17:17:27.789152 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0cad21e9-9d68-4f77-820b-0c1641e81e72-etc-swift\") pod \"swift-proxy-7f9b7fd987-g2xkh\" (UID: \"0cad21e9-9d68-4f77-820b-0c1641e81e72\") " pod="openstack/swift-proxy-7f9b7fd987-g2xkh" Jan 30 17:17:27 crc kubenswrapper[4712]: I0130 17:17:27.789188 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cad21e9-9d68-4f77-820b-0c1641e81e72-internal-tls-certs\") pod \"swift-proxy-7f9b7fd987-g2xkh\" (UID: \"0cad21e9-9d68-4f77-820b-0c1641e81e72\") " pod="openstack/swift-proxy-7f9b7fd987-g2xkh" Jan 30 17:17:27 crc kubenswrapper[4712]: I0130 17:17:27.789268 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnfk6\" (UniqueName: \"kubernetes.io/projected/0cad21e9-9d68-4f77-820b-0c1641e81e72-kube-api-access-cnfk6\") pod \"swift-proxy-7f9b7fd987-g2xkh\" (UID: \"0cad21e9-9d68-4f77-820b-0c1641e81e72\") " 
pod="openstack/swift-proxy-7f9b7fd987-g2xkh" Jan 30 17:17:27 crc kubenswrapper[4712]: I0130 17:17:27.789423 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0cad21e9-9d68-4f77-820b-0c1641e81e72-log-httpd\") pod \"swift-proxy-7f9b7fd987-g2xkh\" (UID: \"0cad21e9-9d68-4f77-820b-0c1641e81e72\") " pod="openstack/swift-proxy-7f9b7fd987-g2xkh" Jan 30 17:17:27 crc kubenswrapper[4712]: I0130 17:17:27.891682 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cad21e9-9d68-4f77-820b-0c1641e81e72-combined-ca-bundle\") pod \"swift-proxy-7f9b7fd987-g2xkh\" (UID: \"0cad21e9-9d68-4f77-820b-0c1641e81e72\") " pod="openstack/swift-proxy-7f9b7fd987-g2xkh" Jan 30 17:17:27 crc kubenswrapper[4712]: I0130 17:17:27.891761 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0cad21e9-9d68-4f77-820b-0c1641e81e72-etc-swift\") pod \"swift-proxy-7f9b7fd987-g2xkh\" (UID: \"0cad21e9-9d68-4f77-820b-0c1641e81e72\") " pod="openstack/swift-proxy-7f9b7fd987-g2xkh" Jan 30 17:17:27 crc kubenswrapper[4712]: I0130 17:17:27.891785 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cad21e9-9d68-4f77-820b-0c1641e81e72-internal-tls-certs\") pod \"swift-proxy-7f9b7fd987-g2xkh\" (UID: \"0cad21e9-9d68-4f77-820b-0c1641e81e72\") " pod="openstack/swift-proxy-7f9b7fd987-g2xkh" Jan 30 17:17:27 crc kubenswrapper[4712]: I0130 17:17:27.891845 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cnfk6\" (UniqueName: \"kubernetes.io/projected/0cad21e9-9d68-4f77-820b-0c1641e81e72-kube-api-access-cnfk6\") pod \"swift-proxy-7f9b7fd987-g2xkh\" (UID: \"0cad21e9-9d68-4f77-820b-0c1641e81e72\") " pod="openstack/swift-proxy-7f9b7fd987-g2xkh" Jan 30 17:17:27 crc kubenswrapper[4712]: I0130 17:17:27.891871 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0cad21e9-9d68-4f77-820b-0c1641e81e72-log-httpd\") pod \"swift-proxy-7f9b7fd987-g2xkh\" (UID: \"0cad21e9-9d68-4f77-820b-0c1641e81e72\") " pod="openstack/swift-proxy-7f9b7fd987-g2xkh" Jan 30 17:17:27 crc kubenswrapper[4712]: I0130 17:17:27.891892 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cad21e9-9d68-4f77-820b-0c1641e81e72-public-tls-certs\") pod \"swift-proxy-7f9b7fd987-g2xkh\" (UID: \"0cad21e9-9d68-4f77-820b-0c1641e81e72\") " pod="openstack/swift-proxy-7f9b7fd987-g2xkh" Jan 30 17:17:27 crc kubenswrapper[4712]: I0130 17:17:27.891912 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0cad21e9-9d68-4f77-820b-0c1641e81e72-run-httpd\") pod \"swift-proxy-7f9b7fd987-g2xkh\" (UID: \"0cad21e9-9d68-4f77-820b-0c1641e81e72\") " pod="openstack/swift-proxy-7f9b7fd987-g2xkh" Jan 30 17:17:27 crc kubenswrapper[4712]: I0130 17:17:27.891927 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cad21e9-9d68-4f77-820b-0c1641e81e72-config-data\") pod \"swift-proxy-7f9b7fd987-g2xkh\" (UID: \"0cad21e9-9d68-4f77-820b-0c1641e81e72\") " pod="openstack/swift-proxy-7f9b7fd987-g2xkh" Jan 30 
17:17:27 crc kubenswrapper[4712]: I0130 17:17:27.896252 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0cad21e9-9d68-4f77-820b-0c1641e81e72-run-httpd\") pod \"swift-proxy-7f9b7fd987-g2xkh\" (UID: \"0cad21e9-9d68-4f77-820b-0c1641e81e72\") " pod="openstack/swift-proxy-7f9b7fd987-g2xkh" Jan 30 17:17:27 crc kubenswrapper[4712]: I0130 17:17:27.896363 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/0cad21e9-9d68-4f77-820b-0c1641e81e72-log-httpd\") pod \"swift-proxy-7f9b7fd987-g2xkh\" (UID: \"0cad21e9-9d68-4f77-820b-0c1641e81e72\") " pod="openstack/swift-proxy-7f9b7fd987-g2xkh" Jan 30 17:17:27 crc kubenswrapper[4712]: I0130 17:17:27.899136 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0cad21e9-9d68-4f77-820b-0c1641e81e72-config-data\") pod \"swift-proxy-7f9b7fd987-g2xkh\" (UID: \"0cad21e9-9d68-4f77-820b-0c1641e81e72\") " pod="openstack/swift-proxy-7f9b7fd987-g2xkh" Jan 30 17:17:27 crc kubenswrapper[4712]: I0130 17:17:27.902842 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cad21e9-9d68-4f77-820b-0c1641e81e72-public-tls-certs\") pod \"swift-proxy-7f9b7fd987-g2xkh\" (UID: \"0cad21e9-9d68-4f77-820b-0c1641e81e72\") " pod="openstack/swift-proxy-7f9b7fd987-g2xkh" Jan 30 17:17:27 crc kubenswrapper[4712]: I0130 17:17:27.921407 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0cad21e9-9d68-4f77-820b-0c1641e81e72-internal-tls-certs\") pod \"swift-proxy-7f9b7fd987-g2xkh\" (UID: \"0cad21e9-9d68-4f77-820b-0c1641e81e72\") " pod="openstack/swift-proxy-7f9b7fd987-g2xkh" Jan 30 17:17:27 crc kubenswrapper[4712]: I0130 17:17:27.924812 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0cad21e9-9d68-4f77-820b-0c1641e81e72-combined-ca-bundle\") pod \"swift-proxy-7f9b7fd987-g2xkh\" (UID: \"0cad21e9-9d68-4f77-820b-0c1641e81e72\") " pod="openstack/swift-proxy-7f9b7fd987-g2xkh" Jan 30 17:17:27 crc kubenswrapper[4712]: I0130 17:17:27.926825 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/0cad21e9-9d68-4f77-820b-0c1641e81e72-etc-swift\") pod \"swift-proxy-7f9b7fd987-g2xkh\" (UID: \"0cad21e9-9d68-4f77-820b-0c1641e81e72\") " pod="openstack/swift-proxy-7f9b7fd987-g2xkh" Jan 30 17:17:27 crc kubenswrapper[4712]: I0130 17:17:27.932564 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cnfk6\" (UniqueName: \"kubernetes.io/projected/0cad21e9-9d68-4f77-820b-0c1641e81e72-kube-api-access-cnfk6\") pod \"swift-proxy-7f9b7fd987-g2xkh\" (UID: \"0cad21e9-9d68-4f77-820b-0c1641e81e72\") " pod="openstack/swift-proxy-7f9b7fd987-g2xkh" Jan 30 17:17:28 crc kubenswrapper[4712]: I0130 17:17:28.035565 4712 generic.go:334] "Generic (PLEG): container finished" podID="3770729e-1882-447d-bc3f-46413301437f" containerID="2abff2a39f69c92d6b6f1a7bd3de162fe1a94708d72b57a74c331880b4618230" exitCode=0 Jan 30 17:17:28 crc kubenswrapper[4712]: I0130 17:17:28.035606 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"3770729e-1882-447d-bc3f-46413301437f","Type":"ContainerDied","Data":"2abff2a39f69c92d6b6f1a7bd3de162fe1a94708d72b57a74c331880b4618230"} Jan 30 17:17:28 crc kubenswrapper[4712]: I0130 17:17:28.058617 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-7f9b7fd987-g2xkh" Jan 30 17:17:29 crc kubenswrapper[4712]: I0130 17:17:29.054689 4712 generic.go:334] "Generic (PLEG): container finished" podID="3770729e-1882-447d-bc3f-46413301437f" containerID="4e80187a3b6c9283da731ffe5a293d4662eca7d098dad2dcd88a859869314be1" exitCode=0 Jan 30 17:17:29 crc kubenswrapper[4712]: I0130 17:17:29.055111 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3770729e-1882-447d-bc3f-46413301437f","Type":"ContainerDied","Data":"4e80187a3b6c9283da731ffe5a293d4662eca7d098dad2dcd88a859869314be1"} Jan 30 17:17:29 crc kubenswrapper[4712]: I0130 17:17:29.606392 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="1ee0a8fb-a77e-4786-9ba2-93805c9cb272" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.175:8776/healthcheck\": dial tcp 10.217.0.175:8776: connect: connection refused" Jan 30 17:17:30 crc kubenswrapper[4712]: I0130 17:17:30.070828 4712 generic.go:334] "Generic (PLEG): container finished" podID="1ee0a8fb-a77e-4786-9ba2-93805c9cb272" containerID="8cdc8ab31dc840103fc05752760fda44401efb085f84ebbc15d25e880264c843" exitCode=137 Jan 30 17:17:30 crc kubenswrapper[4712]: I0130 17:17:30.071171 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1ee0a8fb-a77e-4786-9ba2-93805c9cb272","Type":"ContainerDied","Data":"8cdc8ab31dc840103fc05752760fda44401efb085f84ebbc15d25e880264c843"} Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.302381 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-778dc6dbc4-rwjl5"] Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.303886 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-778dc6dbc4-rwjl5" Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.329513 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-68c577d787-bljqj"] Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.330697 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-68c577d787-bljqj" Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.363614 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc6kz\" (UniqueName: \"kubernetes.io/projected/d6013c6b-ae4f-4632-917e-672f5a538653-kube-api-access-hc6kz\") pod \"heat-cfnapi-778dc6dbc4-rwjl5\" (UID: \"d6013c6b-ae4f-4632-917e-672f5a538653\") " pod="openstack/heat-cfnapi-778dc6dbc4-rwjl5" Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.364064 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6013c6b-ae4f-4632-917e-672f5a538653-config-data-custom\") pod \"heat-cfnapi-778dc6dbc4-rwjl5\" (UID: \"d6013c6b-ae4f-4632-917e-672f5a538653\") " pod="openstack/heat-cfnapi-778dc6dbc4-rwjl5" Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.364304 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6013c6b-ae4f-4632-917e-672f5a538653-combined-ca-bundle\") pod \"heat-cfnapi-778dc6dbc4-rwjl5\" (UID: \"d6013c6b-ae4f-4632-917e-672f5a538653\") " pod="openstack/heat-cfnapi-778dc6dbc4-rwjl5" Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.363650 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-778dc6dbc4-rwjl5"] Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.364559 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6013c6b-ae4f-4632-917e-672f5a538653-config-data\") pod \"heat-cfnapi-778dc6dbc4-rwjl5\" (UID: \"d6013c6b-ae4f-4632-917e-672f5a538653\") " pod="openstack/heat-cfnapi-778dc6dbc4-rwjl5" Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.430869 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-56f4484db-n2zkj"] Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.432420 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-56f4484db-n2zkj" Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.436406 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-68c577d787-bljqj"] Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.470453 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hc6kz\" (UniqueName: \"kubernetes.io/projected/d6013c6b-ae4f-4632-917e-672f5a538653-kube-api-access-hc6kz\") pod \"heat-cfnapi-778dc6dbc4-rwjl5\" (UID: \"d6013c6b-ae4f-4632-917e-672f5a538653\") " pod="openstack/heat-cfnapi-778dc6dbc4-rwjl5" Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.470535 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b9473151-e9e1-4388-8134-fb8fd45d0257-config-data-custom\") pod \"heat-engine-68c577d787-bljqj\" (UID: \"b9473151-e9e1-4388-8134-fb8fd45d0257\") " pod="openstack/heat-engine-68c577d787-bljqj" Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.470563 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6013c6b-ae4f-4632-917e-672f5a538653-config-data-custom\") pod \"heat-cfnapi-778dc6dbc4-rwjl5\" (UID: \"d6013c6b-ae4f-4632-917e-672f5a538653\") " pod="openstack/heat-cfnapi-778dc6dbc4-rwjl5" Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.470594 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9473151-e9e1-4388-8134-fb8fd45d0257-combined-ca-bundle\") pod \"heat-engine-68c577d787-bljqj\" (UID: \"b9473151-e9e1-4388-8134-fb8fd45d0257\") " pod="openstack/heat-engine-68c577d787-bljqj" Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.470652 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9473151-e9e1-4388-8134-fb8fd45d0257-config-data\") pod \"heat-engine-68c577d787-bljqj\" (UID: \"b9473151-e9e1-4388-8134-fb8fd45d0257\") " pod="openstack/heat-engine-68c577d787-bljqj" Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.470701 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8qq4\" (UniqueName: \"kubernetes.io/projected/b9473151-e9e1-4388-8134-fb8fd45d0257-kube-api-access-x8qq4\") pod \"heat-engine-68c577d787-bljqj\" (UID: \"b9473151-e9e1-4388-8134-fb8fd45d0257\") " pod="openstack/heat-engine-68c577d787-bljqj" Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.470777 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6013c6b-ae4f-4632-917e-672f5a538653-combined-ca-bundle\") pod \"heat-cfnapi-778dc6dbc4-rwjl5\" (UID: \"d6013c6b-ae4f-4632-917e-672f5a538653\") " pod="openstack/heat-cfnapi-778dc6dbc4-rwjl5" Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.470882 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6013c6b-ae4f-4632-917e-672f5a538653-config-data\") pod \"heat-cfnapi-778dc6dbc4-rwjl5\" (UID: \"d6013c6b-ae4f-4632-917e-672f5a538653\") " pod="openstack/heat-cfnapi-778dc6dbc4-rwjl5" Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.481375 4712 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6013c6b-ae4f-4632-917e-672f5a538653-config-data\") pod \"heat-cfnapi-778dc6dbc4-rwjl5\" (UID: \"d6013c6b-ae4f-4632-917e-672f5a538653\") " pod="openstack/heat-cfnapi-778dc6dbc4-rwjl5" Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.488468 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6013c6b-ae4f-4632-917e-672f5a538653-config-data-custom\") pod \"heat-cfnapi-778dc6dbc4-rwjl5\" (UID: \"d6013c6b-ae4f-4632-917e-672f5a538653\") " pod="openstack/heat-cfnapi-778dc6dbc4-rwjl5" Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.488504 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6013c6b-ae4f-4632-917e-672f5a538653-combined-ca-bundle\") pod \"heat-cfnapi-778dc6dbc4-rwjl5\" (UID: \"d6013c6b-ae4f-4632-917e-672f5a538653\") " pod="openstack/heat-cfnapi-778dc6dbc4-rwjl5" Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.490307 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-56f4484db-n2zkj"] Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.531744 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc6kz\" (UniqueName: \"kubernetes.io/projected/d6013c6b-ae4f-4632-917e-672f5a538653-kube-api-access-hc6kz\") pod \"heat-cfnapi-778dc6dbc4-rwjl5\" (UID: \"d6013c6b-ae4f-4632-917e-672f5a538653\") " pod="openstack/heat-cfnapi-778dc6dbc4-rwjl5" Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.573877 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9473151-e9e1-4388-8134-fb8fd45d0257-config-data\") pod \"heat-engine-68c577d787-bljqj\" (UID: \"b9473151-e9e1-4388-8134-fb8fd45d0257\") " pod="openstack/heat-engine-68c577d787-bljqj" Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.573943 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8qq4\" (UniqueName: \"kubernetes.io/projected/b9473151-e9e1-4388-8134-fb8fd45d0257-kube-api-access-x8qq4\") pod \"heat-engine-68c577d787-bljqj\" (UID: \"b9473151-e9e1-4388-8134-fb8fd45d0257\") " pod="openstack/heat-engine-68c577d787-bljqj" Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.573973 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb67p\" (UniqueName: \"kubernetes.io/projected/03d2e846-1967-4fce-8926-929318331866-kube-api-access-rb67p\") pod \"heat-api-56f4484db-n2zkj\" (UID: \"03d2e846-1967-4fce-8926-929318331866\") " pod="openstack/heat-api-56f4484db-n2zkj" Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.573992 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03d2e846-1967-4fce-8926-929318331866-config-data\") pod \"heat-api-56f4484db-n2zkj\" (UID: \"03d2e846-1967-4fce-8926-929318331866\") " pod="openstack/heat-api-56f4484db-n2zkj" Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.574077 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/03d2e846-1967-4fce-8926-929318331866-config-data-custom\") pod \"heat-api-56f4484db-n2zkj\" (UID: \"03d2e846-1967-4fce-8926-929318331866\") " 
pod="openstack/heat-api-56f4484db-n2zkj" Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.574117 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b9473151-e9e1-4388-8134-fb8fd45d0257-config-data-custom\") pod \"heat-engine-68c577d787-bljqj\" (UID: \"b9473151-e9e1-4388-8134-fb8fd45d0257\") " pod="openstack/heat-engine-68c577d787-bljqj" Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.574137 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9473151-e9e1-4388-8134-fb8fd45d0257-combined-ca-bundle\") pod \"heat-engine-68c577d787-bljqj\" (UID: \"b9473151-e9e1-4388-8134-fb8fd45d0257\") " pod="openstack/heat-engine-68c577d787-bljqj" Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.574155 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03d2e846-1967-4fce-8926-929318331866-combined-ca-bundle\") pod \"heat-api-56f4484db-n2zkj\" (UID: \"03d2e846-1967-4fce-8926-929318331866\") " pod="openstack/heat-api-56f4484db-n2zkj" Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.579032 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9473151-e9e1-4388-8134-fb8fd45d0257-combined-ca-bundle\") pod \"heat-engine-68c577d787-bljqj\" (UID: \"b9473151-e9e1-4388-8134-fb8fd45d0257\") " pod="openstack/heat-engine-68c577d787-bljqj" Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.589309 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b9473151-e9e1-4388-8134-fb8fd45d0257-config-data\") pod \"heat-engine-68c577d787-bljqj\" (UID: \"b9473151-e9e1-4388-8134-fb8fd45d0257\") " pod="openstack/heat-engine-68c577d787-bljqj" Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.602922 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8qq4\" (UniqueName: \"kubernetes.io/projected/b9473151-e9e1-4388-8134-fb8fd45d0257-kube-api-access-x8qq4\") pod \"heat-engine-68c577d787-bljqj\" (UID: \"b9473151-e9e1-4388-8134-fb8fd45d0257\") " pod="openstack/heat-engine-68c577d787-bljqj" Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.611683 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b9473151-e9e1-4388-8134-fb8fd45d0257-config-data-custom\") pod \"heat-engine-68c577d787-bljqj\" (UID: \"b9473151-e9e1-4388-8134-fb8fd45d0257\") " pod="openstack/heat-engine-68c577d787-bljqj" Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.628400 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-778dc6dbc4-rwjl5" Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.657016 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-68c577d787-bljqj" Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.676043 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/03d2e846-1967-4fce-8926-929318331866-config-data-custom\") pod \"heat-api-56f4484db-n2zkj\" (UID: \"03d2e846-1967-4fce-8926-929318331866\") " pod="openstack/heat-api-56f4484db-n2zkj" Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.676136 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03d2e846-1967-4fce-8926-929318331866-combined-ca-bundle\") pod \"heat-api-56f4484db-n2zkj\" (UID: \"03d2e846-1967-4fce-8926-929318331866\") " pod="openstack/heat-api-56f4484db-n2zkj" Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.676227 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb67p\" (UniqueName: \"kubernetes.io/projected/03d2e846-1967-4fce-8926-929318331866-kube-api-access-rb67p\") pod \"heat-api-56f4484db-n2zkj\" (UID: \"03d2e846-1967-4fce-8926-929318331866\") " pod="openstack/heat-api-56f4484db-n2zkj" Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.676257 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03d2e846-1967-4fce-8926-929318331866-config-data\") pod \"heat-api-56f4484db-n2zkj\" (UID: \"03d2e846-1967-4fce-8926-929318331866\") " pod="openstack/heat-api-56f4484db-n2zkj" Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.680543 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03d2e846-1967-4fce-8926-929318331866-combined-ca-bundle\") pod \"heat-api-56f4484db-n2zkj\" (UID: \"03d2e846-1967-4fce-8926-929318331866\") " pod="openstack/heat-api-56f4484db-n2zkj" Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.688660 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03d2e846-1967-4fce-8926-929318331866-config-data\") pod \"heat-api-56f4484db-n2zkj\" (UID: \"03d2e846-1967-4fce-8926-929318331866\") " pod="openstack/heat-api-56f4484db-n2zkj" Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.701571 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/03d2e846-1967-4fce-8926-929318331866-config-data-custom\") pod \"heat-api-56f4484db-n2zkj\" (UID: \"03d2e846-1967-4fce-8926-929318331866\") " pod="openstack/heat-api-56f4484db-n2zkj" Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.715411 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb67p\" (UniqueName: \"kubernetes.io/projected/03d2e846-1967-4fce-8926-929318331866-kube-api-access-rb67p\") pod \"heat-api-56f4484db-n2zkj\" (UID: \"03d2e846-1967-4fce-8926-929318331866\") " pod="openstack/heat-api-56f4484db-n2zkj" Jan 30 17:17:31 crc kubenswrapper[4712]: I0130 17:17:31.754379 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-56f4484db-n2zkj" Jan 30 17:17:33 crc kubenswrapper[4712]: E0130 17:17:33.511250 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Jan 30 17:17:33 crc kubenswrapper[4712]: E0130 17:17:33.511821 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n55bh65hc8h675h84h656hddh54ch595h66fh66dh58bhd9h5f8h545h59bhfdh589h5bbhc4hb5h568h5f8h56ch585h5f6h56h6bh564hcbhfbhccq,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vmbvp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(ca2a20bb-6a1a-4d8e-8f87-6478ac901d09): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 17:17:33 crc kubenswrapper[4712]: E0130 17:17:33.513038 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="ca2a20bb-6a1a-4d8e-8f87-6478ac901d09" Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.597023 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7ff85c4bb5-kfdkk"] Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.646763 4712 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/heat-api-679854b776-gmq67"] Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.648135 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-679854b776-gmq67" Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.652479 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.652646 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.659520 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-679854b776-gmq67"] Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.728676 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c3a1401-04c4-419c-98dc-23ca889b391a-combined-ca-bundle\") pod \"heat-api-679854b776-gmq67\" (UID: \"6c3a1401-04c4-419c-98dc-23ca889b391a\") " pod="openstack/heat-api-679854b776-gmq67" Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.728716 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c3a1401-04c4-419c-98dc-23ca889b391a-config-data-custom\") pod \"heat-api-679854b776-gmq67\" (UID: \"6c3a1401-04c4-419c-98dc-23ca889b391a\") " pod="openstack/heat-api-679854b776-gmq67" Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.728828 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c3a1401-04c4-419c-98dc-23ca889b391a-internal-tls-certs\") pod \"heat-api-679854b776-gmq67\" (UID: \"6c3a1401-04c4-419c-98dc-23ca889b391a\") " pod="openstack/heat-api-679854b776-gmq67" Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.728875 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c3a1401-04c4-419c-98dc-23ca889b391a-config-data\") pod \"heat-api-679854b776-gmq67\" (UID: \"6c3a1401-04c4-419c-98dc-23ca889b391a\") " pod="openstack/heat-api-679854b776-gmq67" Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.728912 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx5mn\" (UniqueName: \"kubernetes.io/projected/6c3a1401-04c4-419c-98dc-23ca889b391a-kube-api-access-xx5mn\") pod \"heat-api-679854b776-gmq67\" (UID: \"6c3a1401-04c4-419c-98dc-23ca889b391a\") " pod="openstack/heat-api-679854b776-gmq67" Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.728943 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c3a1401-04c4-419c-98dc-23ca889b391a-public-tls-certs\") pod \"heat-api-679854b776-gmq67\" (UID: \"6c3a1401-04c4-419c-98dc-23ca889b391a\") " pod="openstack/heat-api-679854b776-gmq67" Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.740173 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5cfd5b7746-whcck"] Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.777921 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-64f88d7685-rpkd8"] Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.779130 4712 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-64f88d7685-rpkd8" Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.786919 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.787766 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.833137 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vg2gb\" (UniqueName: \"kubernetes.io/projected/e18788f5-d1c7-435c-a619-784ddb7bdb56-kube-api-access-vg2gb\") pod \"heat-cfnapi-64f88d7685-rpkd8\" (UID: \"e18788f5-d1c7-435c-a619-784ddb7bdb56\") " pod="openstack/heat-cfnapi-64f88d7685-rpkd8" Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.833187 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c3a1401-04c4-419c-98dc-23ca889b391a-combined-ca-bundle\") pod \"heat-api-679854b776-gmq67\" (UID: \"6c3a1401-04c4-419c-98dc-23ca889b391a\") " pod="openstack/heat-api-679854b776-gmq67" Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.833209 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c3a1401-04c4-419c-98dc-23ca889b391a-config-data-custom\") pod \"heat-api-679854b776-gmq67\" (UID: \"6c3a1401-04c4-419c-98dc-23ca889b391a\") " pod="openstack/heat-api-679854b776-gmq67" Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.833228 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e18788f5-d1c7-435c-a619-784ddb7bdb56-internal-tls-certs\") pod \"heat-cfnapi-64f88d7685-rpkd8\" (UID: \"e18788f5-d1c7-435c-a619-784ddb7bdb56\") " pod="openstack/heat-cfnapi-64f88d7685-rpkd8" Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.833302 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c3a1401-04c4-419c-98dc-23ca889b391a-internal-tls-certs\") pod \"heat-api-679854b776-gmq67\" (UID: \"6c3a1401-04c4-419c-98dc-23ca889b391a\") " pod="openstack/heat-api-679854b776-gmq67" Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.833321 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e18788f5-d1c7-435c-a619-784ddb7bdb56-combined-ca-bundle\") pod \"heat-cfnapi-64f88d7685-rpkd8\" (UID: \"e18788f5-d1c7-435c-a619-784ddb7bdb56\") " pod="openstack/heat-cfnapi-64f88d7685-rpkd8" Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.833358 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e18788f5-d1c7-435c-a619-784ddb7bdb56-config-data\") pod \"heat-cfnapi-64f88d7685-rpkd8\" (UID: \"e18788f5-d1c7-435c-a619-784ddb7bdb56\") " pod="openstack/heat-cfnapi-64f88d7685-rpkd8" Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.833383 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c3a1401-04c4-419c-98dc-23ca889b391a-config-data\") pod \"heat-api-679854b776-gmq67\" 
(UID: \"6c3a1401-04c4-419c-98dc-23ca889b391a\") " pod="openstack/heat-api-679854b776-gmq67" Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.833434 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xx5mn\" (UniqueName: \"kubernetes.io/projected/6c3a1401-04c4-419c-98dc-23ca889b391a-kube-api-access-xx5mn\") pod \"heat-api-679854b776-gmq67\" (UID: \"6c3a1401-04c4-419c-98dc-23ca889b391a\") " pod="openstack/heat-api-679854b776-gmq67" Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.833482 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c3a1401-04c4-419c-98dc-23ca889b391a-public-tls-certs\") pod \"heat-api-679854b776-gmq67\" (UID: \"6c3a1401-04c4-419c-98dc-23ca889b391a\") " pod="openstack/heat-api-679854b776-gmq67" Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.833514 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e18788f5-d1c7-435c-a619-784ddb7bdb56-public-tls-certs\") pod \"heat-cfnapi-64f88d7685-rpkd8\" (UID: \"e18788f5-d1c7-435c-a619-784ddb7bdb56\") " pod="openstack/heat-cfnapi-64f88d7685-rpkd8" Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.833535 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e18788f5-d1c7-435c-a619-784ddb7bdb56-config-data-custom\") pod \"heat-cfnapi-64f88d7685-rpkd8\" (UID: \"e18788f5-d1c7-435c-a619-784ddb7bdb56\") " pod="openstack/heat-cfnapi-64f88d7685-rpkd8" Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.845251 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-64f88d7685-rpkd8"] Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.850952 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c3a1401-04c4-419c-98dc-23ca889b391a-internal-tls-certs\") pod \"heat-api-679854b776-gmq67\" (UID: \"6c3a1401-04c4-419c-98dc-23ca889b391a\") " pod="openstack/heat-api-679854b776-gmq67" Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.863677 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx5mn\" (UniqueName: \"kubernetes.io/projected/6c3a1401-04c4-419c-98dc-23ca889b391a-kube-api-access-xx5mn\") pod \"heat-api-679854b776-gmq67\" (UID: \"6c3a1401-04c4-419c-98dc-23ca889b391a\") " pod="openstack/heat-api-679854b776-gmq67" Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.868920 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c3a1401-04c4-419c-98dc-23ca889b391a-combined-ca-bundle\") pod \"heat-api-679854b776-gmq67\" (UID: \"6c3a1401-04c4-419c-98dc-23ca889b391a\") " pod="openstack/heat-api-679854b776-gmq67" Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.869559 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/6c3a1401-04c4-419c-98dc-23ca889b391a-config-data-custom\") pod \"heat-api-679854b776-gmq67\" (UID: \"6c3a1401-04c4-419c-98dc-23ca889b391a\") " pod="openstack/heat-api-679854b776-gmq67" Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.881655 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/6c3a1401-04c4-419c-98dc-23ca889b391a-config-data\") pod \"heat-api-679854b776-gmq67\" (UID: \"6c3a1401-04c4-419c-98dc-23ca889b391a\") " pod="openstack/heat-api-679854b776-gmq67" Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.889314 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6c3a1401-04c4-419c-98dc-23ca889b391a-public-tls-certs\") pod \"heat-api-679854b776-gmq67\" (UID: \"6c3a1401-04c4-419c-98dc-23ca889b391a\") " pod="openstack/heat-api-679854b776-gmq67" Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.936277 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e18788f5-d1c7-435c-a619-784ddb7bdb56-public-tls-certs\") pod \"heat-cfnapi-64f88d7685-rpkd8\" (UID: \"e18788f5-d1c7-435c-a619-784ddb7bdb56\") " pod="openstack/heat-cfnapi-64f88d7685-rpkd8" Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.936321 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e18788f5-d1c7-435c-a619-784ddb7bdb56-config-data-custom\") pod \"heat-cfnapi-64f88d7685-rpkd8\" (UID: \"e18788f5-d1c7-435c-a619-784ddb7bdb56\") " pod="openstack/heat-cfnapi-64f88d7685-rpkd8" Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.936394 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vg2gb\" (UniqueName: \"kubernetes.io/projected/e18788f5-d1c7-435c-a619-784ddb7bdb56-kube-api-access-vg2gb\") pod \"heat-cfnapi-64f88d7685-rpkd8\" (UID: \"e18788f5-d1c7-435c-a619-784ddb7bdb56\") " pod="openstack/heat-cfnapi-64f88d7685-rpkd8" Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.936424 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e18788f5-d1c7-435c-a619-784ddb7bdb56-internal-tls-certs\") pod \"heat-cfnapi-64f88d7685-rpkd8\" (UID: \"e18788f5-d1c7-435c-a619-784ddb7bdb56\") " pod="openstack/heat-cfnapi-64f88d7685-rpkd8" Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.936485 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e18788f5-d1c7-435c-a619-784ddb7bdb56-combined-ca-bundle\") pod \"heat-cfnapi-64f88d7685-rpkd8\" (UID: \"e18788f5-d1c7-435c-a619-784ddb7bdb56\") " pod="openstack/heat-cfnapi-64f88d7685-rpkd8" Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.936513 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e18788f5-d1c7-435c-a619-784ddb7bdb56-config-data\") pod \"heat-cfnapi-64f88d7685-rpkd8\" (UID: \"e18788f5-d1c7-435c-a619-784ddb7bdb56\") " pod="openstack/heat-cfnapi-64f88d7685-rpkd8" Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.943498 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/e18788f5-d1c7-435c-a619-784ddb7bdb56-internal-tls-certs\") pod \"heat-cfnapi-64f88d7685-rpkd8\" (UID: \"e18788f5-d1c7-435c-a619-784ddb7bdb56\") " pod="openstack/heat-cfnapi-64f88d7685-rpkd8" Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.945175 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/e18788f5-d1c7-435c-a619-784ddb7bdb56-combined-ca-bundle\") pod \"heat-cfnapi-64f88d7685-rpkd8\" (UID: \"e18788f5-d1c7-435c-a619-784ddb7bdb56\") " pod="openstack/heat-cfnapi-64f88d7685-rpkd8" Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.953362 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e18788f5-d1c7-435c-a619-784ddb7bdb56-config-data-custom\") pod \"heat-cfnapi-64f88d7685-rpkd8\" (UID: \"e18788f5-d1c7-435c-a619-784ddb7bdb56\") " pod="openstack/heat-cfnapi-64f88d7685-rpkd8" Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.970306 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e18788f5-d1c7-435c-a619-784ddb7bdb56-config-data\") pod \"heat-cfnapi-64f88d7685-rpkd8\" (UID: \"e18788f5-d1c7-435c-a619-784ddb7bdb56\") " pod="openstack/heat-cfnapi-64f88d7685-rpkd8" Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.971421 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e18788f5-d1c7-435c-a619-784ddb7bdb56-public-tls-certs\") pod \"heat-cfnapi-64f88d7685-rpkd8\" (UID: \"e18788f5-d1c7-435c-a619-784ddb7bdb56\") " pod="openstack/heat-cfnapi-64f88d7685-rpkd8" Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.974350 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vg2gb\" (UniqueName: \"kubernetes.io/projected/e18788f5-d1c7-435c-a619-784ddb7bdb56-kube-api-access-vg2gb\") pod \"heat-cfnapi-64f88d7685-rpkd8\" (UID: \"e18788f5-d1c7-435c-a619-784ddb7bdb56\") " pod="openstack/heat-cfnapi-64f88d7685-rpkd8" Jan 30 17:17:33 crc kubenswrapper[4712]: I0130 17:17:33.995746 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-679854b776-gmq67" Jan 30 17:17:34 crc kubenswrapper[4712]: I0130 17:17:34.121724 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-64f88d7685-rpkd8" Jan 30 17:17:34 crc kubenswrapper[4712]: I0130 17:17:34.162613 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3770729e-1882-447d-bc3f-46413301437f","Type":"ContainerDied","Data":"7389f7c91301c14df17b6eb9ca04b48255ec5603180c540b768559fcaead26f8"} Jan 30 17:17:34 crc kubenswrapper[4712]: I0130 17:17:34.162653 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7389f7c91301c14df17b6eb9ca04b48255ec5603180c540b768559fcaead26f8" Jan 30 17:17:34 crc kubenswrapper[4712]: I0130 17:17:34.162869 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 17:17:34 crc kubenswrapper[4712]: I0130 17:17:34.249440 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3770729e-1882-447d-bc3f-46413301437f-combined-ca-bundle\") pod \"3770729e-1882-447d-bc3f-46413301437f\" (UID: \"3770729e-1882-447d-bc3f-46413301437f\") " Jan 30 17:17:34 crc kubenswrapper[4712]: I0130 17:17:34.249821 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3770729e-1882-447d-bc3f-46413301437f-config-data\") pod \"3770729e-1882-447d-bc3f-46413301437f\" (UID: \"3770729e-1882-447d-bc3f-46413301437f\") " Jan 30 17:17:34 crc kubenswrapper[4712]: I0130 17:17:34.249916 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3770729e-1882-447d-bc3f-46413301437f-run-httpd\") pod \"3770729e-1882-447d-bc3f-46413301437f\" (UID: \"3770729e-1882-447d-bc3f-46413301437f\") " Jan 30 17:17:34 crc kubenswrapper[4712]: I0130 17:17:34.250078 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3770729e-1882-447d-bc3f-46413301437f-sg-core-conf-yaml\") pod \"3770729e-1882-447d-bc3f-46413301437f\" (UID: \"3770729e-1882-447d-bc3f-46413301437f\") " Jan 30 17:17:34 crc kubenswrapper[4712]: I0130 17:17:34.250099 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3770729e-1882-447d-bc3f-46413301437f-log-httpd\") pod \"3770729e-1882-447d-bc3f-46413301437f\" (UID: \"3770729e-1882-447d-bc3f-46413301437f\") " Jan 30 17:17:34 crc kubenswrapper[4712]: I0130 17:17:34.250125 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3770729e-1882-447d-bc3f-46413301437f-scripts\") pod \"3770729e-1882-447d-bc3f-46413301437f\" (UID: \"3770729e-1882-447d-bc3f-46413301437f\") " Jan 30 17:17:34 crc kubenswrapper[4712]: I0130 17:17:34.250173 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8vnb\" (UniqueName: \"kubernetes.io/projected/3770729e-1882-447d-bc3f-46413301437f-kube-api-access-m8vnb\") pod \"3770729e-1882-447d-bc3f-46413301437f\" (UID: \"3770729e-1882-447d-bc3f-46413301437f\") " Jan 30 17:17:34 crc kubenswrapper[4712]: I0130 17:17:34.258033 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3770729e-1882-447d-bc3f-46413301437f-kube-api-access-m8vnb" (OuterVolumeSpecName: "kube-api-access-m8vnb") pod "3770729e-1882-447d-bc3f-46413301437f" (UID: "3770729e-1882-447d-bc3f-46413301437f"). InnerVolumeSpecName "kube-api-access-m8vnb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:17:34 crc kubenswrapper[4712]: I0130 17:17:34.258438 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3770729e-1882-447d-bc3f-46413301437f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3770729e-1882-447d-bc3f-46413301437f" (UID: "3770729e-1882-447d-bc3f-46413301437f"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:17:34 crc kubenswrapper[4712]: E0130 17:17:34.261403 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="ca2a20bb-6a1a-4d8e-8f87-6478ac901d09" Jan 30 17:17:34 crc kubenswrapper[4712]: I0130 17:17:34.261973 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3770729e-1882-447d-bc3f-46413301437f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3770729e-1882-447d-bc3f-46413301437f" (UID: "3770729e-1882-447d-bc3f-46413301437f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:17:34 crc kubenswrapper[4712]: I0130 17:17:34.286546 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3770729e-1882-447d-bc3f-46413301437f-scripts" (OuterVolumeSpecName: "scripts") pod "3770729e-1882-447d-bc3f-46413301437f" (UID: "3770729e-1882-447d-bc3f-46413301437f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:34 crc kubenswrapper[4712]: I0130 17:17:34.349081 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3770729e-1882-447d-bc3f-46413301437f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3770729e-1882-447d-bc3f-46413301437f" (UID: "3770729e-1882-447d-bc3f-46413301437f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:34 crc kubenswrapper[4712]: I0130 17:17:34.352815 4712 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3770729e-1882-447d-bc3f-46413301437f-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:34 crc kubenswrapper[4712]: I0130 17:17:34.352839 4712 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3770729e-1882-447d-bc3f-46413301437f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:34 crc kubenswrapper[4712]: I0130 17:17:34.352848 4712 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3770729e-1882-447d-bc3f-46413301437f-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:34 crc kubenswrapper[4712]: I0130 17:17:34.352857 4712 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3770729e-1882-447d-bc3f-46413301437f-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:34 crc kubenswrapper[4712]: I0130 17:17:34.352865 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8vnb\" (UniqueName: \"kubernetes.io/projected/3770729e-1882-447d-bc3f-46413301437f-kube-api-access-m8vnb\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:34 crc kubenswrapper[4712]: I0130 17:17:34.540639 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3770729e-1882-447d-bc3f-46413301437f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3770729e-1882-447d-bc3f-46413301437f" (UID: "3770729e-1882-447d-bc3f-46413301437f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:34 crc kubenswrapper[4712]: I0130 17:17:34.572151 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3770729e-1882-447d-bc3f-46413301437f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:34 crc kubenswrapper[4712]: I0130 17:17:34.660948 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3770729e-1882-447d-bc3f-46413301437f-config-data" (OuterVolumeSpecName: "config-data") pod "3770729e-1882-447d-bc3f-46413301437f" (UID: "3770729e-1882-447d-bc3f-46413301437f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:34 crc kubenswrapper[4712]: I0130 17:17:34.680110 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3770729e-1882-447d-bc3f-46413301437f-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:34 crc kubenswrapper[4712]: I0130 17:17:34.819505 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5cfd5b7746-whcck"] Jan 30 17:17:34 crc kubenswrapper[4712]: W0130 17:17:34.888889 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f8a0938_d2f2_47bc_b923_fdcba236851f.slice/crio-a20f6094d4f98f0da0799020de8b32f52bf40c1d427f819f92a274b432131991 WatchSource:0}: Error finding container a20f6094d4f98f0da0799020de8b32f52bf40c1d427f819f92a274b432131991: Status 404 returned error can't find the container with id a20f6094d4f98f0da0799020de8b32f52bf40c1d427f819f92a274b432131991 Jan 30 17:17:34 crc kubenswrapper[4712]: I0130 17:17:34.972336 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 17:17:34 crc kubenswrapper[4712]: I0130 17:17:34.985103 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ee0a8fb-a77e-4786-9ba2-93805c9cb272-config-data\") pod \"1ee0a8fb-a77e-4786-9ba2-93805c9cb272\" (UID: \"1ee0a8fb-a77e-4786-9ba2-93805c9cb272\") " Jan 30 17:17:34 crc kubenswrapper[4712]: I0130 17:17:34.985189 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ee0a8fb-a77e-4786-9ba2-93805c9cb272-combined-ca-bundle\") pod \"1ee0a8fb-a77e-4786-9ba2-93805c9cb272\" (UID: \"1ee0a8fb-a77e-4786-9ba2-93805c9cb272\") " Jan 30 17:17:34 crc kubenswrapper[4712]: I0130 17:17:34.985257 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbb8r\" (UniqueName: \"kubernetes.io/projected/1ee0a8fb-a77e-4786-9ba2-93805c9cb272-kube-api-access-vbb8r\") pod \"1ee0a8fb-a77e-4786-9ba2-93805c9cb272\" (UID: \"1ee0a8fb-a77e-4786-9ba2-93805c9cb272\") " Jan 30 17:17:34 crc kubenswrapper[4712]: I0130 17:17:34.985290 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ee0a8fb-a77e-4786-9ba2-93805c9cb272-scripts\") pod \"1ee0a8fb-a77e-4786-9ba2-93805c9cb272\" (UID: \"1ee0a8fb-a77e-4786-9ba2-93805c9cb272\") " Jan 30 17:17:34 crc kubenswrapper[4712]: I0130 17:17:34.985343 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ee0a8fb-a77e-4786-9ba2-93805c9cb272-logs\") pod \"1ee0a8fb-a77e-4786-9ba2-93805c9cb272\" (UID: \"1ee0a8fb-a77e-4786-9ba2-93805c9cb272\") " Jan 30 17:17:34 crc kubenswrapper[4712]: I0130 17:17:34.985465 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1ee0a8fb-a77e-4786-9ba2-93805c9cb272-etc-machine-id\") pod \"1ee0a8fb-a77e-4786-9ba2-93805c9cb272\" (UID: \"1ee0a8fb-a77e-4786-9ba2-93805c9cb272\") " Jan 30 17:17:34 crc kubenswrapper[4712]: I0130 17:17:34.985540 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ee0a8fb-a77e-4786-9ba2-93805c9cb272-config-data-custom\") pod \"1ee0a8fb-a77e-4786-9ba2-93805c9cb272\" (UID: \"1ee0a8fb-a77e-4786-9ba2-93805c9cb272\") " Jan 30 17:17:35 crc kubenswrapper[4712]: I0130 17:17:35.007446 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1ee0a8fb-a77e-4786-9ba2-93805c9cb272-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "1ee0a8fb-a77e-4786-9ba2-93805c9cb272" (UID: "1ee0a8fb-a77e-4786-9ba2-93805c9cb272"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 17:17:35 crc kubenswrapper[4712]: I0130 17:17:35.011569 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ee0a8fb-a77e-4786-9ba2-93805c9cb272-logs" (OuterVolumeSpecName: "logs") pod "1ee0a8fb-a77e-4786-9ba2-93805c9cb272" (UID: "1ee0a8fb-a77e-4786-9ba2-93805c9cb272"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:17:35 crc kubenswrapper[4712]: I0130 17:17:35.027765 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ee0a8fb-a77e-4786-9ba2-93805c9cb272-kube-api-access-vbb8r" (OuterVolumeSpecName: "kube-api-access-vbb8r") pod "1ee0a8fb-a77e-4786-9ba2-93805c9cb272" (UID: "1ee0a8fb-a77e-4786-9ba2-93805c9cb272"). InnerVolumeSpecName "kube-api-access-vbb8r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:17:35 crc kubenswrapper[4712]: I0130 17:17:35.068019 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ee0a8fb-a77e-4786-9ba2-93805c9cb272-scripts" (OuterVolumeSpecName: "scripts") pod "1ee0a8fb-a77e-4786-9ba2-93805c9cb272" (UID: "1ee0a8fb-a77e-4786-9ba2-93805c9cb272"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:35 crc kubenswrapper[4712]: I0130 17:17:35.082300 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ee0a8fb-a77e-4786-9ba2-93805c9cb272-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1ee0a8fb-a77e-4786-9ba2-93805c9cb272" (UID: "1ee0a8fb-a77e-4786-9ba2-93805c9cb272"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:35 crc kubenswrapper[4712]: I0130 17:17:35.089653 4712 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1ee0a8fb-a77e-4786-9ba2-93805c9cb272-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:35 crc kubenswrapper[4712]: I0130 17:17:35.089685 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vbb8r\" (UniqueName: \"kubernetes.io/projected/1ee0a8fb-a77e-4786-9ba2-93805c9cb272-kube-api-access-vbb8r\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:35 crc kubenswrapper[4712]: I0130 17:17:35.089695 4712 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1ee0a8fb-a77e-4786-9ba2-93805c9cb272-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:35 crc kubenswrapper[4712]: I0130 17:17:35.089705 4712 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1ee0a8fb-a77e-4786-9ba2-93805c9cb272-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:35 crc kubenswrapper[4712]: I0130 17:17:35.089717 4712 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/1ee0a8fb-a77e-4786-9ba2-93805c9cb272-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:35 crc kubenswrapper[4712]: I0130 17:17:35.110130 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ee0a8fb-a77e-4786-9ba2-93805c9cb272-config-data" (OuterVolumeSpecName: "config-data") pod "1ee0a8fb-a77e-4786-9ba2-93805c9cb272" (UID: "1ee0a8fb-a77e-4786-9ba2-93805c9cb272"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:35 crc kubenswrapper[4712]: I0130 17:17:35.131109 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1ee0a8fb-a77e-4786-9ba2-93805c9cb272-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1ee0a8fb-a77e-4786-9ba2-93805c9cb272" (UID: "1ee0a8fb-a77e-4786-9ba2-93805c9cb272"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:35 crc kubenswrapper[4712]: I0130 17:17:35.185779 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5cfd5b7746-whcck" event={"ID":"0f8a0938-d2f2-47bc-b923-fdcba236851f","Type":"ContainerStarted","Data":"a20f6094d4f98f0da0799020de8b32f52bf40c1d427f819f92a274b432131991"} Jan 30 17:17:35 crc kubenswrapper[4712]: I0130 17:17:35.187352 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"1ee0a8fb-a77e-4786-9ba2-93805c9cb272","Type":"ContainerDied","Data":"23d62501cd188a2342cde77fb5952d029a66d62c2537913e65832cd50e35c010"} Jan 30 17:17:35 crc kubenswrapper[4712]: I0130 17:17:35.187379 4712 scope.go:117] "RemoveContainer" containerID="8cdc8ab31dc840103fc05752760fda44401efb085f84ebbc15d25e880264c843" Jan 30 17:17:35 crc kubenswrapper[4712]: I0130 17:17:35.187505 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 17:17:35 crc kubenswrapper[4712]: I0130 17:17:35.192259 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1ee0a8fb-a77e-4786-9ba2-93805c9cb272-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:35 crc kubenswrapper[4712]: I0130 17:17:35.192293 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1ee0a8fb-a77e-4786-9ba2-93805c9cb272-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:35 crc kubenswrapper[4712]: I0130 17:17:35.208560 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 17:17:35 crc kubenswrapper[4712]: I0130 17:17:35.212072 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64655dbc44-pvj2c" event={"ID":"6a28b495-ecf0-409e-9558-ee794a46dbd1","Type":"ContainerStarted","Data":"9af3d0805e3d6c8144d5e8f4ca5198b954ee80a23bb8c7ac20dd1a8994edf213"} Jan 30 17:17:35 crc kubenswrapper[4712]: I0130 17:17:35.360613 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-64655dbc44-pvj2c" Jan 30 17:17:35 crc kubenswrapper[4712]: I0130 17:17:35.360662 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-64655dbc44-pvj2c" Jan 30 17:17:35 crc kubenswrapper[4712]: I0130 17:17:35.366921 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-68c577d787-bljqj"] Jan 30 17:17:35 crc kubenswrapper[4712]: I0130 17:17:35.378195 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-2fsl2"] Jan 30 17:17:35 crc kubenswrapper[4712]: I0130 17:17:35.626519 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-7f9b7fd987-g2xkh"] Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:35.678601 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-75595bd865-rqm2l"] Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:35.778609 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-679854b776-gmq67"] Jan 30 17:17:36 crc kubenswrapper[4712]: W0130 17:17:36.043096 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb9473151_e9e1_4388_8134_fb8fd45d0257.slice/crio-f718ebfab6ec1c18f6e159d517a13aec32a1d2a20917be3641b9ab21cbb45c79 WatchSource:0}: Error finding container 
f718ebfab6ec1c18f6e159d517a13aec32a1d2a20917be3641b9ab21cbb45c79: Status 404 returned error can't find the container with id f718ebfab6ec1c18f6e159d517a13aec32a1d2a20917be3641b9ab21cbb45c79 Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.065335 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-56f4484db-n2zkj"] Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.065622 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-778dc6dbc4-rwjl5"] Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.065765 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7ff85c4bb5-kfdkk"] Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.065893 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-64f88d7685-rpkd8"] Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.130287 4712 scope.go:117] "RemoveContainer" containerID="2f3c54c9fb87787b4768830d81e564dabc1e305c8bc10f75eae515275e66a603" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.244966 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-2fsl2" event={"ID":"7e92eef8-fc7a-4b92-8a68-95d37b647aa4","Type":"ContainerStarted","Data":"c62319af760d74be096684984ea62d2aae63effcf06380ed22977c7640d3b728"} Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.258847 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-56f4484db-n2zkj" event={"ID":"03d2e846-1967-4fce-8926-929318331866","Type":"ContainerStarted","Data":"ad1c5c1aa8972488196182137dda068b2a81767fd8207d1ef43c3cac680594d3"} Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.265016 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7ff85c4bb5-kfdkk" event={"ID":"3199e2b6-4450-48fb-9809-3467dce0d5bd","Type":"ContainerStarted","Data":"acd8dcc19b33fc41c3cc2ab9551581e288e4779552da946d1af89a49055334fb"} Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.270638 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.270698 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.288314 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7f9b7fd987-g2xkh" event={"ID":"0cad21e9-9d68-4f77-820b-0c1641e81e72","Type":"ContainerStarted","Data":"012095a227a03e6e6ed656b1a48888beff2e8d55774e1d345b0fc08c41839c1b"} Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.298042 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-68c577d787-bljqj" event={"ID":"b9473151-e9e1-4388-8134-fb8fd45d0257","Type":"ContainerStarted","Data":"f718ebfab6ec1c18f6e159d517a13aec32a1d2a20917be3641b9ab21cbb45c79"} Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.306383 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-679854b776-gmq67" 
event={"ID":"6c3a1401-04c4-419c-98dc-23ca889b391a","Type":"ContainerStarted","Data":"63eae569bf5aa59514fb68e48e275d93c8ebd4ae80af9000d4826b9d9e41cd5a"} Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.326886 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56f8b66d48-7wr47" event={"ID":"70154dd8-9d42-4a12-af9b-1be723ef892e","Type":"ContainerStarted","Data":"8b23f706dbf8aa6538b8c9a023bfa2c07b9d28b0f58e8e9342cd27572ba0c0d2"} Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.332346 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-64f88d7685-rpkd8" event={"ID":"e18788f5-d1c7-435c-a619-784ddb7bdb56","Type":"ContainerStarted","Data":"0f00a47cc4fe30a365ffad68be955a3ccffdf03704f973494866bd0bfcd7fc9f"} Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.338225 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-778dc6dbc4-rwjl5" event={"ID":"d6013c6b-ae4f-4632-917e-672f5a538653","Type":"ContainerStarted","Data":"64ffcad49d4a9133243885272453e6ad35d4cedc59d71737f9883736ff4680d5"} Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.361776 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-75595bd865-rqm2l" event={"ID":"e4ca8cf3-8ef1-4170-815e-15c4ce5826f9","Type":"ContainerStarted","Data":"dbeb4ddeb91abdeac4c39af5a3cb64d139629ea3eb31d665f36b826982aaff07"} Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.503591 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.524862 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.540776 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 30 17:17:36 crc kubenswrapper[4712]: E0130 17:17:36.541647 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3770729e-1882-447d-bc3f-46413301437f" containerName="ceilometer-notification-agent" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.541675 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="3770729e-1882-447d-bc3f-46413301437f" containerName="ceilometer-notification-agent" Jan 30 17:17:36 crc kubenswrapper[4712]: E0130 17:17:36.541695 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3770729e-1882-447d-bc3f-46413301437f" containerName="proxy-httpd" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.541704 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="3770729e-1882-447d-bc3f-46413301437f" containerName="proxy-httpd" Jan 30 17:17:36 crc kubenswrapper[4712]: E0130 17:17:36.541725 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ee0a8fb-a77e-4786-9ba2-93805c9cb272" containerName="cinder-api" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.541733 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ee0a8fb-a77e-4786-9ba2-93805c9cb272" containerName="cinder-api" Jan 30 17:17:36 crc kubenswrapper[4712]: E0130 17:17:36.541749 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3770729e-1882-447d-bc3f-46413301437f" containerName="ceilometer-central-agent" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.541757 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="3770729e-1882-447d-bc3f-46413301437f" containerName="ceilometer-central-agent" Jan 30 17:17:36 crc kubenswrapper[4712]: E0130 17:17:36.541780 4712 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="3770729e-1882-447d-bc3f-46413301437f" containerName="sg-core" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.541788 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="3770729e-1882-447d-bc3f-46413301437f" containerName="sg-core" Jan 30 17:17:36 crc kubenswrapper[4712]: E0130 17:17:36.541827 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ee0a8fb-a77e-4786-9ba2-93805c9cb272" containerName="cinder-api-log" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.541835 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ee0a8fb-a77e-4786-9ba2-93805c9cb272" containerName="cinder-api-log" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.542078 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ee0a8fb-a77e-4786-9ba2-93805c9cb272" containerName="cinder-api-log" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.542103 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="3770729e-1882-447d-bc3f-46413301437f" containerName="ceilometer-notification-agent" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.542121 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="3770729e-1882-447d-bc3f-46413301437f" containerName="ceilometer-central-agent" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.542135 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ee0a8fb-a77e-4786-9ba2-93805c9cb272" containerName="cinder-api" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.542148 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="3770729e-1882-447d-bc3f-46413301437f" containerName="sg-core" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.542163 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="3770729e-1882-447d-bc3f-46413301437f" containerName="proxy-httpd" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.543463 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.557978 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.558165 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.558227 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.575490 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.596074 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.596867 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adaaf313-4d60-4bbb-b4a9-8e0faddc265f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"adaaf313-4d60-4bbb-b4a9-8e0faddc265f\") " pod="openstack/cinder-api-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.596974 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/adaaf313-4d60-4bbb-b4a9-8e0faddc265f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"adaaf313-4d60-4bbb-b4a9-8e0faddc265f\") " pod="openstack/cinder-api-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.597241 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adaaf313-4d60-4bbb-b4a9-8e0faddc265f-config-data\") pod \"cinder-api-0\" (UID: \"adaaf313-4d60-4bbb-b4a9-8e0faddc265f\") " pod="openstack/cinder-api-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.597290 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adaaf313-4d60-4bbb-b4a9-8e0faddc265f-scripts\") pod \"cinder-api-0\" (UID: \"adaaf313-4d60-4bbb-b4a9-8e0faddc265f\") " pod="openstack/cinder-api-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.597372 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/adaaf313-4d60-4bbb-b4a9-8e0faddc265f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"adaaf313-4d60-4bbb-b4a9-8e0faddc265f\") " pod="openstack/cinder-api-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.597434 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/adaaf313-4d60-4bbb-b4a9-8e0faddc265f-config-data-custom\") pod \"cinder-api-0\" (UID: \"adaaf313-4d60-4bbb-b4a9-8e0faddc265f\") " pod="openstack/cinder-api-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.597572 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tjbz\" (UniqueName: \"kubernetes.io/projected/adaaf313-4d60-4bbb-b4a9-8e0faddc265f-kube-api-access-5tjbz\") pod \"cinder-api-0\" (UID: \"adaaf313-4d60-4bbb-b4a9-8e0faddc265f\") " pod="openstack/cinder-api-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.597633 
4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/adaaf313-4d60-4bbb-b4a9-8e0faddc265f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"adaaf313-4d60-4bbb-b4a9-8e0faddc265f\") " pod="openstack/cinder-api-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.597710 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adaaf313-4d60-4bbb-b4a9-8e0faddc265f-logs\") pod \"cinder-api-0\" (UID: \"adaaf313-4d60-4bbb-b4a9-8e0faddc265f\") " pod="openstack/cinder-api-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.611306 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.636650 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.672846 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.694148 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.703959 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.706615 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/00b15610-4e40-4788-a09b-226a392b19ac-run-httpd\") pod \"ceilometer-0\" (UID: \"00b15610-4e40-4788-a09b-226a392b19ac\") " pod="openstack/ceilometer-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.720286 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adaaf313-4d60-4bbb-b4a9-8e0faddc265f-config-data\") pod \"cinder-api-0\" (UID: \"adaaf313-4d60-4bbb-b4a9-8e0faddc265f\") " pod="openstack/cinder-api-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.720349 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adaaf313-4d60-4bbb-b4a9-8e0faddc265f-scripts\") pod \"cinder-api-0\" (UID: \"adaaf313-4d60-4bbb-b4a9-8e0faddc265f\") " pod="openstack/cinder-api-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.720449 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/adaaf313-4d60-4bbb-b4a9-8e0faddc265f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"adaaf313-4d60-4bbb-b4a9-8e0faddc265f\") " pod="openstack/cinder-api-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.720533 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2fvm\" (UniqueName: \"kubernetes.io/projected/00b15610-4e40-4788-a09b-226a392b19ac-kube-api-access-t2fvm\") pod \"ceilometer-0\" (UID: \"00b15610-4e40-4788-a09b-226a392b19ac\") " pod="openstack/ceilometer-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.720570 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/00b15610-4e40-4788-a09b-226a392b19ac-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"00b15610-4e40-4788-a09b-226a392b19ac\") " pod="openstack/ceilometer-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.720609 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/adaaf313-4d60-4bbb-b4a9-8e0faddc265f-config-data-custom\") pod \"cinder-api-0\" (UID: \"adaaf313-4d60-4bbb-b4a9-8e0faddc265f\") " pod="openstack/cinder-api-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.720752 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00b15610-4e40-4788-a09b-226a392b19ac-config-data\") pod \"ceilometer-0\" (UID: \"00b15610-4e40-4788-a09b-226a392b19ac\") " pod="openstack/ceilometer-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.720819 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tjbz\" (UniqueName: \"kubernetes.io/projected/adaaf313-4d60-4bbb-b4a9-8e0faddc265f-kube-api-access-5tjbz\") pod \"cinder-api-0\" (UID: \"adaaf313-4d60-4bbb-b4a9-8e0faddc265f\") " pod="openstack/cinder-api-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.720855 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/00b15610-4e40-4788-a09b-226a392b19ac-log-httpd\") pod \"ceilometer-0\" (UID: \"00b15610-4e40-4788-a09b-226a392b19ac\") " pod="openstack/ceilometer-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.720894 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/adaaf313-4d60-4bbb-b4a9-8e0faddc265f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"adaaf313-4d60-4bbb-b4a9-8e0faddc265f\") " pod="openstack/cinder-api-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.720969 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adaaf313-4d60-4bbb-b4a9-8e0faddc265f-logs\") pod \"cinder-api-0\" (UID: \"adaaf313-4d60-4bbb-b4a9-8e0faddc265f\") " pod="openstack/cinder-api-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.720999 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00b15610-4e40-4788-a09b-226a392b19ac-scripts\") pod \"ceilometer-0\" (UID: \"00b15610-4e40-4788-a09b-226a392b19ac\") " pod="openstack/ceilometer-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.721068 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adaaf313-4d60-4bbb-b4a9-8e0faddc265f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"adaaf313-4d60-4bbb-b4a9-8e0faddc265f\") " pod="openstack/cinder-api-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.721124 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/adaaf313-4d60-4bbb-b4a9-8e0faddc265f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"adaaf313-4d60-4bbb-b4a9-8e0faddc265f\") " pod="openstack/cinder-api-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.721211 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/00b15610-4e40-4788-a09b-226a392b19ac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"00b15610-4e40-4788-a09b-226a392b19ac\") " pod="openstack/ceilometer-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.723914 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/adaaf313-4d60-4bbb-b4a9-8e0faddc265f-etc-machine-id\") pod \"cinder-api-0\" (UID: \"adaaf313-4d60-4bbb-b4a9-8e0faddc265f\") " pod="openstack/cinder-api-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.724351 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/adaaf313-4d60-4bbb-b4a9-8e0faddc265f-logs\") pod \"cinder-api-0\" (UID: \"adaaf313-4d60-4bbb-b4a9-8e0faddc265f\") " pod="openstack/cinder-api-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.739886 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/adaaf313-4d60-4bbb-b4a9-8e0faddc265f-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"adaaf313-4d60-4bbb-b4a9-8e0faddc265f\") " pod="openstack/cinder-api-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.751505 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.775148 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/adaaf313-4d60-4bbb-b4a9-8e0faddc265f-public-tls-certs\") pod \"cinder-api-0\" (UID: \"adaaf313-4d60-4bbb-b4a9-8e0faddc265f\") " pod="openstack/cinder-api-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.775976 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/adaaf313-4d60-4bbb-b4a9-8e0faddc265f-scripts\") pod \"cinder-api-0\" (UID: \"adaaf313-4d60-4bbb-b4a9-8e0faddc265f\") " pod="openstack/cinder-api-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.776025 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adaaf313-4d60-4bbb-b4a9-8e0faddc265f-config-data\") pod \"cinder-api-0\" (UID: \"adaaf313-4d60-4bbb-b4a9-8e0faddc265f\") " pod="openstack/cinder-api-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.776464 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adaaf313-4d60-4bbb-b4a9-8e0faddc265f-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"adaaf313-4d60-4bbb-b4a9-8e0faddc265f\") " pod="openstack/cinder-api-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.798056 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tjbz\" (UniqueName: \"kubernetes.io/projected/adaaf313-4d60-4bbb-b4a9-8e0faddc265f-kube-api-access-5tjbz\") pod \"cinder-api-0\" (UID: \"adaaf313-4d60-4bbb-b4a9-8e0faddc265f\") " pod="openstack/cinder-api-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.811253 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/adaaf313-4d60-4bbb-b4a9-8e0faddc265f-config-data-custom\") pod \"cinder-api-0\" (UID: \"adaaf313-4d60-4bbb-b4a9-8e0faddc265f\") " pod="openstack/cinder-api-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.824443 4712 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-t2fvm\" (UniqueName: \"kubernetes.io/projected/00b15610-4e40-4788-a09b-226a392b19ac-kube-api-access-t2fvm\") pod \"ceilometer-0\" (UID: \"00b15610-4e40-4788-a09b-226a392b19ac\") " pod="openstack/ceilometer-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.824494 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/00b15610-4e40-4788-a09b-226a392b19ac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"00b15610-4e40-4788-a09b-226a392b19ac\") " pod="openstack/ceilometer-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.824585 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00b15610-4e40-4788-a09b-226a392b19ac-config-data\") pod \"ceilometer-0\" (UID: \"00b15610-4e40-4788-a09b-226a392b19ac\") " pod="openstack/ceilometer-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.824618 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/00b15610-4e40-4788-a09b-226a392b19ac-log-httpd\") pod \"ceilometer-0\" (UID: \"00b15610-4e40-4788-a09b-226a392b19ac\") " pod="openstack/ceilometer-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.824671 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00b15610-4e40-4788-a09b-226a392b19ac-scripts\") pod \"ceilometer-0\" (UID: \"00b15610-4e40-4788-a09b-226a392b19ac\") " pod="openstack/ceilometer-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.824743 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00b15610-4e40-4788-a09b-226a392b19ac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"00b15610-4e40-4788-a09b-226a392b19ac\") " pod="openstack/ceilometer-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.824788 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/00b15610-4e40-4788-a09b-226a392b19ac-run-httpd\") pod \"ceilometer-0\" (UID: \"00b15610-4e40-4788-a09b-226a392b19ac\") " pod="openstack/ceilometer-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.825624 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/00b15610-4e40-4788-a09b-226a392b19ac-run-httpd\") pod \"ceilometer-0\" (UID: \"00b15610-4e40-4788-a09b-226a392b19ac\") " pod="openstack/ceilometer-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.825891 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/00b15610-4e40-4788-a09b-226a392b19ac-log-httpd\") pod \"ceilometer-0\" (UID: \"00b15610-4e40-4788-a09b-226a392b19ac\") " pod="openstack/ceilometer-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.833401 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00b15610-4e40-4788-a09b-226a392b19ac-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"00b15610-4e40-4788-a09b-226a392b19ac\") " pod="openstack/ceilometer-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.834033 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/00b15610-4e40-4788-a09b-226a392b19ac-scripts\") pod \"ceilometer-0\" (UID: \"00b15610-4e40-4788-a09b-226a392b19ac\") " pod="openstack/ceilometer-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.834538 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00b15610-4e40-4788-a09b-226a392b19ac-config-data\") pod \"ceilometer-0\" (UID: \"00b15610-4e40-4788-a09b-226a392b19ac\") " pod="openstack/ceilometer-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.835247 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/00b15610-4e40-4788-a09b-226a392b19ac-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"00b15610-4e40-4788-a09b-226a392b19ac\") " pod="openstack/ceilometer-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.857312 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2fvm\" (UniqueName: \"kubernetes.io/projected/00b15610-4e40-4788-a09b-226a392b19ac-kube-api-access-t2fvm\") pod \"ceilometer-0\" (UID: \"00b15610-4e40-4788-a09b-226a392b19ac\") " pod="openstack/ceilometer-0" Jan 30 17:17:36 crc kubenswrapper[4712]: I0130 17:17:36.935740 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 30 17:17:37 crc kubenswrapper[4712]: I0130 17:17:37.063236 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 17:17:37 crc kubenswrapper[4712]: I0130 17:17:37.384062 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7f9b7fd987-g2xkh" event={"ID":"0cad21e9-9d68-4f77-820b-0c1641e81e72","Type":"ContainerStarted","Data":"6bc5ecf5be98e581b0732daa716c166c81c9df56c2418d92feeada598b65d5de"} Jan 30 17:17:37 crc kubenswrapper[4712]: I0130 17:17:37.384318 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-7f9b7fd987-g2xkh" event={"ID":"0cad21e9-9d68-4f77-820b-0c1641e81e72","Type":"ContainerStarted","Data":"60955a2a46f3866cd81dfe89ad004e32c46efd1be74767e0129ced825f5af29d"} Jan 30 17:17:37 crc kubenswrapper[4712]: I0130 17:17:37.384349 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7f9b7fd987-g2xkh" Jan 30 17:17:37 crc kubenswrapper[4712]: I0130 17:17:37.384367 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-7f9b7fd987-g2xkh" Jan 30 17:17:37 crc kubenswrapper[4712]: I0130 17:17:37.408225 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-68c577d787-bljqj" event={"ID":"b9473151-e9e1-4388-8134-fb8fd45d0257","Type":"ContainerStarted","Data":"d2c39049b9293c5534319e71a06ccc72c655b50a658fdf58be076f120cdd653c"} Jan 30 17:17:37 crc kubenswrapper[4712]: I0130 17:17:37.408922 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-68c577d787-bljqj" Jan 30 17:17:37 crc kubenswrapper[4712]: I0130 17:17:37.424489 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-7f9b7fd987-g2xkh" podStartSLOduration=10.424470607 podStartE2EDuration="10.424470607s" podCreationTimestamp="2026-01-30 17:17:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:17:37.423269518 +0000 UTC m=+1394.330278987" 
watchObservedRunningTime="2026-01-30 17:17:37.424470607 +0000 UTC m=+1394.331480076" Jan 30 17:17:37 crc kubenswrapper[4712]: I0130 17:17:37.424920 4712 generic.go:334] "Generic (PLEG): container finished" podID="7e92eef8-fc7a-4b92-8a68-95d37b647aa4" containerID="55f5e38662d9207fd042d24ee573ecd40ff09380de4b15a5e29ff9541f1211a0" exitCode=0 Jan 30 17:17:37 crc kubenswrapper[4712]: I0130 17:17:37.424943 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-2fsl2" event={"ID":"7e92eef8-fc7a-4b92-8a68-95d37b647aa4","Type":"ContainerDied","Data":"55f5e38662d9207fd042d24ee573ecd40ff09380de4b15a5e29ff9541f1211a0"} Jan 30 17:17:37 crc kubenswrapper[4712]: I0130 17:17:37.467449 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-75595bd865-rqm2l" event={"ID":"e4ca8cf3-8ef1-4170-815e-15c4ce5826f9","Type":"ContainerStarted","Data":"2c28bbd68b2f2862e4fd6d47647956ae1703b77f72bc18cdd3fbabc5f15628e7"} Jan 30 17:17:37 crc kubenswrapper[4712]: I0130 17:17:37.468621 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-75595bd865-rqm2l" Jan 30 17:17:37 crc kubenswrapper[4712]: I0130 17:17:37.484820 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-68c577d787-bljqj" podStartSLOduration=6.484785467 podStartE2EDuration="6.484785467s" podCreationTimestamp="2026-01-30 17:17:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:17:37.473365712 +0000 UTC m=+1394.380375211" watchObservedRunningTime="2026-01-30 17:17:37.484785467 +0000 UTC m=+1394.391794936" Jan 30 17:17:37 crc kubenswrapper[4712]: I0130 17:17:37.571068 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-75595bd865-rqm2l" podStartSLOduration=14.571052440999999 podStartE2EDuration="14.571052441s" podCreationTimestamp="2026-01-30 17:17:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:17:37.549351219 +0000 UTC m=+1394.456360688" watchObservedRunningTime="2026-01-30 17:17:37.571052441 +0000 UTC m=+1394.478061910" Jan 30 17:17:37 crc kubenswrapper[4712]: I0130 17:17:37.706020 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 30 17:17:37 crc kubenswrapper[4712]: I0130 17:17:37.857938 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ee0a8fb-a77e-4786-9ba2-93805c9cb272" path="/var/lib/kubelet/pods/1ee0a8fb-a77e-4786-9ba2-93805c9cb272/volumes" Jan 30 17:17:37 crc kubenswrapper[4712]: I0130 17:17:37.858922 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3770729e-1882-447d-bc3f-46413301437f" path="/var/lib/kubelet/pods/3770729e-1882-447d-bc3f-46413301437f/volumes" Jan 30 17:17:37 crc kubenswrapper[4712]: I0130 17:17:37.919179 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 17:17:38 crc kubenswrapper[4712]: I0130 17:17:38.525195 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"adaaf313-4d60-4bbb-b4a9-8e0faddc265f","Type":"ContainerStarted","Data":"5040ebbc15f94d352d8f4d27119b9e94bf5a9a5e81b6e854aba196bb7ca2332e"} Jan 30 17:17:38 crc kubenswrapper[4712]: I0130 17:17:38.533238 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-2fsl2" 
event={"ID":"7e92eef8-fc7a-4b92-8a68-95d37b647aa4","Type":"ContainerStarted","Data":"db7d4354619efe82d62cedb1e6502be85189d0715b9036669b99393e2d070b8c"} Jan 30 17:17:38 crc kubenswrapper[4712]: I0130 17:17:38.534386 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7756b9d78c-2fsl2" Jan 30 17:17:38 crc kubenswrapper[4712]: I0130 17:17:38.537923 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"00b15610-4e40-4788-a09b-226a392b19ac","Type":"ContainerStarted","Data":"1e44399569a0d55b90247708b7765e568f78a6133ac43e2775b2ef4615472cd7"} Jan 30 17:17:38 crc kubenswrapper[4712]: I0130 17:17:38.566201 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7756b9d78c-2fsl2" podStartSLOduration=14.566180291 podStartE2EDuration="14.566180291s" podCreationTimestamp="2026-01-30 17:17:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:17:38.554749076 +0000 UTC m=+1395.461758545" watchObservedRunningTime="2026-01-30 17:17:38.566180291 +0000 UTC m=+1395.473189760" Jan 30 17:17:39 crc kubenswrapper[4712]: I0130 17:17:39.554166 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"adaaf313-4d60-4bbb-b4a9-8e0faddc265f","Type":"ContainerStarted","Data":"8838c6040f23c39005bc8b015121844dabc95e46d07e9121d15f1c9fd07026a5"} Jan 30 17:17:39 crc kubenswrapper[4712]: I0130 17:17:39.610507 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="1ee0a8fb-a77e-4786-9ba2-93805c9cb272" containerName="cinder-api" probeResult="failure" output="Get \"http://10.217.0.175:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:17:41 crc kubenswrapper[4712]: I0130 17:17:41.603279 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7ff85c4bb5-kfdkk" event={"ID":"3199e2b6-4450-48fb-9809-3467dce0d5bd","Type":"ContainerStarted","Data":"f4e6333d0e34f16d543aef267504c576483d478e0a8bb4f8a20eec74f5fcb513"} Jan 30 17:17:41 crc kubenswrapper[4712]: I0130 17:17:41.604995 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-7ff85c4bb5-kfdkk" Jan 30 17:17:41 crc kubenswrapper[4712]: I0130 17:17:41.603394 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-7ff85c4bb5-kfdkk" podUID="3199e2b6-4450-48fb-9809-3467dce0d5bd" containerName="heat-api" containerID="cri-o://f4e6333d0e34f16d543aef267504c576483d478e0a8bb4f8a20eec74f5fcb513" gracePeriod=60 Jan 30 17:17:41 crc kubenswrapper[4712]: I0130 17:17:41.607123 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"00b15610-4e40-4788-a09b-226a392b19ac","Type":"ContainerStarted","Data":"31919142cf3008c753e7f6d8f61bb7b9f3db314c329a996e1eb03a519c7f835f"} Jan 30 17:17:41 crc kubenswrapper[4712]: I0130 17:17:41.611630 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5cfd5b7746-whcck" event={"ID":"0f8a0938-d2f2-47bc-b923-fdcba236851f","Type":"ContainerStarted","Data":"4a4c4ec02a0427f7fe4cee163725854c924b4a836c6baefb2bc9c6831f330cdb"} Jan 30 17:17:41 crc kubenswrapper[4712]: I0130 17:17:41.611917 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-5cfd5b7746-whcck" Jan 30 17:17:41 crc kubenswrapper[4712]: I0130 
17:17:41.611692 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-5cfd5b7746-whcck" podUID="0f8a0938-d2f2-47bc-b923-fdcba236851f" containerName="heat-cfnapi" containerID="cri-o://4a4c4ec02a0427f7fe4cee163725854c924b4a836c6baefb2bc9c6831f330cdb" gracePeriod=60 Jan 30 17:17:41 crc kubenswrapper[4712]: I0130 17:17:41.618567 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-64f88d7685-rpkd8" event={"ID":"e18788f5-d1c7-435c-a619-784ddb7bdb56","Type":"ContainerStarted","Data":"79d0502498ea21bfa44075ccb0b6351c545f0b92ad5ee9f836c27420b0775f8c"} Jan 30 17:17:41 crc kubenswrapper[4712]: I0130 17:17:41.618737 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-64f88d7685-rpkd8" Jan 30 17:17:41 crc kubenswrapper[4712]: I0130 17:17:41.620517 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-778dc6dbc4-rwjl5" event={"ID":"d6013c6b-ae4f-4632-917e-672f5a538653","Type":"ContainerStarted","Data":"88b20540161bae9fd09f9b9cc7efb656fbfd6ae58d43e536fa4811ce0e6091e5"} Jan 30 17:17:41 crc kubenswrapper[4712]: I0130 17:17:41.620651 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-778dc6dbc4-rwjl5" Jan 30 17:17:41 crc kubenswrapper[4712]: I0130 17:17:41.627201 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-7ff85c4bb5-kfdkk" podStartSLOduration=13.584583406 podStartE2EDuration="17.627181551s" podCreationTimestamp="2026-01-30 17:17:24 +0000 UTC" firstStartedPulling="2026-01-30 17:17:36.210640469 +0000 UTC m=+1393.117649938" lastFinishedPulling="2026-01-30 17:17:40.253238614 +0000 UTC m=+1397.160248083" observedRunningTime="2026-01-30 17:17:41.621098444 +0000 UTC m=+1398.528107913" watchObservedRunningTime="2026-01-30 17:17:41.627181551 +0000 UTC m=+1398.534191020" Jan 30 17:17:41 crc kubenswrapper[4712]: I0130 17:17:41.642554 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"adaaf313-4d60-4bbb-b4a9-8e0faddc265f","Type":"ContainerStarted","Data":"7623831fb7391a0013181a087c13f5157986087932524d058bb24dcc1d171f2f"} Jan 30 17:17:41 crc kubenswrapper[4712]: I0130 17:17:41.643449 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 30 17:17:41 crc kubenswrapper[4712]: I0130 17:17:41.647868 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-679854b776-gmq67" event={"ID":"6c3a1401-04c4-419c-98dc-23ca889b391a","Type":"ContainerStarted","Data":"bdd9f5ef791cd81664234bcc0b0b8aa3b5d58cebf9bcd05a83340c896bb3af8a"} Jan 30 17:17:41 crc kubenswrapper[4712]: I0130 17:17:41.648823 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-679854b776-gmq67" Jan 30 17:17:41 crc kubenswrapper[4712]: I0130 17:17:41.655141 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-56f4484db-n2zkj" event={"ID":"03d2e846-1967-4fce-8926-929318331866","Type":"ContainerStarted","Data":"0784a0e17bee14581d5f343ddb3407c96d0c761dba6fade36f10aea0ecdabcef"} Jan 30 17:17:41 crc kubenswrapper[4712]: I0130 17:17:41.655752 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-56f4484db-n2zkj" Jan 30 17:17:41 crc kubenswrapper[4712]: I0130 17:17:41.670883 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-778dc6dbc4-rwjl5" 
podStartSLOduration=6.528502439 podStartE2EDuration="10.670857651s" podCreationTimestamp="2026-01-30 17:17:31 +0000 UTC" firstStartedPulling="2026-01-30 17:17:36.122086361 +0000 UTC m=+1393.029095840" lastFinishedPulling="2026-01-30 17:17:40.264441583 +0000 UTC m=+1397.171451052" observedRunningTime="2026-01-30 17:17:41.646551516 +0000 UTC m=+1398.553560985" watchObservedRunningTime="2026-01-30 17:17:41.670857651 +0000 UTC m=+1398.577867120" Jan 30 17:17:41 crc kubenswrapper[4712]: I0130 17:17:41.702475 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-5cfd5b7746-whcck" podStartSLOduration=12.381140199 podStartE2EDuration="17.701977829s" podCreationTimestamp="2026-01-30 17:17:24 +0000 UTC" firstStartedPulling="2026-01-30 17:17:34.912848384 +0000 UTC m=+1391.819857853" lastFinishedPulling="2026-01-30 17:17:40.233686014 +0000 UTC m=+1397.140695483" observedRunningTime="2026-01-30 17:17:41.680916522 +0000 UTC m=+1398.587925991" watchObservedRunningTime="2026-01-30 17:17:41.701977829 +0000 UTC m=+1398.608987298" Jan 30 17:17:41 crc kubenswrapper[4712]: I0130 17:17:41.728846 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-64f88d7685-rpkd8" podStartSLOduration=4.555343423 podStartE2EDuration="8.728786363s" podCreationTimestamp="2026-01-30 17:17:33 +0000 UTC" firstStartedPulling="2026-01-30 17:17:36.12246182 +0000 UTC m=+1393.029471289" lastFinishedPulling="2026-01-30 17:17:40.29590476 +0000 UTC m=+1397.202914229" observedRunningTime="2026-01-30 17:17:41.706828665 +0000 UTC m=+1398.613838134" watchObservedRunningTime="2026-01-30 17:17:41.728786363 +0000 UTC m=+1398.635795832" Jan 30 17:17:41 crc kubenswrapper[4712]: I0130 17:17:41.752774 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-679854b776-gmq67" podStartSLOduration=4.627965059 podStartE2EDuration="8.752755459s" podCreationTimestamp="2026-01-30 17:17:33 +0000 UTC" firstStartedPulling="2026-01-30 17:17:36.128421933 +0000 UTC m=+1393.035431402" lastFinishedPulling="2026-01-30 17:17:40.253212333 +0000 UTC m=+1397.160221802" observedRunningTime="2026-01-30 17:17:41.743653421 +0000 UTC m=+1398.650662880" watchObservedRunningTime="2026-01-30 17:17:41.752755459 +0000 UTC m=+1398.659764928" Jan 30 17:17:41 crc kubenswrapper[4712]: I0130 17:17:41.784926 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=5.784910182 podStartE2EDuration="5.784910182s" podCreationTimestamp="2026-01-30 17:17:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:17:41.783258812 +0000 UTC m=+1398.690268291" watchObservedRunningTime="2026-01-30 17:17:41.784910182 +0000 UTC m=+1398.691919651" Jan 30 17:17:41 crc kubenswrapper[4712]: I0130 17:17:41.822733 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-56f4484db-n2zkj" podStartSLOduration=6.655584402 podStartE2EDuration="10.82270993s" podCreationTimestamp="2026-01-30 17:17:31 +0000 UTC" firstStartedPulling="2026-01-30 17:17:36.130452192 +0000 UTC m=+1393.037461671" lastFinishedPulling="2026-01-30 17:17:40.29757774 +0000 UTC m=+1397.204587199" observedRunningTime="2026-01-30 17:17:41.811207894 +0000 UTC m=+1398.718217363" watchObservedRunningTime="2026-01-30 17:17:41.82270993 +0000 UTC m=+1398.729719399" Jan 30 17:17:41 crc kubenswrapper[4712]: I0130 
17:17:41.883495 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 17:17:42 crc kubenswrapper[4712]: I0130 17:17:42.679906 4712 generic.go:334] "Generic (PLEG): container finished" podID="03d2e846-1967-4fce-8926-929318331866" containerID="0784a0e17bee14581d5f343ddb3407c96d0c761dba6fade36f10aea0ecdabcef" exitCode=1 Jan 30 17:17:42 crc kubenswrapper[4712]: I0130 17:17:42.680093 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-56f4484db-n2zkj" event={"ID":"03d2e846-1967-4fce-8926-929318331866","Type":"ContainerDied","Data":"0784a0e17bee14581d5f343ddb3407c96d0c761dba6fade36f10aea0ecdabcef"} Jan 30 17:17:42 crc kubenswrapper[4712]: I0130 17:17:42.680612 4712 scope.go:117] "RemoveContainer" containerID="0784a0e17bee14581d5f343ddb3407c96d0c761dba6fade36f10aea0ecdabcef" Jan 30 17:17:42 crc kubenswrapper[4712]: I0130 17:17:42.687176 4712 generic.go:334] "Generic (PLEG): container finished" podID="d6013c6b-ae4f-4632-917e-672f5a538653" containerID="88b20540161bae9fd09f9b9cc7efb656fbfd6ae58d43e536fa4811ce0e6091e5" exitCode=1 Jan 30 17:17:42 crc kubenswrapper[4712]: I0130 17:17:42.688005 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-778dc6dbc4-rwjl5" event={"ID":"d6013c6b-ae4f-4632-917e-672f5a538653","Type":"ContainerDied","Data":"88b20540161bae9fd09f9b9cc7efb656fbfd6ae58d43e536fa4811ce0e6091e5"} Jan 30 17:17:42 crc kubenswrapper[4712]: I0130 17:17:42.688306 4712 scope.go:117] "RemoveContainer" containerID="88b20540161bae9fd09f9b9cc7efb656fbfd6ae58d43e536fa4811ce0e6091e5" Jan 30 17:17:43 crc kubenswrapper[4712]: I0130 17:17:43.076686 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7f9b7fd987-g2xkh" Jan 30 17:17:43 crc kubenswrapper[4712]: I0130 17:17:43.078014 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-7f9b7fd987-g2xkh" Jan 30 17:17:44 crc kubenswrapper[4712]: I0130 17:17:44.638051 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7756b9d78c-2fsl2" Jan 30 17:17:44 crc kubenswrapper[4712]: I0130 17:17:44.755915 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-t4jhw"] Jan 30 17:17:44 crc kubenswrapper[4712]: I0130 17:17:44.756368 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-t4jhw" podUID="128af9ea-eb98-4631-9e61-af1a9d26e246" containerName="dnsmasq-dns" containerID="cri-o://dece00b57cd9d38e59bc722d45813a4556c4f0da6b0b84120f39417b0893c56c" gracePeriod=10 Jan 30 17:17:45 crc kubenswrapper[4712]: I0130 17:17:45.073261 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-56f8b66d48-7wr47" Jan 30 17:17:45 crc kubenswrapper[4712]: I0130 17:17:45.073312 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-56f8b66d48-7wr47" Jan 30 17:17:45 crc kubenswrapper[4712]: I0130 17:17:45.075097 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-56f8b66d48-7wr47" podUID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused" Jan 30 17:17:45 crc kubenswrapper[4712]: I0130 17:17:45.355944 4712 prober.go:107] "Probe failed" probeType="Startup" 
pod="openstack/horizon-64655dbc44-pvj2c" podUID="6a28b495-ecf0-409e-9558-ee794a46dbd1" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.156:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.156:8443: connect: connection refused" Jan 30 17:17:45 crc kubenswrapper[4712]: I0130 17:17:45.875285 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"00b15610-4e40-4788-a09b-226a392b19ac","Type":"ContainerStarted","Data":"dcb6f04206ac13ae1fb34ab14cc89057add6d4e56d6ece0f87e3f832ad0e277e"} Jan 30 17:17:45 crc kubenswrapper[4712]: I0130 17:17:45.885973 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-778dc6dbc4-rwjl5" event={"ID":"d6013c6b-ae4f-4632-917e-672f5a538653","Type":"ContainerStarted","Data":"a41ba1bfe995ca4f61a819c738c715dc7a7510b78fec850c9885e97c256a6365"} Jan 30 17:17:45 crc kubenswrapper[4712]: I0130 17:17:45.886137 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-778dc6dbc4-rwjl5" Jan 30 17:17:45 crc kubenswrapper[4712]: I0130 17:17:45.901090 4712 generic.go:334] "Generic (PLEG): container finished" podID="128af9ea-eb98-4631-9e61-af1a9d26e246" containerID="dece00b57cd9d38e59bc722d45813a4556c4f0da6b0b84120f39417b0893c56c" exitCode=0 Jan 30 17:17:45 crc kubenswrapper[4712]: I0130 17:17:45.901230 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-t4jhw" event={"ID":"128af9ea-eb98-4631-9e61-af1a9d26e246","Type":"ContainerDied","Data":"dece00b57cd9d38e59bc722d45813a4556c4f0da6b0b84120f39417b0893c56c"} Jan 30 17:17:45 crc kubenswrapper[4712]: I0130 17:17:45.935350 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-56f4484db-n2zkj" event={"ID":"03d2e846-1967-4fce-8926-929318331866","Type":"ContainerStarted","Data":"9e69efffc2bf89adf636d42bfc255084842199bff2d3030a88740404ee73a337"} Jan 30 17:17:45 crc kubenswrapper[4712]: I0130 17:17:45.935997 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-56f4484db-n2zkj" Jan 30 17:17:46 crc kubenswrapper[4712]: I0130 17:17:46.203059 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-t4jhw" Jan 30 17:17:46 crc kubenswrapper[4712]: I0130 17:17:46.313394 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cdc8\" (UniqueName: \"kubernetes.io/projected/128af9ea-eb98-4631-9e61-af1a9d26e246-kube-api-access-7cdc8\") pod \"128af9ea-eb98-4631-9e61-af1a9d26e246\" (UID: \"128af9ea-eb98-4631-9e61-af1a9d26e246\") " Jan 30 17:17:46 crc kubenswrapper[4712]: I0130 17:17:46.315025 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/128af9ea-eb98-4631-9e61-af1a9d26e246-ovsdbserver-sb\") pod \"128af9ea-eb98-4631-9e61-af1a9d26e246\" (UID: \"128af9ea-eb98-4631-9e61-af1a9d26e246\") " Jan 30 17:17:46 crc kubenswrapper[4712]: I0130 17:17:46.315107 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/128af9ea-eb98-4631-9e61-af1a9d26e246-dns-svc\") pod \"128af9ea-eb98-4631-9e61-af1a9d26e246\" (UID: \"128af9ea-eb98-4631-9e61-af1a9d26e246\") " Jan 30 17:17:46 crc kubenswrapper[4712]: I0130 17:17:46.315157 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/128af9ea-eb98-4631-9e61-af1a9d26e246-config\") pod \"128af9ea-eb98-4631-9e61-af1a9d26e246\" (UID: \"128af9ea-eb98-4631-9e61-af1a9d26e246\") " Jan 30 17:17:46 crc kubenswrapper[4712]: I0130 17:17:46.315260 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/128af9ea-eb98-4631-9e61-af1a9d26e246-ovsdbserver-nb\") pod \"128af9ea-eb98-4631-9e61-af1a9d26e246\" (UID: \"128af9ea-eb98-4631-9e61-af1a9d26e246\") " Jan 30 17:17:46 crc kubenswrapper[4712]: I0130 17:17:46.315315 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/128af9ea-eb98-4631-9e61-af1a9d26e246-dns-swift-storage-0\") pod \"128af9ea-eb98-4631-9e61-af1a9d26e246\" (UID: \"128af9ea-eb98-4631-9e61-af1a9d26e246\") " Jan 30 17:17:46 crc kubenswrapper[4712]: I0130 17:17:46.338845 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/128af9ea-eb98-4631-9e61-af1a9d26e246-kube-api-access-7cdc8" (OuterVolumeSpecName: "kube-api-access-7cdc8") pod "128af9ea-eb98-4631-9e61-af1a9d26e246" (UID: "128af9ea-eb98-4631-9e61-af1a9d26e246"). InnerVolumeSpecName "kube-api-access-7cdc8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:17:46 crc kubenswrapper[4712]: I0130 17:17:46.417090 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/128af9ea-eb98-4631-9e61-af1a9d26e246-config" (OuterVolumeSpecName: "config") pod "128af9ea-eb98-4631-9e61-af1a9d26e246" (UID: "128af9ea-eb98-4631-9e61-af1a9d26e246"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:17:46 crc kubenswrapper[4712]: I0130 17:17:46.422611 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cdc8\" (UniqueName: \"kubernetes.io/projected/128af9ea-eb98-4631-9e61-af1a9d26e246-kube-api-access-7cdc8\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:46 crc kubenswrapper[4712]: I0130 17:17:46.422649 4712 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/128af9ea-eb98-4631-9e61-af1a9d26e246-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:46 crc kubenswrapper[4712]: I0130 17:17:46.449766 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/128af9ea-eb98-4631-9e61-af1a9d26e246-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "128af9ea-eb98-4631-9e61-af1a9d26e246" (UID: "128af9ea-eb98-4631-9e61-af1a9d26e246"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:17:46 crc kubenswrapper[4712]: I0130 17:17:46.457313 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/128af9ea-eb98-4631-9e61-af1a9d26e246-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "128af9ea-eb98-4631-9e61-af1a9d26e246" (UID: "128af9ea-eb98-4631-9e61-af1a9d26e246"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:17:46 crc kubenswrapper[4712]: I0130 17:17:46.475731 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/128af9ea-eb98-4631-9e61-af1a9d26e246-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "128af9ea-eb98-4631-9e61-af1a9d26e246" (UID: "128af9ea-eb98-4631-9e61-af1a9d26e246"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:17:46 crc kubenswrapper[4712]: I0130 17:17:46.478739 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/128af9ea-eb98-4631-9e61-af1a9d26e246-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "128af9ea-eb98-4631-9e61-af1a9d26e246" (UID: "128af9ea-eb98-4631-9e61-af1a9d26e246"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:17:46 crc kubenswrapper[4712]: I0130 17:17:46.524497 4712 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/128af9ea-eb98-4631-9e61-af1a9d26e246-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:46 crc kubenswrapper[4712]: I0130 17:17:46.524526 4712 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/128af9ea-eb98-4631-9e61-af1a9d26e246-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:46 crc kubenswrapper[4712]: I0130 17:17:46.524537 4712 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/128af9ea-eb98-4631-9e61-af1a9d26e246-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:46 crc kubenswrapper[4712]: I0130 17:17:46.524547 4712 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/128af9ea-eb98-4631-9e61-af1a9d26e246-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:46 crc kubenswrapper[4712]: I0130 17:17:46.952790 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-t4jhw" event={"ID":"128af9ea-eb98-4631-9e61-af1a9d26e246","Type":"ContainerDied","Data":"26d61a699d389e54320d94d2d64164245740ced48e08223cb1dca68b0ccd55a0"} Jan 30 17:17:46 crc kubenswrapper[4712]: I0130 17:17:46.953136 4712 scope.go:117] "RemoveContainer" containerID="dece00b57cd9d38e59bc722d45813a4556c4f0da6b0b84120f39417b0893c56c" Jan 30 17:17:46 crc kubenswrapper[4712]: I0130 17:17:46.953277 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-t4jhw" Jan 30 17:17:46 crc kubenswrapper[4712]: I0130 17:17:46.960069 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"ca2a20bb-6a1a-4d8e-8f87-6478ac901d09","Type":"ContainerStarted","Data":"4d4c0dfcc889e9385c91ee05a112a43d91ab7f565d30b5163739db3a198a42a5"} Jan 30 17:17:46 crc kubenswrapper[4712]: I0130 17:17:46.975250 4712 generic.go:334] "Generic (PLEG): container finished" podID="03d2e846-1967-4fce-8926-929318331866" containerID="9e69efffc2bf89adf636d42bfc255084842199bff2d3030a88740404ee73a337" exitCode=1 Jan 30 17:17:46 crc kubenswrapper[4712]: I0130 17:17:46.975350 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-56f4484db-n2zkj" event={"ID":"03d2e846-1967-4fce-8926-929318331866","Type":"ContainerDied","Data":"9e69efffc2bf89adf636d42bfc255084842199bff2d3030a88740404ee73a337"} Jan 30 17:17:46 crc kubenswrapper[4712]: I0130 17:17:46.976078 4712 scope.go:117] "RemoveContainer" containerID="9e69efffc2bf89adf636d42bfc255084842199bff2d3030a88740404ee73a337" Jan 30 17:17:46 crc kubenswrapper[4712]: E0130 17:17:46.976339 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-56f4484db-n2zkj_openstack(03d2e846-1967-4fce-8926-929318331866)\"" pod="openstack/heat-api-56f4484db-n2zkj" podUID="03d2e846-1967-4fce-8926-929318331866" Jan 30 17:17:46 crc kubenswrapper[4712]: I0130 17:17:46.984981 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.410772297 podStartE2EDuration="31.984959839s" podCreationTimestamp="2026-01-30 17:17:15 +0000 UTC" 
firstStartedPulling="2026-01-30 17:17:16.396813574 +0000 UTC m=+1373.303823043" lastFinishedPulling="2026-01-30 17:17:45.971001116 +0000 UTC m=+1402.878010585" observedRunningTime="2026-01-30 17:17:46.977776926 +0000 UTC m=+1403.884786395" watchObservedRunningTime="2026-01-30 17:17:46.984959839 +0000 UTC m=+1403.891969308" Jan 30 17:17:46 crc kubenswrapper[4712]: I0130 17:17:46.987686 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"00b15610-4e40-4788-a09b-226a392b19ac","Type":"ContainerStarted","Data":"75b1718879e6211f9bec723776a4c2f537b1973ab23d1d860606bc936b5041f6"} Jan 30 17:17:46 crc kubenswrapper[4712]: I0130 17:17:46.998680 4712 scope.go:117] "RemoveContainer" containerID="13ae25ae0ef25990774e239cac23a8823334e861a162e3c9700b7555ca6e960c" Jan 30 17:17:47 crc kubenswrapper[4712]: I0130 17:17:47.013291 4712 generic.go:334] "Generic (PLEG): container finished" podID="d6013c6b-ae4f-4632-917e-672f5a538653" containerID="a41ba1bfe995ca4f61a819c738c715dc7a7510b78fec850c9885e97c256a6365" exitCode=1 Jan 30 17:17:47 crc kubenswrapper[4712]: I0130 17:17:47.013332 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-778dc6dbc4-rwjl5" event={"ID":"d6013c6b-ae4f-4632-917e-672f5a538653","Type":"ContainerDied","Data":"a41ba1bfe995ca4f61a819c738c715dc7a7510b78fec850c9885e97c256a6365"} Jan 30 17:17:47 crc kubenswrapper[4712]: I0130 17:17:47.013952 4712 scope.go:117] "RemoveContainer" containerID="a41ba1bfe995ca4f61a819c738c715dc7a7510b78fec850c9885e97c256a6365" Jan 30 17:17:47 crc kubenswrapper[4712]: E0130 17:17:47.014159 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-778dc6dbc4-rwjl5_openstack(d6013c6b-ae4f-4632-917e-672f5a538653)\"" pod="openstack/heat-cfnapi-778dc6dbc4-rwjl5" podUID="d6013c6b-ae4f-4632-917e-672f5a538653" Jan 30 17:17:47 crc kubenswrapper[4712]: I0130 17:17:47.046770 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-t4jhw"] Jan 30 17:17:47 crc kubenswrapper[4712]: I0130 17:17:47.051532 4712 scope.go:117] "RemoveContainer" containerID="0784a0e17bee14581d5f343ddb3407c96d0c761dba6fade36f10aea0ecdabcef" Jan 30 17:17:47 crc kubenswrapper[4712]: I0130 17:17:47.062024 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-t4jhw"] Jan 30 17:17:47 crc kubenswrapper[4712]: I0130 17:17:47.162958 4712 scope.go:117] "RemoveContainer" containerID="88b20540161bae9fd09f9b9cc7efb656fbfd6ae58d43e536fa4811ce0e6091e5" Jan 30 17:17:47 crc kubenswrapper[4712]: E0130 17:17:47.270314 4712 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod128af9ea_eb98_4631_9e61_af1a9d26e246.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod128af9ea_eb98_4631_9e61_af1a9d26e246.slice/crio-26d61a699d389e54320d94d2d64164245740ced48e08223cb1dca68b0ccd55a0\": RecentStats: unable to find data in memory cache]" Jan 30 17:17:47 crc kubenswrapper[4712]: I0130 17:17:47.814694 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="128af9ea-eb98-4631-9e61-af1a9d26e246" path="/var/lib/kubelet/pods/128af9ea-eb98-4631-9e61-af1a9d26e246/volumes" Jan 30 17:17:48 crc kubenswrapper[4712]: I0130 17:17:48.026765 4712 
scope.go:117] "RemoveContainer" containerID="9e69efffc2bf89adf636d42bfc255084842199bff2d3030a88740404ee73a337" Jan 30 17:17:48 crc kubenswrapper[4712]: E0130 17:17:48.027047 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-56f4484db-n2zkj_openstack(03d2e846-1967-4fce-8926-929318331866)\"" pod="openstack/heat-api-56f4484db-n2zkj" podUID="03d2e846-1967-4fce-8926-929318331866" Jan 30 17:17:48 crc kubenswrapper[4712]: I0130 17:17:48.029097 4712 scope.go:117] "RemoveContainer" containerID="a41ba1bfe995ca4f61a819c738c715dc7a7510b78fec850c9885e97c256a6365" Jan 30 17:17:48 crc kubenswrapper[4712]: E0130 17:17:48.029329 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-778dc6dbc4-rwjl5_openstack(d6013c6b-ae4f-4632-917e-672f5a538653)\"" pod="openstack/heat-cfnapi-778dc6dbc4-rwjl5" podUID="d6013c6b-ae4f-4632-917e-672f5a538653" Jan 30 17:17:50 crc kubenswrapper[4712]: I0130 17:17:50.072493 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"00b15610-4e40-4788-a09b-226a392b19ac","Type":"ContainerStarted","Data":"0a9437a62a013f05b147b4a5f8aa5e2a92d9820b4b175a040d4b42032b397f87"} Jan 30 17:17:50 crc kubenswrapper[4712]: I0130 17:17:50.073161 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="00b15610-4e40-4788-a09b-226a392b19ac" containerName="ceilometer-central-agent" containerID="cri-o://31919142cf3008c753e7f6d8f61bb7b9f3db314c329a996e1eb03a519c7f835f" gracePeriod=30 Jan 30 17:17:50 crc kubenswrapper[4712]: I0130 17:17:50.073414 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 17:17:50 crc kubenswrapper[4712]: I0130 17:17:50.073651 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="00b15610-4e40-4788-a09b-226a392b19ac" containerName="proxy-httpd" containerID="cri-o://0a9437a62a013f05b147b4a5f8aa5e2a92d9820b4b175a040d4b42032b397f87" gracePeriod=30 Jan 30 17:17:50 crc kubenswrapper[4712]: I0130 17:17:50.073692 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="00b15610-4e40-4788-a09b-226a392b19ac" containerName="sg-core" containerID="cri-o://75b1718879e6211f9bec723776a4c2f537b1973ab23d1d860606bc936b5041f6" gracePeriod=30 Jan 30 17:17:50 crc kubenswrapper[4712]: I0130 17:17:50.073727 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="00b15610-4e40-4788-a09b-226a392b19ac" containerName="ceilometer-notification-agent" containerID="cri-o://dcb6f04206ac13ae1fb34ab14cc89057add6d4e56d6ece0f87e3f832ad0e277e" gracePeriod=30 Jan 30 17:17:50 crc kubenswrapper[4712]: I0130 17:17:50.114786 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.197721792 podStartE2EDuration="14.114742062s" podCreationTimestamp="2026-01-30 17:17:36 +0000 UTC" firstStartedPulling="2026-01-30 17:17:37.946317131 +0000 UTC m=+1394.853326600" lastFinishedPulling="2026-01-30 17:17:48.863337391 +0000 UTC m=+1405.770346870" observedRunningTime="2026-01-30 17:17:50.099956276 +0000 UTC m=+1407.006966025" watchObservedRunningTime="2026-01-30 
17:17:50.114742062 +0000 UTC m=+1407.021751531" Jan 30 17:17:51 crc kubenswrapper[4712]: I0130 17:17:51.082677 4712 generic.go:334] "Generic (PLEG): container finished" podID="00b15610-4e40-4788-a09b-226a392b19ac" containerID="0a9437a62a013f05b147b4a5f8aa5e2a92d9820b4b175a040d4b42032b397f87" exitCode=0 Jan 30 17:17:51 crc kubenswrapper[4712]: I0130 17:17:51.082952 4712 generic.go:334] "Generic (PLEG): container finished" podID="00b15610-4e40-4788-a09b-226a392b19ac" containerID="75b1718879e6211f9bec723776a4c2f537b1973ab23d1d860606bc936b5041f6" exitCode=2 Jan 30 17:17:51 crc kubenswrapper[4712]: I0130 17:17:51.082961 4712 generic.go:334] "Generic (PLEG): container finished" podID="00b15610-4e40-4788-a09b-226a392b19ac" containerID="dcb6f04206ac13ae1fb34ab14cc89057add6d4e56d6ece0f87e3f832ad0e277e" exitCode=0 Jan 30 17:17:51 crc kubenswrapper[4712]: I0130 17:17:51.082721 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"00b15610-4e40-4788-a09b-226a392b19ac","Type":"ContainerDied","Data":"0a9437a62a013f05b147b4a5f8aa5e2a92d9820b4b175a040d4b42032b397f87"} Jan 30 17:17:51 crc kubenswrapper[4712]: I0130 17:17:51.082997 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"00b15610-4e40-4788-a09b-226a392b19ac","Type":"ContainerDied","Data":"75b1718879e6211f9bec723776a4c2f537b1973ab23d1d860606bc936b5041f6"} Jan 30 17:17:51 crc kubenswrapper[4712]: I0130 17:17:51.083011 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"00b15610-4e40-4788-a09b-226a392b19ac","Type":"ContainerDied","Data":"dcb6f04206ac13ae1fb34ab14cc89057add6d4e56d6ece0f87e3f832ad0e277e"} Jan 30 17:17:51 crc kubenswrapper[4712]: I0130 17:17:51.628818 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-778dc6dbc4-rwjl5" Jan 30 17:17:51 crc kubenswrapper[4712]: I0130 17:17:51.629561 4712 scope.go:117] "RemoveContainer" containerID="a41ba1bfe995ca4f61a819c738c715dc7a7510b78fec850c9885e97c256a6365" Jan 30 17:17:51 crc kubenswrapper[4712]: E0130 17:17:51.629761 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-778dc6dbc4-rwjl5_openstack(d6013c6b-ae4f-4632-917e-672f5a538653)\"" pod="openstack/heat-cfnapi-778dc6dbc4-rwjl5" podUID="d6013c6b-ae4f-4632-917e-672f5a538653" Jan 30 17:17:51 crc kubenswrapper[4712]: I0130 17:17:51.711117 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-68c577d787-bljqj" Jan 30 17:17:51 crc kubenswrapper[4712]: I0130 17:17:51.755610 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-56f4484db-n2zkj" Jan 30 17:17:51 crc kubenswrapper[4712]: I0130 17:17:51.756351 4712 scope.go:117] "RemoveContainer" containerID="9e69efffc2bf89adf636d42bfc255084842199bff2d3030a88740404ee73a337" Jan 30 17:17:51 crc kubenswrapper[4712]: E0130 17:17:51.756651 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-56f4484db-n2zkj_openstack(03d2e846-1967-4fce-8926-929318331866)\"" pod="openstack/heat-api-56f4484db-n2zkj" podUID="03d2e846-1967-4fce-8926-929318331866" Jan 30 17:17:51 crc kubenswrapper[4712]: I0130 17:17:51.762681 4712 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/heat-engine-75595bd865-rqm2l"] Jan 30 17:17:51 crc kubenswrapper[4712]: I0130 17:17:51.767687 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-75595bd865-rqm2l" podUID="e4ca8cf3-8ef1-4170-815e-15c4ce5826f9" containerName="heat-engine" containerID="cri-o://2c28bbd68b2f2862e4fd6d47647956ae1703b77f72bc18cdd3fbabc5f15628e7" gracePeriod=60 Jan 30 17:17:51 crc kubenswrapper[4712]: E0130 17:17:51.778625 4712 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2c28bbd68b2f2862e4fd6d47647956ae1703b77f72bc18cdd3fbabc5f15628e7" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 30 17:17:51 crc kubenswrapper[4712]: E0130 17:17:51.784979 4712 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2c28bbd68b2f2862e4fd6d47647956ae1703b77f72bc18cdd3fbabc5f15628e7" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 30 17:17:51 crc kubenswrapper[4712]: E0130 17:17:51.786501 4712 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2c28bbd68b2f2862e4fd6d47647956ae1703b77f72bc18cdd3fbabc5f15628e7" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 30 17:17:51 crc kubenswrapper[4712]: E0130 17:17:51.786676 4712 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-75595bd865-rqm2l" podUID="e4ca8cf3-8ef1-4170-815e-15c4ce5826f9" containerName="heat-engine" Jan 30 17:17:51 crc kubenswrapper[4712]: I0130 17:17:51.947045 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="adaaf313-4d60-4bbb-b4a9-8e0faddc265f" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.188:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 17:17:52 crc kubenswrapper[4712]: I0130 17:17:52.183151 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-679854b776-gmq67" Jan 30 17:17:52 crc kubenswrapper[4712]: I0130 17:17:52.245015 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-56f4484db-n2zkj"] Jan 30 17:17:52 crc kubenswrapper[4712]: I0130 17:17:52.428198 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-7ff85c4bb5-kfdkk" Jan 30 17:17:52 crc kubenswrapper[4712]: I0130 17:17:52.825466 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-56f4484db-n2zkj" Jan 30 17:17:52 crc kubenswrapper[4712]: I0130 17:17:52.874837 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rb67p\" (UniqueName: \"kubernetes.io/projected/03d2e846-1967-4fce-8926-929318331866-kube-api-access-rb67p\") pod \"03d2e846-1967-4fce-8926-929318331866\" (UID: \"03d2e846-1967-4fce-8926-929318331866\") " Jan 30 17:17:52 crc kubenswrapper[4712]: I0130 17:17:52.875129 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03d2e846-1967-4fce-8926-929318331866-combined-ca-bundle\") pod \"03d2e846-1967-4fce-8926-929318331866\" (UID: \"03d2e846-1967-4fce-8926-929318331866\") " Jan 30 17:17:52 crc kubenswrapper[4712]: I0130 17:17:52.875181 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03d2e846-1967-4fce-8926-929318331866-config-data\") pod \"03d2e846-1967-4fce-8926-929318331866\" (UID: \"03d2e846-1967-4fce-8926-929318331866\") " Jan 30 17:17:52 crc kubenswrapper[4712]: I0130 17:17:52.875200 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/03d2e846-1967-4fce-8926-929318331866-config-data-custom\") pod \"03d2e846-1967-4fce-8926-929318331866\" (UID: \"03d2e846-1967-4fce-8926-929318331866\") " Jan 30 17:17:52 crc kubenswrapper[4712]: I0130 17:17:52.892075 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03d2e846-1967-4fce-8926-929318331866-kube-api-access-rb67p" (OuterVolumeSpecName: "kube-api-access-rb67p") pod "03d2e846-1967-4fce-8926-929318331866" (UID: "03d2e846-1967-4fce-8926-929318331866"). InnerVolumeSpecName "kube-api-access-rb67p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:17:52 crc kubenswrapper[4712]: I0130 17:17:52.897842 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03d2e846-1967-4fce-8926-929318331866-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "03d2e846-1967-4fce-8926-929318331866" (UID: "03d2e846-1967-4fce-8926-929318331866"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:52 crc kubenswrapper[4712]: I0130 17:17:52.953991 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03d2e846-1967-4fce-8926-929318331866-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03d2e846-1967-4fce-8926-929318331866" (UID: "03d2e846-1967-4fce-8926-929318331866"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:52 crc kubenswrapper[4712]: I0130 17:17:52.976979 4712 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/03d2e846-1967-4fce-8926-929318331866-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:52 crc kubenswrapper[4712]: I0130 17:17:52.977013 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rb67p\" (UniqueName: \"kubernetes.io/projected/03d2e846-1967-4fce-8926-929318331866-kube-api-access-rb67p\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:52 crc kubenswrapper[4712]: I0130 17:17:52.977022 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03d2e846-1967-4fce-8926-929318331866-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:52 crc kubenswrapper[4712]: I0130 17:17:52.997957 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03d2e846-1967-4fce-8926-929318331866-config-data" (OuterVolumeSpecName: "config-data") pod "03d2e846-1967-4fce-8926-929318331866" (UID: "03d2e846-1967-4fce-8926-929318331866"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:53 crc kubenswrapper[4712]: I0130 17:17:53.078374 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03d2e846-1967-4fce-8926-929318331866-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:53 crc kubenswrapper[4712]: I0130 17:17:53.098913 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-56f4484db-n2zkj" event={"ID":"03d2e846-1967-4fce-8926-929318331866","Type":"ContainerDied","Data":"ad1c5c1aa8972488196182137dda068b2a81767fd8207d1ef43c3cac680594d3"} Jan 30 17:17:53 crc kubenswrapper[4712]: I0130 17:17:53.098962 4712 scope.go:117] "RemoveContainer" containerID="9e69efffc2bf89adf636d42bfc255084842199bff2d3030a88740404ee73a337" Jan 30 17:17:53 crc kubenswrapper[4712]: I0130 17:17:53.099059 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-56f4484db-n2zkj" Jan 30 17:17:53 crc kubenswrapper[4712]: I0130 17:17:53.156028 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-5cfd5b7746-whcck" Jan 30 17:17:53 crc kubenswrapper[4712]: I0130 17:17:53.205923 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-56f4484db-n2zkj"] Jan 30 17:17:53 crc kubenswrapper[4712]: I0130 17:17:53.232738 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-56f4484db-n2zkj"] Jan 30 17:17:53 crc kubenswrapper[4712]: I0130 17:17:53.457341 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-64f88d7685-rpkd8" Jan 30 17:17:53 crc kubenswrapper[4712]: I0130 17:17:53.515188 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-778dc6dbc4-rwjl5"] Jan 30 17:17:53 crc kubenswrapper[4712]: I0130 17:17:53.862131 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03d2e846-1967-4fce-8926-929318331866" path="/var/lib/kubelet/pods/03d2e846-1967-4fce-8926-929318331866/volumes" Jan 30 17:17:53 crc kubenswrapper[4712]: I0130 17:17:53.949967 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="adaaf313-4d60-4bbb-b4a9-8e0faddc265f" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.188:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 17:17:54 crc kubenswrapper[4712]: I0130 17:17:54.108075 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-778dc6dbc4-rwjl5" event={"ID":"d6013c6b-ae4f-4632-917e-672f5a538653","Type":"ContainerDied","Data":"64ffcad49d4a9133243885272453e6ad35d4cedc59d71737f9883736ff4680d5"} Jan 30 17:17:54 crc kubenswrapper[4712]: I0130 17:17:54.108125 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64ffcad49d4a9133243885272453e6ad35d4cedc59d71737f9883736ff4680d5" Jan 30 17:17:54 crc kubenswrapper[4712]: I0130 17:17:54.135860 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-778dc6dbc4-rwjl5" Jan 30 17:17:54 crc kubenswrapper[4712]: I0130 17:17:54.217450 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hc6kz\" (UniqueName: \"kubernetes.io/projected/d6013c6b-ae4f-4632-917e-672f5a538653-kube-api-access-hc6kz\") pod \"d6013c6b-ae4f-4632-917e-672f5a538653\" (UID: \"d6013c6b-ae4f-4632-917e-672f5a538653\") " Jan 30 17:17:54 crc kubenswrapper[4712]: I0130 17:17:54.217520 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6013c6b-ae4f-4632-917e-672f5a538653-combined-ca-bundle\") pod \"d6013c6b-ae4f-4632-917e-672f5a538653\" (UID: \"d6013c6b-ae4f-4632-917e-672f5a538653\") " Jan 30 17:17:54 crc kubenswrapper[4712]: I0130 17:17:54.217596 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6013c6b-ae4f-4632-917e-672f5a538653-config-data-custom\") pod \"d6013c6b-ae4f-4632-917e-672f5a538653\" (UID: \"d6013c6b-ae4f-4632-917e-672f5a538653\") " Jan 30 17:17:54 crc kubenswrapper[4712]: I0130 17:17:54.217629 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6013c6b-ae4f-4632-917e-672f5a538653-config-data\") pod \"d6013c6b-ae4f-4632-917e-672f5a538653\" (UID: \"d6013c6b-ae4f-4632-917e-672f5a538653\") " Jan 30 17:17:54 crc kubenswrapper[4712]: I0130 17:17:54.231974 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6013c6b-ae4f-4632-917e-672f5a538653-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d6013c6b-ae4f-4632-917e-672f5a538653" (UID: "d6013c6b-ae4f-4632-917e-672f5a538653"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:54 crc kubenswrapper[4712]: I0130 17:17:54.242048 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6013c6b-ae4f-4632-917e-672f5a538653-kube-api-access-hc6kz" (OuterVolumeSpecName: "kube-api-access-hc6kz") pod "d6013c6b-ae4f-4632-917e-672f5a538653" (UID: "d6013c6b-ae4f-4632-917e-672f5a538653"). InnerVolumeSpecName "kube-api-access-hc6kz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:17:54 crc kubenswrapper[4712]: E0130 17:17:54.279113 4712 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2c28bbd68b2f2862e4fd6d47647956ae1703b77f72bc18cdd3fbabc5f15628e7" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 30 17:17:54 crc kubenswrapper[4712]: E0130 17:17:54.309332 4712 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2c28bbd68b2f2862e4fd6d47647956ae1703b77f72bc18cdd3fbabc5f15628e7" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 30 17:17:54 crc kubenswrapper[4712]: E0130 17:17:54.320922 4712 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="2c28bbd68b2f2862e4fd6d47647956ae1703b77f72bc18cdd3fbabc5f15628e7" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 30 17:17:54 crc kubenswrapper[4712]: E0130 17:17:54.321006 4712 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-75595bd865-rqm2l" podUID="e4ca8cf3-8ef1-4170-815e-15c4ce5826f9" containerName="heat-engine" Jan 30 17:17:54 crc kubenswrapper[4712]: I0130 17:17:54.328408 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hc6kz\" (UniqueName: \"kubernetes.io/projected/d6013c6b-ae4f-4632-917e-672f5a538653-kube-api-access-hc6kz\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:54 crc kubenswrapper[4712]: I0130 17:17:54.328459 4712 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d6013c6b-ae4f-4632-917e-672f5a538653-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:54 crc kubenswrapper[4712]: I0130 17:17:54.350971 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6013c6b-ae4f-4632-917e-672f5a538653-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d6013c6b-ae4f-4632-917e-672f5a538653" (UID: "d6013c6b-ae4f-4632-917e-672f5a538653"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:54 crc kubenswrapper[4712]: I0130 17:17:54.370919 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6013c6b-ae4f-4632-917e-672f5a538653-config-data" (OuterVolumeSpecName: "config-data") pod "d6013c6b-ae4f-4632-917e-672f5a538653" (UID: "d6013c6b-ae4f-4632-917e-672f5a538653"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:54 crc kubenswrapper[4712]: I0130 17:17:54.430025 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6013c6b-ae4f-4632-917e-672f5a538653-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:54 crc kubenswrapper[4712]: I0130 17:17:54.430314 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6013c6b-ae4f-4632-917e-672f5a538653-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:55 crc kubenswrapper[4712]: I0130 17:17:55.108352 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-56f8b66d48-7wr47" podUID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused" Jan 30 17:17:55 crc kubenswrapper[4712]: I0130 17:17:55.134291 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-778dc6dbc4-rwjl5" Jan 30 17:17:55 crc kubenswrapper[4712]: I0130 17:17:55.177661 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-778dc6dbc4-rwjl5"] Jan 30 17:17:55 crc kubenswrapper[4712]: I0130 17:17:55.190491 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-778dc6dbc4-rwjl5"] Jan 30 17:17:55 crc kubenswrapper[4712]: I0130 17:17:55.355330 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-64655dbc44-pvj2c" podUID="6a28b495-ecf0-409e-9558-ee794a46dbd1" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.156:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.156:8443: connect: connection refused" Jan 30 17:17:55 crc kubenswrapper[4712]: I0130 17:17:55.812497 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6013c6b-ae4f-4632-917e-672f5a538653" path="/var/lib/kubelet/pods/d6013c6b-ae4f-4632-917e-672f5a538653/volumes" Jan 30 17:17:56 crc kubenswrapper[4712]: I0130 17:17:56.952039 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="adaaf313-4d60-4bbb-b4a9-8e0faddc265f" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.188:8776/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:17:56 crc kubenswrapper[4712]: I0130 17:17:56.963295 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 30 17:17:57 crc kubenswrapper[4712]: I0130 17:17:57.962126 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-75595bd865-rqm2l" Jan 30 17:17:58 crc kubenswrapper[4712]: I0130 17:17:58.105838 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e4ca8cf3-8ef1-4170-815e-15c4ce5826f9-config-data-custom\") pod \"e4ca8cf3-8ef1-4170-815e-15c4ce5826f9\" (UID: \"e4ca8cf3-8ef1-4170-815e-15c4ce5826f9\") " Jan 30 17:17:58 crc kubenswrapper[4712]: I0130 17:17:58.106004 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4ca8cf3-8ef1-4170-815e-15c4ce5826f9-config-data\") pod \"e4ca8cf3-8ef1-4170-815e-15c4ce5826f9\" (UID: \"e4ca8cf3-8ef1-4170-815e-15c4ce5826f9\") " Jan 30 17:17:58 crc kubenswrapper[4712]: I0130 17:17:58.106076 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjzvd\" (UniqueName: \"kubernetes.io/projected/e4ca8cf3-8ef1-4170-815e-15c4ce5826f9-kube-api-access-cjzvd\") pod \"e4ca8cf3-8ef1-4170-815e-15c4ce5826f9\" (UID: \"e4ca8cf3-8ef1-4170-815e-15c4ce5826f9\") " Jan 30 17:17:58 crc kubenswrapper[4712]: I0130 17:17:58.106113 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ca8cf3-8ef1-4170-815e-15c4ce5826f9-combined-ca-bundle\") pod \"e4ca8cf3-8ef1-4170-815e-15c4ce5826f9\" (UID: \"e4ca8cf3-8ef1-4170-815e-15c4ce5826f9\") " Jan 30 17:17:58 crc kubenswrapper[4712]: I0130 17:17:58.116810 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4ca8cf3-8ef1-4170-815e-15c4ce5826f9-kube-api-access-cjzvd" (OuterVolumeSpecName: "kube-api-access-cjzvd") pod "e4ca8cf3-8ef1-4170-815e-15c4ce5826f9" (UID: "e4ca8cf3-8ef1-4170-815e-15c4ce5826f9"). InnerVolumeSpecName "kube-api-access-cjzvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:17:58 crc kubenswrapper[4712]: I0130 17:17:58.119129 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4ca8cf3-8ef1-4170-815e-15c4ce5826f9-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e4ca8cf3-8ef1-4170-815e-15c4ce5826f9" (UID: "e4ca8cf3-8ef1-4170-815e-15c4ce5826f9"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:58 crc kubenswrapper[4712]: I0130 17:17:58.150669 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4ca8cf3-8ef1-4170-815e-15c4ce5826f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e4ca8cf3-8ef1-4170-815e-15c4ce5826f9" (UID: "e4ca8cf3-8ef1-4170-815e-15c4ce5826f9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:58 crc kubenswrapper[4712]: I0130 17:17:58.182696 4712 generic.go:334] "Generic (PLEG): container finished" podID="e4ca8cf3-8ef1-4170-815e-15c4ce5826f9" containerID="2c28bbd68b2f2862e4fd6d47647956ae1703b77f72bc18cdd3fbabc5f15628e7" exitCode=0 Jan 30 17:17:58 crc kubenswrapper[4712]: I0130 17:17:58.182774 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-75595bd865-rqm2l" event={"ID":"e4ca8cf3-8ef1-4170-815e-15c4ce5826f9","Type":"ContainerDied","Data":"2c28bbd68b2f2862e4fd6d47647956ae1703b77f72bc18cdd3fbabc5f15628e7"} Jan 30 17:17:58 crc kubenswrapper[4712]: I0130 17:17:58.182826 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-75595bd865-rqm2l" event={"ID":"e4ca8cf3-8ef1-4170-815e-15c4ce5826f9","Type":"ContainerDied","Data":"dbeb4ddeb91abdeac4c39af5a3cb64d139629ea3eb31d665f36b826982aaff07"} Jan 30 17:17:58 crc kubenswrapper[4712]: I0130 17:17:58.182850 4712 scope.go:117] "RemoveContainer" containerID="2c28bbd68b2f2862e4fd6d47647956ae1703b77f72bc18cdd3fbabc5f15628e7" Jan 30 17:17:58 crc kubenswrapper[4712]: I0130 17:17:58.183045 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-75595bd865-rqm2l" Jan 30 17:17:58 crc kubenswrapper[4712]: I0130 17:17:58.214980 4712 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e4ca8cf3-8ef1-4170-815e-15c4ce5826f9-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:58 crc kubenswrapper[4712]: I0130 17:17:58.215194 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjzvd\" (UniqueName: \"kubernetes.io/projected/e4ca8cf3-8ef1-4170-815e-15c4ce5826f9-kube-api-access-cjzvd\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:58 crc kubenswrapper[4712]: I0130 17:17:58.215210 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e4ca8cf3-8ef1-4170-815e-15c4ce5826f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:58 crc kubenswrapper[4712]: I0130 17:17:58.242949 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e4ca8cf3-8ef1-4170-815e-15c4ce5826f9-config-data" (OuterVolumeSpecName: "config-data") pod "e4ca8cf3-8ef1-4170-815e-15c4ce5826f9" (UID: "e4ca8cf3-8ef1-4170-815e-15c4ce5826f9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:58 crc kubenswrapper[4712]: I0130 17:17:58.297460 4712 scope.go:117] "RemoveContainer" containerID="2c28bbd68b2f2862e4fd6d47647956ae1703b77f72bc18cdd3fbabc5f15628e7" Jan 30 17:17:58 crc kubenswrapper[4712]: E0130 17:17:58.298013 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c28bbd68b2f2862e4fd6d47647956ae1703b77f72bc18cdd3fbabc5f15628e7\": container with ID starting with 2c28bbd68b2f2862e4fd6d47647956ae1703b77f72bc18cdd3fbabc5f15628e7 not found: ID does not exist" containerID="2c28bbd68b2f2862e4fd6d47647956ae1703b77f72bc18cdd3fbabc5f15628e7" Jan 30 17:17:58 crc kubenswrapper[4712]: I0130 17:17:58.298073 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c28bbd68b2f2862e4fd6d47647956ae1703b77f72bc18cdd3fbabc5f15628e7"} err="failed to get container status \"2c28bbd68b2f2862e4fd6d47647956ae1703b77f72bc18cdd3fbabc5f15628e7\": rpc error: code = NotFound desc = could not find container \"2c28bbd68b2f2862e4fd6d47647956ae1703b77f72bc18cdd3fbabc5f15628e7\": container with ID starting with 2c28bbd68b2f2862e4fd6d47647956ae1703b77f72bc18cdd3fbabc5f15628e7 not found: ID does not exist" Jan 30 17:17:58 crc kubenswrapper[4712]: I0130 17:17:58.316884 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e4ca8cf3-8ef1-4170-815e-15c4ce5826f9-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:58 crc kubenswrapper[4712]: I0130 17:17:58.520567 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-75595bd865-rqm2l"] Jan 30 17:17:58 crc kubenswrapper[4712]: I0130 17:17:58.530187 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-75595bd865-rqm2l"] Jan 30 17:17:58 crc kubenswrapper[4712]: I0130 17:17:58.954984 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="adaaf313-4d60-4bbb-b4a9-8e0faddc265f" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.188:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 17:17:59 crc kubenswrapper[4712]: I0130 17:17:59.228714 4712 generic.go:334] "Generic (PLEG): container finished" podID="00b15610-4e40-4788-a09b-226a392b19ac" containerID="31919142cf3008c753e7f6d8f61bb7b9f3db314c329a996e1eb03a519c7f835f" exitCode=0 Jan 30 17:17:59 crc kubenswrapper[4712]: I0130 17:17:59.228813 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"00b15610-4e40-4788-a09b-226a392b19ac","Type":"ContainerDied","Data":"31919142cf3008c753e7f6d8f61bb7b9f3db314c329a996e1eb03a519c7f835f"} Jan 30 17:17:59 crc kubenswrapper[4712]: I0130 17:17:59.489958 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 17:17:59 crc kubenswrapper[4712]: I0130 17:17:59.657642 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00b15610-4e40-4788-a09b-226a392b19ac-config-data\") pod \"00b15610-4e40-4788-a09b-226a392b19ac\" (UID: \"00b15610-4e40-4788-a09b-226a392b19ac\") " Jan 30 17:17:59 crc kubenswrapper[4712]: I0130 17:17:59.657734 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2fvm\" (UniqueName: \"kubernetes.io/projected/00b15610-4e40-4788-a09b-226a392b19ac-kube-api-access-t2fvm\") pod \"00b15610-4e40-4788-a09b-226a392b19ac\" (UID: \"00b15610-4e40-4788-a09b-226a392b19ac\") " Jan 30 17:17:59 crc kubenswrapper[4712]: I0130 17:17:59.657844 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00b15610-4e40-4788-a09b-226a392b19ac-combined-ca-bundle\") pod \"00b15610-4e40-4788-a09b-226a392b19ac\" (UID: \"00b15610-4e40-4788-a09b-226a392b19ac\") " Jan 30 17:17:59 crc kubenswrapper[4712]: I0130 17:17:59.657876 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00b15610-4e40-4788-a09b-226a392b19ac-scripts\") pod \"00b15610-4e40-4788-a09b-226a392b19ac\" (UID: \"00b15610-4e40-4788-a09b-226a392b19ac\") " Jan 30 17:17:59 crc kubenswrapper[4712]: I0130 17:17:59.657957 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/00b15610-4e40-4788-a09b-226a392b19ac-run-httpd\") pod \"00b15610-4e40-4788-a09b-226a392b19ac\" (UID: \"00b15610-4e40-4788-a09b-226a392b19ac\") " Jan 30 17:17:59 crc kubenswrapper[4712]: I0130 17:17:59.658031 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/00b15610-4e40-4788-a09b-226a392b19ac-log-httpd\") pod \"00b15610-4e40-4788-a09b-226a392b19ac\" (UID: \"00b15610-4e40-4788-a09b-226a392b19ac\") " Jan 30 17:17:59 crc kubenswrapper[4712]: I0130 17:17:59.658082 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/00b15610-4e40-4788-a09b-226a392b19ac-sg-core-conf-yaml\") pod \"00b15610-4e40-4788-a09b-226a392b19ac\" (UID: \"00b15610-4e40-4788-a09b-226a392b19ac\") " Jan 30 17:17:59 crc kubenswrapper[4712]: I0130 17:17:59.663009 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00b15610-4e40-4788-a09b-226a392b19ac-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "00b15610-4e40-4788-a09b-226a392b19ac" (UID: "00b15610-4e40-4788-a09b-226a392b19ac"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:17:59 crc kubenswrapper[4712]: I0130 17:17:59.664248 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00b15610-4e40-4788-a09b-226a392b19ac-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "00b15610-4e40-4788-a09b-226a392b19ac" (UID: "00b15610-4e40-4788-a09b-226a392b19ac"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:17:59 crc kubenswrapper[4712]: I0130 17:17:59.677437 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00b15610-4e40-4788-a09b-226a392b19ac-kube-api-access-t2fvm" (OuterVolumeSpecName: "kube-api-access-t2fvm") pod "00b15610-4e40-4788-a09b-226a392b19ac" (UID: "00b15610-4e40-4788-a09b-226a392b19ac"). InnerVolumeSpecName "kube-api-access-t2fvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:17:59 crc kubenswrapper[4712]: I0130 17:17:59.693024 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00b15610-4e40-4788-a09b-226a392b19ac-scripts" (OuterVolumeSpecName: "scripts") pod "00b15610-4e40-4788-a09b-226a392b19ac" (UID: "00b15610-4e40-4788-a09b-226a392b19ac"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:59 crc kubenswrapper[4712]: I0130 17:17:59.760256 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2fvm\" (UniqueName: \"kubernetes.io/projected/00b15610-4e40-4788-a09b-226a392b19ac-kube-api-access-t2fvm\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:59 crc kubenswrapper[4712]: I0130 17:17:59.760303 4712 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00b15610-4e40-4788-a09b-226a392b19ac-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:59 crc kubenswrapper[4712]: I0130 17:17:59.760316 4712 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/00b15610-4e40-4788-a09b-226a392b19ac-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:59 crc kubenswrapper[4712]: I0130 17:17:59.760327 4712 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/00b15610-4e40-4788-a09b-226a392b19ac-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:59 crc kubenswrapper[4712]: I0130 17:17:59.829076 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4ca8cf3-8ef1-4170-815e-15c4ce5826f9" path="/var/lib/kubelet/pods/e4ca8cf3-8ef1-4170-815e-15c4ce5826f9/volumes" Jan 30 17:17:59 crc kubenswrapper[4712]: I0130 17:17:59.893337 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00b15610-4e40-4788-a09b-226a392b19ac-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "00b15610-4e40-4788-a09b-226a392b19ac" (UID: "00b15610-4e40-4788-a09b-226a392b19ac"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:59 crc kubenswrapper[4712]: I0130 17:17:59.937630 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00b15610-4e40-4788-a09b-226a392b19ac-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "00b15610-4e40-4788-a09b-226a392b19ac" (UID: "00b15610-4e40-4788-a09b-226a392b19ac"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:59 crc kubenswrapper[4712]: I0130 17:17:59.950159 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00b15610-4e40-4788-a09b-226a392b19ac-config-data" (OuterVolumeSpecName: "config-data") pod "00b15610-4e40-4788-a09b-226a392b19ac" (UID: "00b15610-4e40-4788-a09b-226a392b19ac"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:17:59 crc kubenswrapper[4712]: I0130 17:17:59.967963 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00b15610-4e40-4788-a09b-226a392b19ac-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:59 crc kubenswrapper[4712]: I0130 17:17:59.967991 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/00b15610-4e40-4788-a09b-226a392b19ac-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:17:59 crc kubenswrapper[4712]: I0130 17:17:59.968000 4712 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/00b15610-4e40-4788-a09b-226a392b19ac-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.247236 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"00b15610-4e40-4788-a09b-226a392b19ac","Type":"ContainerDied","Data":"1e44399569a0d55b90247708b7765e568f78a6133ac43e2775b2ef4615472cd7"} Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.247304 4712 scope.go:117] "RemoveContainer" containerID="0a9437a62a013f05b147b4a5f8aa5e2a92d9820b4b175a040d4b42032b397f87" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.247517 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.286006 4712 scope.go:117] "RemoveContainer" containerID="75b1718879e6211f9bec723776a4c2f537b1973ab23d1d860606bc936b5041f6" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.301717 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.334551 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.351132 4712 scope.go:117] "RemoveContainer" containerID="dcb6f04206ac13ae1fb34ab14cc89057add6d4e56d6ece0f87e3f832ad0e277e" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.371858 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 17:18:00 crc kubenswrapper[4712]: E0130 17:18:00.372314 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4ca8cf3-8ef1-4170-815e-15c4ce5826f9" containerName="heat-engine" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.372330 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4ca8cf3-8ef1-4170-815e-15c4ce5826f9" containerName="heat-engine" Jan 30 17:18:00 crc kubenswrapper[4712]: E0130 17:18:00.372339 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00b15610-4e40-4788-a09b-226a392b19ac" containerName="ceilometer-central-agent" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.372347 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="00b15610-4e40-4788-a09b-226a392b19ac" containerName="ceilometer-central-agent" Jan 30 17:18:00 crc kubenswrapper[4712]: E0130 17:18:00.372359 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="128af9ea-eb98-4631-9e61-af1a9d26e246" containerName="dnsmasq-dns" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.372366 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="128af9ea-eb98-4631-9e61-af1a9d26e246" containerName="dnsmasq-dns" Jan 30 17:18:00 crc kubenswrapper[4712]: E0130 
17:18:00.372376 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03d2e846-1967-4fce-8926-929318331866" containerName="heat-api" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.372383 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="03d2e846-1967-4fce-8926-929318331866" containerName="heat-api" Jan 30 17:18:00 crc kubenswrapper[4712]: E0130 17:18:00.372390 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00b15610-4e40-4788-a09b-226a392b19ac" containerName="sg-core" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.372396 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="00b15610-4e40-4788-a09b-226a392b19ac" containerName="sg-core" Jan 30 17:18:00 crc kubenswrapper[4712]: E0130 17:18:00.372407 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6013c6b-ae4f-4632-917e-672f5a538653" containerName="heat-cfnapi" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.372413 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6013c6b-ae4f-4632-917e-672f5a538653" containerName="heat-cfnapi" Jan 30 17:18:00 crc kubenswrapper[4712]: E0130 17:18:00.372429 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6013c6b-ae4f-4632-917e-672f5a538653" containerName="heat-cfnapi" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.372435 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6013c6b-ae4f-4632-917e-672f5a538653" containerName="heat-cfnapi" Jan 30 17:18:00 crc kubenswrapper[4712]: E0130 17:18:00.372447 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00b15610-4e40-4788-a09b-226a392b19ac" containerName="proxy-httpd" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.372454 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="00b15610-4e40-4788-a09b-226a392b19ac" containerName="proxy-httpd" Jan 30 17:18:00 crc kubenswrapper[4712]: E0130 17:18:00.372470 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00b15610-4e40-4788-a09b-226a392b19ac" containerName="ceilometer-notification-agent" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.372477 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="00b15610-4e40-4788-a09b-226a392b19ac" containerName="ceilometer-notification-agent" Jan 30 17:18:00 crc kubenswrapper[4712]: E0130 17:18:00.372494 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="128af9ea-eb98-4631-9e61-af1a9d26e246" containerName="init" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.372499 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="128af9ea-eb98-4631-9e61-af1a9d26e246" containerName="init" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.372663 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="00b15610-4e40-4788-a09b-226a392b19ac" containerName="sg-core" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.372672 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="128af9ea-eb98-4631-9e61-af1a9d26e246" containerName="dnsmasq-dns" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.372685 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="00b15610-4e40-4788-a09b-226a392b19ac" containerName="proxy-httpd" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.372698 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="00b15610-4e40-4788-a09b-226a392b19ac" containerName="ceilometer-notification-agent" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 
17:18:00.372707 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4ca8cf3-8ef1-4170-815e-15c4ce5826f9" containerName="heat-engine" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.372713 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="00b15610-4e40-4788-a09b-226a392b19ac" containerName="ceilometer-central-agent" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.372729 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="03d2e846-1967-4fce-8926-929318331866" containerName="heat-api" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.372738 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="03d2e846-1967-4fce-8926-929318331866" containerName="heat-api" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.372748 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6013c6b-ae4f-4632-917e-672f5a538653" containerName="heat-cfnapi" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.372757 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6013c6b-ae4f-4632-917e-672f5a538653" containerName="heat-cfnapi" Jan 30 17:18:00 crc kubenswrapper[4712]: E0130 17:18:00.372947 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03d2e846-1967-4fce-8926-929318331866" containerName="heat-api" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.372955 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="03d2e846-1967-4fce-8926-929318331866" containerName="heat-api" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.374343 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.378362 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.379314 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.379484 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.432845 4712 scope.go:117] "RemoveContainer" containerID="31919142cf3008c753e7f6d8f61bb7b9f3db314c329a996e1eb03a519c7f835f" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.476965 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8d1c445c-7242-46a7-88de-707d58473c8f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8d1c445c-7242-46a7-88de-707d58473c8f\") " pod="openstack/ceilometer-0" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.477206 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d1c445c-7242-46a7-88de-707d58473c8f-scripts\") pod \"ceilometer-0\" (UID: \"8d1c445c-7242-46a7-88de-707d58473c8f\") " pod="openstack/ceilometer-0" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.477387 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69gj8\" (UniqueName: \"kubernetes.io/projected/8d1c445c-7242-46a7-88de-707d58473c8f-kube-api-access-69gj8\") pod \"ceilometer-0\" (UID: \"8d1c445c-7242-46a7-88de-707d58473c8f\") " pod="openstack/ceilometer-0" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.477452 
4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d1c445c-7242-46a7-88de-707d58473c8f-run-httpd\") pod \"ceilometer-0\" (UID: \"8d1c445c-7242-46a7-88de-707d58473c8f\") " pod="openstack/ceilometer-0" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.477531 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d1c445c-7242-46a7-88de-707d58473c8f-config-data\") pod \"ceilometer-0\" (UID: \"8d1c445c-7242-46a7-88de-707d58473c8f\") " pod="openstack/ceilometer-0" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.477833 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d1c445c-7242-46a7-88de-707d58473c8f-log-httpd\") pod \"ceilometer-0\" (UID: \"8d1c445c-7242-46a7-88de-707d58473c8f\") " pod="openstack/ceilometer-0" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.477953 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d1c445c-7242-46a7-88de-707d58473c8f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8d1c445c-7242-46a7-88de-707d58473c8f\") " pod="openstack/ceilometer-0" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.579873 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8d1c445c-7242-46a7-88de-707d58473c8f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8d1c445c-7242-46a7-88de-707d58473c8f\") " pod="openstack/ceilometer-0" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.579953 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d1c445c-7242-46a7-88de-707d58473c8f-scripts\") pod \"ceilometer-0\" (UID: \"8d1c445c-7242-46a7-88de-707d58473c8f\") " pod="openstack/ceilometer-0" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.580006 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69gj8\" (UniqueName: \"kubernetes.io/projected/8d1c445c-7242-46a7-88de-707d58473c8f-kube-api-access-69gj8\") pod \"ceilometer-0\" (UID: \"8d1c445c-7242-46a7-88de-707d58473c8f\") " pod="openstack/ceilometer-0" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.580036 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d1c445c-7242-46a7-88de-707d58473c8f-run-httpd\") pod \"ceilometer-0\" (UID: \"8d1c445c-7242-46a7-88de-707d58473c8f\") " pod="openstack/ceilometer-0" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.580075 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d1c445c-7242-46a7-88de-707d58473c8f-config-data\") pod \"ceilometer-0\" (UID: \"8d1c445c-7242-46a7-88de-707d58473c8f\") " pod="openstack/ceilometer-0" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.580141 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d1c445c-7242-46a7-88de-707d58473c8f-log-httpd\") pod \"ceilometer-0\" (UID: \"8d1c445c-7242-46a7-88de-707d58473c8f\") " pod="openstack/ceilometer-0" Jan 30 17:18:00 crc 
kubenswrapper[4712]: I0130 17:18:00.580178 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d1c445c-7242-46a7-88de-707d58473c8f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8d1c445c-7242-46a7-88de-707d58473c8f\") " pod="openstack/ceilometer-0" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.581005 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d1c445c-7242-46a7-88de-707d58473c8f-run-httpd\") pod \"ceilometer-0\" (UID: \"8d1c445c-7242-46a7-88de-707d58473c8f\") " pod="openstack/ceilometer-0" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.581014 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d1c445c-7242-46a7-88de-707d58473c8f-log-httpd\") pod \"ceilometer-0\" (UID: \"8d1c445c-7242-46a7-88de-707d58473c8f\") " pod="openstack/ceilometer-0" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.599338 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d1c445c-7242-46a7-88de-707d58473c8f-config-data\") pod \"ceilometer-0\" (UID: \"8d1c445c-7242-46a7-88de-707d58473c8f\") " pod="openstack/ceilometer-0" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.605645 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d1c445c-7242-46a7-88de-707d58473c8f-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8d1c445c-7242-46a7-88de-707d58473c8f\") " pod="openstack/ceilometer-0" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.609418 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8d1c445c-7242-46a7-88de-707d58473c8f-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8d1c445c-7242-46a7-88de-707d58473c8f\") " pod="openstack/ceilometer-0" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.615286 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d1c445c-7242-46a7-88de-707d58473c8f-scripts\") pod \"ceilometer-0\" (UID: \"8d1c445c-7242-46a7-88de-707d58473c8f\") " pod="openstack/ceilometer-0" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.616230 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69gj8\" (UniqueName: \"kubernetes.io/projected/8d1c445c-7242-46a7-88de-707d58473c8f-kube-api-access-69gj8\") pod \"ceilometer-0\" (UID: \"8d1c445c-7242-46a7-88de-707d58473c8f\") " pod="openstack/ceilometer-0" Jan 30 17:18:00 crc kubenswrapper[4712]: I0130 17:18:00.704834 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 17:18:01 crc kubenswrapper[4712]: I0130 17:18:01.496343 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 17:18:01 crc kubenswrapper[4712]: W0130 17:18:01.501413 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d1c445c_7242_46a7_88de_707d58473c8f.slice/crio-0e6e372372086f2994311d191574074df42d413dc76e45b806a96da0326280b6 WatchSource:0}: Error finding container 0e6e372372086f2994311d191574074df42d413dc76e45b806a96da0326280b6: Status 404 returned error can't find the container with id 0e6e372372086f2994311d191574074df42d413dc76e45b806a96da0326280b6 Jan 30 17:18:01 crc kubenswrapper[4712]: I0130 17:18:01.506476 4712 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 17:18:01 crc kubenswrapper[4712]: I0130 17:18:01.818969 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00b15610-4e40-4788-a09b-226a392b19ac" path="/var/lib/kubelet/pods/00b15610-4e40-4788-a09b-226a392b19ac/volumes" Jan 30 17:18:02 crc kubenswrapper[4712]: I0130 17:18:02.175451 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 17:18:02 crc kubenswrapper[4712]: I0130 17:18:02.277469 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d1c445c-7242-46a7-88de-707d58473c8f","Type":"ContainerStarted","Data":"0e6e372372086f2994311d191574074df42d413dc76e45b806a96da0326280b6"} Jan 30 17:18:02 crc kubenswrapper[4712]: I0130 17:18:02.533504 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 17:18:02 crc kubenswrapper[4712]: I0130 17:18:02.533838 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e2bc1f82-c383-4d0c-8346-3de0bb1a11d9" containerName="glance-log" containerID="cri-o://55c7779ed294aab7b328c07c7eb3bab66291697e1db3139b1953c930c941b9fa" gracePeriod=30 Jan 30 17:18:02 crc kubenswrapper[4712]: I0130 17:18:02.534123 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="e2bc1f82-c383-4d0c-8346-3de0bb1a11d9" containerName="glance-httpd" containerID="cri-o://17d9748dfc29f0d93829a519d709a6dc54f713414c4b13f981fee1b67535dad9" gracePeriod=30 Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.064035 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-78xzk"] Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.065369 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-78xzk" Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.081421 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-78xzk"] Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.147155 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9802a8ce-ca97-435d-b65a-1618358e986f-operator-scripts\") pod \"nova-api-db-create-78xzk\" (UID: \"9802a8ce-ca97-435d-b65a-1618358e986f\") " pod="openstack/nova-api-db-create-78xzk" Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.147950 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft6fh\" (UniqueName: \"kubernetes.io/projected/9802a8ce-ca97-435d-b65a-1618358e986f-kube-api-access-ft6fh\") pod \"nova-api-db-create-78xzk\" (UID: \"9802a8ce-ca97-435d-b65a-1618358e986f\") " pod="openstack/nova-api-db-create-78xzk" Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.189306 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-xd6p5"] Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.191265 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xd6p5" Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.206554 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-xd6p5"] Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.255123 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9802a8ce-ca97-435d-b65a-1618358e986f-operator-scripts\") pod \"nova-api-db-create-78xzk\" (UID: \"9802a8ce-ca97-435d-b65a-1618358e986f\") " pod="openstack/nova-api-db-create-78xzk" Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.255389 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ft6fh\" (UniqueName: \"kubernetes.io/projected/9802a8ce-ca97-435d-b65a-1618358e986f-kube-api-access-ft6fh\") pod \"nova-api-db-create-78xzk\" (UID: \"9802a8ce-ca97-435d-b65a-1618358e986f\") " pod="openstack/nova-api-db-create-78xzk" Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.255488 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c96912dc-64a4-4735-91b2-ff0d019b8aa3-operator-scripts\") pod \"nova-cell0-db-create-xd6p5\" (UID: \"c96912dc-64a4-4735-91b2-ff0d019b8aa3\") " pod="openstack/nova-cell0-db-create-xd6p5" Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.255637 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4qn8\" (UniqueName: \"kubernetes.io/projected/c96912dc-64a4-4735-91b2-ff0d019b8aa3-kube-api-access-r4qn8\") pod \"nova-cell0-db-create-xd6p5\" (UID: \"c96912dc-64a4-4735-91b2-ff0d019b8aa3\") " pod="openstack/nova-cell0-db-create-xd6p5" Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.256555 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9802a8ce-ca97-435d-b65a-1618358e986f-operator-scripts\") pod \"nova-api-db-create-78xzk\" (UID: \"9802a8ce-ca97-435d-b65a-1618358e986f\") " pod="openstack/nova-api-db-create-78xzk" Jan 30 17:18:03 crc 
kubenswrapper[4712]: I0130 17:18:03.311555 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ft6fh\" (UniqueName: \"kubernetes.io/projected/9802a8ce-ca97-435d-b65a-1618358e986f-kube-api-access-ft6fh\") pod \"nova-api-db-create-78xzk\" (UID: \"9802a8ce-ca97-435d-b65a-1618358e986f\") " pod="openstack/nova-api-db-create-78xzk" Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.322077 4712 generic.go:334] "Generic (PLEG): container finished" podID="e2bc1f82-c383-4d0c-8346-3de0bb1a11d9" containerID="55c7779ed294aab7b328c07c7eb3bab66291697e1db3139b1953c930c941b9fa" exitCode=143 Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.322149 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9","Type":"ContainerDied","Data":"55c7779ed294aab7b328c07c7eb3bab66291697e1db3139b1953c930c941b9fa"} Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.347206 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d1c445c-7242-46a7-88de-707d58473c8f","Type":"ContainerStarted","Data":"3f28c10702c7dfff95ed44c675eb510a354448328728df9c01f6db506a7af9dc"} Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.347261 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d1c445c-7242-46a7-88de-707d58473c8f","Type":"ContainerStarted","Data":"fec133130f48716d2c2b85c71c1a24f507671f65914cfe3066ac4f2f5e9328a3"} Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.354721 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-715d-account-create-update-jq2sd"] Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.356874 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c96912dc-64a4-4735-91b2-ff0d019b8aa3-operator-scripts\") pod \"nova-cell0-db-create-xd6p5\" (UID: \"c96912dc-64a4-4735-91b2-ff0d019b8aa3\") " pod="openstack/nova-cell0-db-create-xd6p5" Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.356957 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4qn8\" (UniqueName: \"kubernetes.io/projected/c96912dc-64a4-4735-91b2-ff0d019b8aa3-kube-api-access-r4qn8\") pod \"nova-cell0-db-create-xd6p5\" (UID: \"c96912dc-64a4-4735-91b2-ff0d019b8aa3\") " pod="openstack/nova-cell0-db-create-xd6p5" Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.358130 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c96912dc-64a4-4735-91b2-ff0d019b8aa3-operator-scripts\") pod \"nova-cell0-db-create-xd6p5\" (UID: \"c96912dc-64a4-4735-91b2-ff0d019b8aa3\") " pod="openstack/nova-cell0-db-create-xd6p5" Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.362039 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-715d-account-create-update-jq2sd"
Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.372459 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.400086 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4qn8\" (UniqueName: \"kubernetes.io/projected/c96912dc-64a4-4735-91b2-ff0d019b8aa3-kube-api-access-r4qn8\") pod \"nova-cell0-db-create-xd6p5\" (UID: \"c96912dc-64a4-4735-91b2-ff0d019b8aa3\") " pod="openstack/nova-cell0-db-create-xd6p5"
Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.421576 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-715d-account-create-update-jq2sd"]
Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.438691 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-78xzk"
Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.460039 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11d41c7b-df2e-492f-8126-1baa68733039-operator-scripts\") pod \"nova-api-715d-account-create-update-jq2sd\" (UID: \"11d41c7b-df2e-492f-8126-1baa68733039\") " pod="openstack/nova-api-715d-account-create-update-jq2sd"
Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.460207 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktk2p\" (UniqueName: \"kubernetes.io/projected/11d41c7b-df2e-492f-8126-1baa68733039-kube-api-access-ktk2p\") pod \"nova-api-715d-account-create-update-jq2sd\" (UID: \"11d41c7b-df2e-492f-8126-1baa68733039\") " pod="openstack/nova-api-715d-account-create-update-jq2sd"
Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.510770 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-hcvcv"]
Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.512151 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-hcvcv"
Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.523669 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xd6p5"
Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.537703 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-hcvcv"]
Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.567138 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bm9dr\" (UniqueName: \"kubernetes.io/projected/5d5a16c6-950f-48e7-b74e-60e6b6292839-kube-api-access-bm9dr\") pod \"nova-cell1-db-create-hcvcv\" (UID: \"5d5a16c6-950f-48e7-b74e-60e6b6292839\") " pod="openstack/nova-cell1-db-create-hcvcv"
Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.567240 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktk2p\" (UniqueName: \"kubernetes.io/projected/11d41c7b-df2e-492f-8126-1baa68733039-kube-api-access-ktk2p\") pod \"nova-api-715d-account-create-update-jq2sd\" (UID: \"11d41c7b-df2e-492f-8126-1baa68733039\") " pod="openstack/nova-api-715d-account-create-update-jq2sd"
Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.567278 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d5a16c6-950f-48e7-b74e-60e6b6292839-operator-scripts\") pod \"nova-cell1-db-create-hcvcv\" (UID: \"5d5a16c6-950f-48e7-b74e-60e6b6292839\") " pod="openstack/nova-cell1-db-create-hcvcv"
Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.567348 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11d41c7b-df2e-492f-8126-1baa68733039-operator-scripts\") pod \"nova-api-715d-account-create-update-jq2sd\" (UID: \"11d41c7b-df2e-492f-8126-1baa68733039\") " pod="openstack/nova-api-715d-account-create-update-jq2sd"
Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.568156 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11d41c7b-df2e-492f-8126-1baa68733039-operator-scripts\") pod \"nova-api-715d-account-create-update-jq2sd\" (UID: \"11d41c7b-df2e-492f-8126-1baa68733039\") " pod="openstack/nova-api-715d-account-create-update-jq2sd"
Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.599471 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktk2p\" (UniqueName: \"kubernetes.io/projected/11d41c7b-df2e-492f-8126-1baa68733039-kube-api-access-ktk2p\") pod \"nova-api-715d-account-create-update-jq2sd\" (UID: \"11d41c7b-df2e-492f-8126-1baa68733039\") " pod="openstack/nova-api-715d-account-create-update-jq2sd"
Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.668899 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bm9dr\" (UniqueName: \"kubernetes.io/projected/5d5a16c6-950f-48e7-b74e-60e6b6292839-kube-api-access-bm9dr\") pod \"nova-cell1-db-create-hcvcv\" (UID: \"5d5a16c6-950f-48e7-b74e-60e6b6292839\") " pod="openstack/nova-cell1-db-create-hcvcv"
Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.669236 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d5a16c6-950f-48e7-b74e-60e6b6292839-operator-scripts\") pod \"nova-cell1-db-create-hcvcv\" (UID: \"5d5a16c6-950f-48e7-b74e-60e6b6292839\") " pod="openstack/nova-cell1-db-create-hcvcv"
Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.670193 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d5a16c6-950f-48e7-b74e-60e6b6292839-operator-scripts\") pod \"nova-cell1-db-create-hcvcv\" (UID: \"5d5a16c6-950f-48e7-b74e-60e6b6292839\") " pod="openstack/nova-cell1-db-create-hcvcv"
Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.687809 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-60a1-account-create-update-46xff"]
Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.689341 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-60a1-account-create-update-46xff"
Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.692909 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret"
Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.699964 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bm9dr\" (UniqueName: \"kubernetes.io/projected/5d5a16c6-950f-48e7-b74e-60e6b6292839-kube-api-access-bm9dr\") pod \"nova-cell1-db-create-hcvcv\" (UID: \"5d5a16c6-950f-48e7-b74e-60e6b6292839\") " pod="openstack/nova-cell1-db-create-hcvcv"
Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.714414 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-715d-account-create-update-jq2sd"
Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.742509 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-60a1-account-create-update-46xff"]
Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.777265 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e500689a-cff5-4b5b-a031-a03709fb811d-operator-scripts\") pod \"nova-cell0-60a1-account-create-update-46xff\" (UID: \"e500689a-cff5-4b5b-a031-a03709fb811d\") " pod="openstack/nova-cell0-60a1-account-create-update-46xff"
Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.777370 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng8lc\" (UniqueName: \"kubernetes.io/projected/e500689a-cff5-4b5b-a031-a03709fb811d-kube-api-access-ng8lc\") pod \"nova-cell0-60a1-account-create-update-46xff\" (UID: \"e500689a-cff5-4b5b-a031-a03709fb811d\") " pod="openstack/nova-cell0-60a1-account-create-update-46xff"
Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.845853 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-hcvcv"
Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.871054 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-89d2-account-create-update-6kqnb"]
Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.872181 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-89d2-account-create-update-6kqnb"
Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.883771 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret"
Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.885446 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e500689a-cff5-4b5b-a031-a03709fb811d-operator-scripts\") pod \"nova-cell0-60a1-account-create-update-46xff\" (UID: \"e500689a-cff5-4b5b-a031-a03709fb811d\") " pod="openstack/nova-cell0-60a1-account-create-update-46xff"
Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.887003 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng8lc\" (UniqueName: \"kubernetes.io/projected/e500689a-cff5-4b5b-a031-a03709fb811d-kube-api-access-ng8lc\") pod \"nova-cell0-60a1-account-create-update-46xff\" (UID: \"e500689a-cff5-4b5b-a031-a03709fb811d\") " pod="openstack/nova-cell0-60a1-account-create-update-46xff"
Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.894153 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e500689a-cff5-4b5b-a031-a03709fb811d-operator-scripts\") pod \"nova-cell0-60a1-account-create-update-46xff\" (UID: \"e500689a-cff5-4b5b-a031-a03709fb811d\") " pod="openstack/nova-cell0-60a1-account-create-update-46xff"
Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.910428 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-89d2-account-create-update-6kqnb"]
Jan 30 17:18:03 crc kubenswrapper[4712]: I0130 17:18:03.982416 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng8lc\" (UniqueName: \"kubernetes.io/projected/e500689a-cff5-4b5b-a031-a03709fb811d-kube-api-access-ng8lc\") pod \"nova-cell0-60a1-account-create-update-46xff\" (UID: \"e500689a-cff5-4b5b-a031-a03709fb811d\") " pod="openstack/nova-cell0-60a1-account-create-update-46xff"
Jan 30 17:18:04 crc kubenswrapper[4712]: I0130 17:18:04.009010 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bh946\" (UniqueName: \"kubernetes.io/projected/a83c5d38-374a-4fd6-9f42-d4e39645b82a-kube-api-access-bh946\") pod \"nova-cell1-89d2-account-create-update-6kqnb\" (UID: \"a83c5d38-374a-4fd6-9f42-d4e39645b82a\") " pod="openstack/nova-cell1-89d2-account-create-update-6kqnb"
Jan 30 17:18:04 crc kubenswrapper[4712]: I0130 17:18:04.009079 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a83c5d38-374a-4fd6-9f42-d4e39645b82a-operator-scripts\") pod \"nova-cell1-89d2-account-create-update-6kqnb\" (UID: \"a83c5d38-374a-4fd6-9f42-d4e39645b82a\") " pod="openstack/nova-cell1-89d2-account-create-update-6kqnb"
Jan 30 17:18:04 crc kubenswrapper[4712]: I0130 17:18:04.057221 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-60a1-account-create-update-46xff"
Jan 30 17:18:04 crc kubenswrapper[4712]: I0130 17:18:04.112343 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bh946\" (UniqueName: \"kubernetes.io/projected/a83c5d38-374a-4fd6-9f42-d4e39645b82a-kube-api-access-bh946\") pod \"nova-cell1-89d2-account-create-update-6kqnb\" (UID: \"a83c5d38-374a-4fd6-9f42-d4e39645b82a\") " pod="openstack/nova-cell1-89d2-account-create-update-6kqnb"
Jan 30 17:18:04 crc kubenswrapper[4712]: I0130 17:18:04.112418 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a83c5d38-374a-4fd6-9f42-d4e39645b82a-operator-scripts\") pod \"nova-cell1-89d2-account-create-update-6kqnb\" (UID: \"a83c5d38-374a-4fd6-9f42-d4e39645b82a\") " pod="openstack/nova-cell1-89d2-account-create-update-6kqnb"
Jan 30 17:18:04 crc kubenswrapper[4712]: I0130 17:18:04.113341 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a83c5d38-374a-4fd6-9f42-d4e39645b82a-operator-scripts\") pod \"nova-cell1-89d2-account-create-update-6kqnb\" (UID: \"a83c5d38-374a-4fd6-9f42-d4e39645b82a\") " pod="openstack/nova-cell1-89d2-account-create-update-6kqnb"
Jan 30 17:18:04 crc kubenswrapper[4712]: I0130 17:18:04.159505 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bh946\" (UniqueName: \"kubernetes.io/projected/a83c5d38-374a-4fd6-9f42-d4e39645b82a-kube-api-access-bh946\") pod \"nova-cell1-89d2-account-create-update-6kqnb\" (UID: \"a83c5d38-374a-4fd6-9f42-d4e39645b82a\") " pod="openstack/nova-cell1-89d2-account-create-update-6kqnb"
Jan 30 17:18:04 crc kubenswrapper[4712]: I0130 17:18:04.243770 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-89d2-account-create-update-6kqnb"
Jan 30 17:18:04 crc kubenswrapper[4712]: I0130 17:18:04.809634 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-xd6p5"]
Jan 30 17:18:04 crc kubenswrapper[4712]: I0130 17:18:04.989047 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-715d-account-create-update-jq2sd"]
Jan 30 17:18:05 crc kubenswrapper[4712]: I0130 17:18:05.074622 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-56f8b66d48-7wr47" podUID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused"
Jan 30 17:18:05 crc kubenswrapper[4712]: I0130 17:18:05.075068 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-56f8b66d48-7wr47"
Jan 30 17:18:05 crc kubenswrapper[4712]: I0130 17:18:05.075904 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"8b23f706dbf8aa6538b8c9a023bfa2c07b9d28b0f58e8e9342cd27572ba0c0d2"} pod="openstack/horizon-56f8b66d48-7wr47" containerMessage="Container horizon failed startup probe, will be restarted"
Jan 30 17:18:05 crc kubenswrapper[4712]: I0130 17:18:05.076008 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-56f8b66d48-7wr47" podUID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerName="horizon" containerID="cri-o://8b23f706dbf8aa6538b8c9a023bfa2c07b9d28b0f58e8e9342cd27572ba0c0d2" gracePeriod=30
Jan 30 17:18:05 crc kubenswrapper[4712]: I0130 17:18:05.084407 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-78xzk"]
Jan 30 17:18:05 crc kubenswrapper[4712]: I0130 17:18:05.098124 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-hcvcv"]
Jan 30 17:18:05 crc kubenswrapper[4712]: I0130 17:18:05.167464 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-89d2-account-create-update-6kqnb"]
Jan 30 17:18:05 crc kubenswrapper[4712]: I0130 17:18:05.254506 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-60a1-account-create-update-46xff"]
Jan 30 17:18:05 crc kubenswrapper[4712]: I0130 17:18:05.354564 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-64655dbc44-pvj2c" podUID="6a28b495-ecf0-409e-9558-ee794a46dbd1" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.156:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.156:8443: connect: connection refused"
Jan 30 17:18:05 crc kubenswrapper[4712]: I0130 17:18:05.354643 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-64655dbc44-pvj2c"
Jan 30 17:18:05 crc kubenswrapper[4712]: I0130 17:18:05.355846 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"9af3d0805e3d6c8144d5e8f4ca5198b954ee80a23bb8c7ac20dd1a8994edf213"} pod="openstack/horizon-64655dbc44-pvj2c" containerMessage="Container horizon failed startup probe, will be restarted"
Jan 30 17:18:05 crc kubenswrapper[4712]: I0130 17:18:05.355886 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-64655dbc44-pvj2c" podUID="6a28b495-ecf0-409e-9558-ee794a46dbd1" containerName="horizon" containerID="cri-o://9af3d0805e3d6c8144d5e8f4ca5198b954ee80a23bb8c7ac20dd1a8994edf213" gracePeriod=30
Jan 30 17:18:05 crc kubenswrapper[4712]: I0130 17:18:05.397514 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xd6p5" event={"ID":"c96912dc-64a4-4735-91b2-ff0d019b8aa3","Type":"ContainerStarted","Data":"1a816fc67188ceb0908d5991229561e2860e239c71f90e9bd19eb576987549e7"}
Jan 30 17:18:05 crc kubenswrapper[4712]: I0130 17:18:05.411227 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-hcvcv" event={"ID":"5d5a16c6-950f-48e7-b74e-60e6b6292839","Type":"ContainerStarted","Data":"a23a4c0958788e20e6fa4eb97d8effb6dd4b0cc01b52e731f35785090de22686"}
Jan 30 17:18:05 crc kubenswrapper[4712]: I0130 17:18:05.442015 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-715d-account-create-update-jq2sd" event={"ID":"11d41c7b-df2e-492f-8126-1baa68733039","Type":"ContainerStarted","Data":"8299602940f011d50ed3f5f2b21d85b64a6ea118689c5a481a4a6a1b382292fb"}
Jan 30 17:18:05 crc kubenswrapper[4712]: I0130 17:18:05.455218 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-60a1-account-create-update-46xff" event={"ID":"e500689a-cff5-4b5b-a031-a03709fb811d","Type":"ContainerStarted","Data":"761d33b41faf60c521decb40770a088a805042acd8402745d85e3b9e4b265153"}
Jan 30 17:18:05 crc kubenswrapper[4712]: I0130 17:18:05.483928 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-78xzk" event={"ID":"9802a8ce-ca97-435d-b65a-1618358e986f","Type":"ContainerStarted","Data":"a9e4ae4cefb8fed9e6ed2ec28de61b56bef5caeef01215c400021aa9aca74b25"}
Jan 30 17:18:05 crc kubenswrapper[4712]: I0130 17:18:05.495273 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-89d2-account-create-update-6kqnb" event={"ID":"a83c5d38-374a-4fd6-9f42-d4e39645b82a","Type":"ContainerStarted","Data":"bf93d69ac0ebfae920c2fc8ba9206be4f54e55ffffd8680dd52c41455f8abf85"}
Jan 30 17:18:05 crc kubenswrapper[4712]: I0130 17:18:05.525164 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d1c445c-7242-46a7-88de-707d58473c8f","Type":"ContainerStarted","Data":"9090253ae25f410ab835f23ef9bdadf80e5733785e74c11b75186cbfc3237118"}
Jan 30 17:18:06 crc kubenswrapper[4712]: I0130 17:18:06.271203 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 17:18:06 crc kubenswrapper[4712]: I0130 17:18:06.272261 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 17:18:06 crc kubenswrapper[4712]: I0130 17:18:06.272384 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7"
Jan 30 17:18:06 crc kubenswrapper[4712]: I0130 17:18:06.272958 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2b2080500e3e21108518c785b6a9d42dc4c1501c9ea170a8ffe8ca230910ec5c"} pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 17:18:06 crc kubenswrapper[4712]: I0130 17:18:06.273079 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" containerID="cri-o://2b2080500e3e21108518c785b6a9d42dc4c1501c9ea170a8ffe8ca230910ec5c" gracePeriod=600
Jan 30 17:18:06 crc kubenswrapper[4712]: I0130 17:18:06.536426 4712 generic.go:334] "Generic (PLEG): container finished" podID="5d5a16c6-950f-48e7-b74e-60e6b6292839" containerID="e39d4ac63eb0ecfd0af243192550a0794f079b76e188700544f8d29ed946c213" exitCode=0
Jan 30 17:18:06 crc kubenswrapper[4712]: I0130 17:18:06.536531 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-hcvcv" event={"ID":"5d5a16c6-950f-48e7-b74e-60e6b6292839","Type":"ContainerDied","Data":"e39d4ac63eb0ecfd0af243192550a0794f079b76e188700544f8d29ed946c213"}
Jan 30 17:18:06 crc kubenswrapper[4712]: I0130 17:18:06.538684 4712 generic.go:334] "Generic (PLEG): container finished" podID="11d41c7b-df2e-492f-8126-1baa68733039" containerID="bb4b0e5720d1e9ed1dcc175c58957f32afd891c9f742c697864262c4a51c7e63" exitCode=0
Jan 30 17:18:06 crc kubenswrapper[4712]: I0130 17:18:06.538724 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-715d-account-create-update-jq2sd" event={"ID":"11d41c7b-df2e-492f-8126-1baa68733039","Type":"ContainerDied","Data":"bb4b0e5720d1e9ed1dcc175c58957f32afd891c9f742c697864262c4a51c7e63"}
Jan 30 17:18:06 crc kubenswrapper[4712]: I0130 17:18:06.541987 4712 generic.go:334] "Generic (PLEG): container finished" podID="e2bc1f82-c383-4d0c-8346-3de0bb1a11d9" containerID="17d9748dfc29f0d93829a519d709a6dc54f713414c4b13f981fee1b67535dad9" exitCode=0
Jan 30 17:18:06 crc kubenswrapper[4712]: I0130 17:18:06.542037 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9","Type":"ContainerDied","Data":"17d9748dfc29f0d93829a519d709a6dc54f713414c4b13f981fee1b67535dad9"}
Jan 30 17:18:06 crc kubenswrapper[4712]: I0130 17:18:06.545397 4712 generic.go:334] "Generic (PLEG): container finished" podID="75ff6334-72a0-4748-bba6-0efb493c8033" containerID="2b2080500e3e21108518c785b6a9d42dc4c1501c9ea170a8ffe8ca230910ec5c" exitCode=0
Jan 30 17:18:06 crc kubenswrapper[4712]: I0130 17:18:06.545485 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerDied","Data":"2b2080500e3e21108518c785b6a9d42dc4c1501c9ea170a8ffe8ca230910ec5c"}
Jan 30 17:18:06 crc kubenswrapper[4712]: I0130 17:18:06.545535 4712 scope.go:117] "RemoveContainer" containerID="65dbc6a56b610e6c479fb5dd8ad2aa9258f4202d2a0ef57103525088af93b4a2"
Jan 30 17:18:06 crc kubenswrapper[4712]: I0130 17:18:06.546869 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-60a1-account-create-update-46xff" event={"ID":"e500689a-cff5-4b5b-a031-a03709fb811d","Type":"ContainerStarted","Data":"f9597b3327b94333ca0b4e138b843698c5cb1c4a92db1a813842f407a5d1d8d7"}
Jan 30 17:18:06 crc kubenswrapper[4712]: I0130 17:18:06.548410 4712 generic.go:334] "Generic (PLEG): container finished" podID="9802a8ce-ca97-435d-b65a-1618358e986f" containerID="4d4d19ab8ff53bb55033e0bc2a56db9628d47ba3610f77f75406fee9f1fadc76" exitCode=0
Jan 30 17:18:06 crc kubenswrapper[4712]: I0130 17:18:06.548478 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-78xzk" event={"ID":"9802a8ce-ca97-435d-b65a-1618358e986f","Type":"ContainerDied","Data":"4d4d19ab8ff53bb55033e0bc2a56db9628d47ba3610f77f75406fee9f1fadc76"}
Jan 30 17:18:06 crc kubenswrapper[4712]: I0130 17:18:06.561736 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-89d2-account-create-update-6kqnb" event={"ID":"a83c5d38-374a-4fd6-9f42-d4e39645b82a","Type":"ContainerStarted","Data":"e8a403e45589c10bd99804c2ffa645dab7bec60cdf9f427c91f2a72241356e18"}
Jan 30 17:18:06 crc kubenswrapper[4712]: I0130 17:18:06.576718 4712 generic.go:334] "Generic (PLEG): container finished" podID="c96912dc-64a4-4735-91b2-ff0d019b8aa3" containerID="635ba5d0f7f08932037ddc74a516445665ad1b86aad2e0c42bc70a0071655376" exitCode=0
Jan 30 17:18:06 crc kubenswrapper[4712]: I0130 17:18:06.576768 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xd6p5" event={"ID":"c96912dc-64a4-4735-91b2-ff0d019b8aa3","Type":"ContainerDied","Data":"635ba5d0f7f08932037ddc74a516445665ad1b86aad2e0c42bc70a0071655376"}
Jan 30 17:18:06 crc kubenswrapper[4712]: I0130 17:18:06.612798 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-89d2-account-create-update-6kqnb" podStartSLOduration=3.612776436 podStartE2EDuration="3.612776436s" podCreationTimestamp="2026-01-30 17:18:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:18:06.610109352 +0000 UTC m=+1423.517118821" watchObservedRunningTime="2026-01-30 17:18:06.612776436 +0000 UTC m=+1423.519785905"
Jan 30 17:18:06 crc kubenswrapper[4712]: I0130 17:18:06.651195 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-60a1-account-create-update-46xff" podStartSLOduration=3.651174329 podStartE2EDuration="3.651174329s" podCreationTimestamp="2026-01-30 17:18:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:18:06.640837341 +0000 UTC m=+1423.547846810" watchObservedRunningTime="2026-01-30 17:18:06.651174329 +0000 UTC m=+1423.558183798"
Jan 30 17:18:07 crc kubenswrapper[4712]: I0130 17:18:07.526641 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 17:18:07 crc kubenswrapper[4712]: I0130 17:18:07.640267 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2bc1f82-c383-4d0c-8346-3de0bb1a11d9-public-tls-certs\") pod \"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9\" (UID: \"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9\") "
Jan 30 17:18:07 crc kubenswrapper[4712]: I0130 17:18:07.640336 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2bc1f82-c383-4d0c-8346-3de0bb1a11d9-combined-ca-bundle\") pod \"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9\" (UID: \"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9\") "
Jan 30 17:18:07 crc kubenswrapper[4712]: I0130 17:18:07.640358 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2bc1f82-c383-4d0c-8346-3de0bb1a11d9-logs\") pod \"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9\" (UID: \"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9\") "
Jan 30 17:18:07 crc kubenswrapper[4712]: I0130 17:18:07.640466 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2bc1f82-c383-4d0c-8346-3de0bb1a11d9-config-data\") pod \"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9\" (UID: \"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9\") "
Jan 30 17:18:07 crc kubenswrapper[4712]: I0130 17:18:07.640508 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2bc1f82-c383-4d0c-8346-3de0bb1a11d9-scripts\") pod \"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9\" (UID: \"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9\") "
Jan 30 17:18:07 crc kubenswrapper[4712]: I0130 17:18:07.640549 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e2bc1f82-c383-4d0c-8346-3de0bb1a11d9-httpd-run\") pod \"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9\" (UID: \"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9\") "
Jan 30 17:18:07 crc kubenswrapper[4712]: I0130 17:18:07.640647 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9\" (UID: \"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9\") "
Jan 30 17:18:07 crc kubenswrapper[4712]: I0130 17:18:07.640676 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvn5s\" (UniqueName: \"kubernetes.io/projected/e2bc1f82-c383-4d0c-8346-3de0bb1a11d9-kube-api-access-zvn5s\") pod \"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9\" (UID: \"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9\") "
Jan 30 17:18:07 crc kubenswrapper[4712]: I0130 17:18:07.656598 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2bc1f82-c383-4d0c-8346-3de0bb1a11d9-kube-api-access-zvn5s" (OuterVolumeSpecName: "kube-api-access-zvn5s") pod "e2bc1f82-c383-4d0c-8346-3de0bb1a11d9" (UID: "e2bc1f82-c383-4d0c-8346-3de0bb1a11d9"). InnerVolumeSpecName "kube-api-access-zvn5s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:18:07 crc kubenswrapper[4712]: I0130 17:18:07.661341 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2bc1f82-c383-4d0c-8346-3de0bb1a11d9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e2bc1f82-c383-4d0c-8346-3de0bb1a11d9" (UID: "e2bc1f82-c383-4d0c-8346-3de0bb1a11d9"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:18:07 crc kubenswrapper[4712]: I0130 17:18:07.675100 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2bc1f82-c383-4d0c-8346-3de0bb1a11d9-logs" (OuterVolumeSpecName: "logs") pod "e2bc1f82-c383-4d0c-8346-3de0bb1a11d9" (UID: "e2bc1f82-c383-4d0c-8346-3de0bb1a11d9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:18:07 crc kubenswrapper[4712]: I0130 17:18:07.711043 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2bc1f82-c383-4d0c-8346-3de0bb1a11d9-scripts" (OuterVolumeSpecName: "scripts") pod "e2bc1f82-c383-4d0c-8346-3de0bb1a11d9" (UID: "e2bc1f82-c383-4d0c-8346-3de0bb1a11d9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:18:07 crc kubenswrapper[4712]: I0130 17:18:07.738421 4712 generic.go:334] "Generic (PLEG): container finished" podID="a83c5d38-374a-4fd6-9f42-d4e39645b82a" containerID="e8a403e45589c10bd99804c2ffa645dab7bec60cdf9f427c91f2a72241356e18" exitCode=0
Jan 30 17:18:07 crc kubenswrapper[4712]: I0130 17:18:07.738516 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-89d2-account-create-update-6kqnb" event={"ID":"a83c5d38-374a-4fd6-9f42-d4e39645b82a","Type":"ContainerDied","Data":"e8a403e45589c10bd99804c2ffa645dab7bec60cdf9f427c91f2a72241356e18"}
Jan 30 17:18:07 crc kubenswrapper[4712]: I0130 17:18:07.757354 4712 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e2bc1f82-c383-4d0c-8346-3de0bb1a11d9-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 17:18:07 crc kubenswrapper[4712]: I0130 17:18:07.784227 4712 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e2bc1f82-c383-4d0c-8346-3de0bb1a11d9-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 30 17:18:07 crc kubenswrapper[4712]: I0130 17:18:07.784261 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvn5s\" (UniqueName: \"kubernetes.io/projected/e2bc1f82-c383-4d0c-8346-3de0bb1a11d9-kube-api-access-zvn5s\") on node \"crc\" DevicePath \"\""
Jan 30 17:18:07 crc kubenswrapper[4712]: I0130 17:18:07.784277 4712 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e2bc1f82-c383-4d0c-8346-3de0bb1a11d9-logs\") on node \"crc\" DevicePath \"\""
Jan 30 17:18:07 crc kubenswrapper[4712]: I0130 17:18:07.773018 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerStarted","Data":"261650ec2eba774e22792202e452b7b775a9451e27eb25c95a23e9f7a7f63330"}
Jan 30 17:18:07 crc kubenswrapper[4712]: I0130 17:18:07.784439 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "e2bc1f82-c383-4d0c-8346-3de0bb1a11d9" (UID: "e2bc1f82-c383-4d0c-8346-3de0bb1a11d9"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 30 17:18:07 crc kubenswrapper[4712]: I0130 17:18:07.810275 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 17:18:07 crc kubenswrapper[4712]: I0130 17:18:07.833624 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2bc1f82-c383-4d0c-8346-3de0bb1a11d9-config-data" (OuterVolumeSpecName: "config-data") pod "e2bc1f82-c383-4d0c-8346-3de0bb1a11d9" (UID: "e2bc1f82-c383-4d0c-8346-3de0bb1a11d9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:18:07 crc kubenswrapper[4712]: I0130 17:18:07.842143 4712 generic.go:334] "Generic (PLEG): container finished" podID="e500689a-cff5-4b5b-a031-a03709fb811d" containerID="f9597b3327b94333ca0b4e138b843698c5cb1c4a92db1a813842f407a5d1d8d7" exitCode=0
Jan 30 17:18:07 crc kubenswrapper[4712]: I0130 17:18:07.870702 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"e2bc1f82-c383-4d0c-8346-3de0bb1a11d9","Type":"ContainerDied","Data":"0579252f9064d6cfdb6e5a88c4853124ac2989beb193c0eadbd0003c84c8e8c2"}
Jan 30 17:18:07 crc kubenswrapper[4712]: I0130 17:18:07.870760 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-60a1-account-create-update-46xff" event={"ID":"e500689a-cff5-4b5b-a031-a03709fb811d","Type":"ContainerDied","Data":"f9597b3327b94333ca0b4e138b843698c5cb1c4a92db1a813842f407a5d1d8d7"}
Jan 30 17:18:07 crc kubenswrapper[4712]: I0130 17:18:07.870786 4712 scope.go:117] "RemoveContainer" containerID="17d9748dfc29f0d93829a519d709a6dc54f713414c4b13f981fee1b67535dad9"
Jan 30 17:18:07 crc kubenswrapper[4712]: I0130 17:18:07.898257 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e2bc1f82-c383-4d0c-8346-3de0bb1a11d9-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 17:18:07 crc kubenswrapper[4712]: I0130 17:18:07.898302 4712 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" "
Jan 30 17:18:07 crc kubenswrapper[4712]: I0130 17:18:07.904152 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2bc1f82-c383-4d0c-8346-3de0bb1a11d9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e2bc1f82-c383-4d0c-8346-3de0bb1a11d9" (UID: "e2bc1f82-c383-4d0c-8346-3de0bb1a11d9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:18:07 crc kubenswrapper[4712]: I0130 17:18:07.974977 4712 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc"
Jan 30 17:18:07 crc kubenswrapper[4712]: I0130 17:18:07.999736 4712 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\""
Jan 30 17:18:07 crc kubenswrapper[4712]: I0130 17:18:07.999769 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2bc1f82-c383-4d0c-8346-3de0bb1a11d9-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.061737 4712 scope.go:117] "RemoveContainer" containerID="55c7779ed294aab7b328c07c7eb3bab66291697e1db3139b1953c930c941b9fa"
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.072349 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2bc1f82-c383-4d0c-8346-3de0bb1a11d9-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "e2bc1f82-c383-4d0c-8346-3de0bb1a11d9" (UID: "e2bc1f82-c383-4d0c-8346-3de0bb1a11d9"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.101290 4712 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2bc1f82-c383-4d0c-8346-3de0bb1a11d9-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.203888 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.279041 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.304929 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 17:18:08 crc kubenswrapper[4712]: E0130 17:18:08.305327 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2bc1f82-c383-4d0c-8346-3de0bb1a11d9" containerName="glance-log"
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.305344 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2bc1f82-c383-4d0c-8346-3de0bb1a11d9" containerName="glance-log"
Jan 30 17:18:08 crc kubenswrapper[4712]: E0130 17:18:08.305359 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2bc1f82-c383-4d0c-8346-3de0bb1a11d9" containerName="glance-httpd"
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.305366 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2bc1f82-c383-4d0c-8346-3de0bb1a11d9" containerName="glance-httpd"
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.305540 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2bc1f82-c383-4d0c-8346-3de0bb1a11d9" containerName="glance-httpd"
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.305571 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2bc1f82-c383-4d0c-8346-3de0bb1a11d9" containerName="glance-log"
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.312572 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.316359 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.316692 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.336268 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 30 17:18:08 crc kubenswrapper[4712]: E0130 17:18:08.371032 4712 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2bc1f82_c383_4d0c_8346_3de0bb1a11d9.slice/crio-0579252f9064d6cfdb6e5a88c4853124ac2989beb193c0eadbd0003c84c8e8c2\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode2bc1f82_c383_4d0c_8346_3de0bb1a11d9.slice\": RecentStats: unable to find data in memory cache]"
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.521037 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tx7gr\" (UniqueName: \"kubernetes.io/projected/91919356-125c-4caa-8504-a0ead9ce783e-kube-api-access-tx7gr\") pod \"glance-default-external-api-0\" (UID: \"91919356-125c-4caa-8504-a0ead9ce783e\") " pod="openstack/glance-default-external-api-0"
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.521170 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91919356-125c-4caa-8504-a0ead9ce783e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"91919356-125c-4caa-8504-a0ead9ce783e\") " pod="openstack/glance-default-external-api-0"
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.521405 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/91919356-125c-4caa-8504-a0ead9ce783e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"91919356-125c-4caa-8504-a0ead9ce783e\") " pod="openstack/glance-default-external-api-0"
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.521464 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91919356-125c-4caa-8504-a0ead9ce783e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"91919356-125c-4caa-8504-a0ead9ce783e\") " pod="openstack/glance-default-external-api-0"
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.521500 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"91919356-125c-4caa-8504-a0ead9ce783e\") " pod="openstack/glance-default-external-api-0"
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.521584 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91919356-125c-4caa-8504-a0ead9ce783e-logs\") pod \"glance-default-external-api-0\" (UID: \"91919356-125c-4caa-8504-a0ead9ce783e\") " pod="openstack/glance-default-external-api-0"
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.521724 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91919356-125c-4caa-8504-a0ead9ce783e-config-data\") pod \"glance-default-external-api-0\" (UID: \"91919356-125c-4caa-8504-a0ead9ce783e\") " pod="openstack/glance-default-external-api-0"
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.521811 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91919356-125c-4caa-8504-a0ead9ce783e-scripts\") pod \"glance-default-external-api-0\" (UID: \"91919356-125c-4caa-8504-a0ead9ce783e\") " pod="openstack/glance-default-external-api-0"
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.627220 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tx7gr\" (UniqueName: \"kubernetes.io/projected/91919356-125c-4caa-8504-a0ead9ce783e-kube-api-access-tx7gr\") pod \"glance-default-external-api-0\" (UID: \"91919356-125c-4caa-8504-a0ead9ce783e\") " pod="openstack/glance-default-external-api-0"
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.627308 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91919356-125c-4caa-8504-a0ead9ce783e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"91919356-125c-4caa-8504-a0ead9ce783e\") " pod="openstack/glance-default-external-api-0"
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.627360 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/91919356-125c-4caa-8504-a0ead9ce783e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"91919356-125c-4caa-8504-a0ead9ce783e\") " pod="openstack/glance-default-external-api-0"
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.627405 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91919356-125c-4caa-8504-a0ead9ce783e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"91919356-125c-4caa-8504-a0ead9ce783e\") " pod="openstack/glance-default-external-api-0"
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.627437 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"91919356-125c-4caa-8504-a0ead9ce783e\") " pod="openstack/glance-default-external-api-0"
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.627484 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91919356-125c-4caa-8504-a0ead9ce783e-logs\") pod \"glance-default-external-api-0\" (UID: \"91919356-125c-4caa-8504-a0ead9ce783e\") " pod="openstack/glance-default-external-api-0"
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.627552 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91919356-125c-4caa-8504-a0ead9ce783e-config-data\") pod \"glance-default-external-api-0\" (UID: \"91919356-125c-4caa-8504-a0ead9ce783e\") " pod="openstack/glance-default-external-api-0"
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.627595 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91919356-125c-4caa-8504-a0ead9ce783e-scripts\") pod \"glance-default-external-api-0\" (UID: \"91919356-125c-4caa-8504-a0ead9ce783e\") " pod="openstack/glance-default-external-api-0"
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.632311 4712 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"91919356-125c-4caa-8504-a0ead9ce783e\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-external-api-0"
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.632912 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/91919356-125c-4caa-8504-a0ead9ce783e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"91919356-125c-4caa-8504-a0ead9ce783e\") " pod="openstack/glance-default-external-api-0"
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.633540 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/91919356-125c-4caa-8504-a0ead9ce783e-logs\") pod \"glance-default-external-api-0\" (UID: \"91919356-125c-4caa-8504-a0ead9ce783e\") " pod="openstack/glance-default-external-api-0"
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.641780 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/91919356-125c-4caa-8504-a0ead9ce783e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"91919356-125c-4caa-8504-a0ead9ce783e\") " pod="openstack/glance-default-external-api-0"
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.647751 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/91919356-125c-4caa-8504-a0ead9ce783e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"91919356-125c-4caa-8504-a0ead9ce783e\") " pod="openstack/glance-default-external-api-0"
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.662560 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/91919356-125c-4caa-8504-a0ead9ce783e-config-data\") pod \"glance-default-external-api-0\" (UID: \"91919356-125c-4caa-8504-a0ead9ce783e\") " pod="openstack/glance-default-external-api-0"
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.663143 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/91919356-125c-4caa-8504-a0ead9ce783e-scripts\") pod \"glance-default-external-api-0\" (UID: \"91919356-125c-4caa-8504-a0ead9ce783e\") " pod="openstack/glance-default-external-api-0"
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.680187 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tx7gr\" (UniqueName: \"kubernetes.io/projected/91919356-125c-4caa-8504-a0ead9ce783e-kube-api-access-tx7gr\") pod \"glance-default-external-api-0\" (UID: \"91919356-125c-4caa-8504-a0ead9ce783e\") " pod="openstack/glance-default-external-api-0"
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.788382 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-external-api-0\" (UID: \"91919356-125c-4caa-8504-a0ead9ce783e\") " pod="openstack/glance-default-external-api-0"
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.801313 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-hcvcv"
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.855647 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-78xzk"
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.916846 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-715d-account-create-update-jq2sd"
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.935458 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bm9dr\" (UniqueName: \"kubernetes.io/projected/5d5a16c6-950f-48e7-b74e-60e6b6292839-kube-api-access-bm9dr\") pod \"5d5a16c6-950f-48e7-b74e-60e6b6292839\" (UID: \"5d5a16c6-950f-48e7-b74e-60e6b6292839\") "
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.935655 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d5a16c6-950f-48e7-b74e-60e6b6292839-operator-scripts\") pod \"5d5a16c6-950f-48e7-b74e-60e6b6292839\" (UID: \"5d5a16c6-950f-48e7-b74e-60e6b6292839\") "
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.935859 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9802a8ce-ca97-435d-b65a-1618358e986f-operator-scripts\") pod \"9802a8ce-ca97-435d-b65a-1618358e986f\" (UID: \"9802a8ce-ca97-435d-b65a-1618358e986f\") "
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.936043 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ft6fh\" (UniqueName: \"kubernetes.io/projected/9802a8ce-ca97-435d-b65a-1618358e986f-kube-api-access-ft6fh\") pod \"9802a8ce-ca97-435d-b65a-1618358e986f\" (UID: \"9802a8ce-ca97-435d-b65a-1618358e986f\") "
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.942767 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9802a8ce-ca97-435d-b65a-1618358e986f-kube-api-access-ft6fh" (OuterVolumeSpecName: "kube-api-access-ft6fh") pod "9802a8ce-ca97-435d-b65a-1618358e986f" (UID: "9802a8ce-ca97-435d-b65a-1618358e986f"). InnerVolumeSpecName "kube-api-access-ft6fh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.947037 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d5a16c6-950f-48e7-b74e-60e6b6292839-kube-api-access-bm9dr" (OuterVolumeSpecName: "kube-api-access-bm9dr") pod "5d5a16c6-950f-48e7-b74e-60e6b6292839" (UID: "5d5a16c6-950f-48e7-b74e-60e6b6292839"). InnerVolumeSpecName "kube-api-access-bm9dr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.947490 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9802a8ce-ca97-435d-b65a-1618358e986f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9802a8ce-ca97-435d-b65a-1618358e986f" (UID: "9802a8ce-ca97-435d-b65a-1618358e986f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.948152 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d5a16c6-950f-48e7-b74e-60e6b6292839-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5d5a16c6-950f-48e7-b74e-60e6b6292839" (UID: "5d5a16c6-950f-48e7-b74e-60e6b6292839"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.948180 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-hcvcv" event={"ID":"5d5a16c6-950f-48e7-b74e-60e6b6292839","Type":"ContainerDied","Data":"a23a4c0958788e20e6fa4eb97d8effb6dd4b0cc01b52e731f35785090de22686"}
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.948208 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a23a4c0958788e20e6fa4eb97d8effb6dd4b0cc01b52e731f35785090de22686"
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.948256 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-hcvcv"
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.953357 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xd6p5"
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.957146 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-715d-account-create-update-jq2sd" event={"ID":"11d41c7b-df2e-492f-8126-1baa68733039","Type":"ContainerDied","Data":"8299602940f011d50ed3f5f2b21d85b64a6ea118689c5a481a4a6a1b382292fb"}
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.957186 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8299602940f011d50ed3f5f2b21d85b64a6ea118689c5a481a4a6a1b382292fb"
Jan 30 17:18:08 crc kubenswrapper[4712]: I0130 17:18:08.957747 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-715d-account-create-update-jq2sd"
Jan 30 17:18:09 crc kubenswrapper[4712]: I0130 17:18:09.011329 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-78xzk" event={"ID":"9802a8ce-ca97-435d-b65a-1618358e986f","Type":"ContainerDied","Data":"a9e4ae4cefb8fed9e6ed2ec28de61b56bef5caeef01215c400021aa9aca74b25"}
Jan 30 17:18:09 crc kubenswrapper[4712]: I0130 17:18:09.011370 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a9e4ae4cefb8fed9e6ed2ec28de61b56bef5caeef01215c400021aa9aca74b25"
Jan 30 17:18:09 crc kubenswrapper[4712]: I0130 17:18:09.011445 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-78xzk"
Jan 30 17:18:09 crc kubenswrapper[4712]: I0130 17:18:09.040676 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktk2p\" (UniqueName: \"kubernetes.io/projected/11d41c7b-df2e-492f-8126-1baa68733039-kube-api-access-ktk2p\") pod \"11d41c7b-df2e-492f-8126-1baa68733039\" (UID: \"11d41c7b-df2e-492f-8126-1baa68733039\") "
Jan 30 17:18:09 crc kubenswrapper[4712]: I0130 17:18:09.041177 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c96912dc-64a4-4735-91b2-ff0d019b8aa3-operator-scripts\") pod \"c96912dc-64a4-4735-91b2-ff0d019b8aa3\" (UID: \"c96912dc-64a4-4735-91b2-ff0d019b8aa3\") "
Jan 30 17:18:09 crc kubenswrapper[4712]: I0130 17:18:09.041295 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4qn8\" (UniqueName: \"kubernetes.io/projected/c96912dc-64a4-4735-91b2-ff0d019b8aa3-kube-api-access-r4qn8\") pod \"c96912dc-64a4-4735-91b2-ff0d019b8aa3\" (UID: \"c96912dc-64a4-4735-91b2-ff0d019b8aa3\") "
Jan 30 17:18:09 crc kubenswrapper[4712]: I0130 17:18:09.041334 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11d41c7b-df2e-492f-8126-1baa68733039-operator-scripts\") pod \"11d41c7b-df2e-492f-8126-1baa68733039\" (UID: \"11d41c7b-df2e-492f-8126-1baa68733039\") "
Jan 30 17:18:09 crc kubenswrapper[4712]: I0130 17:18:09.042009 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c96912dc-64a4-4735-91b2-ff0d019b8aa3-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c96912dc-64a4-4735-91b2-ff0d019b8aa3" (UID: "c96912dc-64a4-4735-91b2-ff0d019b8aa3"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:18:09 crc kubenswrapper[4712]: I0130 17:18:09.042451 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bm9dr\" (UniqueName: \"kubernetes.io/projected/5d5a16c6-950f-48e7-b74e-60e6b6292839-kube-api-access-bm9dr\") on node \"crc\" DevicePath \"\""
Jan 30 17:18:09 crc kubenswrapper[4712]: I0130 17:18:09.042467 4712 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5d5a16c6-950f-48e7-b74e-60e6b6292839-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 17:18:09 crc kubenswrapper[4712]: I0130 17:18:09.042476 4712 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9802a8ce-ca97-435d-b65a-1618358e986f-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 17:18:09 crc kubenswrapper[4712]: I0130 17:18:09.042484 4712 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c96912dc-64a4-4735-91b2-ff0d019b8aa3-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 17:18:09 crc kubenswrapper[4712]: I0130 17:18:09.042494 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ft6fh\" (UniqueName: \"kubernetes.io/projected/9802a8ce-ca97-435d-b65a-1618358e986f-kube-api-access-ft6fh\") on node \"crc\" DevicePath \"\""
Jan 30 17:18:09 crc kubenswrapper[4712]: I0130 17:18:09.042892 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11d41c7b-df2e-492f-8126-1baa68733039-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "11d41c7b-df2e-492f-8126-1baa68733039" (UID: "11d41c7b-df2e-492f-8126-1baa68733039"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:18:09 crc kubenswrapper[4712]: I0130 17:18:09.049784 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11d41c7b-df2e-492f-8126-1baa68733039-kube-api-access-ktk2p" (OuterVolumeSpecName: "kube-api-access-ktk2p") pod "11d41c7b-df2e-492f-8126-1baa68733039" (UID: "11d41c7b-df2e-492f-8126-1baa68733039"). InnerVolumeSpecName "kube-api-access-ktk2p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:18:09 crc kubenswrapper[4712]: I0130 17:18:09.050490 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8d1c445c-7242-46a7-88de-707d58473c8f" containerName="ceilometer-central-agent" containerID="cri-o://fec133130f48716d2c2b85c71c1a24f507671f65914cfe3066ac4f2f5e9328a3" gracePeriod=30
Jan 30 17:18:09 crc kubenswrapper[4712]: I0130 17:18:09.051413 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d1c445c-7242-46a7-88de-707d58473c8f","Type":"ContainerStarted","Data":"53a1a945356fd9d7930183ae3af3e979f69b7ced3883a39b1b6ad5531b50ae32"}
Jan 30 17:18:09 crc kubenswrapper[4712]: I0130 17:18:09.051789 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8d1c445c-7242-46a7-88de-707d58473c8f" containerName="proxy-httpd" containerID="cri-o://53a1a945356fd9d7930183ae3af3e979f69b7ced3883a39b1b6ad5531b50ae32" gracePeriod=30
Jan 30 17:18:09 crc kubenswrapper[4712]: I0130 17:18:09.051891 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8d1c445c-7242-46a7-88de-707d58473c8f" containerName="sg-core" containerID="cri-o://9090253ae25f410ab835f23ef9bdadf80e5733785e74c11b75186cbfc3237118" gracePeriod=30
Jan 30 17:18:09 crc kubenswrapper[4712]: I0130 17:18:09.051946 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8d1c445c-7242-46a7-88de-707d58473c8f" containerName="ceilometer-notification-agent" containerID="cri-o://3f28c10702c7dfff95ed44c675eb510a354448328728df9c01f6db506a7af9dc" gracePeriod=30
Jan 30 17:18:09 crc kubenswrapper[4712]: I0130 17:18:09.051991 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 30 17:18:09 crc kubenswrapper[4712]: I0130 17:18:09.055130 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c96912dc-64a4-4735-91b2-ff0d019b8aa3-kube-api-access-r4qn8" (OuterVolumeSpecName: "kube-api-access-r4qn8") pod "c96912dc-64a4-4735-91b2-ff0d019b8aa3" (UID: "c96912dc-64a4-4735-91b2-ff0d019b8aa3"). InnerVolumeSpecName "kube-api-access-r4qn8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:18:09 crc kubenswrapper[4712]: I0130 17:18:09.082470 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 30 17:18:09 crc kubenswrapper[4712]: I0130 17:18:09.117391 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.089796952 podStartE2EDuration="9.11736882s" podCreationTimestamp="2026-01-30 17:18:00 +0000 UTC" firstStartedPulling="2026-01-30 17:18:01.506279628 +0000 UTC m=+1418.413289097" lastFinishedPulling="2026-01-30 17:18:07.533851496 +0000 UTC m=+1424.440860965" observedRunningTime="2026-01-30 17:18:09.099978393 +0000 UTC m=+1426.006987882" watchObservedRunningTime="2026-01-30 17:18:09.11736882 +0000 UTC m=+1426.024378289"
Jan 30 17:18:09 crc kubenswrapper[4712]: I0130 17:18:09.144726 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4qn8\" (UniqueName: \"kubernetes.io/projected/c96912dc-64a4-4735-91b2-ff0d019b8aa3-kube-api-access-r4qn8\") on node \"crc\" DevicePath \"\""
Jan 30 17:18:09 crc kubenswrapper[4712]: I0130 17:18:09.144771 4712 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/11d41c7b-df2e-492f-8126-1baa68733039-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 17:18:09 crc kubenswrapper[4712]: I0130 17:18:09.144784 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktk2p\" (UniqueName: \"kubernetes.io/projected/11d41c7b-df2e-492f-8126-1baa68733039-kube-api-access-ktk2p\") on node \"crc\" DevicePath \"\""
Jan 30 17:18:09 crc kubenswrapper[4712]: I0130 17:18:09.858044 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-89d2-account-create-update-6kqnb"
Jan 30 17:18:09 crc kubenswrapper[4712]: I0130 17:18:09.874697 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2bc1f82-c383-4d0c-8346-3de0bb1a11d9" path="/var/lib/kubelet/pods/e2bc1f82-c383-4d0c-8346-3de0bb1a11d9/volumes"
Jan 30 17:18:09 crc kubenswrapper[4712]: I0130 17:18:09.994251 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a83c5d38-374a-4fd6-9f42-d4e39645b82a-operator-scripts\") pod \"a83c5d38-374a-4fd6-9f42-d4e39645b82a\" (UID: \"a83c5d38-374a-4fd6-9f42-d4e39645b82a\") "
Jan 30 17:18:09 crc kubenswrapper[4712]: I0130 17:18:09.994656 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bh946\" (UniqueName: \"kubernetes.io/projected/a83c5d38-374a-4fd6-9f42-d4e39645b82a-kube-api-access-bh946\") pod \"a83c5d38-374a-4fd6-9f42-d4e39645b82a\" (UID: \"a83c5d38-374a-4fd6-9f42-d4e39645b82a\") "
Jan 30 17:18:09 crc kubenswrapper[4712]: I0130 17:18:09.997451 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a83c5d38-374a-4fd6-9f42-d4e39645b82a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a83c5d38-374a-4fd6-9f42-d4e39645b82a" (UID: "a83c5d38-374a-4fd6-9f42-d4e39645b82a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:18:10 crc kubenswrapper[4712]: I0130 17:18:10.074738 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a83c5d38-374a-4fd6-9f42-d4e39645b82a-kube-api-access-bh946" (OuterVolumeSpecName: "kube-api-access-bh946") pod "a83c5d38-374a-4fd6-9f42-d4e39645b82a" (UID: "a83c5d38-374a-4fd6-9f42-d4e39645b82a"). InnerVolumeSpecName "kube-api-access-bh946". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:18:10 crc kubenswrapper[4712]: I0130 17:18:10.117515 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bh946\" (UniqueName: \"kubernetes.io/projected/a83c5d38-374a-4fd6-9f42-d4e39645b82a-kube-api-access-bh946\") on node \"crc\" DevicePath \"\""
Jan 30 17:18:10 crc kubenswrapper[4712]: I0130 17:18:10.117552 4712 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a83c5d38-374a-4fd6-9f42-d4e39645b82a-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 17:18:10 crc kubenswrapper[4712]: I0130 17:18:10.135287 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-60a1-account-create-update-46xff"
Jan 30 17:18:10 crc kubenswrapper[4712]: I0130 17:18:10.156404 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-89d2-account-create-update-6kqnb" event={"ID":"a83c5d38-374a-4fd6-9f42-d4e39645b82a","Type":"ContainerDied","Data":"bf93d69ac0ebfae920c2fc8ba9206be4f54e55ffffd8680dd52c41455f8abf85"}
Jan 30 17:18:10 crc kubenswrapper[4712]: I0130 17:18:10.156446 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf93d69ac0ebfae920c2fc8ba9206be4f54e55ffffd8680dd52c41455f8abf85"
Jan 30 17:18:10 crc kubenswrapper[4712]: I0130 17:18:10.156537 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-89d2-account-create-update-6kqnb"
Jan 30 17:18:10 crc kubenswrapper[4712]: I0130 17:18:10.195838 4712 generic.go:334] "Generic (PLEG): container finished" podID="8d1c445c-7242-46a7-88de-707d58473c8f" containerID="9090253ae25f410ab835f23ef9bdadf80e5733785e74c11b75186cbfc3237118" exitCode=2
Jan 30 17:18:10 crc kubenswrapper[4712]: I0130 17:18:10.195951 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d1c445c-7242-46a7-88de-707d58473c8f","Type":"ContainerDied","Data":"9090253ae25f410ab835f23ef9bdadf80e5733785e74c11b75186cbfc3237118"}
Jan 30 17:18:10 crc kubenswrapper[4712]: I0130 17:18:10.212664 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xd6p5" event={"ID":"c96912dc-64a4-4735-91b2-ff0d019b8aa3","Type":"ContainerDied","Data":"1a816fc67188ceb0908d5991229561e2860e239c71f90e9bd19eb576987549e7"}
Jan 30 17:18:10 crc kubenswrapper[4712]: I0130 17:18:10.212738 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a816fc67188ceb0908d5991229561e2860e239c71f90e9bd19eb576987549e7"
Jan 30 17:18:10 crc kubenswrapper[4712]: I0130 17:18:10.212875 4712 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-cell0-db-create-xd6p5" Jan 30 17:18:10 crc kubenswrapper[4712]: I0130 17:18:10.228525 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ng8lc\" (UniqueName: \"kubernetes.io/projected/e500689a-cff5-4b5b-a031-a03709fb811d-kube-api-access-ng8lc\") pod \"e500689a-cff5-4b5b-a031-a03709fb811d\" (UID: \"e500689a-cff5-4b5b-a031-a03709fb811d\") " Jan 30 17:18:10 crc kubenswrapper[4712]: I0130 17:18:10.228651 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e500689a-cff5-4b5b-a031-a03709fb811d-operator-scripts\") pod \"e500689a-cff5-4b5b-a031-a03709fb811d\" (UID: \"e500689a-cff5-4b5b-a031-a03709fb811d\") " Jan 30 17:18:10 crc kubenswrapper[4712]: I0130 17:18:10.235872 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e500689a-cff5-4b5b-a031-a03709fb811d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e500689a-cff5-4b5b-a031-a03709fb811d" (UID: "e500689a-cff5-4b5b-a031-a03709fb811d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:18:10 crc kubenswrapper[4712]: I0130 17:18:10.247610 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-60a1-account-create-update-46xff" event={"ID":"e500689a-cff5-4b5b-a031-a03709fb811d","Type":"ContainerDied","Data":"761d33b41faf60c521decb40770a088a805042acd8402745d85e3b9e4b265153"} Jan 30 17:18:10 crc kubenswrapper[4712]: I0130 17:18:10.247713 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="761d33b41faf60c521decb40770a088a805042acd8402745d85e3b9e4b265153" Jan 30 17:18:10 crc kubenswrapper[4712]: I0130 17:18:10.247686 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-60a1-account-create-update-46xff" Jan 30 17:18:10 crc kubenswrapper[4712]: I0130 17:18:10.248873 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e500689a-cff5-4b5b-a031-a03709fb811d-kube-api-access-ng8lc" (OuterVolumeSpecName: "kube-api-access-ng8lc") pod "e500689a-cff5-4b5b-a031-a03709fb811d" (UID: "e500689a-cff5-4b5b-a031-a03709fb811d"). InnerVolumeSpecName "kube-api-access-ng8lc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:18:10 crc kubenswrapper[4712]: I0130 17:18:10.252140 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ng8lc\" (UniqueName: \"kubernetes.io/projected/e500689a-cff5-4b5b-a031-a03709fb811d-kube-api-access-ng8lc\") on node \"crc\" DevicePath \"\"" Jan 30 17:18:10 crc kubenswrapper[4712]: I0130 17:18:10.252174 4712 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e500689a-cff5-4b5b-a031-a03709fb811d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:18:10 crc kubenswrapper[4712]: I0130 17:18:10.386285 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 30 17:18:11 crc kubenswrapper[4712]: I0130 17:18:11.263247 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"91919356-125c-4caa-8504-a0ead9ce783e","Type":"ContainerStarted","Data":"3c5e5885058e39062be7cb5b2d470c02c9f9e0855fb3e0dd947e7f12b561df80"} Jan 30 17:18:12 crc kubenswrapper[4712]: I0130 17:18:12.282766 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"91919356-125c-4caa-8504-a0ead9ce783e","Type":"ContainerStarted","Data":"5699743d53195bbfba3d2a8d27565702997b6b9594f617a4d2747c16500b74d6"} Jan 30 17:18:13 crc kubenswrapper[4712]: I0130 17:18:13.295348 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"91919356-125c-4caa-8504-a0ead9ce783e","Type":"ContainerStarted","Data":"b2b2e2f1d100e3b0f767beeb836fa18aca445bd5f954aba2f540c50078e17be2"} Jan 30 17:18:13 crc kubenswrapper[4712]: I0130 17:18:13.324018 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.323993016 podStartE2EDuration="5.323993016s" podCreationTimestamp="2026-01-30 17:18:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:18:13.313972764 +0000 UTC m=+1430.220982233" watchObservedRunningTime="2026-01-30 17:18:13.323993016 +0000 UTC m=+1430.231002495" Jan 30 17:18:14 crc kubenswrapper[4712]: I0130 17:18:14.024767 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7mlh5"] Jan 30 17:18:14 crc kubenswrapper[4712]: E0130 17:18:14.025301 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e500689a-cff5-4b5b-a031-a03709fb811d" containerName="mariadb-account-create-update" Jan 30 17:18:14 crc kubenswrapper[4712]: I0130 17:18:14.025326 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="e500689a-cff5-4b5b-a031-a03709fb811d" containerName="mariadb-account-create-update" Jan 30 17:18:14 crc kubenswrapper[4712]: E0130 17:18:14.025341 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a83c5d38-374a-4fd6-9f42-d4e39645b82a" containerName="mariadb-account-create-update" Jan 30 17:18:14 crc kubenswrapper[4712]: I0130 17:18:14.025350 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="a83c5d38-374a-4fd6-9f42-d4e39645b82a" containerName="mariadb-account-create-update" Jan 30 17:18:14 crc kubenswrapper[4712]: E0130 17:18:14.025376 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9802a8ce-ca97-435d-b65a-1618358e986f" containerName="mariadb-database-create" Jan 30 17:18:14 crc 
kubenswrapper[4712]: I0130 17:18:14.025385 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="9802a8ce-ca97-435d-b65a-1618358e986f" containerName="mariadb-database-create" Jan 30 17:18:14 crc kubenswrapper[4712]: E0130 17:18:14.025402 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11d41c7b-df2e-492f-8126-1baa68733039" containerName="mariadb-account-create-update" Jan 30 17:18:14 crc kubenswrapper[4712]: I0130 17:18:14.025409 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="11d41c7b-df2e-492f-8126-1baa68733039" containerName="mariadb-account-create-update" Jan 30 17:18:14 crc kubenswrapper[4712]: E0130 17:18:14.025432 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d5a16c6-950f-48e7-b74e-60e6b6292839" containerName="mariadb-database-create" Jan 30 17:18:14 crc kubenswrapper[4712]: I0130 17:18:14.025440 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d5a16c6-950f-48e7-b74e-60e6b6292839" containerName="mariadb-database-create" Jan 30 17:18:14 crc kubenswrapper[4712]: E0130 17:18:14.025452 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c96912dc-64a4-4735-91b2-ff0d019b8aa3" containerName="mariadb-database-create" Jan 30 17:18:14 crc kubenswrapper[4712]: I0130 17:18:14.025460 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="c96912dc-64a4-4735-91b2-ff0d019b8aa3" containerName="mariadb-database-create" Jan 30 17:18:14 crc kubenswrapper[4712]: I0130 17:18:14.025672 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="11d41c7b-df2e-492f-8126-1baa68733039" containerName="mariadb-account-create-update" Jan 30 17:18:14 crc kubenswrapper[4712]: I0130 17:18:14.025697 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="c96912dc-64a4-4735-91b2-ff0d019b8aa3" containerName="mariadb-database-create" Jan 30 17:18:14 crc kubenswrapper[4712]: I0130 17:18:14.025707 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d5a16c6-950f-48e7-b74e-60e6b6292839" containerName="mariadb-database-create" Jan 30 17:18:14 crc kubenswrapper[4712]: I0130 17:18:14.025719 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="e500689a-cff5-4b5b-a031-a03709fb811d" containerName="mariadb-account-create-update" Jan 30 17:18:14 crc kubenswrapper[4712]: I0130 17:18:14.025735 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="a83c5d38-374a-4fd6-9f42-d4e39645b82a" containerName="mariadb-account-create-update" Jan 30 17:18:14 crc kubenswrapper[4712]: I0130 17:18:14.025752 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="9802a8ce-ca97-435d-b65a-1618358e986f" containerName="mariadb-database-create" Jan 30 17:18:14 crc kubenswrapper[4712]: I0130 17:18:14.026509 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7mlh5" Jan 30 17:18:14 crc kubenswrapper[4712]: I0130 17:18:14.029554 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 30 17:18:14 crc kubenswrapper[4712]: I0130 17:18:14.029770 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-sm2kb" Jan 30 17:18:14 crc kubenswrapper[4712]: I0130 17:18:14.029943 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 30 17:18:14 crc kubenswrapper[4712]: I0130 17:18:14.040824 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7mlh5"] Jan 30 17:18:14 crc kubenswrapper[4712]: I0130 17:18:14.140734 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93b068f3-6243-416f-b7d5-4d0eaff334cf-config-data\") pod \"nova-cell0-conductor-db-sync-7mlh5\" (UID: \"93b068f3-6243-416f-b7d5-4d0eaff334cf\") " pod="openstack/nova-cell0-conductor-db-sync-7mlh5" Jan 30 17:18:14 crc kubenswrapper[4712]: I0130 17:18:14.140854 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kb25\" (UniqueName: \"kubernetes.io/projected/93b068f3-6243-416f-b7d5-4d0eaff334cf-kube-api-access-5kb25\") pod \"nova-cell0-conductor-db-sync-7mlh5\" (UID: \"93b068f3-6243-416f-b7d5-4d0eaff334cf\") " pod="openstack/nova-cell0-conductor-db-sync-7mlh5" Jan 30 17:18:14 crc kubenswrapper[4712]: I0130 17:18:14.140884 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93b068f3-6243-416f-b7d5-4d0eaff334cf-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-7mlh5\" (UID: \"93b068f3-6243-416f-b7d5-4d0eaff334cf\") " pod="openstack/nova-cell0-conductor-db-sync-7mlh5" Jan 30 17:18:14 crc kubenswrapper[4712]: I0130 17:18:14.140916 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93b068f3-6243-416f-b7d5-4d0eaff334cf-scripts\") pod \"nova-cell0-conductor-db-sync-7mlh5\" (UID: \"93b068f3-6243-416f-b7d5-4d0eaff334cf\") " pod="openstack/nova-cell0-conductor-db-sync-7mlh5" Jan 30 17:18:14 crc kubenswrapper[4712]: I0130 17:18:14.242455 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kb25\" (UniqueName: \"kubernetes.io/projected/93b068f3-6243-416f-b7d5-4d0eaff334cf-kube-api-access-5kb25\") pod \"nova-cell0-conductor-db-sync-7mlh5\" (UID: \"93b068f3-6243-416f-b7d5-4d0eaff334cf\") " pod="openstack/nova-cell0-conductor-db-sync-7mlh5" Jan 30 17:18:14 crc kubenswrapper[4712]: I0130 17:18:14.242521 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93b068f3-6243-416f-b7d5-4d0eaff334cf-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-7mlh5\" (UID: \"93b068f3-6243-416f-b7d5-4d0eaff334cf\") " pod="openstack/nova-cell0-conductor-db-sync-7mlh5" Jan 30 17:18:14 crc kubenswrapper[4712]: I0130 17:18:14.242547 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93b068f3-6243-416f-b7d5-4d0eaff334cf-scripts\") pod \"nova-cell0-conductor-db-sync-7mlh5\" (UID: 
\"93b068f3-6243-416f-b7d5-4d0eaff334cf\") " pod="openstack/nova-cell0-conductor-db-sync-7mlh5" Jan 30 17:18:14 crc kubenswrapper[4712]: I0130 17:18:14.242678 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93b068f3-6243-416f-b7d5-4d0eaff334cf-config-data\") pod \"nova-cell0-conductor-db-sync-7mlh5\" (UID: \"93b068f3-6243-416f-b7d5-4d0eaff334cf\") " pod="openstack/nova-cell0-conductor-db-sync-7mlh5" Jan 30 17:18:14 crc kubenswrapper[4712]: I0130 17:18:14.255329 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93b068f3-6243-416f-b7d5-4d0eaff334cf-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-7mlh5\" (UID: \"93b068f3-6243-416f-b7d5-4d0eaff334cf\") " pod="openstack/nova-cell0-conductor-db-sync-7mlh5" Jan 30 17:18:14 crc kubenswrapper[4712]: I0130 17:18:14.256210 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93b068f3-6243-416f-b7d5-4d0eaff334cf-config-data\") pod \"nova-cell0-conductor-db-sync-7mlh5\" (UID: \"93b068f3-6243-416f-b7d5-4d0eaff334cf\") " pod="openstack/nova-cell0-conductor-db-sync-7mlh5" Jan 30 17:18:14 crc kubenswrapper[4712]: I0130 17:18:14.258256 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93b068f3-6243-416f-b7d5-4d0eaff334cf-scripts\") pod \"nova-cell0-conductor-db-sync-7mlh5\" (UID: \"93b068f3-6243-416f-b7d5-4d0eaff334cf\") " pod="openstack/nova-cell0-conductor-db-sync-7mlh5" Jan 30 17:18:14 crc kubenswrapper[4712]: I0130 17:18:14.280325 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kb25\" (UniqueName: \"kubernetes.io/projected/93b068f3-6243-416f-b7d5-4d0eaff334cf-kube-api-access-5kb25\") pod \"nova-cell0-conductor-db-sync-7mlh5\" (UID: \"93b068f3-6243-416f-b7d5-4d0eaff334cf\") " pod="openstack/nova-cell0-conductor-db-sync-7mlh5" Jan 30 17:18:14 crc kubenswrapper[4712]: I0130 17:18:14.359307 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7mlh5" Jan 30 17:18:15 crc kubenswrapper[4712]: I0130 17:18:15.055156 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7mlh5"] Jan 30 17:18:15 crc kubenswrapper[4712]: I0130 17:18:15.338770 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7mlh5" event={"ID":"93b068f3-6243-416f-b7d5-4d0eaff334cf","Type":"ContainerStarted","Data":"f1a63a76e2bd392bfd44ff80e3e843258445eab7f7e7a478fa7dbe635ce8c61d"} Jan 30 17:18:16 crc kubenswrapper[4712]: I0130 17:18:16.628533 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 17:18:16 crc kubenswrapper[4712]: I0130 17:18:16.628870 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="20ecbbdb-700e-4050-973f-bb7a19df3869" containerName="glance-log" containerID="cri-o://0b65a919d5fd7848033183bdcaf4c9c29a02c8eb77e4d57633089c649a534089" gracePeriod=30 Jan 30 17:18:16 crc kubenswrapper[4712]: I0130 17:18:16.628935 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="20ecbbdb-700e-4050-973f-bb7a19df3869" containerName="glance-httpd" containerID="cri-o://65a02972203ca016739170292f0a75267baec64abb325f576c51718e5475b326" gracePeriod=30 Jan 30 17:18:17 crc kubenswrapper[4712]: I0130 17:18:17.366450 4712 generic.go:334] "Generic (PLEG): container finished" podID="20ecbbdb-700e-4050-973f-bb7a19df3869" containerID="0b65a919d5fd7848033183bdcaf4c9c29a02c8eb77e4d57633089c649a534089" exitCode=143 Jan 30 17:18:17 crc kubenswrapper[4712]: I0130 17:18:17.366474 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"20ecbbdb-700e-4050-973f-bb7a19df3869","Type":"ContainerDied","Data":"0b65a919d5fd7848033183bdcaf4c9c29a02c8eb77e4d57633089c649a534089"} Jan 30 17:18:19 crc kubenswrapper[4712]: I0130 17:18:19.084059 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 17:18:19 crc kubenswrapper[4712]: I0130 17:18:19.084544 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 30 17:18:19 crc kubenswrapper[4712]: I0130 17:18:19.881897 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 17:18:19 crc kubenswrapper[4712]: I0130 17:18:19.882583 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 30 17:18:19 crc kubenswrapper[4712]: I0130 17:18:19.895280 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 30 17:18:20 crc kubenswrapper[4712]: I0130 17:18:20.396831 4712 generic.go:334] "Generic (PLEG): container finished" podID="20ecbbdb-700e-4050-973f-bb7a19df3869" containerID="65a02972203ca016739170292f0a75267baec64abb325f576c51718e5475b326" exitCode=0 Jan 30 17:18:20 crc kubenswrapper[4712]: I0130 17:18:20.396901 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"20ecbbdb-700e-4050-973f-bb7a19df3869","Type":"ContainerDied","Data":"65a02972203ca016739170292f0a75267baec64abb325f576c51718e5475b326"} Jan 30 17:18:20 crc kubenswrapper[4712]: I0130 
Jan 30 17:18:21 crc kubenswrapper[4712]: I0130 17:18:21.406645 4712 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 30 17:18:29 crc kubenswrapper[4712]: I0130 17:18:29.274988 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 17:18:29 crc kubenswrapper[4712]: I0130 17:18:29.385007 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"20ecbbdb-700e-4050-973f-bb7a19df3869\" (UID: \"20ecbbdb-700e-4050-973f-bb7a19df3869\") "
Jan 30 17:18:29 crc kubenswrapper[4712]: I0130 17:18:29.385132 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/20ecbbdb-700e-4050-973f-bb7a19df3869-httpd-run\") pod \"20ecbbdb-700e-4050-973f-bb7a19df3869\" (UID: \"20ecbbdb-700e-4050-973f-bb7a19df3869\") "
Jan 30 17:18:29 crc kubenswrapper[4712]: I0130 17:18:29.385228 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20ecbbdb-700e-4050-973f-bb7a19df3869-scripts\") pod \"20ecbbdb-700e-4050-973f-bb7a19df3869\" (UID: \"20ecbbdb-700e-4050-973f-bb7a19df3869\") "
Jan 30 17:18:29 crc kubenswrapper[4712]: I0130 17:18:29.385683 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ecbbdb-700e-4050-973f-bb7a19df3869-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "20ecbbdb-700e-4050-973f-bb7a19df3869" (UID: "20ecbbdb-700e-4050-973f-bb7a19df3869"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:18:29 crc kubenswrapper[4712]: I0130 17:18:29.385754 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20ecbbdb-700e-4050-973f-bb7a19df3869-combined-ca-bundle\") pod \"20ecbbdb-700e-4050-973f-bb7a19df3869\" (UID: \"20ecbbdb-700e-4050-973f-bb7a19df3869\") "
Jan 30 17:18:29 crc kubenswrapper[4712]: I0130 17:18:29.386141 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvbkf\" (UniqueName: \"kubernetes.io/projected/20ecbbdb-700e-4050-973f-bb7a19df3869-kube-api-access-lvbkf\") pod \"20ecbbdb-700e-4050-973f-bb7a19df3869\" (UID: \"20ecbbdb-700e-4050-973f-bb7a19df3869\") "
Jan 30 17:18:29 crc kubenswrapper[4712]: I0130 17:18:29.386192 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/20ecbbdb-700e-4050-973f-bb7a19df3869-internal-tls-certs\") pod \"20ecbbdb-700e-4050-973f-bb7a19df3869\" (UID: \"20ecbbdb-700e-4050-973f-bb7a19df3869\") "
Jan 30 17:18:29 crc kubenswrapper[4712]: I0130 17:18:29.386261 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20ecbbdb-700e-4050-973f-bb7a19df3869-logs\") pod \"20ecbbdb-700e-4050-973f-bb7a19df3869\" (UID: \"20ecbbdb-700e-4050-973f-bb7a19df3869\") "
Jan 30 17:18:29 crc kubenswrapper[4712]: I0130 17:18:29.386327 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20ecbbdb-700e-4050-973f-bb7a19df3869-config-data\") pod \"20ecbbdb-700e-4050-973f-bb7a19df3869\" (UID: \"20ecbbdb-700e-4050-973f-bb7a19df3869\") "
Jan 30 17:18:29 crc kubenswrapper[4712]: I0130 17:18:29.387046 4712 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/20ecbbdb-700e-4050-973f-bb7a19df3869-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 30 17:18:29 crc kubenswrapper[4712]: I0130 17:18:29.387900 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ecbbdb-700e-4050-973f-bb7a19df3869-logs" (OuterVolumeSpecName: "logs") pod "20ecbbdb-700e-4050-973f-bb7a19df3869" (UID: "20ecbbdb-700e-4050-973f-bb7a19df3869"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:18:29 crc kubenswrapper[4712]: I0130 17:18:29.488861 4712 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/20ecbbdb-700e-4050-973f-bb7a19df3869-logs\") on node \"crc\" DevicePath \"\""
Jan 30 17:18:29 crc kubenswrapper[4712]: I0130 17:18:29.500870 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"20ecbbdb-700e-4050-973f-bb7a19df3869","Type":"ContainerDied","Data":"4f89820a51c504af5397f85748d65dbc037279d4bdcd1c3dbfd07e1d0658e9b0"}
Jan 30 17:18:29 crc kubenswrapper[4712]: I0130 17:18:29.500932 4712 scope.go:117] "RemoveContainer" containerID="65a02972203ca016739170292f0a75267baec64abb325f576c51718e5475b326"
Jan 30 17:18:29 crc kubenswrapper[4712]: I0130 17:18:29.500997 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 30 17:18:29 crc kubenswrapper[4712]: I0130 17:18:29.513532 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "20ecbbdb-700e-4050-973f-bb7a19df3869" (UID: "20ecbbdb-700e-4050-973f-bb7a19df3869"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 30 17:18:29 crc kubenswrapper[4712]: I0130 17:18:29.514984 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ecbbdb-700e-4050-973f-bb7a19df3869-kube-api-access-lvbkf" (OuterVolumeSpecName: "kube-api-access-lvbkf") pod "20ecbbdb-700e-4050-973f-bb7a19df3869" (UID: "20ecbbdb-700e-4050-973f-bb7a19df3869"). InnerVolumeSpecName "kube-api-access-lvbkf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:18:29 crc kubenswrapper[4712]: I0130 17:18:29.527221 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ecbbdb-700e-4050-973f-bb7a19df3869-scripts" (OuterVolumeSpecName: "scripts") pod "20ecbbdb-700e-4050-973f-bb7a19df3869" (UID: "20ecbbdb-700e-4050-973f-bb7a19df3869"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:18:29 crc kubenswrapper[4712]: I0130 17:18:29.591233 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lvbkf\" (UniqueName: \"kubernetes.io/projected/20ecbbdb-700e-4050-973f-bb7a19df3869-kube-api-access-lvbkf\") on node \"crc\" DevicePath \"\""
Jan 30 17:18:29 crc kubenswrapper[4712]: I0130 17:18:29.591285 4712 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" "
Jan 30 17:18:29 crc kubenswrapper[4712]: I0130 17:18:29.591296 4712 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20ecbbdb-700e-4050-973f-bb7a19df3869-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 17:18:29 crc kubenswrapper[4712]: I0130 17:18:29.622140 4712 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc"
Jan 30 17:18:29 crc kubenswrapper[4712]: I0130 17:18:29.662090 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ecbbdb-700e-4050-973f-bb7a19df3869-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20ecbbdb-700e-4050-973f-bb7a19df3869" (UID: "20ecbbdb-700e-4050-973f-bb7a19df3869"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:18:29 crc kubenswrapper[4712]: I0130 17:18:29.669687 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ecbbdb-700e-4050-973f-bb7a19df3869-config-data" (OuterVolumeSpecName: "config-data") pod "20ecbbdb-700e-4050-973f-bb7a19df3869" (UID: "20ecbbdb-700e-4050-973f-bb7a19df3869"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:18:29 crc kubenswrapper[4712]: I0130 17:18:29.691748 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ecbbdb-700e-4050-973f-bb7a19df3869-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "20ecbbdb-700e-4050-973f-bb7a19df3869" (UID: "20ecbbdb-700e-4050-973f-bb7a19df3869"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:18:29 crc kubenswrapper[4712]: I0130 17:18:29.693484 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20ecbbdb-700e-4050-973f-bb7a19df3869-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 17:18:29 crc kubenswrapper[4712]: I0130 17:18:29.696947 4712 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/20ecbbdb-700e-4050-973f-bb7a19df3869-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 30 17:18:29 crc kubenswrapper[4712]: I0130 17:18:29.696986 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20ecbbdb-700e-4050-973f-bb7a19df3869-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 17:18:29 crc kubenswrapper[4712]: I0130 17:18:29.697001 4712 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\""
Jan 30 17:18:29 crc kubenswrapper[4712]: I0130 17:18:29.842547 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 17:18:29 crc kubenswrapper[4712]: I0130 17:18:29.939878 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 17:18:29 crc kubenswrapper[4712]: I0130 17:18:29.953120 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 17:18:29 crc kubenswrapper[4712]: E0130 17:18:29.953598 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20ecbbdb-700e-4050-973f-bb7a19df3869" containerName="glance-log"
Jan 30 17:18:29 crc kubenswrapper[4712]: I0130 17:18:29.953626 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="20ecbbdb-700e-4050-973f-bb7a19df3869" containerName="glance-log"
Jan 30 17:18:29 crc kubenswrapper[4712]: E0130 17:18:29.953645 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20ecbbdb-700e-4050-973f-bb7a19df3869" containerName="glance-httpd"
Jan 30 17:18:29 crc kubenswrapper[4712]: I0130 17:18:29.953652 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="20ecbbdb-700e-4050-973f-bb7a19df3869" containerName="glance-httpd"
Jan 30 17:18:29 crc kubenswrapper[4712]: I0130 17:18:29.953882 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="20ecbbdb-700e-4050-973f-bb7a19df3869" containerName="glance-httpd"
Jan 30 17:18:29 crc kubenswrapper[4712]: I0130 17:18:29.953907 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="20ecbbdb-700e-4050-973f-bb7a19df3869" containerName="glance-log"
Jan 30 17:18:29 crc kubenswrapper[4712]: I0130 17:18:29.955149 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 17:18:29 crc kubenswrapper[4712]: I0130 17:18:29.959994 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 30 17:18:29 crc kubenswrapper[4712]: I0130 17:18:29.959994 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 30 17:18:29 crc kubenswrapper[4712]: I0130 17:18:29.966575 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 30 17:18:30 crc kubenswrapper[4712]: I0130 17:18:30.104741 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c0adde1-5eac-4634-8df8-ff23f73da79b-logs\") pod \"glance-default-internal-api-0\" (UID: \"5c0adde1-5eac-4634-8df8-ff23f73da79b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:18:30 crc kubenswrapper[4712]: I0130 17:18:30.105112 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c0adde1-5eac-4634-8df8-ff23f73da79b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5c0adde1-5eac-4634-8df8-ff23f73da79b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:18:30 crc kubenswrapper[4712]: I0130 17:18:30.105145 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c0adde1-5eac-4634-8df8-ff23f73da79b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5c0adde1-5eac-4634-8df8-ff23f73da79b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:18:30 crc kubenswrapper[4712]: I0130 17:18:30.105163 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6r8j\" (UniqueName: \"kubernetes.io/projected/5c0adde1-5eac-4634-8df8-ff23f73da79b-kube-api-access-g6r8j\") pod \"glance-default-internal-api-0\" (UID: \"5c0adde1-5eac-4634-8df8-ff23f73da79b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:18:30 crc kubenswrapper[4712]: I0130 17:18:30.105192 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c0adde1-5eac-4634-8df8-ff23f73da79b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5c0adde1-5eac-4634-8df8-ff23f73da79b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:18:30 crc kubenswrapper[4712]: I0130 17:18:30.105209 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c0adde1-5eac-4634-8df8-ff23f73da79b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5c0adde1-5eac-4634-8df8-ff23f73da79b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:18:30 crc kubenswrapper[4712]: I0130 17:18:30.105242 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"5c0adde1-5eac-4634-8df8-ff23f73da79b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:18:30 crc kubenswrapper[4712]: I0130 17:18:30.105280 4712 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c0adde1-5eac-4634-8df8-ff23f73da79b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5c0adde1-5eac-4634-8df8-ff23f73da79b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:18:30 crc kubenswrapper[4712]: I0130 17:18:30.207041 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c0adde1-5eac-4634-8df8-ff23f73da79b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5c0adde1-5eac-4634-8df8-ff23f73da79b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:18:30 crc kubenswrapper[4712]: I0130 17:18:30.207093 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c0adde1-5eac-4634-8df8-ff23f73da79b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5c0adde1-5eac-4634-8df8-ff23f73da79b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:18:30 crc kubenswrapper[4712]: I0130 17:18:30.207147 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"5c0adde1-5eac-4634-8df8-ff23f73da79b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:18:30 crc kubenswrapper[4712]: I0130 17:18:30.207218 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c0adde1-5eac-4634-8df8-ff23f73da79b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5c0adde1-5eac-4634-8df8-ff23f73da79b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:18:30 crc kubenswrapper[4712]: I0130 17:18:30.207331 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c0adde1-5eac-4634-8df8-ff23f73da79b-logs\") pod \"glance-default-internal-api-0\" (UID: \"5c0adde1-5eac-4634-8df8-ff23f73da79b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:18:30 crc kubenswrapper[4712]: I0130 17:18:30.207387 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c0adde1-5eac-4634-8df8-ff23f73da79b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5c0adde1-5eac-4634-8df8-ff23f73da79b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:18:30 crc kubenswrapper[4712]: I0130 17:18:30.207429 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c0adde1-5eac-4634-8df8-ff23f73da79b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5c0adde1-5eac-4634-8df8-ff23f73da79b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:18:30 crc kubenswrapper[4712]: I0130 17:18:30.207452 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6r8j\" (UniqueName: \"kubernetes.io/projected/5c0adde1-5eac-4634-8df8-ff23f73da79b-kube-api-access-g6r8j\") pod \"glance-default-internal-api-0\" (UID: \"5c0adde1-5eac-4634-8df8-ff23f73da79b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:18:30 crc kubenswrapper[4712]: I0130 17:18:30.207643 4712 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"5c0adde1-5eac-4634-8df8-ff23f73da79b\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/glance-default-internal-api-0" Jan 30 17:18:30 crc kubenswrapper[4712]: I0130 17:18:30.209119 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c0adde1-5eac-4634-8df8-ff23f73da79b-logs\") pod \"glance-default-internal-api-0\" (UID: \"5c0adde1-5eac-4634-8df8-ff23f73da79b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:18:30 crc kubenswrapper[4712]: I0130 17:18:30.211544 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c0adde1-5eac-4634-8df8-ff23f73da79b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"5c0adde1-5eac-4634-8df8-ff23f73da79b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:18:30 crc kubenswrapper[4712]: I0130 17:18:30.212643 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c0adde1-5eac-4634-8df8-ff23f73da79b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"5c0adde1-5eac-4634-8df8-ff23f73da79b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:18:30 crc kubenswrapper[4712]: I0130 17:18:30.213311 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5c0adde1-5eac-4634-8df8-ff23f73da79b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"5c0adde1-5eac-4634-8df8-ff23f73da79b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:18:30 crc kubenswrapper[4712]: I0130 17:18:30.219465 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5c0adde1-5eac-4634-8df8-ff23f73da79b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"5c0adde1-5eac-4634-8df8-ff23f73da79b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:18:30 crc kubenswrapper[4712]: I0130 17:18:30.223065 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c0adde1-5eac-4634-8df8-ff23f73da79b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"5c0adde1-5eac-4634-8df8-ff23f73da79b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:18:30 crc kubenswrapper[4712]: I0130 17:18:30.234595 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6r8j\" (UniqueName: \"kubernetes.io/projected/5c0adde1-5eac-4634-8df8-ff23f73da79b-kube-api-access-g6r8j\") pod \"glance-default-internal-api-0\" (UID: \"5c0adde1-5eac-4634-8df8-ff23f73da79b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:18:30 crc kubenswrapper[4712]: I0130 17:18:30.256743 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-0\" (UID: \"5c0adde1-5eac-4634-8df8-ff23f73da79b\") " pod="openstack/glance-default-internal-api-0" Jan 30 17:18:30 crc kubenswrapper[4712]: I0130 17:18:30.281268 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 30 17:18:30 crc kubenswrapper[4712]: I0130 17:18:30.788103 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="8d1c445c-7242-46a7-88de-707d58473c8f" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Jan 30 17:18:31 crc kubenswrapper[4712]: I0130 17:18:31.263908 4712 scope.go:117] "RemoveContainer" containerID="0b65a919d5fd7848033183bdcaf4c9c29a02c8eb77e4d57633089c649a534089" Jan 30 17:18:31 crc kubenswrapper[4712]: E0130 17:18:31.425191 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified" Jan 30 17:18:31 crc kubenswrapper[4712]: E0130 17:18:31.425590 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-cell0-conductor-db-sync,Image:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CELL_NAME,Value:cell0,ValueFrom:nil,},EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:false,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-conductor-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5kb25,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-cell0-conductor-db-sync-7mlh5_openstack(93b068f3-6243-416f-b7d5-4d0eaff334cf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 17:18:31 crc kubenswrapper[4712]: E0130 17:18:31.427547 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/nova-cell0-conductor-db-sync-7mlh5" podUID="93b068f3-6243-416f-b7d5-4d0eaff334cf" Jan 
Jan 30 17:18:31 crc kubenswrapper[4712]: E0130 17:18:31.560037 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified\\\"\"" pod="openstack/nova-cell0-conductor-db-sync-7mlh5" podUID="93b068f3-6243-416f-b7d5-4d0eaff334cf"
Jan 30 17:18:31 crc kubenswrapper[4712]: I0130 17:18:31.838578 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ecbbdb-700e-4050-973f-bb7a19df3869" path="/var/lib/kubelet/pods/20ecbbdb-700e-4050-973f-bb7a19df3869/volumes"
Jan 30 17:18:32 crc kubenswrapper[4712]: I0130 17:18:32.057890 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 30 17:18:32 crc kubenswrapper[4712]: I0130 17:18:32.392737 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 30 17:18:32 crc kubenswrapper[4712]: I0130 17:18:32.392914 4712 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 30 17:18:32 crc kubenswrapper[4712]: I0130 17:18:32.430887 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Jan 30 17:18:32 crc kubenswrapper[4712]: I0130 17:18:32.581321 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5c0adde1-5eac-4634-8df8-ff23f73da79b","Type":"ContainerStarted","Data":"abb80a56dfbb852fdc9d89670d3419a27af667b1221d2a012ac2ce1b25961a3c"}
Jan 30 17:18:33 crc kubenswrapper[4712]: I0130 17:18:33.596760 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5c0adde1-5eac-4634-8df8-ff23f73da79b","Type":"ContainerStarted","Data":"179d38620a3d4e06df0aebdd7e9e6d96114ad5f480f646df43deaf69ca0d77f1"}
Jan 30 17:18:34 crc kubenswrapper[4712]: I0130 17:18:34.607586 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"5c0adde1-5eac-4634-8df8-ff23f73da79b","Type":"ContainerStarted","Data":"dcbef48e097fe83252dbacfa69ec8f00a8c5101ad8d8e88a688c6a8290fa4d30"}
Jan 30 17:18:34 crc kubenswrapper[4712]: I0130 17:18:34.649226 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=5.649206261 podStartE2EDuration="5.649206261s" podCreationTimestamp="2026-01-30 17:18:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:18:34.639845164 +0000 UTC m=+1451.546854633" watchObservedRunningTime="2026-01-30 17:18:34.649206261 +0000 UTC m=+1451.556215730"
Jan 30 17:18:35 crc kubenswrapper[4712]: I0130 17:18:35.620537 4712 generic.go:334] "Generic (PLEG): container finished" podID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerID="8b23f706dbf8aa6538b8c9a023bfa2c07b9d28b0f58e8e9342cd27572ba0c0d2" exitCode=137
Jan 30 17:18:35 crc kubenswrapper[4712]: I0130 17:18:35.620722 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56f8b66d48-7wr47" event={"ID":"70154dd8-9d42-4a12-af9b-1be723ef892e","Type":"ContainerDied","Data":"8b23f706dbf8aa6538b8c9a023bfa2c07b9d28b0f58e8e9342cd27572ba0c0d2"}
Jan 30 17:18:35 crc kubenswrapper[4712]: I0130 17:18:35.620902 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56f8b66d48-7wr47" event={"ID":"70154dd8-9d42-4a12-af9b-1be723ef892e","Type":"ContainerStarted","Data":"7ea359681383c8315f1de54dfb90a6308c6bf781f9821a74bce1f1dbcac99cce"}
Jan 30 17:18:35 crc kubenswrapper[4712]: I0130 17:18:35.620924 4712 scope.go:117] "RemoveContainer" containerID="ca8d05a9668753b2823d10544b8f8bbf3f28554634a29614ced82a2e411f15e2"
Jan 30 17:18:35 crc kubenswrapper[4712]: I0130 17:18:35.624938 4712 generic.go:334] "Generic (PLEG): container finished" podID="6a28b495-ecf0-409e-9558-ee794a46dbd1" containerID="9af3d0805e3d6c8144d5e8f4ca5198b954ee80a23bb8c7ac20dd1a8994edf213" exitCode=137
Jan 30 17:18:35 crc kubenswrapper[4712]: I0130 17:18:35.625154 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64655dbc44-pvj2c" event={"ID":"6a28b495-ecf0-409e-9558-ee794a46dbd1","Type":"ContainerDied","Data":"9af3d0805e3d6c8144d5e8f4ca5198b954ee80a23bb8c7ac20dd1a8994edf213"}
Jan 30 17:18:35 crc kubenswrapper[4712]: I0130 17:18:35.835338 4712 scope.go:117] "RemoveContainer" containerID="0637c6cf8b9543ce9d09aa9b237dd18cd14c4de10f84d30d44b4a331a3589fa8"
Jan 30 17:18:36 crc kubenswrapper[4712]: I0130 17:18:36.636152 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64655dbc44-pvj2c" event={"ID":"6a28b495-ecf0-409e-9558-ee794a46dbd1","Type":"ContainerStarted","Data":"81106e51e98ee42b57283673e3cf02537243b70df68ffb3d9849db1d90c861a3"}
Jan 30 17:18:39 crc kubenswrapper[4712]: I0130 17:18:39.671297 4712 generic.go:334] "Generic (PLEG): container finished" podID="8d1c445c-7242-46a7-88de-707d58473c8f" containerID="53a1a945356fd9d7930183ae3af3e979f69b7ced3883a39b1b6ad5531b50ae32" exitCode=137
Jan 30 17:18:39 crc kubenswrapper[4712]: I0130 17:18:39.671811 4712 generic.go:334] "Generic (PLEG): container finished" podID="8d1c445c-7242-46a7-88de-707d58473c8f" containerID="3f28c10702c7dfff95ed44c675eb510a354448328728df9c01f6db506a7af9dc" exitCode=137
Jan 30 17:18:39 crc kubenswrapper[4712]: I0130 17:18:39.671824 4712 generic.go:334] "Generic (PLEG): container finished" podID="8d1c445c-7242-46a7-88de-707d58473c8f" containerID="fec133130f48716d2c2b85c71c1a24f507671f65914cfe3066ac4f2f5e9328a3" exitCode=137
Jan 30 17:18:39 crc kubenswrapper[4712]: I0130 17:18:39.671372 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d1c445c-7242-46a7-88de-707d58473c8f","Type":"ContainerDied","Data":"53a1a945356fd9d7930183ae3af3e979f69b7ced3883a39b1b6ad5531b50ae32"}
Jan 30 17:18:39 crc kubenswrapper[4712]: I0130 17:18:39.671855 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d1c445c-7242-46a7-88de-707d58473c8f","Type":"ContainerDied","Data":"3f28c10702c7dfff95ed44c675eb510a354448328728df9c01f6db506a7af9dc"}
Jan 30 17:18:39 crc kubenswrapper[4712]: I0130 17:18:39.671866 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d1c445c-7242-46a7-88de-707d58473c8f","Type":"ContainerDied","Data":"fec133130f48716d2c2b85c71c1a24f507671f65914cfe3066ac4f2f5e9328a3"}
Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.052225 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.137380 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d1c445c-7242-46a7-88de-707d58473c8f-scripts\") pod \"8d1c445c-7242-46a7-88de-707d58473c8f\" (UID: \"8d1c445c-7242-46a7-88de-707d58473c8f\") "
Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.137438 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d1c445c-7242-46a7-88de-707d58473c8f-log-httpd\") pod \"8d1c445c-7242-46a7-88de-707d58473c8f\" (UID: \"8d1c445c-7242-46a7-88de-707d58473c8f\") "
Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.137599 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69gj8\" (UniqueName: \"kubernetes.io/projected/8d1c445c-7242-46a7-88de-707d58473c8f-kube-api-access-69gj8\") pod \"8d1c445c-7242-46a7-88de-707d58473c8f\" (UID: \"8d1c445c-7242-46a7-88de-707d58473c8f\") "
Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.137638 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d1c445c-7242-46a7-88de-707d58473c8f-combined-ca-bundle\") pod \"8d1c445c-7242-46a7-88de-707d58473c8f\" (UID: \"8d1c445c-7242-46a7-88de-707d58473c8f\") "
Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.137735 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8d1c445c-7242-46a7-88de-707d58473c8f-sg-core-conf-yaml\") pod \"8d1c445c-7242-46a7-88de-707d58473c8f\" (UID: \"8d1c445c-7242-46a7-88de-707d58473c8f\") "
Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.137783 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d1c445c-7242-46a7-88de-707d58473c8f-run-httpd\") pod \"8d1c445c-7242-46a7-88de-707d58473c8f\" (UID: \"8d1c445c-7242-46a7-88de-707d58473c8f\") "
Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.137817 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d1c445c-7242-46a7-88de-707d58473c8f-config-data\") pod \"8d1c445c-7242-46a7-88de-707d58473c8f\" (UID: \"8d1c445c-7242-46a7-88de-707d58473c8f\") "
Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.138176 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d1c445c-7242-46a7-88de-707d58473c8f-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8d1c445c-7242-46a7-88de-707d58473c8f" (UID: "8d1c445c-7242-46a7-88de-707d58473c8f"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.138356 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d1c445c-7242-46a7-88de-707d58473c8f-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8d1c445c-7242-46a7-88de-707d58473c8f" (UID: "8d1c445c-7242-46a7-88de-707d58473c8f"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.138762 4712 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d1c445c-7242-46a7-88de-707d58473c8f-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.138782 4712 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8d1c445c-7242-46a7-88de-707d58473c8f-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.146779 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d1c445c-7242-46a7-88de-707d58473c8f-kube-api-access-69gj8" (OuterVolumeSpecName: "kube-api-access-69gj8") pod "8d1c445c-7242-46a7-88de-707d58473c8f" (UID: "8d1c445c-7242-46a7-88de-707d58473c8f"). InnerVolumeSpecName "kube-api-access-69gj8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.150315 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d1c445c-7242-46a7-88de-707d58473c8f-scripts" (OuterVolumeSpecName: "scripts") pod "8d1c445c-7242-46a7-88de-707d58473c8f" (UID: "8d1c445c-7242-46a7-88de-707d58473c8f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.232733 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d1c445c-7242-46a7-88de-707d58473c8f-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8d1c445c-7242-46a7-88de-707d58473c8f" (UID: "8d1c445c-7242-46a7-88de-707d58473c8f"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.240386 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69gj8\" (UniqueName: \"kubernetes.io/projected/8d1c445c-7242-46a7-88de-707d58473c8f-kube-api-access-69gj8\") on node \"crc\" DevicePath \"\""
Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.240420 4712 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8d1c445c-7242-46a7-88de-707d58473c8f-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.240435 4712 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d1c445c-7242-46a7-88de-707d58473c8f-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.270144 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d1c445c-7242-46a7-88de-707d58473c8f-config-data" (OuterVolumeSpecName: "config-data") pod "8d1c445c-7242-46a7-88de-707d58473c8f" (UID: "8d1c445c-7242-46a7-88de-707d58473c8f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.285179 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.286536 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.298348 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d1c445c-7242-46a7-88de-707d58473c8f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8d1c445c-7242-46a7-88de-707d58473c8f" (UID: "8d1c445c-7242-46a7-88de-707d58473c8f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.332963 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.342100 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d1c445c-7242-46a7-88de-707d58473c8f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.342125 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d1c445c-7242-46a7-88de-707d58473c8f-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.342218 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.684466 4712 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.685998 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8d1c445c-7242-46a7-88de-707d58473c8f","Type":"ContainerDied","Data":"0e6e372372086f2994311d191574074df42d413dc76e45b806a96da0326280b6"} Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.686097 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.686126 4712 scope.go:117] "RemoveContainer" containerID="53a1a945356fd9d7930183ae3af3e979f69b7ced3883a39b1b6ad5531b50ae32" Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.686458 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.706748 4712 scope.go:117] "RemoveContainer" containerID="9090253ae25f410ab835f23ef9bdadf80e5733785e74c11b75186cbfc3237118" Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.719212 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.730256 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.735897 4712 scope.go:117] "RemoveContainer" containerID="3f28c10702c7dfff95ed44c675eb510a354448328728df9c01f6db506a7af9dc" Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.754237 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 17:18:40 crc kubenswrapper[4712]: E0130 17:18:40.756568 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1c445c-7242-46a7-88de-707d58473c8f" containerName="ceilometer-central-agent" Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.756598 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1c445c-7242-46a7-88de-707d58473c8f" containerName="ceilometer-central-agent" Jan 30 17:18:40 crc kubenswrapper[4712]: E0130 17:18:40.756612 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1c445c-7242-46a7-88de-707d58473c8f" containerName="sg-core" Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.756619 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1c445c-7242-46a7-88de-707d58473c8f" containerName="sg-core" Jan 30 17:18:40 crc kubenswrapper[4712]: E0130 17:18:40.756646 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1c445c-7242-46a7-88de-707d58473c8f" containerName="proxy-httpd" Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.756655 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1c445c-7242-46a7-88de-707d58473c8f" containerName="proxy-httpd" Jan 30 17:18:40 crc kubenswrapper[4712]: E0130 17:18:40.756673 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d1c445c-7242-46a7-88de-707d58473c8f" containerName="ceilometer-notification-agent" Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.756678 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d1c445c-7242-46a7-88de-707d58473c8f" containerName="ceilometer-notification-agent" Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.756952 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d1c445c-7242-46a7-88de-707d58473c8f" containerName="ceilometer-central-agent" Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 
17:18:40.756971 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d1c445c-7242-46a7-88de-707d58473c8f" containerName="ceilometer-notification-agent" Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.756979 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d1c445c-7242-46a7-88de-707d58473c8f" containerName="sg-core" Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.757001 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d1c445c-7242-46a7-88de-707d58473c8f" containerName="proxy-httpd" Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.758645 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.763031 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.763971 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.788643 4712 scope.go:117] "RemoveContainer" containerID="fec133130f48716d2c2b85c71c1a24f507671f65914cfe3066ac4f2f5e9328a3" Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.830452 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.854894 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26ed421b-3be4-4e54-a45a-238d6a683ccc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"26ed421b-3be4-4e54-a45a-238d6a683ccc\") " pod="openstack/ceilometer-0" Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.854963 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkbt8\" (UniqueName: \"kubernetes.io/projected/26ed421b-3be4-4e54-a45a-238d6a683ccc-kube-api-access-xkbt8\") pod \"ceilometer-0\" (UID: \"26ed421b-3be4-4e54-a45a-238d6a683ccc\") " pod="openstack/ceilometer-0" Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.855017 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/26ed421b-3be4-4e54-a45a-238d6a683ccc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"26ed421b-3be4-4e54-a45a-238d6a683ccc\") " pod="openstack/ceilometer-0" Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.855095 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26ed421b-3be4-4e54-a45a-238d6a683ccc-run-httpd\") pod \"ceilometer-0\" (UID: \"26ed421b-3be4-4e54-a45a-238d6a683ccc\") " pod="openstack/ceilometer-0" Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.855117 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26ed421b-3be4-4e54-a45a-238d6a683ccc-scripts\") pod \"ceilometer-0\" (UID: \"26ed421b-3be4-4e54-a45a-238d6a683ccc\") " pod="openstack/ceilometer-0" Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.855138 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26ed421b-3be4-4e54-a45a-238d6a683ccc-config-data\") pod \"ceilometer-0\" 
(UID: \"26ed421b-3be4-4e54-a45a-238d6a683ccc\") " pod="openstack/ceilometer-0" Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.855156 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26ed421b-3be4-4e54-a45a-238d6a683ccc-log-httpd\") pod \"ceilometer-0\" (UID: \"26ed421b-3be4-4e54-a45a-238d6a683ccc\") " pod="openstack/ceilometer-0" Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.957040 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26ed421b-3be4-4e54-a45a-238d6a683ccc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"26ed421b-3be4-4e54-a45a-238d6a683ccc\") " pod="openstack/ceilometer-0" Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.957138 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkbt8\" (UniqueName: \"kubernetes.io/projected/26ed421b-3be4-4e54-a45a-238d6a683ccc-kube-api-access-xkbt8\") pod \"ceilometer-0\" (UID: \"26ed421b-3be4-4e54-a45a-238d6a683ccc\") " pod="openstack/ceilometer-0" Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.957207 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/26ed421b-3be4-4e54-a45a-238d6a683ccc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"26ed421b-3be4-4e54-a45a-238d6a683ccc\") " pod="openstack/ceilometer-0" Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.957316 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26ed421b-3be4-4e54-a45a-238d6a683ccc-run-httpd\") pod \"ceilometer-0\" (UID: \"26ed421b-3be4-4e54-a45a-238d6a683ccc\") " pod="openstack/ceilometer-0" Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.957347 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26ed421b-3be4-4e54-a45a-238d6a683ccc-scripts\") pod \"ceilometer-0\" (UID: \"26ed421b-3be4-4e54-a45a-238d6a683ccc\") " pod="openstack/ceilometer-0" Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.961782 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26ed421b-3be4-4e54-a45a-238d6a683ccc-config-data\") pod \"ceilometer-0\" (UID: \"26ed421b-3be4-4e54-a45a-238d6a683ccc\") " pod="openstack/ceilometer-0" Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.961787 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26ed421b-3be4-4e54-a45a-238d6a683ccc-run-httpd\") pod \"ceilometer-0\" (UID: \"26ed421b-3be4-4e54-a45a-238d6a683ccc\") " pod="openstack/ceilometer-0" Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.961859 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26ed421b-3be4-4e54-a45a-238d6a683ccc-log-httpd\") pod \"ceilometer-0\" (UID: \"26ed421b-3be4-4e54-a45a-238d6a683ccc\") " pod="openstack/ceilometer-0" Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.962986 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26ed421b-3be4-4e54-a45a-238d6a683ccc-log-httpd\") pod \"ceilometer-0\" (UID: 
\"26ed421b-3be4-4e54-a45a-238d6a683ccc\") " pod="openstack/ceilometer-0" Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.969691 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26ed421b-3be4-4e54-a45a-238d6a683ccc-scripts\") pod \"ceilometer-0\" (UID: \"26ed421b-3be4-4e54-a45a-238d6a683ccc\") " pod="openstack/ceilometer-0" Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.977419 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26ed421b-3be4-4e54-a45a-238d6a683ccc-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"26ed421b-3be4-4e54-a45a-238d6a683ccc\") " pod="openstack/ceilometer-0" Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.987030 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26ed421b-3be4-4e54-a45a-238d6a683ccc-config-data\") pod \"ceilometer-0\" (UID: \"26ed421b-3be4-4e54-a45a-238d6a683ccc\") " pod="openstack/ceilometer-0" Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.988164 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/26ed421b-3be4-4e54-a45a-238d6a683ccc-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"26ed421b-3be4-4e54-a45a-238d6a683ccc\") " pod="openstack/ceilometer-0" Jan 30 17:18:40 crc kubenswrapper[4712]: I0130 17:18:40.996479 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkbt8\" (UniqueName: \"kubernetes.io/projected/26ed421b-3be4-4e54-a45a-238d6a683ccc-kube-api-access-xkbt8\") pod \"ceilometer-0\" (UID: \"26ed421b-3be4-4e54-a45a-238d6a683ccc\") " pod="openstack/ceilometer-0" Jan 30 17:18:41 crc kubenswrapper[4712]: I0130 17:18:41.091124 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 17:18:41 crc kubenswrapper[4712]: I0130 17:18:41.659488 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 17:18:41 crc kubenswrapper[4712]: I0130 17:18:41.705672 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26ed421b-3be4-4e54-a45a-238d6a683ccc","Type":"ContainerStarted","Data":"6e4151af5f37f722cd20077977befda25b3622cf479d0ebea97143a51690ea6f"} Jan 30 17:18:41 crc kubenswrapper[4712]: I0130 17:18:41.825303 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d1c445c-7242-46a7-88de-707d58473c8f" path="/var/lib/kubelet/pods/8d1c445c-7242-46a7-88de-707d58473c8f/volumes" Jan 30 17:18:42 crc kubenswrapper[4712]: E0130 17:18:42.044243 4712 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d1c445c_7242_46a7_88de_707d58473c8f.slice/crio-0e6e372372086f2994311d191574074df42d413dc76e45b806a96da0326280b6\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a28b495_ecf0_409e_9558_ee794a46dbd1.slice/crio-conmon-9af3d0805e3d6c8144d5e8f4ca5198b954ee80a23bb8c7ac20dd1a8994edf213.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d1c445c_7242_46a7_88de_707d58473c8f.slice/crio-fec133130f48716d2c2b85c71c1a24f507671f65914cfe3066ac4f2f5e9328a3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d1c445c_7242_46a7_88de_707d58473c8f.slice/crio-conmon-53a1a945356fd9d7930183ae3af3e979f69b7ced3883a39b1b6ad5531b50ae32.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3199e2b6_4450_48fb_9809_3467dce0d5bd.slice/crio-conmon-f4e6333d0e34f16d543aef267504c576483d478e0a8bb4f8a20eec74f5fcb513.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f8a0938_d2f2_47bc_b923_fdcba236851f.slice/crio-4a4c4ec02a0427f7fe4cee163725854c924b4a836c6baefb2bc9c6831f330cdb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d1c445c_7242_46a7_88de_707d58473c8f.slice/crio-3f28c10702c7dfff95ed44c675eb510a354448328728df9c01f6db506a7af9dc.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0f8a0938_d2f2_47bc_b923_fdcba236851f.slice/crio-conmon-4a4c4ec02a0427f7fe4cee163725854c924b4a836c6baefb2bc9c6831f330cdb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d1c445c_7242_46a7_88de_707d58473c8f.slice/crio-conmon-fec133130f48716d2c2b85c71c1a24f507671f65914cfe3066ac4f2f5e9328a3.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d1c445c_7242_46a7_88de_707d58473c8f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d1c445c_7242_46a7_88de_707d58473c8f.slice/crio-conmon-3f28c10702c7dfff95ed44c675eb510a354448328728df9c01f6db506a7af9dc.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3199e2b6_4450_48fb_9809_3467dce0d5bd.slice/crio-f4e6333d0e34f16d543aef267504c576483d478e0a8bb4f8a20eec74f5fcb513.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6a28b495_ecf0_409e_9558_ee794a46dbd1.slice/crio-9af3d0805e3d6c8144d5e8f4ca5198b954ee80a23bb8c7ac20dd1a8994edf213.scope\": RecentStats: unable to find data in memory cache]" Jan 30 17:18:42 crc kubenswrapper[4712]: I0130 17:18:42.724167 4712 generic.go:334] "Generic (PLEG): container finished" podID="3199e2b6-4450-48fb-9809-3467dce0d5bd" containerID="f4e6333d0e34f16d543aef267504c576483d478e0a8bb4f8a20eec74f5fcb513" exitCode=137 Jan 30 17:18:42 crc kubenswrapper[4712]: I0130 17:18:42.724219 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7ff85c4bb5-kfdkk" event={"ID":"3199e2b6-4450-48fb-9809-3467dce0d5bd","Type":"ContainerDied","Data":"f4e6333d0e34f16d543aef267504c576483d478e0a8bb4f8a20eec74f5fcb513"} Jan 30 17:18:42 crc kubenswrapper[4712]: I0130 17:18:42.725508 4712 generic.go:334] "Generic (PLEG): container finished" podID="0f8a0938-d2f2-47bc-b923-fdcba236851f" containerID="4a4c4ec02a0427f7fe4cee163725854c924b4a836c6baefb2bc9c6831f330cdb" exitCode=137 Jan 30 17:18:42 crc kubenswrapper[4712]: I0130 17:18:42.725890 4712 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 17:18:42 crc kubenswrapper[4712]: I0130 17:18:42.725904 4712 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 17:18:42 crc kubenswrapper[4712]: I0130 17:18:42.725657 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5cfd5b7746-whcck" event={"ID":"0f8a0938-d2f2-47bc-b923-fdcba236851f","Type":"ContainerDied","Data":"4a4c4ec02a0427f7fe4cee163725854c924b4a836c6baefb2bc9c6831f330cdb"} Jan 30 17:18:43 crc kubenswrapper[4712]: I0130 17:18:43.093455 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5cfd5b7746-whcck" Jan 30 17:18:43 crc kubenswrapper[4712]: I0130 17:18:43.098487 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-7ff85c4bb5-kfdkk" Jan 30 17:18:43 crc kubenswrapper[4712]: I0130 17:18:43.217893 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3199e2b6-4450-48fb-9809-3467dce0d5bd-combined-ca-bundle\") pod \"3199e2b6-4450-48fb-9809-3467dce0d5bd\" (UID: \"3199e2b6-4450-48fb-9809-3467dce0d5bd\") " Jan 30 17:18:43 crc kubenswrapper[4712]: I0130 17:18:43.218206 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f8a0938-d2f2-47bc-b923-fdcba236851f-config-data-custom\") pod \"0f8a0938-d2f2-47bc-b923-fdcba236851f\" (UID: \"0f8a0938-d2f2-47bc-b923-fdcba236851f\") " Jan 30 17:18:43 crc kubenswrapper[4712]: I0130 17:18:43.218296 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3199e2b6-4450-48fb-9809-3467dce0d5bd-config-data\") pod \"3199e2b6-4450-48fb-9809-3467dce0d5bd\" (UID: \"3199e2b6-4450-48fb-9809-3467dce0d5bd\") " Jan 30 17:18:43 crc kubenswrapper[4712]: I0130 17:18:43.218397 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3199e2b6-4450-48fb-9809-3467dce0d5bd-config-data-custom\") pod \"3199e2b6-4450-48fb-9809-3467dce0d5bd\" (UID: \"3199e2b6-4450-48fb-9809-3467dce0d5bd\") " Jan 30 17:18:43 crc kubenswrapper[4712]: I0130 17:18:43.218488 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2r6k\" (UniqueName: \"kubernetes.io/projected/3199e2b6-4450-48fb-9809-3467dce0d5bd-kube-api-access-s2r6k\") pod \"3199e2b6-4450-48fb-9809-3467dce0d5bd\" (UID: \"3199e2b6-4450-48fb-9809-3467dce0d5bd\") " Jan 30 17:18:43 crc kubenswrapper[4712]: I0130 17:18:43.218573 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f8a0938-d2f2-47bc-b923-fdcba236851f-config-data\") pod \"0f8a0938-d2f2-47bc-b923-fdcba236851f\" (UID: \"0f8a0938-d2f2-47bc-b923-fdcba236851f\") " Jan 30 17:18:43 crc kubenswrapper[4712]: I0130 17:18:43.218780 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f8a0938-d2f2-47bc-b923-fdcba236851f-combined-ca-bundle\") pod \"0f8a0938-d2f2-47bc-b923-fdcba236851f\" (UID: \"0f8a0938-d2f2-47bc-b923-fdcba236851f\") " Jan 30 17:18:43 crc kubenswrapper[4712]: I0130 17:18:43.218998 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t9j6f\" (UniqueName: \"kubernetes.io/projected/0f8a0938-d2f2-47bc-b923-fdcba236851f-kube-api-access-t9j6f\") pod \"0f8a0938-d2f2-47bc-b923-fdcba236851f\" (UID: \"0f8a0938-d2f2-47bc-b923-fdcba236851f\") " Jan 30 17:18:43 crc kubenswrapper[4712]: I0130 17:18:43.229174 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3199e2b6-4450-48fb-9809-3467dce0d5bd-kube-api-access-s2r6k" (OuterVolumeSpecName: "kube-api-access-s2r6k") pod "3199e2b6-4450-48fb-9809-3467dce0d5bd" (UID: "3199e2b6-4450-48fb-9809-3467dce0d5bd"). InnerVolumeSpecName "kube-api-access-s2r6k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:18:43 crc kubenswrapper[4712]: I0130 17:18:43.230967 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3199e2b6-4450-48fb-9809-3467dce0d5bd-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3199e2b6-4450-48fb-9809-3467dce0d5bd" (UID: "3199e2b6-4450-48fb-9809-3467dce0d5bd"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:18:43 crc kubenswrapper[4712]: I0130 17:18:43.231940 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f8a0938-d2f2-47bc-b923-fdcba236851f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0f8a0938-d2f2-47bc-b923-fdcba236851f" (UID: "0f8a0938-d2f2-47bc-b923-fdcba236851f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:18:43 crc kubenswrapper[4712]: I0130 17:18:43.243111 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f8a0938-d2f2-47bc-b923-fdcba236851f-kube-api-access-t9j6f" (OuterVolumeSpecName: "kube-api-access-t9j6f") pod "0f8a0938-d2f2-47bc-b923-fdcba236851f" (UID: "0f8a0938-d2f2-47bc-b923-fdcba236851f"). InnerVolumeSpecName "kube-api-access-t9j6f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:18:43 crc kubenswrapper[4712]: I0130 17:18:43.261184 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3199e2b6-4450-48fb-9809-3467dce0d5bd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3199e2b6-4450-48fb-9809-3467dce0d5bd" (UID: "3199e2b6-4450-48fb-9809-3467dce0d5bd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:18:43 crc kubenswrapper[4712]: I0130 17:18:43.267584 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f8a0938-d2f2-47bc-b923-fdcba236851f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f8a0938-d2f2-47bc-b923-fdcba236851f" (UID: "0f8a0938-d2f2-47bc-b923-fdcba236851f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:18:43 crc kubenswrapper[4712]: I0130 17:18:43.319282 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3199e2b6-4450-48fb-9809-3467dce0d5bd-config-data" (OuterVolumeSpecName: "config-data") pod "3199e2b6-4450-48fb-9809-3467dce0d5bd" (UID: "3199e2b6-4450-48fb-9809-3467dce0d5bd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:18:43 crc kubenswrapper[4712]: I0130 17:18:43.321997 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f8a0938-d2f2-47bc-b923-fdcba236851f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:18:43 crc kubenswrapper[4712]: I0130 17:18:43.322041 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t9j6f\" (UniqueName: \"kubernetes.io/projected/0f8a0938-d2f2-47bc-b923-fdcba236851f-kube-api-access-t9j6f\") on node \"crc\" DevicePath \"\"" Jan 30 17:18:43 crc kubenswrapper[4712]: I0130 17:18:43.322054 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3199e2b6-4450-48fb-9809-3467dce0d5bd-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:18:43 crc kubenswrapper[4712]: I0130 17:18:43.322062 4712 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0f8a0938-d2f2-47bc-b923-fdcba236851f-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 17:18:43 crc kubenswrapper[4712]: I0130 17:18:43.322071 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3199e2b6-4450-48fb-9809-3467dce0d5bd-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:18:43 crc kubenswrapper[4712]: I0130 17:18:43.322081 4712 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3199e2b6-4450-48fb-9809-3467dce0d5bd-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 30 17:18:43 crc kubenswrapper[4712]: I0130 17:18:43.322089 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2r6k\" (UniqueName: \"kubernetes.io/projected/3199e2b6-4450-48fb-9809-3467dce0d5bd-kube-api-access-s2r6k\") on node \"crc\" DevicePath \"\"" Jan 30 17:18:43 crc kubenswrapper[4712]: I0130 17:18:43.336782 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f8a0938-d2f2-47bc-b923-fdcba236851f-config-data" (OuterVolumeSpecName: "config-data") pod "0f8a0938-d2f2-47bc-b923-fdcba236851f" (UID: "0f8a0938-d2f2-47bc-b923-fdcba236851f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:18:43 crc kubenswrapper[4712]: I0130 17:18:43.423917 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0f8a0938-d2f2-47bc-b923-fdcba236851f-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:18:43 crc kubenswrapper[4712]: I0130 17:18:43.735972 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-7ff85c4bb5-kfdkk" event={"ID":"3199e2b6-4450-48fb-9809-3467dce0d5bd","Type":"ContainerDied","Data":"acd8dcc19b33fc41c3cc2ab9551581e288e4779552da946d1af89a49055334fb"} Jan 30 17:18:43 crc kubenswrapper[4712]: I0130 17:18:43.736314 4712 scope.go:117] "RemoveContainer" containerID="f4e6333d0e34f16d543aef267504c576483d478e0a8bb4f8a20eec74f5fcb513" Jan 30 17:18:43 crc kubenswrapper[4712]: I0130 17:18:43.735989 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-7ff85c4bb5-kfdkk" Jan 30 17:18:43 crc kubenswrapper[4712]: I0130 17:18:43.740178 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-5cfd5b7746-whcck" event={"ID":"0f8a0938-d2f2-47bc-b923-fdcba236851f","Type":"ContainerDied","Data":"a20f6094d4f98f0da0799020de8b32f52bf40c1d427f819f92a274b432131991"} Jan 30 17:18:43 crc kubenswrapper[4712]: I0130 17:18:43.740238 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-5cfd5b7746-whcck" Jan 30 17:18:43 crc kubenswrapper[4712]: I0130 17:18:43.772962 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-7ff85c4bb5-kfdkk"] Jan 30 17:18:43 crc kubenswrapper[4712]: I0130 17:18:43.802299 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-7ff85c4bb5-kfdkk"] Jan 30 17:18:43 crc kubenswrapper[4712]: I0130 17:18:43.824964 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3199e2b6-4450-48fb-9809-3467dce0d5bd" path="/var/lib/kubelet/pods/3199e2b6-4450-48fb-9809-3467dce0d5bd/volumes" Jan 30 17:18:43 crc kubenswrapper[4712]: I0130 17:18:43.825610 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-5cfd5b7746-whcck"] Jan 30 17:18:43 crc kubenswrapper[4712]: I0130 17:18:43.831959 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-5cfd5b7746-whcck"] Jan 30 17:18:44 crc kubenswrapper[4712]: I0130 17:18:44.033662 4712 scope.go:117] "RemoveContainer" containerID="4a4c4ec02a0427f7fe4cee163725854c924b4a836c6baefb2bc9c6831f330cdb" Jan 30 17:18:44 crc kubenswrapper[4712]: I0130 17:18:44.782336 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26ed421b-3be4-4e54-a45a-238d6a683ccc","Type":"ContainerStarted","Data":"a9da71518fb05943a6adab746a24565f93e65ae387735490f3986e743d0f41b5"} Jan 30 17:18:44 crc kubenswrapper[4712]: I0130 17:18:44.785991 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7mlh5" event={"ID":"93b068f3-6243-416f-b7d5-4d0eaff334cf","Type":"ContainerStarted","Data":"a654632715e5ae6e76f3e63bb9ef2c566815550772d17f51224f70eb5b9e515b"} Jan 30 17:18:45 crc kubenswrapper[4712]: I0130 17:18:45.072661 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-56f8b66d48-7wr47" Jan 30 17:18:45 crc kubenswrapper[4712]: I0130 17:18:45.073036 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-56f8b66d48-7wr47" Jan 30 17:18:45 crc kubenswrapper[4712]: I0130 17:18:45.073968 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-56f8b66d48-7wr47" podUID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused" Jan 30 17:18:45 crc kubenswrapper[4712]: I0130 17:18:45.354184 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-64655dbc44-pvj2c" Jan 30 17:18:45 crc kubenswrapper[4712]: I0130 17:18:45.354490 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-64655dbc44-pvj2c" Jan 30 17:18:45 crc kubenswrapper[4712]: I0130 17:18:45.355747 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-64655dbc44-pvj2c" 
podUID="6a28b495-ecf0-409e-9558-ee794a46dbd1" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.156:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.156:8443: connect: connection refused" Jan 30 17:18:45 crc kubenswrapper[4712]: I0130 17:18:45.636438 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 17:18:45 crc kubenswrapper[4712]: I0130 17:18:45.636763 4712 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 17:18:45 crc kubenswrapper[4712]: I0130 17:18:45.669195 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-7mlh5" podStartSLOduration=3.625615928 podStartE2EDuration="32.669175845s" podCreationTimestamp="2026-01-30 17:18:13 +0000 UTC" firstStartedPulling="2026-01-30 17:18:15.064151117 +0000 UTC m=+1431.971160586" lastFinishedPulling="2026-01-30 17:18:44.107711034 +0000 UTC m=+1461.014720503" observedRunningTime="2026-01-30 17:18:44.806946473 +0000 UTC m=+1461.713955952" watchObservedRunningTime="2026-01-30 17:18:45.669175845 +0000 UTC m=+1462.576185314" Jan 30 17:18:45 crc kubenswrapper[4712]: I0130 17:18:45.673923 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 30 17:18:45 crc kubenswrapper[4712]: I0130 17:18:45.822919 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f8a0938-d2f2-47bc-b923-fdcba236851f" path="/var/lib/kubelet/pods/0f8a0938-d2f2-47bc-b923-fdcba236851f/volumes" Jan 30 17:18:45 crc kubenswrapper[4712]: I0130 17:18:45.823711 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26ed421b-3be4-4e54-a45a-238d6a683ccc","Type":"ContainerStarted","Data":"22c7dcf601672275957e3def49af0f571a96e791cf85306e622e850c3fbc32c5"} Jan 30 17:18:45 crc kubenswrapper[4712]: I0130 17:18:45.823735 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26ed421b-3be4-4e54-a45a-238d6a683ccc","Type":"ContainerStarted","Data":"b7af51c9c3e1699c3a997a7a8497c7d891efd2387f717bb31adc033c05b545d2"} Jan 30 17:18:46 crc kubenswrapper[4712]: I0130 17:18:46.636422 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 17:18:49 crc kubenswrapper[4712]: I0130 17:18:49.858448 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26ed421b-3be4-4e54-a45a-238d6a683ccc","Type":"ContainerStarted","Data":"cdd74a531d95ecfa5028ad9be4a2941241c18ca840906972954249874b05479d"} Jan 30 17:18:49 crc kubenswrapper[4712]: I0130 17:18:49.858637 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="26ed421b-3be4-4e54-a45a-238d6a683ccc" containerName="ceilometer-central-agent" containerID="cri-o://a9da71518fb05943a6adab746a24565f93e65ae387735490f3986e743d0f41b5" gracePeriod=30 Jan 30 17:18:49 crc kubenswrapper[4712]: I0130 17:18:49.858725 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="26ed421b-3be4-4e54-a45a-238d6a683ccc" containerName="sg-core" containerID="cri-o://22c7dcf601672275957e3def49af0f571a96e791cf85306e622e850c3fbc32c5" gracePeriod=30 Jan 30 17:18:49 crc kubenswrapper[4712]: I0130 17:18:49.858785 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="26ed421b-3be4-4e54-a45a-238d6a683ccc" containerName="ceilometer-notification-agent" containerID="cri-o://b7af51c9c3e1699c3a997a7a8497c7d891efd2387f717bb31adc033c05b545d2" gracePeriod=30 Jan 30 17:18:49 crc kubenswrapper[4712]: I0130 17:18:49.858747 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="26ed421b-3be4-4e54-a45a-238d6a683ccc" containerName="proxy-httpd" containerID="cri-o://cdd74a531d95ecfa5028ad9be4a2941241c18ca840906972954249874b05479d" gracePeriod=30 Jan 30 17:18:49 crc kubenswrapper[4712]: I0130 17:18:49.858884 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 17:18:49 crc kubenswrapper[4712]: I0130 17:18:49.882644 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.895857649 podStartE2EDuration="9.882629469s" podCreationTimestamp="2026-01-30 17:18:40 +0000 UTC" firstStartedPulling="2026-01-30 17:18:41.671004338 +0000 UTC m=+1458.578013807" lastFinishedPulling="2026-01-30 17:18:48.657776148 +0000 UTC m=+1465.564785627" observedRunningTime="2026-01-30 17:18:49.880290972 +0000 UTC m=+1466.787300441" watchObservedRunningTime="2026-01-30 17:18:49.882629469 +0000 UTC m=+1466.789638938" Jan 30 17:18:50 crc kubenswrapper[4712]: I0130 17:18:50.869454 4712 generic.go:334] "Generic (PLEG): container finished" podID="26ed421b-3be4-4e54-a45a-238d6a683ccc" containerID="22c7dcf601672275957e3def49af0f571a96e791cf85306e622e850c3fbc32c5" exitCode=2 Jan 30 17:18:50 crc kubenswrapper[4712]: I0130 17:18:50.869741 4712 generic.go:334] "Generic (PLEG): container finished" podID="26ed421b-3be4-4e54-a45a-238d6a683ccc" containerID="b7af51c9c3e1699c3a997a7a8497c7d891efd2387f717bb31adc033c05b545d2" exitCode=0 Jan 30 17:18:50 crc kubenswrapper[4712]: I0130 17:18:50.869501 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26ed421b-3be4-4e54-a45a-238d6a683ccc","Type":"ContainerDied","Data":"22c7dcf601672275957e3def49af0f571a96e791cf85306e622e850c3fbc32c5"} Jan 30 17:18:50 crc kubenswrapper[4712]: I0130 17:18:50.869784 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26ed421b-3be4-4e54-a45a-238d6a683ccc","Type":"ContainerDied","Data":"b7af51c9c3e1699c3a997a7a8497c7d891efd2387f717bb31adc033c05b545d2"} Jan 30 17:18:51 crc kubenswrapper[4712]: I0130 17:18:51.882212 4712 generic.go:334] "Generic (PLEG): container finished" podID="26ed421b-3be4-4e54-a45a-238d6a683ccc" containerID="cdd74a531d95ecfa5028ad9be4a2941241c18ca840906972954249874b05479d" exitCode=0 Jan 30 17:18:51 crc kubenswrapper[4712]: I0130 17:18:51.882303 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26ed421b-3be4-4e54-a45a-238d6a683ccc","Type":"ContainerDied","Data":"cdd74a531d95ecfa5028ad9be4a2941241c18ca840906972954249874b05479d"} Jan 30 17:18:55 crc kubenswrapper[4712]: I0130 17:18:55.072907 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-56f8b66d48-7wr47" podUID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused" Jan 30 17:18:55 crc kubenswrapper[4712]: I0130 17:18:55.353369 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-64655dbc44-pvj2c" 
podUID="6a28b495-ecf0-409e-9558-ee794a46dbd1" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.156:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.156:8443: connect: connection refused" Jan 30 17:18:57 crc kubenswrapper[4712]: I0130 17:18:57.259122 4712 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-swvjp container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 17:18:57 crc kubenswrapper[4712]: I0130 17:18:57.259471 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-swvjp" podUID="16d2b99c-7fc4-4d10-8ebc-1e726485e354" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 17:18:57 crc kubenswrapper[4712]: I0130 17:18:57.372729 4712 patch_prober.go:28] interesting pod/route-controller-manager-7449c76d86-5ljsq container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 17:18:57 crc kubenswrapper[4712]: I0130 17:18:57.372766 4712 patch_prober.go:28] interesting pod/route-controller-manager-7449c76d86-5ljsq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 17:18:57 crc kubenswrapper[4712]: I0130 17:18:57.372815 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-7449c76d86-5ljsq" podUID="18f1f168-60eb-4666-9d2f-7455021a946c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 17:18:57 crc kubenswrapper[4712]: I0130 17:18:57.372850 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7449c76d86-5ljsq" podUID="18f1f168-60eb-4666-9d2f-7455021a946c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:19:01 crc kubenswrapper[4712]: I0130 17:19:01.130014 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="6e0d9187-34f3-4d93-a189-264ff4cc933d" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.176:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:19:05 crc kubenswrapper[4712]: I0130 17:19:05.072861 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-56f8b66d48-7wr47" podUID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 
10.217.0.155:8443: connect: connection refused" Jan 30 17:19:05 crc kubenswrapper[4712]: I0130 17:19:05.073136 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-56f8b66d48-7wr47" Jan 30 17:19:05 crc kubenswrapper[4712]: I0130 17:19:05.073864 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"7ea359681383c8315f1de54dfb90a6308c6bf781f9821a74bce1f1dbcac99cce"} pod="openstack/horizon-56f8b66d48-7wr47" containerMessage="Container horizon failed startup probe, will be restarted" Jan 30 17:19:05 crc kubenswrapper[4712]: I0130 17:19:05.074079 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-56f8b66d48-7wr47" podUID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerName="horizon" containerID="cri-o://7ea359681383c8315f1de54dfb90a6308c6bf781f9821a74bce1f1dbcac99cce" gracePeriod=30 Jan 30 17:19:05 crc kubenswrapper[4712]: I0130 17:19:05.352269 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-449vr"] Jan 30 17:19:05 crc kubenswrapper[4712]: E0130 17:19:05.353055 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f8a0938-d2f2-47bc-b923-fdcba236851f" containerName="heat-cfnapi" Jan 30 17:19:05 crc kubenswrapper[4712]: I0130 17:19:05.353080 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f8a0938-d2f2-47bc-b923-fdcba236851f" containerName="heat-cfnapi" Jan 30 17:19:05 crc kubenswrapper[4712]: E0130 17:19:05.353106 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3199e2b6-4450-48fb-9809-3467dce0d5bd" containerName="heat-api" Jan 30 17:19:05 crc kubenswrapper[4712]: I0130 17:19:05.353114 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="3199e2b6-4450-48fb-9809-3467dce0d5bd" containerName="heat-api" Jan 30 17:19:05 crc kubenswrapper[4712]: I0130 17:19:05.353327 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="3199e2b6-4450-48fb-9809-3467dce0d5bd" containerName="heat-api" Jan 30 17:19:05 crc kubenswrapper[4712]: I0130 17:19:05.353363 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f8a0938-d2f2-47bc-b923-fdcba236851f" containerName="heat-cfnapi" Jan 30 17:19:05 crc kubenswrapper[4712]: I0130 17:19:05.354034 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-64655dbc44-pvj2c" podUID="6a28b495-ecf0-409e-9558-ee794a46dbd1" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.156:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.156:8443: connect: connection refused" Jan 30 17:19:05 crc kubenswrapper[4712]: I0130 17:19:05.355084 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-64655dbc44-pvj2c" Jan 30 17:19:05 crc kubenswrapper[4712]: I0130 17:19:05.355200 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-449vr" Jan 30 17:19:05 crc kubenswrapper[4712]: I0130 17:19:05.356393 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"81106e51e98ee42b57283673e3cf02537243b70df68ffb3d9849db1d90c861a3"} pod="openstack/horizon-64655dbc44-pvj2c" containerMessage="Container horizon failed startup probe, will be restarted" Jan 30 17:19:05 crc kubenswrapper[4712]: I0130 17:19:05.356442 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-64655dbc44-pvj2c" podUID="6a28b495-ecf0-409e-9558-ee794a46dbd1" containerName="horizon" containerID="cri-o://81106e51e98ee42b57283673e3cf02537243b70df68ffb3d9849db1d90c861a3" gracePeriod=30 Jan 30 17:19:05 crc kubenswrapper[4712]: I0130 17:19:05.369221 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-449vr"] Jan 30 17:19:05 crc kubenswrapper[4712]: I0130 17:19:05.482440 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92c3acde-acd8-4e20-ac11-5383b83fe945-utilities\") pod \"redhat-operators-449vr\" (UID: \"92c3acde-acd8-4e20-ac11-5383b83fe945\") " pod="openshift-marketplace/redhat-operators-449vr" Jan 30 17:19:05 crc kubenswrapper[4712]: I0130 17:19:05.482574 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92c3acde-acd8-4e20-ac11-5383b83fe945-catalog-content\") pod \"redhat-operators-449vr\" (UID: \"92c3acde-acd8-4e20-ac11-5383b83fe945\") " pod="openshift-marketplace/redhat-operators-449vr" Jan 30 17:19:05 crc kubenswrapper[4712]: I0130 17:19:05.482641 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xnxh6\" (UniqueName: \"kubernetes.io/projected/92c3acde-acd8-4e20-ac11-5383b83fe945-kube-api-access-xnxh6\") pod \"redhat-operators-449vr\" (UID: \"92c3acde-acd8-4e20-ac11-5383b83fe945\") " pod="openshift-marketplace/redhat-operators-449vr" Jan 30 17:19:05 crc kubenswrapper[4712]: I0130 17:19:05.583996 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92c3acde-acd8-4e20-ac11-5383b83fe945-catalog-content\") pod \"redhat-operators-449vr\" (UID: \"92c3acde-acd8-4e20-ac11-5383b83fe945\") " pod="openshift-marketplace/redhat-operators-449vr" Jan 30 17:19:05 crc kubenswrapper[4712]: I0130 17:19:05.584077 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xnxh6\" (UniqueName: \"kubernetes.io/projected/92c3acde-acd8-4e20-ac11-5383b83fe945-kube-api-access-xnxh6\") pod \"redhat-operators-449vr\" (UID: \"92c3acde-acd8-4e20-ac11-5383b83fe945\") " pod="openshift-marketplace/redhat-operators-449vr" Jan 30 17:19:05 crc kubenswrapper[4712]: I0130 17:19:05.584155 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92c3acde-acd8-4e20-ac11-5383b83fe945-utilities\") pod \"redhat-operators-449vr\" (UID: \"92c3acde-acd8-4e20-ac11-5383b83fe945\") " pod="openshift-marketplace/redhat-operators-449vr" Jan 30 17:19:05 crc kubenswrapper[4712]: I0130 17:19:05.584665 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/92c3acde-acd8-4e20-ac11-5383b83fe945-utilities\") pod \"redhat-operators-449vr\" (UID: \"92c3acde-acd8-4e20-ac11-5383b83fe945\") " pod="openshift-marketplace/redhat-operators-449vr" Jan 30 17:19:05 crc kubenswrapper[4712]: I0130 17:19:05.584908 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92c3acde-acd8-4e20-ac11-5383b83fe945-catalog-content\") pod \"redhat-operators-449vr\" (UID: \"92c3acde-acd8-4e20-ac11-5383b83fe945\") " pod="openshift-marketplace/redhat-operators-449vr" Jan 30 17:19:05 crc kubenswrapper[4712]: I0130 17:19:05.605629 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xnxh6\" (UniqueName: \"kubernetes.io/projected/92c3acde-acd8-4e20-ac11-5383b83fe945-kube-api-access-xnxh6\") pod \"redhat-operators-449vr\" (UID: \"92c3acde-acd8-4e20-ac11-5383b83fe945\") " pod="openshift-marketplace/redhat-operators-449vr" Jan 30 17:19:05 crc kubenswrapper[4712]: I0130 17:19:05.673051 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-449vr" Jan 30 17:19:06 crc kubenswrapper[4712]: I0130 17:19:06.172090 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="6e0d9187-34f3-4d93-a189-264ff4cc933d" containerName="cinder-scheduler" probeResult="failure" output="Get \"http://10.217.0.176:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:19:06 crc kubenswrapper[4712]: I0130 17:19:06.228331 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-449vr"] Jan 30 17:19:07 crc kubenswrapper[4712]: I0130 17:19:07.039401 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-449vr" event={"ID":"92c3acde-acd8-4e20-ac11-5383b83fe945","Type":"ContainerStarted","Data":"d40d91455dfb746ef2679f8279c259c8c31495dff9639df7da71e02e4f5d3f4a"} Jan 30 17:19:09 crc kubenswrapper[4712]: I0130 17:19:09.059231 4712 generic.go:334] "Generic (PLEG): container finished" podID="92c3acde-acd8-4e20-ac11-5383b83fe945" containerID="650b84eaccbf192d095e6589556511dfbdc3517713cedc7044c35fe01790d10c" exitCode=0 Jan 30 17:19:09 crc kubenswrapper[4712]: I0130 17:19:09.059309 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-449vr" event={"ID":"92c3acde-acd8-4e20-ac11-5383b83fe945","Type":"ContainerDied","Data":"650b84eaccbf192d095e6589556511dfbdc3517713cedc7044c35fe01790d10c"} Jan 30 17:19:10 crc kubenswrapper[4712]: I0130 17:19:10.070548 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-449vr" event={"ID":"92c3acde-acd8-4e20-ac11-5383b83fe945","Type":"ContainerStarted","Data":"d9d7652389f1e79f8ffd10ed00af3a3cdf07d8021aba740623cea3e83c88a719"} Jan 30 17:19:11 crc kubenswrapper[4712]: I0130 17:19:11.093179 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="26ed421b-3be4-4e54-a45a-238d6a683ccc" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.200:3000/\": dial tcp 10.217.0.200:3000: connect: connection refused" Jan 30 17:19:15 crc kubenswrapper[4712]: I0130 17:19:15.694993 4712 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 17:19:15 crc kubenswrapper[4712]: I0130 17:19:15.694993 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 17:19:15 crc kubenswrapper[4712]: I0130 17:19:15.793489 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xkbt8\" (UniqueName: \"kubernetes.io/projected/26ed421b-3be4-4e54-a45a-238d6a683ccc-kube-api-access-xkbt8\") pod \"26ed421b-3be4-4e54-a45a-238d6a683ccc\" (UID: \"26ed421b-3be4-4e54-a45a-238d6a683ccc\") "
Jan 30 17:19:15 crc kubenswrapper[4712]: I0130 17:19:15.793647 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26ed421b-3be4-4e54-a45a-238d6a683ccc-scripts\") pod \"26ed421b-3be4-4e54-a45a-238d6a683ccc\" (UID: \"26ed421b-3be4-4e54-a45a-238d6a683ccc\") "
Jan 30 17:19:15 crc kubenswrapper[4712]: I0130 17:19:15.793691 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26ed421b-3be4-4e54-a45a-238d6a683ccc-log-httpd\") pod \"26ed421b-3be4-4e54-a45a-238d6a683ccc\" (UID: \"26ed421b-3be4-4e54-a45a-238d6a683ccc\") "
Jan 30 17:19:15 crc kubenswrapper[4712]: I0130 17:19:15.793736 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/26ed421b-3be4-4e54-a45a-238d6a683ccc-sg-core-conf-yaml\") pod \"26ed421b-3be4-4e54-a45a-238d6a683ccc\" (UID: \"26ed421b-3be4-4e54-a45a-238d6a683ccc\") "
Jan 30 17:19:15 crc kubenswrapper[4712]: I0130 17:19:15.793765 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26ed421b-3be4-4e54-a45a-238d6a683ccc-config-data\") pod \"26ed421b-3be4-4e54-a45a-238d6a683ccc\" (UID: \"26ed421b-3be4-4e54-a45a-238d6a683ccc\") "
Jan 30 17:19:15 crc kubenswrapper[4712]: I0130 17:19:15.793877 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26ed421b-3be4-4e54-a45a-238d6a683ccc-run-httpd\") pod \"26ed421b-3be4-4e54-a45a-238d6a683ccc\" (UID: \"26ed421b-3be4-4e54-a45a-238d6a683ccc\") "
Jan 30 17:19:15 crc kubenswrapper[4712]: I0130 17:19:15.793930 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26ed421b-3be4-4e54-a45a-238d6a683ccc-combined-ca-bundle\") pod \"26ed421b-3be4-4e54-a45a-238d6a683ccc\" (UID: \"26ed421b-3be4-4e54-a45a-238d6a683ccc\") "
Jan 30 17:19:15 crc kubenswrapper[4712]: I0130 17:19:15.794377 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26ed421b-3be4-4e54-a45a-238d6a683ccc-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "26ed421b-3be4-4e54-a45a-238d6a683ccc" (UID: "26ed421b-3be4-4e54-a45a-238d6a683ccc"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:19:15 crc kubenswrapper[4712]: I0130 17:19:15.794578 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26ed421b-3be4-4e54-a45a-238d6a683ccc-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "26ed421b-3be4-4e54-a45a-238d6a683ccc" (UID: "26ed421b-3be4-4e54-a45a-238d6a683ccc"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:19:15 crc kubenswrapper[4712]: I0130 17:19:15.805032 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26ed421b-3be4-4e54-a45a-238d6a683ccc-kube-api-access-xkbt8" (OuterVolumeSpecName: "kube-api-access-xkbt8") pod "26ed421b-3be4-4e54-a45a-238d6a683ccc" (UID: "26ed421b-3be4-4e54-a45a-238d6a683ccc"). InnerVolumeSpecName "kube-api-access-xkbt8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:19:15 crc kubenswrapper[4712]: I0130 17:19:15.828842 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ed421b-3be4-4e54-a45a-238d6a683ccc-scripts" (OuterVolumeSpecName: "scripts") pod "26ed421b-3be4-4e54-a45a-238d6a683ccc" (UID: "26ed421b-3be4-4e54-a45a-238d6a683ccc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:19:15 crc kubenswrapper[4712]: I0130 17:19:15.859763 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ed421b-3be4-4e54-a45a-238d6a683ccc-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "26ed421b-3be4-4e54-a45a-238d6a683ccc" (UID: "26ed421b-3be4-4e54-a45a-238d6a683ccc"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:19:15 crc kubenswrapper[4712]: I0130 17:19:15.905443 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xkbt8\" (UniqueName: \"kubernetes.io/projected/26ed421b-3be4-4e54-a45a-238d6a683ccc-kube-api-access-xkbt8\") on node \"crc\" DevicePath \"\""
Jan 30 17:19:15 crc kubenswrapper[4712]: I0130 17:19:15.905475 4712 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/26ed421b-3be4-4e54-a45a-238d6a683ccc-scripts\") on node \"crc\" DevicePath \"\""
Jan 30 17:19:15 crc kubenswrapper[4712]: I0130 17:19:15.905490 4712 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26ed421b-3be4-4e54-a45a-238d6a683ccc-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 30 17:19:15 crc kubenswrapper[4712]: I0130 17:19:15.905501 4712 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/26ed421b-3be4-4e54-a45a-238d6a683ccc-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 30 17:19:15 crc kubenswrapper[4712]: I0130 17:19:15.905513 4712 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26ed421b-3be4-4e54-a45a-238d6a683ccc-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 30 17:19:15 crc kubenswrapper[4712]: I0130 17:19:15.940969 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ed421b-3be4-4e54-a45a-238d6a683ccc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26ed421b-3be4-4e54-a45a-238d6a683ccc" (UID: "26ed421b-3be4-4e54-a45a-238d6a683ccc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:19:15 crc kubenswrapper[4712]: I0130 17:19:15.957725 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ed421b-3be4-4e54-a45a-238d6a683ccc-config-data" (OuterVolumeSpecName: "config-data") pod "26ed421b-3be4-4e54-a45a-238d6a683ccc" (UID: "26ed421b-3be4-4e54-a45a-238d6a683ccc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.007001 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26ed421b-3be4-4e54-a45a-238d6a683ccc-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.007036 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26ed421b-3be4-4e54-a45a-238d6a683ccc-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.120661 4712 generic.go:334] "Generic (PLEG): container finished" podID="26ed421b-3be4-4e54-a45a-238d6a683ccc" containerID="a9da71518fb05943a6adab746a24565f93e65ae387735490f3986e743d0f41b5" exitCode=0
Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.120730 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.121147 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26ed421b-3be4-4e54-a45a-238d6a683ccc","Type":"ContainerDied","Data":"a9da71518fb05943a6adab746a24565f93e65ae387735490f3986e743d0f41b5"}
Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.121260 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"26ed421b-3be4-4e54-a45a-238d6a683ccc","Type":"ContainerDied","Data":"6e4151af5f37f722cd20077977befda25b3622cf479d0ebea97143a51690ea6f"}
Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.121342 4712 scope.go:117] "RemoveContainer" containerID="cdd74a531d95ecfa5028ad9be4a2941241c18ca840906972954249874b05479d"
Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.160856 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.169170 4712 scope.go:117] "RemoveContainer" containerID="22c7dcf601672275957e3def49af0f571a96e791cf85306e622e850c3fbc32c5"
Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.188595 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.198912 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 30 17:19:16 crc kubenswrapper[4712]: E0130 17:19:16.199691 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26ed421b-3be4-4e54-a45a-238d6a683ccc" containerName="sg-core"
Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.199823 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="26ed421b-3be4-4e54-a45a-238d6a683ccc" containerName="sg-core"
Jan 30 17:19:16 crc kubenswrapper[4712]: E0130 17:19:16.199915 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26ed421b-3be4-4e54-a45a-238d6a683ccc" containerName="ceilometer-notification-agent"
Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.199994 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="26ed421b-3be4-4e54-a45a-238d6a683ccc" containerName="ceilometer-notification-agent"
Jan 30 17:19:16 crc kubenswrapper[4712]: E0130 17:19:16.200088 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26ed421b-3be4-4e54-a45a-238d6a683ccc" containerName="proxy-httpd"
podUID="26ed421b-3be4-4e54-a45a-238d6a683ccc" containerName="proxy-httpd" Jan 30 17:19:16 crc kubenswrapper[4712]: E0130 17:19:16.200253 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26ed421b-3be4-4e54-a45a-238d6a683ccc" containerName="ceilometer-central-agent" Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.200335 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="26ed421b-3be4-4e54-a45a-238d6a683ccc" containerName="ceilometer-central-agent" Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.200654 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="26ed421b-3be4-4e54-a45a-238d6a683ccc" containerName="ceilometer-notification-agent" Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.200756 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="26ed421b-3be4-4e54-a45a-238d6a683ccc" containerName="ceilometer-central-agent" Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.200877 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="26ed421b-3be4-4e54-a45a-238d6a683ccc" containerName="sg-core" Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.200963 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="26ed421b-3be4-4e54-a45a-238d6a683ccc" containerName="proxy-httpd" Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.203346 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.204784 4712 scope.go:117] "RemoveContainer" containerID="b7af51c9c3e1699c3a997a7a8497c7d891efd2387f717bb31adc033c05b545d2" Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.205780 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.209251 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.211719 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.315704 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5fcf7c9a-c181-4201-8a3a-f56cf69bd24d-log-httpd\") pod \"ceilometer-0\" (UID: \"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d\") " pod="openstack/ceilometer-0" Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.315781 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5fcf7c9a-c181-4201-8a3a-f56cf69bd24d-run-httpd\") pod \"ceilometer-0\" (UID: \"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d\") " pod="openstack/ceilometer-0" Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.316139 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bzvd\" (UniqueName: \"kubernetes.io/projected/5fcf7c9a-c181-4201-8a3a-f56cf69bd24d-kube-api-access-6bzvd\") pod \"ceilometer-0\" (UID: \"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d\") " pod="openstack/ceilometer-0" Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.316211 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5fcf7c9a-c181-4201-8a3a-f56cf69bd24d-sg-core-conf-yaml\") pod 
\"ceilometer-0\" (UID: \"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d\") " pod="openstack/ceilometer-0" Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.316287 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fcf7c9a-c181-4201-8a3a-f56cf69bd24d-config-data\") pod \"ceilometer-0\" (UID: \"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d\") " pod="openstack/ceilometer-0" Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.316331 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fcf7c9a-c181-4201-8a3a-f56cf69bd24d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d\") " pod="openstack/ceilometer-0" Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.316390 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fcf7c9a-c181-4201-8a3a-f56cf69bd24d-scripts\") pod \"ceilometer-0\" (UID: \"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d\") " pod="openstack/ceilometer-0" Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.316400 4712 scope.go:117] "RemoveContainer" containerID="a9da71518fb05943a6adab746a24565f93e65ae387735490f3986e743d0f41b5" Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.340489 4712 scope.go:117] "RemoveContainer" containerID="cdd74a531d95ecfa5028ad9be4a2941241c18ca840906972954249874b05479d" Jan 30 17:19:16 crc kubenswrapper[4712]: E0130 17:19:16.340948 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdd74a531d95ecfa5028ad9be4a2941241c18ca840906972954249874b05479d\": container with ID starting with cdd74a531d95ecfa5028ad9be4a2941241c18ca840906972954249874b05479d not found: ID does not exist" containerID="cdd74a531d95ecfa5028ad9be4a2941241c18ca840906972954249874b05479d" Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.340976 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdd74a531d95ecfa5028ad9be4a2941241c18ca840906972954249874b05479d"} err="failed to get container status \"cdd74a531d95ecfa5028ad9be4a2941241c18ca840906972954249874b05479d\": rpc error: code = NotFound desc = could not find container \"cdd74a531d95ecfa5028ad9be4a2941241c18ca840906972954249874b05479d\": container with ID starting with cdd74a531d95ecfa5028ad9be4a2941241c18ca840906972954249874b05479d not found: ID does not exist" Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.340995 4712 scope.go:117] "RemoveContainer" containerID="22c7dcf601672275957e3def49af0f571a96e791cf85306e622e850c3fbc32c5" Jan 30 17:19:16 crc kubenswrapper[4712]: E0130 17:19:16.341254 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22c7dcf601672275957e3def49af0f571a96e791cf85306e622e850c3fbc32c5\": container with ID starting with 22c7dcf601672275957e3def49af0f571a96e791cf85306e622e850c3fbc32c5 not found: ID does not exist" containerID="22c7dcf601672275957e3def49af0f571a96e791cf85306e622e850c3fbc32c5" Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.341297 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22c7dcf601672275957e3def49af0f571a96e791cf85306e622e850c3fbc32c5"} err="failed to get container status 
\"22c7dcf601672275957e3def49af0f571a96e791cf85306e622e850c3fbc32c5\": rpc error: code = NotFound desc = could not find container \"22c7dcf601672275957e3def49af0f571a96e791cf85306e622e850c3fbc32c5\": container with ID starting with 22c7dcf601672275957e3def49af0f571a96e791cf85306e622e850c3fbc32c5 not found: ID does not exist" Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.341312 4712 scope.go:117] "RemoveContainer" containerID="b7af51c9c3e1699c3a997a7a8497c7d891efd2387f717bb31adc033c05b545d2" Jan 30 17:19:16 crc kubenswrapper[4712]: E0130 17:19:16.341951 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7af51c9c3e1699c3a997a7a8497c7d891efd2387f717bb31adc033c05b545d2\": container with ID starting with b7af51c9c3e1699c3a997a7a8497c7d891efd2387f717bb31adc033c05b545d2 not found: ID does not exist" containerID="b7af51c9c3e1699c3a997a7a8497c7d891efd2387f717bb31adc033c05b545d2" Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.341972 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7af51c9c3e1699c3a997a7a8497c7d891efd2387f717bb31adc033c05b545d2"} err="failed to get container status \"b7af51c9c3e1699c3a997a7a8497c7d891efd2387f717bb31adc033c05b545d2\": rpc error: code = NotFound desc = could not find container \"b7af51c9c3e1699c3a997a7a8497c7d891efd2387f717bb31adc033c05b545d2\": container with ID starting with b7af51c9c3e1699c3a997a7a8497c7d891efd2387f717bb31adc033c05b545d2 not found: ID does not exist" Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.341985 4712 scope.go:117] "RemoveContainer" containerID="a9da71518fb05943a6adab746a24565f93e65ae387735490f3986e743d0f41b5" Jan 30 17:19:16 crc kubenswrapper[4712]: E0130 17:19:16.342433 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9da71518fb05943a6adab746a24565f93e65ae387735490f3986e743d0f41b5\": container with ID starting with a9da71518fb05943a6adab746a24565f93e65ae387735490f3986e743d0f41b5 not found: ID does not exist" containerID="a9da71518fb05943a6adab746a24565f93e65ae387735490f3986e743d0f41b5" Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.342491 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9da71518fb05943a6adab746a24565f93e65ae387735490f3986e743d0f41b5"} err="failed to get container status \"a9da71518fb05943a6adab746a24565f93e65ae387735490f3986e743d0f41b5\": rpc error: code = NotFound desc = could not find container \"a9da71518fb05943a6adab746a24565f93e65ae387735490f3986e743d0f41b5\": container with ID starting with a9da71518fb05943a6adab746a24565f93e65ae387735490f3986e743d0f41b5 not found: ID does not exist" Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.418321 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5fcf7c9a-c181-4201-8a3a-f56cf69bd24d-log-httpd\") pod \"ceilometer-0\" (UID: \"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d\") " pod="openstack/ceilometer-0" Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.418674 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5fcf7c9a-c181-4201-8a3a-f56cf69bd24d-run-httpd\") pod \"ceilometer-0\" (UID: \"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d\") " pod="openstack/ceilometer-0" Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 
Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.418718 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bzvd\" (UniqueName: \"kubernetes.io/projected/5fcf7c9a-c181-4201-8a3a-f56cf69bd24d-kube-api-access-6bzvd\") pod \"ceilometer-0\" (UID: \"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d\") " pod="openstack/ceilometer-0"
Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.418763 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5fcf7c9a-c181-4201-8a3a-f56cf69bd24d-log-httpd\") pod \"ceilometer-0\" (UID: \"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d\") " pod="openstack/ceilometer-0"
Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.418851 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5fcf7c9a-c181-4201-8a3a-f56cf69bd24d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d\") " pod="openstack/ceilometer-0"
Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.418883 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fcf7c9a-c181-4201-8a3a-f56cf69bd24d-config-data\") pod \"ceilometer-0\" (UID: \"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d\") " pod="openstack/ceilometer-0"
Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.418906 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fcf7c9a-c181-4201-8a3a-f56cf69bd24d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d\") " pod="openstack/ceilometer-0"
Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.418933 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fcf7c9a-c181-4201-8a3a-f56cf69bd24d-scripts\") pod \"ceilometer-0\" (UID: \"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d\") " pod="openstack/ceilometer-0"
Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.419087 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5fcf7c9a-c181-4201-8a3a-f56cf69bd24d-run-httpd\") pod \"ceilometer-0\" (UID: \"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d\") " pod="openstack/ceilometer-0"
Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.430156 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fcf7c9a-c181-4201-8a3a-f56cf69bd24d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d\") " pod="openstack/ceilometer-0"
Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.431169 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fcf7c9a-c181-4201-8a3a-f56cf69bd24d-config-data\") pod \"ceilometer-0\" (UID: \"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d\") " pod="openstack/ceilometer-0"
Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.436643 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5fcf7c9a-c181-4201-8a3a-f56cf69bd24d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d\") " pod="openstack/ceilometer-0"
Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.437191 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bzvd\" (UniqueName: \"kubernetes.io/projected/5fcf7c9a-c181-4201-8a3a-f56cf69bd24d-kube-api-access-6bzvd\") pod \"ceilometer-0\" (UID: \"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d\") " pod="openstack/ceilometer-0"
Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.437713 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fcf7c9a-c181-4201-8a3a-f56cf69bd24d-scripts\") pod \"ceilometer-0\" (UID: \"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d\") " pod="openstack/ceilometer-0"
Jan 30 17:19:16 crc kubenswrapper[4712]: I0130 17:19:16.521757 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 17:19:17 crc kubenswrapper[4712]: I0130 17:19:17.044081 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 17:19:17 crc kubenswrapper[4712]: I0130 17:19:17.133414 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d","Type":"ContainerStarted","Data":"4baa03a1581e0efcc21cee2fdf5203d01143c27bf962dac4f3770f9c8aa5af4a"}
Jan 30 17:19:17 crc kubenswrapper[4712]: I0130 17:19:17.832139 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26ed421b-3be4-4e54-a45a-238d6a683ccc" path="/var/lib/kubelet/pods/26ed421b-3be4-4e54-a45a-238d6a683ccc/volumes"
Jan 30 17:19:18 crc kubenswrapper[4712]: I0130 17:19:18.144847 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d","Type":"ContainerStarted","Data":"dd7df3a78fc684e6b65ab212f333f9ae63c467545d0cc7cdc9c5e3421a0edcdb"}
Jan 30 17:19:18 crc kubenswrapper[4712]: I0130 17:19:18.890171 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 17:19:19 crc kubenswrapper[4712]: I0130 17:19:19.156377 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d","Type":"ContainerStarted","Data":"01ba93201e90e3f8337e7614d9e48b0cd0be5ba84185ca04e0391383726207e5"}
Jan 30 17:19:20 crc kubenswrapper[4712]: I0130 17:19:20.178696 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d","Type":"ContainerStarted","Data":"20d1602f283133daf2e2719377d999b23c7c1e9c79b8c8e860be0effc1cf8e42"}
Jan 30 17:19:20 crc kubenswrapper[4712]: I0130 17:19:20.183897 4712 generic.go:334] "Generic (PLEG): container finished" podID="92c3acde-acd8-4e20-ac11-5383b83fe945" containerID="d9d7652389f1e79f8ffd10ed00af3a3cdf07d8021aba740623cea3e83c88a719" exitCode=0
Jan 30 17:19:20 crc kubenswrapper[4712]: I0130 17:19:20.184080 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-449vr" event={"ID":"92c3acde-acd8-4e20-ac11-5383b83fe945","Type":"ContainerDied","Data":"d9d7652389f1e79f8ffd10ed00af3a3cdf07d8021aba740623cea3e83c88a719"}
Jan 30 17:19:21 crc kubenswrapper[4712]: I0130 17:19:21.195780 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-449vr" event={"ID":"92c3acde-acd8-4e20-ac11-5383b83fe945","Type":"ContainerStarted","Data":"269a14b461bfd1667f09af21e78ad8c2e1f32059864ce9bc861b02f84fb317e9"}
pod="openshift-marketplace/redhat-operators-449vr" podStartSLOduration=4.627598186 podStartE2EDuration="16.232593239s" podCreationTimestamp="2026-01-30 17:19:05 +0000 UTC" firstStartedPulling="2026-01-30 17:19:09.061057626 +0000 UTC m=+1485.968067095" lastFinishedPulling="2026-01-30 17:19:20.666052679 +0000 UTC m=+1497.573062148" observedRunningTime="2026-01-30 17:19:21.218323384 +0000 UTC m=+1498.125332853" watchObservedRunningTime="2026-01-30 17:19:21.232593239 +0000 UTC m=+1498.139602708" Jan 30 17:19:23 crc kubenswrapper[4712]: I0130 17:19:23.229919 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d","Type":"ContainerStarted","Data":"e0b52067436436c405b58fb576a53bde8a4566c58edbdf209af1beeda9de4d0e"} Jan 30 17:19:23 crc kubenswrapper[4712]: I0130 17:19:23.230463 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 17:19:23 crc kubenswrapper[4712]: I0130 17:19:23.230169 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5fcf7c9a-c181-4201-8a3a-f56cf69bd24d" containerName="sg-core" containerID="cri-o://20d1602f283133daf2e2719377d999b23c7c1e9c79b8c8e860be0effc1cf8e42" gracePeriod=30 Jan 30 17:19:23 crc kubenswrapper[4712]: I0130 17:19:23.230133 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5fcf7c9a-c181-4201-8a3a-f56cf69bd24d" containerName="ceilometer-central-agent" containerID="cri-o://dd7df3a78fc684e6b65ab212f333f9ae63c467545d0cc7cdc9c5e3421a0edcdb" gracePeriod=30 Jan 30 17:19:23 crc kubenswrapper[4712]: I0130 17:19:23.230203 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5fcf7c9a-c181-4201-8a3a-f56cf69bd24d" containerName="ceilometer-notification-agent" containerID="cri-o://01ba93201e90e3f8337e7614d9e48b0cd0be5ba84185ca04e0391383726207e5" gracePeriod=30 Jan 30 17:19:23 crc kubenswrapper[4712]: I0130 17:19:23.230189 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="5fcf7c9a-c181-4201-8a3a-f56cf69bd24d" containerName="proxy-httpd" containerID="cri-o://e0b52067436436c405b58fb576a53bde8a4566c58edbdf209af1beeda9de4d0e" gracePeriod=30 Jan 30 17:19:23 crc kubenswrapper[4712]: I0130 17:19:23.264633 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.344024415 podStartE2EDuration="7.264615669s" podCreationTimestamp="2026-01-30 17:19:16 +0000 UTC" firstStartedPulling="2026-01-30 17:19:17.045285539 +0000 UTC m=+1493.952295028" lastFinishedPulling="2026-01-30 17:19:21.965876813 +0000 UTC m=+1498.872886282" observedRunningTime="2026-01-30 17:19:23.264029445 +0000 UTC m=+1500.171038924" watchObservedRunningTime="2026-01-30 17:19:23.264615669 +0000 UTC m=+1500.171625138" Jan 30 17:19:24 crc kubenswrapper[4712]: I0130 17:19:24.239359 4712 generic.go:334] "Generic (PLEG): container finished" podID="93b068f3-6243-416f-b7d5-4d0eaff334cf" containerID="a654632715e5ae6e76f3e63bb9ef2c566815550772d17f51224f70eb5b9e515b" exitCode=0 Jan 30 17:19:24 crc kubenswrapper[4712]: I0130 17:19:24.239434 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7mlh5" event={"ID":"93b068f3-6243-416f-b7d5-4d0eaff334cf","Type":"ContainerDied","Data":"a654632715e5ae6e76f3e63bb9ef2c566815550772d17f51224f70eb5b9e515b"} Jan 30 
Jan 30 17:19:24 crc kubenswrapper[4712]: I0130 17:19:24.243353 4712 generic.go:334] "Generic (PLEG): container finished" podID="5fcf7c9a-c181-4201-8a3a-f56cf69bd24d" containerID="e0b52067436436c405b58fb576a53bde8a4566c58edbdf209af1beeda9de4d0e" exitCode=0
Jan 30 17:19:24 crc kubenswrapper[4712]: I0130 17:19:24.243389 4712 generic.go:334] "Generic (PLEG): container finished" podID="5fcf7c9a-c181-4201-8a3a-f56cf69bd24d" containerID="20d1602f283133daf2e2719377d999b23c7c1e9c79b8c8e860be0effc1cf8e42" exitCode=2
Jan 30 17:19:24 crc kubenswrapper[4712]: I0130 17:19:24.243399 4712 generic.go:334] "Generic (PLEG): container finished" podID="5fcf7c9a-c181-4201-8a3a-f56cf69bd24d" containerID="01ba93201e90e3f8337e7614d9e48b0cd0be5ba84185ca04e0391383726207e5" exitCode=0
Jan 30 17:19:24 crc kubenswrapper[4712]: I0130 17:19:24.243422 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d","Type":"ContainerDied","Data":"e0b52067436436c405b58fb576a53bde8a4566c58edbdf209af1beeda9de4d0e"}
Jan 30 17:19:24 crc kubenswrapper[4712]: I0130 17:19:24.243447 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d","Type":"ContainerDied","Data":"20d1602f283133daf2e2719377d999b23c7c1e9c79b8c8e860be0effc1cf8e42"}
Jan 30 17:19:24 crc kubenswrapper[4712]: I0130 17:19:24.243458 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d","Type":"ContainerDied","Data":"01ba93201e90e3f8337e7614d9e48b0cd0be5ba84185ca04e0391383726207e5"}
Jan 30 17:19:25 crc kubenswrapper[4712]: I0130 17:19:25.670881 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7mlh5"
Jan 30 17:19:25 crc kubenswrapper[4712]: I0130 17:19:25.673926 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-449vr"
Jan 30 17:19:25 crc kubenswrapper[4712]: I0130 17:19:25.673957 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-449vr"
Jan 30 17:19:25 crc kubenswrapper[4712]: I0130 17:19:25.808354 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93b068f3-6243-416f-b7d5-4d0eaff334cf-combined-ca-bundle\") pod \"93b068f3-6243-416f-b7d5-4d0eaff334cf\" (UID: \"93b068f3-6243-416f-b7d5-4d0eaff334cf\") "
Jan 30 17:19:25 crc kubenswrapper[4712]: I0130 17:19:25.808662 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93b068f3-6243-416f-b7d5-4d0eaff334cf-config-data\") pod \"93b068f3-6243-416f-b7d5-4d0eaff334cf\" (UID: \"93b068f3-6243-416f-b7d5-4d0eaff334cf\") "
Jan 30 17:19:25 crc kubenswrapper[4712]: I0130 17:19:25.808911 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93b068f3-6243-416f-b7d5-4d0eaff334cf-scripts\") pod \"93b068f3-6243-416f-b7d5-4d0eaff334cf\" (UID: \"93b068f3-6243-416f-b7d5-4d0eaff334cf\") "
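
[Editor's note] The "Killing container with a grace period" entries above (gracePeriod=30) and the exit codes that follow (0 for the agents, 2 for sg-core) reflect the usual shutdown contract: SIGTERM first, SIGKILL only if the process outlives the grace period. A generic local-process sketch of that pattern (Unix-only; it illustrates the contract, not kubelet/CRI internals):

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithGrace sends SIGTERM, waits up to grace, then escalates to SIGKILL.
func stopWithGrace(cmd *exec.Cmd, grace time.Duration) error {
	_ = cmd.Process.Signal(syscall.SIGTERM)
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period, cleanly or not
	case <-time.After(grace):
		_ = cmd.Process.Kill() // SIGKILL after the grace period expires
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "300")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	// 30s mirrors the gracePeriod=30 in the log; sleep dies on SIGTERM
	// immediately, so this returns well before the deadline.
	fmt.Println("stop result:", stopWithGrace(cmd, 30*time.Second))
}
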
\"93b068f3-6243-416f-b7d5-4d0eaff334cf\" (UID: \"93b068f3-6243-416f-b7d5-4d0eaff334cf\") " Jan 30 17:19:25 crc kubenswrapper[4712]: I0130 17:19:25.818116 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93b068f3-6243-416f-b7d5-4d0eaff334cf-kube-api-access-5kb25" (OuterVolumeSpecName: "kube-api-access-5kb25") pod "93b068f3-6243-416f-b7d5-4d0eaff334cf" (UID: "93b068f3-6243-416f-b7d5-4d0eaff334cf"). InnerVolumeSpecName "kube-api-access-5kb25". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:19:25 crc kubenswrapper[4712]: I0130 17:19:25.827724 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93b068f3-6243-416f-b7d5-4d0eaff334cf-scripts" (OuterVolumeSpecName: "scripts") pod "93b068f3-6243-416f-b7d5-4d0eaff334cf" (UID: "93b068f3-6243-416f-b7d5-4d0eaff334cf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:19:25 crc kubenswrapper[4712]: I0130 17:19:25.892411 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93b068f3-6243-416f-b7d5-4d0eaff334cf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93b068f3-6243-416f-b7d5-4d0eaff334cf" (UID: "93b068f3-6243-416f-b7d5-4d0eaff334cf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:19:25 crc kubenswrapper[4712]: I0130 17:19:25.897436 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93b068f3-6243-416f-b7d5-4d0eaff334cf-config-data" (OuterVolumeSpecName: "config-data") pod "93b068f3-6243-416f-b7d5-4d0eaff334cf" (UID: "93b068f3-6243-416f-b7d5-4d0eaff334cf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:19:25 crc kubenswrapper[4712]: I0130 17:19:25.912009 4712 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93b068f3-6243-416f-b7d5-4d0eaff334cf-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:19:25 crc kubenswrapper[4712]: I0130 17:19:25.912044 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kb25\" (UniqueName: \"kubernetes.io/projected/93b068f3-6243-416f-b7d5-4d0eaff334cf-kube-api-access-5kb25\") on node \"crc\" DevicePath \"\"" Jan 30 17:19:25 crc kubenswrapper[4712]: I0130 17:19:25.912060 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93b068f3-6243-416f-b7d5-4d0eaff334cf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:19:25 crc kubenswrapper[4712]: I0130 17:19:25.912072 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93b068f3-6243-416f-b7d5-4d0eaff334cf-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:19:26 crc kubenswrapper[4712]: I0130 17:19:26.066419 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 17:19:26 crc kubenswrapper[4712]: E0130 17:19:26.066775 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93b068f3-6243-416f-b7d5-4d0eaff334cf" containerName="nova-cell0-conductor-db-sync" Jan 30 17:19:26 crc kubenswrapper[4712]: I0130 17:19:26.066814 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="93b068f3-6243-416f-b7d5-4d0eaff334cf" containerName="nova-cell0-conductor-db-sync" Jan 30 17:19:26 crc kubenswrapper[4712]: I0130 17:19:26.067011 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="93b068f3-6243-416f-b7d5-4d0eaff334cf" containerName="nova-cell0-conductor-db-sync" Jan 30 17:19:26 crc kubenswrapper[4712]: I0130 17:19:26.067578 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 17:19:26 crc kubenswrapper[4712]: I0130 17:19:26.110564 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 17:19:26 crc kubenswrapper[4712]: I0130 17:19:26.218146 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76a0f5cf-d830-475d-bded-4975230ef33a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"76a0f5cf-d830-475d-bded-4975230ef33a\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:19:26 crc kubenswrapper[4712]: I0130 17:19:26.218277 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29h76\" (UniqueName: \"kubernetes.io/projected/76a0f5cf-d830-475d-bded-4975230ef33a-kube-api-access-29h76\") pod \"nova-cell0-conductor-0\" (UID: \"76a0f5cf-d830-475d-bded-4975230ef33a\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:19:26 crc kubenswrapper[4712]: I0130 17:19:26.218539 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76a0f5cf-d830-475d-bded-4975230ef33a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"76a0f5cf-d830-475d-bded-4975230ef33a\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:19:26 crc kubenswrapper[4712]: I0130 17:19:26.261863 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-7mlh5" event={"ID":"93b068f3-6243-416f-b7d5-4d0eaff334cf","Type":"ContainerDied","Data":"f1a63a76e2bd392bfd44ff80e3e843258445eab7f7e7a478fa7dbe635ce8c61d"} Jan 30 17:19:26 crc kubenswrapper[4712]: I0130 17:19:26.261908 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1a63a76e2bd392bfd44ff80e3e843258445eab7f7e7a478fa7dbe635ce8c61d" Jan 30 17:19:26 crc kubenswrapper[4712]: I0130 17:19:26.261986 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-7mlh5" Jan 30 17:19:26 crc kubenswrapper[4712]: I0130 17:19:26.320344 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76a0f5cf-d830-475d-bded-4975230ef33a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"76a0f5cf-d830-475d-bded-4975230ef33a\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:19:26 crc kubenswrapper[4712]: I0130 17:19:26.320431 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-29h76\" (UniqueName: \"kubernetes.io/projected/76a0f5cf-d830-475d-bded-4975230ef33a-kube-api-access-29h76\") pod \"nova-cell0-conductor-0\" (UID: \"76a0f5cf-d830-475d-bded-4975230ef33a\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:19:26 crc kubenswrapper[4712]: I0130 17:19:26.320497 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76a0f5cf-d830-475d-bded-4975230ef33a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"76a0f5cf-d830-475d-bded-4975230ef33a\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:19:26 crc kubenswrapper[4712]: I0130 17:19:26.325956 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76a0f5cf-d830-475d-bded-4975230ef33a-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"76a0f5cf-d830-475d-bded-4975230ef33a\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:19:26 crc kubenswrapper[4712]: I0130 17:19:26.326720 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76a0f5cf-d830-475d-bded-4975230ef33a-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"76a0f5cf-d830-475d-bded-4975230ef33a\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:19:26 crc kubenswrapper[4712]: I0130 17:19:26.349274 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-29h76\" (UniqueName: \"kubernetes.io/projected/76a0f5cf-d830-475d-bded-4975230ef33a-kube-api-access-29h76\") pod \"nova-cell0-conductor-0\" (UID: \"76a0f5cf-d830-475d-bded-4975230ef33a\") " pod="openstack/nova-cell0-conductor-0" Jan 30 17:19:26 crc kubenswrapper[4712]: I0130 17:19:26.407534 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 30 17:19:26 crc kubenswrapper[4712]: I0130 17:19:26.734904 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-449vr" podUID="92c3acde-acd8-4e20-ac11-5383b83fe945" containerName="registry-server" probeResult="failure" output=< Jan 30 17:19:26 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 17:19:26 crc kubenswrapper[4712]: > Jan 30 17:19:26 crc kubenswrapper[4712]: I0130 17:19:26.913159 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 30 17:19:27 crc kubenswrapper[4712]: I0130 17:19:27.273155 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"76a0f5cf-d830-475d-bded-4975230ef33a","Type":"ContainerStarted","Data":"6b793ddafc3e324d1a59b6ea693e009b0e5754bd8e40a13543b856e287c545fb"} Jan 30 17:19:27 crc kubenswrapper[4712]: I0130 17:19:27.273647 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 30 17:19:27 crc kubenswrapper[4712]: I0130 17:19:27.273663 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"76a0f5cf-d830-475d-bded-4975230ef33a","Type":"ContainerStarted","Data":"25f11f80a3d5faaffa6bca65c1cd54c89f5375222f335ce561ef3f021006bcf3"} Jan 30 17:19:27 crc kubenswrapper[4712]: I0130 17:19:27.297038 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=1.297015034 podStartE2EDuration="1.297015034s" podCreationTimestamp="2026-01-30 17:19:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:19:27.292310801 +0000 UTC m=+1504.199320270" watchObservedRunningTime="2026-01-30 17:19:27.297015034 +0000 UTC m=+1504.204024503" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.112742 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.211647 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5fcf7c9a-c181-4201-8a3a-f56cf69bd24d-log-httpd\") pod \"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d\" (UID: \"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d\") " Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.211734 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5fcf7c9a-c181-4201-8a3a-f56cf69bd24d-run-httpd\") pod \"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d\" (UID: \"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d\") " Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.211778 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fcf7c9a-c181-4201-8a3a-f56cf69bd24d-config-data\") pod \"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d\" (UID: \"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d\") " Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.211845 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bzvd\" (UniqueName: \"kubernetes.io/projected/5fcf7c9a-c181-4201-8a3a-f56cf69bd24d-kube-api-access-6bzvd\") pod \"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d\" (UID: \"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d\") " Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.211862 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fcf7c9a-c181-4201-8a3a-f56cf69bd24d-combined-ca-bundle\") pod \"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d\" (UID: \"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d\") " Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.211937 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5fcf7c9a-c181-4201-8a3a-f56cf69bd24d-sg-core-conf-yaml\") pod \"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d\" (UID: \"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d\") " Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.211993 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fcf7c9a-c181-4201-8a3a-f56cf69bd24d-scripts\") pod \"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d\" (UID: \"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d\") " Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.212273 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fcf7c9a-c181-4201-8a3a-f56cf69bd24d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "5fcf7c9a-c181-4201-8a3a-f56cf69bd24d" (UID: "5fcf7c9a-c181-4201-8a3a-f56cf69bd24d"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.212366 4712 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5fcf7c9a-c181-4201-8a3a-f56cf69bd24d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.212529 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5fcf7c9a-c181-4201-8a3a-f56cf69bd24d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "5fcf7c9a-c181-4201-8a3a-f56cf69bd24d" (UID: "5fcf7c9a-c181-4201-8a3a-f56cf69bd24d"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.217212 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fcf7c9a-c181-4201-8a3a-f56cf69bd24d-scripts" (OuterVolumeSpecName: "scripts") pod "5fcf7c9a-c181-4201-8a3a-f56cf69bd24d" (UID: "5fcf7c9a-c181-4201-8a3a-f56cf69bd24d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.229958 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fcf7c9a-c181-4201-8a3a-f56cf69bd24d-kube-api-access-6bzvd" (OuterVolumeSpecName: "kube-api-access-6bzvd") pod "5fcf7c9a-c181-4201-8a3a-f56cf69bd24d" (UID: "5fcf7c9a-c181-4201-8a3a-f56cf69bd24d"). InnerVolumeSpecName "kube-api-access-6bzvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.243669 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fcf7c9a-c181-4201-8a3a-f56cf69bd24d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "5fcf7c9a-c181-4201-8a3a-f56cf69bd24d" (UID: "5fcf7c9a-c181-4201-8a3a-f56cf69bd24d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.305742 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fcf7c9a-c181-4201-8a3a-f56cf69bd24d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5fcf7c9a-c181-4201-8a3a-f56cf69bd24d" (UID: "5fcf7c9a-c181-4201-8a3a-f56cf69bd24d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.313852 4712 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/5fcf7c9a-c181-4201-8a3a-f56cf69bd24d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.313880 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5fcf7c9a-c181-4201-8a3a-f56cf69bd24d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.313893 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bzvd\" (UniqueName: \"kubernetes.io/projected/5fcf7c9a-c181-4201-8a3a-f56cf69bd24d-kube-api-access-6bzvd\") on node \"crc\" DevicePath \"\"" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.313902 4712 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/5fcf7c9a-c181-4201-8a3a-f56cf69bd24d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.313912 4712 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5fcf7c9a-c181-4201-8a3a-f56cf69bd24d-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.320654 4712 generic.go:334] "Generic (PLEG): container finished" podID="5fcf7c9a-c181-4201-8a3a-f56cf69bd24d" containerID="dd7df3a78fc684e6b65ab212f333f9ae63c467545d0cc7cdc9c5e3421a0edcdb" exitCode=0 Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.320691 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d","Type":"ContainerDied","Data":"dd7df3a78fc684e6b65ab212f333f9ae63c467545d0cc7cdc9c5e3421a0edcdb"} Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.320716 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"5fcf7c9a-c181-4201-8a3a-f56cf69bd24d","Type":"ContainerDied","Data":"4baa03a1581e0efcc21cee2fdf5203d01143c27bf962dac4f3770f9c8aa5af4a"} Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.320733 4712 scope.go:117] "RemoveContainer" containerID="e0b52067436436c405b58fb576a53bde8a4566c58edbdf209af1beeda9de4d0e" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.321008 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.339301 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fcf7c9a-c181-4201-8a3a-f56cf69bd24d-config-data" (OuterVolumeSpecName: "config-data") pod "5fcf7c9a-c181-4201-8a3a-f56cf69bd24d" (UID: "5fcf7c9a-c181-4201-8a3a-f56cf69bd24d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.416183 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5fcf7c9a-c181-4201-8a3a-f56cf69bd24d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.440921 4712 scope.go:117] "RemoveContainer" containerID="20d1602f283133daf2e2719377d999b23c7c1e9c79b8c8e860be0effc1cf8e42" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.459616 4712 scope.go:117] "RemoveContainer" containerID="01ba93201e90e3f8337e7614d9e48b0cd0be5ba84185ca04e0391383726207e5" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.476885 4712 scope.go:117] "RemoveContainer" containerID="dd7df3a78fc684e6b65ab212f333f9ae63c467545d0cc7cdc9c5e3421a0edcdb" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.500863 4712 scope.go:117] "RemoveContainer" containerID="e0b52067436436c405b58fb576a53bde8a4566c58edbdf209af1beeda9de4d0e" Jan 30 17:19:31 crc kubenswrapper[4712]: E0130 17:19:31.501332 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0b52067436436c405b58fb576a53bde8a4566c58edbdf209af1beeda9de4d0e\": container with ID starting with e0b52067436436c405b58fb576a53bde8a4566c58edbdf209af1beeda9de4d0e not found: ID does not exist" containerID="e0b52067436436c405b58fb576a53bde8a4566c58edbdf209af1beeda9de4d0e" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.501368 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0b52067436436c405b58fb576a53bde8a4566c58edbdf209af1beeda9de4d0e"} err="failed to get container status \"e0b52067436436c405b58fb576a53bde8a4566c58edbdf209af1beeda9de4d0e\": rpc error: code = NotFound desc = could not find container \"e0b52067436436c405b58fb576a53bde8a4566c58edbdf209af1beeda9de4d0e\": container with ID starting with e0b52067436436c405b58fb576a53bde8a4566c58edbdf209af1beeda9de4d0e not found: ID does not exist" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.501392 4712 scope.go:117] "RemoveContainer" containerID="20d1602f283133daf2e2719377d999b23c7c1e9c79b8c8e860be0effc1cf8e42" Jan 30 17:19:31 crc kubenswrapper[4712]: E0130 17:19:31.501735 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20d1602f283133daf2e2719377d999b23c7c1e9c79b8c8e860be0effc1cf8e42\": container with ID starting with 20d1602f283133daf2e2719377d999b23c7c1e9c79b8c8e860be0effc1cf8e42 not found: ID does not exist" containerID="20d1602f283133daf2e2719377d999b23c7c1e9c79b8c8e860be0effc1cf8e42" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.501761 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20d1602f283133daf2e2719377d999b23c7c1e9c79b8c8e860be0effc1cf8e42"} err="failed to get container status \"20d1602f283133daf2e2719377d999b23c7c1e9c79b8c8e860be0effc1cf8e42\": rpc error: code = NotFound desc = could not find container \"20d1602f283133daf2e2719377d999b23c7c1e9c79b8c8e860be0effc1cf8e42\": container with ID starting with 20d1602f283133daf2e2719377d999b23c7c1e9c79b8c8e860be0effc1cf8e42 not found: ID does not exist" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.501779 4712 scope.go:117] "RemoveContainer" containerID="01ba93201e90e3f8337e7614d9e48b0cd0be5ba84185ca04e0391383726207e5" Jan 30 17:19:31 crc kubenswrapper[4712]: E0130 
17:19:31.502328 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01ba93201e90e3f8337e7614d9e48b0cd0be5ba84185ca04e0391383726207e5\": container with ID starting with 01ba93201e90e3f8337e7614d9e48b0cd0be5ba84185ca04e0391383726207e5 not found: ID does not exist" containerID="01ba93201e90e3f8337e7614d9e48b0cd0be5ba84185ca04e0391383726207e5" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.502388 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01ba93201e90e3f8337e7614d9e48b0cd0be5ba84185ca04e0391383726207e5"} err="failed to get container status \"01ba93201e90e3f8337e7614d9e48b0cd0be5ba84185ca04e0391383726207e5\": rpc error: code = NotFound desc = could not find container \"01ba93201e90e3f8337e7614d9e48b0cd0be5ba84185ca04e0391383726207e5\": container with ID starting with 01ba93201e90e3f8337e7614d9e48b0cd0be5ba84185ca04e0391383726207e5 not found: ID does not exist" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.502427 4712 scope.go:117] "RemoveContainer" containerID="dd7df3a78fc684e6b65ab212f333f9ae63c467545d0cc7cdc9c5e3421a0edcdb" Jan 30 17:19:31 crc kubenswrapper[4712]: E0130 17:19:31.503162 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd7df3a78fc684e6b65ab212f333f9ae63c467545d0cc7cdc9c5e3421a0edcdb\": container with ID starting with dd7df3a78fc684e6b65ab212f333f9ae63c467545d0cc7cdc9c5e3421a0edcdb not found: ID does not exist" containerID="dd7df3a78fc684e6b65ab212f333f9ae63c467545d0cc7cdc9c5e3421a0edcdb" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.503193 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd7df3a78fc684e6b65ab212f333f9ae63c467545d0cc7cdc9c5e3421a0edcdb"} err="failed to get container status \"dd7df3a78fc684e6b65ab212f333f9ae63c467545d0cc7cdc9c5e3421a0edcdb\": rpc error: code = NotFound desc = could not find container \"dd7df3a78fc684e6b65ab212f333f9ae63c467545d0cc7cdc9c5e3421a0edcdb\": container with ID starting with dd7df3a78fc684e6b65ab212f333f9ae63c467545d0cc7cdc9c5e3421a0edcdb not found: ID does not exist" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.660284 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.672176 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.691728 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 17:19:31 crc kubenswrapper[4712]: E0130 17:19:31.692256 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fcf7c9a-c181-4201-8a3a-f56cf69bd24d" containerName="sg-core" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.692277 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fcf7c9a-c181-4201-8a3a-f56cf69bd24d" containerName="sg-core" Jan 30 17:19:31 crc kubenswrapper[4712]: E0130 17:19:31.692294 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fcf7c9a-c181-4201-8a3a-f56cf69bd24d" containerName="ceilometer-notification-agent" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.692303 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fcf7c9a-c181-4201-8a3a-f56cf69bd24d" containerName="ceilometer-notification-agent" Jan 30 17:19:31 crc kubenswrapper[4712]: E0130 
17:19:31.692319 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fcf7c9a-c181-4201-8a3a-f56cf69bd24d" containerName="ceilometer-central-agent" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.692335 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fcf7c9a-c181-4201-8a3a-f56cf69bd24d" containerName="ceilometer-central-agent" Jan 30 17:19:31 crc kubenswrapper[4712]: E0130 17:19:31.692370 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5fcf7c9a-c181-4201-8a3a-f56cf69bd24d" containerName="proxy-httpd" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.692378 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="5fcf7c9a-c181-4201-8a3a-f56cf69bd24d" containerName="proxy-httpd" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.692603 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fcf7c9a-c181-4201-8a3a-f56cf69bd24d" containerName="ceilometer-notification-agent" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.692628 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fcf7c9a-c181-4201-8a3a-f56cf69bd24d" containerName="proxy-httpd" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.692643 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fcf7c9a-c181-4201-8a3a-f56cf69bd24d" containerName="ceilometer-central-agent" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.692660 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="5fcf7c9a-c181-4201-8a3a-f56cf69bd24d" containerName="sg-core" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.694868 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.701956 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.702386 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.709837 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.720886 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74969a69-d6be-4c12-9dd0-7a529e73737d-run-httpd\") pod \"ceilometer-0\" (UID: \"74969a69-d6be-4c12-9dd0-7a529e73737d\") " pod="openstack/ceilometer-0" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.720934 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74969a69-d6be-4c12-9dd0-7a529e73737d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"74969a69-d6be-4c12-9dd0-7a529e73737d\") " pod="openstack/ceilometer-0" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.721142 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/74969a69-d6be-4c12-9dd0-7a529e73737d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"74969a69-d6be-4c12-9dd0-7a529e73737d\") " pod="openstack/ceilometer-0" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.721234 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/74969a69-d6be-4c12-9dd0-7a529e73737d-scripts\") pod \"ceilometer-0\" (UID: \"74969a69-d6be-4c12-9dd0-7a529e73737d\") " pod="openstack/ceilometer-0" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.721285 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74969a69-d6be-4c12-9dd0-7a529e73737d-config-data\") pod \"ceilometer-0\" (UID: \"74969a69-d6be-4c12-9dd0-7a529e73737d\") " pod="openstack/ceilometer-0" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.721363 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qhn2\" (UniqueName: \"kubernetes.io/projected/74969a69-d6be-4c12-9dd0-7a529e73737d-kube-api-access-8qhn2\") pod \"ceilometer-0\" (UID: \"74969a69-d6be-4c12-9dd0-7a529e73737d\") " pod="openstack/ceilometer-0" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.721434 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74969a69-d6be-4c12-9dd0-7a529e73737d-log-httpd\") pod \"ceilometer-0\" (UID: \"74969a69-d6be-4c12-9dd0-7a529e73737d\") " pod="openstack/ceilometer-0" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.813034 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fcf7c9a-c181-4201-8a3a-f56cf69bd24d" path="/var/lib/kubelet/pods/5fcf7c9a-c181-4201-8a3a-f56cf69bd24d/volumes" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.823328 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/74969a69-d6be-4c12-9dd0-7a529e73737d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"74969a69-d6be-4c12-9dd0-7a529e73737d\") " pod="openstack/ceilometer-0" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.823429 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74969a69-d6be-4c12-9dd0-7a529e73737d-scripts\") pod \"ceilometer-0\" (UID: \"74969a69-d6be-4c12-9dd0-7a529e73737d\") " pod="openstack/ceilometer-0" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.823485 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74969a69-d6be-4c12-9dd0-7a529e73737d-config-data\") pod \"ceilometer-0\" (UID: \"74969a69-d6be-4c12-9dd0-7a529e73737d\") " pod="openstack/ceilometer-0" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.823545 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qhn2\" (UniqueName: \"kubernetes.io/projected/74969a69-d6be-4c12-9dd0-7a529e73737d-kube-api-access-8qhn2\") pod \"ceilometer-0\" (UID: \"74969a69-d6be-4c12-9dd0-7a529e73737d\") " pod="openstack/ceilometer-0" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.823660 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74969a69-d6be-4c12-9dd0-7a529e73737d-log-httpd\") pod \"ceilometer-0\" (UID: \"74969a69-d6be-4c12-9dd0-7a529e73737d\") " pod="openstack/ceilometer-0" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.823705 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/74969a69-d6be-4c12-9dd0-7a529e73737d-run-httpd\") pod \"ceilometer-0\" (UID: \"74969a69-d6be-4c12-9dd0-7a529e73737d\") " pod="openstack/ceilometer-0" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.823747 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74969a69-d6be-4c12-9dd0-7a529e73737d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"74969a69-d6be-4c12-9dd0-7a529e73737d\") " pod="openstack/ceilometer-0" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.825108 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74969a69-d6be-4c12-9dd0-7a529e73737d-log-httpd\") pod \"ceilometer-0\" (UID: \"74969a69-d6be-4c12-9dd0-7a529e73737d\") " pod="openstack/ceilometer-0" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.827693 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74969a69-d6be-4c12-9dd0-7a529e73737d-run-httpd\") pod \"ceilometer-0\" (UID: \"74969a69-d6be-4c12-9dd0-7a529e73737d\") " pod="openstack/ceilometer-0" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.830349 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/74969a69-d6be-4c12-9dd0-7a529e73737d-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"74969a69-d6be-4c12-9dd0-7a529e73737d\") " pod="openstack/ceilometer-0" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.835630 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74969a69-d6be-4c12-9dd0-7a529e73737d-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"74969a69-d6be-4c12-9dd0-7a529e73737d\") " pod="openstack/ceilometer-0" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.841623 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74969a69-d6be-4c12-9dd0-7a529e73737d-config-data\") pod \"ceilometer-0\" (UID: \"74969a69-d6be-4c12-9dd0-7a529e73737d\") " pod="openstack/ceilometer-0" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.842302 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74969a69-d6be-4c12-9dd0-7a529e73737d-scripts\") pod \"ceilometer-0\" (UID: \"74969a69-d6be-4c12-9dd0-7a529e73737d\") " pod="openstack/ceilometer-0" Jan 30 17:19:31 crc kubenswrapper[4712]: I0130 17:19:31.859921 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qhn2\" (UniqueName: \"kubernetes.io/projected/74969a69-d6be-4c12-9dd0-7a529e73737d-kube-api-access-8qhn2\") pod \"ceilometer-0\" (UID: \"74969a69-d6be-4c12-9dd0-7a529e73737d\") " pod="openstack/ceilometer-0" Jan 30 17:19:32 crc kubenswrapper[4712]: I0130 17:19:32.028717 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 17:19:32 crc kubenswrapper[4712]: I0130 17:19:32.474298 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 17:19:33 crc kubenswrapper[4712]: I0130 17:19:33.345480 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"74969a69-d6be-4c12-9dd0-7a529e73737d","Type":"ContainerStarted","Data":"f775a9c6952ba67f78241188286ce1364d6e91657d5332d876d10e4f4952e5b7"} Jan 30 17:19:33 crc kubenswrapper[4712]: I0130 17:19:33.345890 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"74969a69-d6be-4c12-9dd0-7a529e73737d","Type":"ContainerStarted","Data":"d5081a015e2ccb071cb02e39e6d2ca9393babaa1e6920d731aafed1fbf581378"} Jan 30 17:19:34 crc kubenswrapper[4712]: I0130 17:19:34.364742 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"74969a69-d6be-4c12-9dd0-7a529e73737d","Type":"ContainerStarted","Data":"ccce0aa1c2b9c2880dd944d1d8fc1f206bde307188936cae3089728759eb36af"} Jan 30 17:19:35 crc kubenswrapper[4712]: I0130 17:19:35.378954 4712 generic.go:334] "Generic (PLEG): container finished" podID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerID="7ea359681383c8315f1de54dfb90a6308c6bf781f9821a74bce1f1dbcac99cce" exitCode=137 Jan 30 17:19:35 crc kubenswrapper[4712]: I0130 17:19:35.379025 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56f8b66d48-7wr47" event={"ID":"70154dd8-9d42-4a12-af9b-1be723ef892e","Type":"ContainerDied","Data":"7ea359681383c8315f1de54dfb90a6308c6bf781f9821a74bce1f1dbcac99cce"} Jan 30 17:19:35 crc kubenswrapper[4712]: I0130 17:19:35.380241 4712 scope.go:117] "RemoveContainer" containerID="8b23f706dbf8aa6538b8c9a023bfa2c07b9d28b0f58e8e9342cd27572ba0c0d2" Jan 30 17:19:35 crc kubenswrapper[4712]: I0130 17:19:35.382832 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"74969a69-d6be-4c12-9dd0-7a529e73737d","Type":"ContainerStarted","Data":"966935acba9b313d521e4434365b75a25e106159cddfd5ec62ad99c7f4125185"} Jan 30 17:19:35 crc kubenswrapper[4712]: I0130 17:19:35.761191 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-449vr" Jan 30 17:19:35 crc kubenswrapper[4712]: I0130 17:19:35.879928 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-449vr" Jan 30 17:19:36 crc kubenswrapper[4712]: I0130 17:19:36.394479 4712 generic.go:334] "Generic (PLEG): container finished" podID="6a28b495-ecf0-409e-9558-ee794a46dbd1" containerID="81106e51e98ee42b57283673e3cf02537243b70df68ffb3d9849db1d90c861a3" exitCode=137 Jan 30 17:19:36 crc kubenswrapper[4712]: I0130 17:19:36.394537 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64655dbc44-pvj2c" event={"ID":"6a28b495-ecf0-409e-9558-ee794a46dbd1","Type":"ContainerDied","Data":"81106e51e98ee42b57283673e3cf02537243b70df68ffb3d9849db1d90c861a3"} Jan 30 17:19:36 crc kubenswrapper[4712]: I0130 17:19:36.394910 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64655dbc44-pvj2c" event={"ID":"6a28b495-ecf0-409e-9558-ee794a46dbd1","Type":"ContainerStarted","Data":"03c2090f070ef32f4daf04d3eeaf131ceca5e16369eb0275c32dfc9aaf604b1e"} Jan 30 17:19:36 crc kubenswrapper[4712]: I0130 17:19:36.394947 4712 scope.go:117] "RemoveContainer" 
containerID="9af3d0805e3d6c8144d5e8f4ca5198b954ee80a23bb8c7ac20dd1a8994edf213" Jan 30 17:19:36 crc kubenswrapper[4712]: I0130 17:19:36.398442 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56f8b66d48-7wr47" event={"ID":"70154dd8-9d42-4a12-af9b-1be723ef892e","Type":"ContainerStarted","Data":"33da2560c2b92663910c7a5cee80606f93009c5b03eae1dcf70e4946299645fb"} Jan 30 17:19:36 crc kubenswrapper[4712]: I0130 17:19:36.466599 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 30 17:19:36 crc kubenswrapper[4712]: I0130 17:19:36.566489 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-449vr"] Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.094689 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-x27hm"] Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.096087 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-x27hm" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.098683 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.098925 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.109551 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-x27hm"] Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.139643 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f564ed01-d852-40b5-853f-f79a37a114dc-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-x27hm\" (UID: \"f564ed01-d852-40b5-853f-f79a37a114dc\") " pod="openstack/nova-cell0-cell-mapping-x27hm" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.139751 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkzwp\" (UniqueName: \"kubernetes.io/projected/f564ed01-d852-40b5-853f-f79a37a114dc-kube-api-access-fkzwp\") pod \"nova-cell0-cell-mapping-x27hm\" (UID: \"f564ed01-d852-40b5-853f-f79a37a114dc\") " pod="openstack/nova-cell0-cell-mapping-x27hm" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.139808 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f564ed01-d852-40b5-853f-f79a37a114dc-scripts\") pod \"nova-cell0-cell-mapping-x27hm\" (UID: \"f564ed01-d852-40b5-853f-f79a37a114dc\") " pod="openstack/nova-cell0-cell-mapping-x27hm" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.139840 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f564ed01-d852-40b5-853f-f79a37a114dc-config-data\") pod \"nova-cell0-cell-mapping-x27hm\" (UID: \"f564ed01-d852-40b5-853f-f79a37a114dc\") " pod="openstack/nova-cell0-cell-mapping-x27hm" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.241314 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f564ed01-d852-40b5-853f-f79a37a114dc-scripts\") pod \"nova-cell0-cell-mapping-x27hm\" (UID: \"f564ed01-d852-40b5-853f-f79a37a114dc\") " 
pod="openstack/nova-cell0-cell-mapping-x27hm" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.241371 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f564ed01-d852-40b5-853f-f79a37a114dc-config-data\") pod \"nova-cell0-cell-mapping-x27hm\" (UID: \"f564ed01-d852-40b5-853f-f79a37a114dc\") " pod="openstack/nova-cell0-cell-mapping-x27hm" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.241431 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f564ed01-d852-40b5-853f-f79a37a114dc-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-x27hm\" (UID: \"f564ed01-d852-40b5-853f-f79a37a114dc\") " pod="openstack/nova-cell0-cell-mapping-x27hm" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.241519 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkzwp\" (UniqueName: \"kubernetes.io/projected/f564ed01-d852-40b5-853f-f79a37a114dc-kube-api-access-fkzwp\") pod \"nova-cell0-cell-mapping-x27hm\" (UID: \"f564ed01-d852-40b5-853f-f79a37a114dc\") " pod="openstack/nova-cell0-cell-mapping-x27hm" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.248245 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f564ed01-d852-40b5-853f-f79a37a114dc-scripts\") pod \"nova-cell0-cell-mapping-x27hm\" (UID: \"f564ed01-d852-40b5-853f-f79a37a114dc\") " pod="openstack/nova-cell0-cell-mapping-x27hm" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.264486 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f564ed01-d852-40b5-853f-f79a37a114dc-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-x27hm\" (UID: \"f564ed01-d852-40b5-853f-f79a37a114dc\") " pod="openstack/nova-cell0-cell-mapping-x27hm" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.264997 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f564ed01-d852-40b5-853f-f79a37a114dc-config-data\") pod \"nova-cell0-cell-mapping-x27hm\" (UID: \"f564ed01-d852-40b5-853f-f79a37a114dc\") " pod="openstack/nova-cell0-cell-mapping-x27hm" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.288323 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkzwp\" (UniqueName: \"kubernetes.io/projected/f564ed01-d852-40b5-853f-f79a37a114dc-kube-api-access-fkzwp\") pod \"nova-cell0-cell-mapping-x27hm\" (UID: \"f564ed01-d852-40b5-853f-f79a37a114dc\") " pod="openstack/nova-cell0-cell-mapping-x27hm" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.411042 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-449vr" podUID="92c3acde-acd8-4e20-ac11-5383b83fe945" containerName="registry-server" containerID="cri-o://269a14b461bfd1667f09af21e78ad8c2e1f32059864ce9bc861b02f84fb317e9" gracePeriod=2 Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.416779 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-x27hm" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.430643 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.462380 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.462509 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.482149 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.487752 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.487967 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.515833 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.556985 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/109193de-e767-47ce-a608-e0420d2d7a40-config-data\") pod \"nova-api-0\" (UID: \"109193de-e767-47ce-a608-e0420d2d7a40\") " pod="openstack/nova-api-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.557083 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4f33da2-dc23-40f0-8a42-d9f557f63a5f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f4f33da2-dc23-40f0-8a42-d9f557f63a5f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.557130 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/109193de-e767-47ce-a608-e0420d2d7a40-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"109193de-e767-47ce-a608-e0420d2d7a40\") " pod="openstack/nova-api-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.557162 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4vd9\" (UniqueName: \"kubernetes.io/projected/f4f33da2-dc23-40f0-8a42-d9f557f63a5f-kube-api-access-n4vd9\") pod \"nova-cell1-novncproxy-0\" (UID: \"f4f33da2-dc23-40f0-8a42-d9f557f63a5f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.557214 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/109193de-e767-47ce-a608-e0420d2d7a40-logs\") pod \"nova-api-0\" (UID: \"109193de-e767-47ce-a608-e0420d2d7a40\") " pod="openstack/nova-api-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.557247 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6l8g\" (UniqueName: \"kubernetes.io/projected/109193de-e767-47ce-a608-e0420d2d7a40-kube-api-access-w6l8g\") pod \"nova-api-0\" (UID: \"109193de-e767-47ce-a608-e0420d2d7a40\") " pod="openstack/nova-api-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.557279 4712 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4f33da2-dc23-40f0-8a42-d9f557f63a5f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f4f33da2-dc23-40f0-8a42-d9f557f63a5f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.587881 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.661000 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4f33da2-dc23-40f0-8a42-d9f557f63a5f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f4f33da2-dc23-40f0-8a42-d9f557f63a5f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.661905 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/109193de-e767-47ce-a608-e0420d2d7a40-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"109193de-e767-47ce-a608-e0420d2d7a40\") " pod="openstack/nova-api-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.661974 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4vd9\" (UniqueName: \"kubernetes.io/projected/f4f33da2-dc23-40f0-8a42-d9f557f63a5f-kube-api-access-n4vd9\") pod \"nova-cell1-novncproxy-0\" (UID: \"f4f33da2-dc23-40f0-8a42-d9f557f63a5f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.662044 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/109193de-e767-47ce-a608-e0420d2d7a40-logs\") pod \"nova-api-0\" (UID: \"109193de-e767-47ce-a608-e0420d2d7a40\") " pod="openstack/nova-api-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.662074 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6l8g\" (UniqueName: \"kubernetes.io/projected/109193de-e767-47ce-a608-e0420d2d7a40-kube-api-access-w6l8g\") pod \"nova-api-0\" (UID: \"109193de-e767-47ce-a608-e0420d2d7a40\") " pod="openstack/nova-api-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.662102 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4f33da2-dc23-40f0-8a42-d9f557f63a5f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f4f33da2-dc23-40f0-8a42-d9f557f63a5f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.662412 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/109193de-e767-47ce-a608-e0420d2d7a40-config-data\") pod \"nova-api-0\" (UID: \"109193de-e767-47ce-a608-e0420d2d7a40\") " pod="openstack/nova-api-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.669338 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/109193de-e767-47ce-a608-e0420d2d7a40-logs\") pod \"nova-api-0\" (UID: \"109193de-e767-47ce-a608-e0420d2d7a40\") " pod="openstack/nova-api-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.669381 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f4f33da2-dc23-40f0-8a42-d9f557f63a5f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f4f33da2-dc23-40f0-8a42-d9f557f63a5f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.675312 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.677134 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.682562 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/109193de-e767-47ce-a608-e0420d2d7a40-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"109193de-e767-47ce-a608-e0420d2d7a40\") " pod="openstack/nova-api-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.685969 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/109193de-e767-47ce-a608-e0420d2d7a40-config-data\") pod \"nova-api-0\" (UID: \"109193de-e767-47ce-a608-e0420d2d7a40\") " pod="openstack/nova-api-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.695027 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4f33da2-dc23-40f0-8a42-d9f557f63a5f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f4f33da2-dc23-40f0-8a42-d9f557f63a5f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.704854 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.709638 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.729735 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.747477 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6l8g\" (UniqueName: \"kubernetes.io/projected/109193de-e767-47ce-a608-e0420d2d7a40-kube-api-access-w6l8g\") pod \"nova-api-0\" (UID: \"109193de-e767-47ce-a608-e0420d2d7a40\") " pod="openstack/nova-api-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.750138 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.757556 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4vd9\" (UniqueName: \"kubernetes.io/projected/f4f33da2-dc23-40f0-8a42-d9f557f63a5f-kube-api-access-n4vd9\") pod \"nova-cell1-novncproxy-0\" (UID: \"f4f33da2-dc23-40f0-8a42-d9f557f63a5f\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.758764 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.764197 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9a352b7-2d57-417e-817f-391880103e98-logs\") pod \"nova-metadata-0\" (UID: \"d9a352b7-2d57-417e-817f-391880103e98\") " pod="openstack/nova-metadata-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.764236 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvhzg\" (UniqueName: \"kubernetes.io/projected/61452352-342c-4cba-8489-13d8a26ba14b-kube-api-access-nvhzg\") pod \"nova-scheduler-0\" (UID: \"61452352-342c-4cba-8489-13d8a26ba14b\") " pod="openstack/nova-scheduler-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.764307 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dj2zr\" (UniqueName: \"kubernetes.io/projected/d9a352b7-2d57-417e-817f-391880103e98-kube-api-access-dj2zr\") pod \"nova-metadata-0\" (UID: \"d9a352b7-2d57-417e-817f-391880103e98\") " pod="openstack/nova-metadata-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.764342 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61452352-342c-4cba-8489-13d8a26ba14b-config-data\") pod \"nova-scheduler-0\" (UID: \"61452352-342c-4cba-8489-13d8a26ba14b\") " pod="openstack/nova-scheduler-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.764370 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9a352b7-2d57-417e-817f-391880103e98-config-data\") pod \"nova-metadata-0\" (UID: \"d9a352b7-2d57-417e-817f-391880103e98\") " pod="openstack/nova-metadata-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.764404 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61452352-342c-4cba-8489-13d8a26ba14b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"61452352-342c-4cba-8489-13d8a26ba14b\") " pod="openstack/nova-scheduler-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.764437 4712 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9a352b7-2d57-417e-817f-391880103e98-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d9a352b7-2d57-417e-817f-391880103e98\") " pod="openstack/nova-metadata-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.791858 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.865473 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9a352b7-2d57-417e-817f-391880103e98-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d9a352b7-2d57-417e-817f-391880103e98\") " pod="openstack/nova-metadata-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.865559 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9a352b7-2d57-417e-817f-391880103e98-logs\") pod \"nova-metadata-0\" (UID: \"d9a352b7-2d57-417e-817f-391880103e98\") " pod="openstack/nova-metadata-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.865584 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvhzg\" (UniqueName: \"kubernetes.io/projected/61452352-342c-4cba-8489-13d8a26ba14b-kube-api-access-nvhzg\") pod \"nova-scheduler-0\" (UID: \"61452352-342c-4cba-8489-13d8a26ba14b\") " pod="openstack/nova-scheduler-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.865635 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dj2zr\" (UniqueName: \"kubernetes.io/projected/d9a352b7-2d57-417e-817f-391880103e98-kube-api-access-dj2zr\") pod \"nova-metadata-0\" (UID: \"d9a352b7-2d57-417e-817f-391880103e98\") " pod="openstack/nova-metadata-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.865662 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61452352-342c-4cba-8489-13d8a26ba14b-config-data\") pod \"nova-scheduler-0\" (UID: \"61452352-342c-4cba-8489-13d8a26ba14b\") " pod="openstack/nova-scheduler-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.865682 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9a352b7-2d57-417e-817f-391880103e98-config-data\") pod \"nova-metadata-0\" (UID: \"d9a352b7-2d57-417e-817f-391880103e98\") " pod="openstack/nova-metadata-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.865708 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61452352-342c-4cba-8489-13d8a26ba14b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"61452352-342c-4cba-8489-13d8a26ba14b\") " pod="openstack/nova-scheduler-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.871429 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9a352b7-2d57-417e-817f-391880103e98-logs\") pod \"nova-metadata-0\" (UID: \"d9a352b7-2d57-417e-817f-391880103e98\") " pod="openstack/nova-metadata-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.878228 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.888499 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61452352-342c-4cba-8489-13d8a26ba14b-config-data\") pod \"nova-scheduler-0\" (UID: \"61452352-342c-4cba-8489-13d8a26ba14b\") " pod="openstack/nova-scheduler-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.889116 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9a352b7-2d57-417e-817f-391880103e98-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d9a352b7-2d57-417e-817f-391880103e98\") " pod="openstack/nova-metadata-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.895743 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9a352b7-2d57-417e-817f-391880103e98-config-data\") pod \"nova-metadata-0\" (UID: \"d9a352b7-2d57-417e-817f-391880103e98\") " pod="openstack/nova-metadata-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.905313 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.924825 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61452352-342c-4cba-8489-13d8a26ba14b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"61452352-342c-4cba-8489-13d8a26ba14b\") " pod="openstack/nova-scheduler-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.932515 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dj2zr\" (UniqueName: \"kubernetes.io/projected/d9a352b7-2d57-417e-817f-391880103e98-kube-api-access-dj2zr\") pod \"nova-metadata-0\" (UID: \"d9a352b7-2d57-417e-817f-391880103e98\") " pod="openstack/nova-metadata-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.943907 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-7b7cv"] Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.956685 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-7b7cv" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.964474 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvhzg\" (UniqueName: \"kubernetes.io/projected/61452352-342c-4cba-8489-13d8a26ba14b-kube-api-access-nvhzg\") pod \"nova-scheduler-0\" (UID: \"61452352-342c-4cba-8489-13d8a26ba14b\") " pod="openstack/nova-scheduler-0" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.967443 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6964fb1d-a7f1-4719-a748-14639d6a771c-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-7b7cv\" (UID: \"6964fb1d-a7f1-4719-a748-14639d6a771c\") " pod="openstack/dnsmasq-dns-9b86998b5-7b7cv" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.969297 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6964fb1d-a7f1-4719-a748-14639d6a771c-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-7b7cv\" (UID: \"6964fb1d-a7f1-4719-a748-14639d6a771c\") " pod="openstack/dnsmasq-dns-9b86998b5-7b7cv" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.969862 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6964fb1d-a7f1-4719-a748-14639d6a771c-config\") pod \"dnsmasq-dns-9b86998b5-7b7cv\" (UID: \"6964fb1d-a7f1-4719-a748-14639d6a771c\") " pod="openstack/dnsmasq-dns-9b86998b5-7b7cv" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.970912 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6964fb1d-a7f1-4719-a748-14639d6a771c-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-7b7cv\" (UID: \"6964fb1d-a7f1-4719-a748-14639d6a771c\") " pod="openstack/dnsmasq-dns-9b86998b5-7b7cv" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.971182 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6964fb1d-a7f1-4719-a748-14639d6a771c-dns-svc\") pod \"dnsmasq-dns-9b86998b5-7b7cv\" (UID: \"6964fb1d-a7f1-4719-a748-14639d6a771c\") " pod="openstack/dnsmasq-dns-9b86998b5-7b7cv" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.971397 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nsds\" (UniqueName: \"kubernetes.io/projected/6964fb1d-a7f1-4719-a748-14639d6a771c-kube-api-access-2nsds\") pod \"dnsmasq-dns-9b86998b5-7b7cv\" (UID: \"6964fb1d-a7f1-4719-a748-14639d6a771c\") " pod="openstack/dnsmasq-dns-9b86998b5-7b7cv" Jan 30 17:19:37 crc kubenswrapper[4712]: I0130 17:19:37.972387 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-7b7cv"] Jan 30 17:19:38 crc kubenswrapper[4712]: I0130 17:19:38.076648 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 17:19:38 crc kubenswrapper[4712]: I0130 17:19:38.078755 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nsds\" (UniqueName: \"kubernetes.io/projected/6964fb1d-a7f1-4719-a748-14639d6a771c-kube-api-access-2nsds\") pod \"dnsmasq-dns-9b86998b5-7b7cv\" (UID: \"6964fb1d-a7f1-4719-a748-14639d6a771c\") " pod="openstack/dnsmasq-dns-9b86998b5-7b7cv" Jan 30 17:19:38 crc kubenswrapper[4712]: I0130 17:19:38.078836 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6964fb1d-a7f1-4719-a748-14639d6a771c-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-7b7cv\" (UID: \"6964fb1d-a7f1-4719-a748-14639d6a771c\") " pod="openstack/dnsmasq-dns-9b86998b5-7b7cv" Jan 30 17:19:38 crc kubenswrapper[4712]: I0130 17:19:38.078880 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6964fb1d-a7f1-4719-a748-14639d6a771c-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-7b7cv\" (UID: \"6964fb1d-a7f1-4719-a748-14639d6a771c\") " pod="openstack/dnsmasq-dns-9b86998b5-7b7cv" Jan 30 17:19:38 crc kubenswrapper[4712]: I0130 17:19:38.078901 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6964fb1d-a7f1-4719-a748-14639d6a771c-config\") pod \"dnsmasq-dns-9b86998b5-7b7cv\" (UID: \"6964fb1d-a7f1-4719-a748-14639d6a771c\") " pod="openstack/dnsmasq-dns-9b86998b5-7b7cv" Jan 30 17:19:38 crc kubenswrapper[4712]: I0130 17:19:38.078918 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6964fb1d-a7f1-4719-a748-14639d6a771c-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-7b7cv\" (UID: \"6964fb1d-a7f1-4719-a748-14639d6a771c\") " pod="openstack/dnsmasq-dns-9b86998b5-7b7cv" Jan 30 17:19:38 crc kubenswrapper[4712]: I0130 17:19:38.078978 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6964fb1d-a7f1-4719-a748-14639d6a771c-dns-svc\") pod \"dnsmasq-dns-9b86998b5-7b7cv\" (UID: \"6964fb1d-a7f1-4719-a748-14639d6a771c\") " pod="openstack/dnsmasq-dns-9b86998b5-7b7cv" Jan 30 17:19:38 crc kubenswrapper[4712]: I0130 17:19:38.080075 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6964fb1d-a7f1-4719-a748-14639d6a771c-dns-svc\") pod \"dnsmasq-dns-9b86998b5-7b7cv\" (UID: \"6964fb1d-a7f1-4719-a748-14639d6a771c\") " pod="openstack/dnsmasq-dns-9b86998b5-7b7cv" Jan 30 17:19:38 crc kubenswrapper[4712]: I0130 17:19:38.080500 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6964fb1d-a7f1-4719-a748-14639d6a771c-ovsdbserver-nb\") pod \"dnsmasq-dns-9b86998b5-7b7cv\" (UID: \"6964fb1d-a7f1-4719-a748-14639d6a771c\") " pod="openstack/dnsmasq-dns-9b86998b5-7b7cv" Jan 30 17:19:38 crc kubenswrapper[4712]: I0130 17:19:38.080600 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6964fb1d-a7f1-4719-a748-14639d6a771c-config\") pod \"dnsmasq-dns-9b86998b5-7b7cv\" (UID: \"6964fb1d-a7f1-4719-a748-14639d6a771c\") " pod="openstack/dnsmasq-dns-9b86998b5-7b7cv" Jan 30 17:19:38 crc kubenswrapper[4712]: I0130 17:19:38.081419 4712 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6964fb1d-a7f1-4719-a748-14639d6a771c-dns-swift-storage-0\") pod \"dnsmasq-dns-9b86998b5-7b7cv\" (UID: \"6964fb1d-a7f1-4719-a748-14639d6a771c\") " pod="openstack/dnsmasq-dns-9b86998b5-7b7cv" Jan 30 17:19:38 crc kubenswrapper[4712]: I0130 17:19:38.081975 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6964fb1d-a7f1-4719-a748-14639d6a771c-ovsdbserver-sb\") pod \"dnsmasq-dns-9b86998b5-7b7cv\" (UID: \"6964fb1d-a7f1-4719-a748-14639d6a771c\") " pod="openstack/dnsmasq-dns-9b86998b5-7b7cv" Jan 30 17:19:38 crc kubenswrapper[4712]: I0130 17:19:38.105067 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 17:19:38 crc kubenswrapper[4712]: I0130 17:19:38.108706 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nsds\" (UniqueName: \"kubernetes.io/projected/6964fb1d-a7f1-4719-a748-14639d6a771c-kube-api-access-2nsds\") pod \"dnsmasq-dns-9b86998b5-7b7cv\" (UID: \"6964fb1d-a7f1-4719-a748-14639d6a771c\") " pod="openstack/dnsmasq-dns-9b86998b5-7b7cv" Jan 30 17:19:38 crc kubenswrapper[4712]: I0130 17:19:38.288718 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-7b7cv" Jan 30 17:19:38 crc kubenswrapper[4712]: I0130 17:19:38.362247 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-449vr" Jan 30 17:19:38 crc kubenswrapper[4712]: I0130 17:19:38.392442 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92c3acde-acd8-4e20-ac11-5383b83fe945-utilities\") pod \"92c3acde-acd8-4e20-ac11-5383b83fe945\" (UID: \"92c3acde-acd8-4e20-ac11-5383b83fe945\") " Jan 30 17:19:38 crc kubenswrapper[4712]: I0130 17:19:38.392736 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92c3acde-acd8-4e20-ac11-5383b83fe945-catalog-content\") pod \"92c3acde-acd8-4e20-ac11-5383b83fe945\" (UID: \"92c3acde-acd8-4e20-ac11-5383b83fe945\") " Jan 30 17:19:38 crc kubenswrapper[4712]: I0130 17:19:38.392829 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxh6\" (UniqueName: \"kubernetes.io/projected/92c3acde-acd8-4e20-ac11-5383b83fe945-kube-api-access-xnxh6\") pod \"92c3acde-acd8-4e20-ac11-5383b83fe945\" (UID: \"92c3acde-acd8-4e20-ac11-5383b83fe945\") " Jan 30 17:19:38 crc kubenswrapper[4712]: I0130 17:19:38.395537 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92c3acde-acd8-4e20-ac11-5383b83fe945-utilities" (OuterVolumeSpecName: "utilities") pod "92c3acde-acd8-4e20-ac11-5383b83fe945" (UID: "92c3acde-acd8-4e20-ac11-5383b83fe945"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:19:38 crc kubenswrapper[4712]: I0130 17:19:38.407377 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92c3acde-acd8-4e20-ac11-5383b83fe945-kube-api-access-xnxh6" (OuterVolumeSpecName: "kube-api-access-xnxh6") pod "92c3acde-acd8-4e20-ac11-5383b83fe945" (UID: "92c3acde-acd8-4e20-ac11-5383b83fe945"). InnerVolumeSpecName "kube-api-access-xnxh6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:19:38 crc kubenswrapper[4712]: I0130 17:19:38.487110 4712 generic.go:334] "Generic (PLEG): container finished" podID="92c3acde-acd8-4e20-ac11-5383b83fe945" containerID="269a14b461bfd1667f09af21e78ad8c2e1f32059864ce9bc861b02f84fb317e9" exitCode=0 Jan 30 17:19:38 crc kubenswrapper[4712]: I0130 17:19:38.487167 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-449vr" event={"ID":"92c3acde-acd8-4e20-ac11-5383b83fe945","Type":"ContainerDied","Data":"269a14b461bfd1667f09af21e78ad8c2e1f32059864ce9bc861b02f84fb317e9"} Jan 30 17:19:38 crc kubenswrapper[4712]: I0130 17:19:38.487193 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-449vr" event={"ID":"92c3acde-acd8-4e20-ac11-5383b83fe945","Type":"ContainerDied","Data":"d40d91455dfb746ef2679f8279c259c8c31495dff9639df7da71e02e4f5d3f4a"} Jan 30 17:19:38 crc kubenswrapper[4712]: I0130 17:19:38.487210 4712 scope.go:117] "RemoveContainer" containerID="269a14b461bfd1667f09af21e78ad8c2e1f32059864ce9bc861b02f84fb317e9" Jan 30 17:19:38 crc kubenswrapper[4712]: I0130 17:19:38.487322 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-449vr" Jan 30 17:19:38 crc kubenswrapper[4712]: I0130 17:19:38.495164 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/92c3acde-acd8-4e20-ac11-5383b83fe945-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:19:38 crc kubenswrapper[4712]: I0130 17:19:38.495381 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xnxh6\" (UniqueName: \"kubernetes.io/projected/92c3acde-acd8-4e20-ac11-5383b83fe945-kube-api-access-xnxh6\") on node \"crc\" DevicePath \"\"" Jan 30 17:19:38 crc kubenswrapper[4712]: I0130 17:19:38.506585 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-x27hm"] Jan 30 17:19:38 crc kubenswrapper[4712]: I0130 17:19:38.637719 4712 scope.go:117] "RemoveContainer" containerID="d9d7652389f1e79f8ffd10ed00af3a3cdf07d8021aba740623cea3e83c88a719" Jan 30 17:19:38 crc kubenswrapper[4712]: I0130 17:19:38.800131 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92c3acde-acd8-4e20-ac11-5383b83fe945-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "92c3acde-acd8-4e20-ac11-5383b83fe945" (UID: "92c3acde-acd8-4e20-ac11-5383b83fe945"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:19:38 crc kubenswrapper[4712]: I0130 17:19:38.801998 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/92c3acde-acd8-4e20-ac11-5383b83fe945-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:19:38 crc kubenswrapper[4712]: I0130 17:19:38.872987 4712 scope.go:117] "RemoveContainer" containerID="650b84eaccbf192d095e6589556511dfbdc3517713cedc7044c35fe01790d10c" Jan 30 17:19:39 crc kubenswrapper[4712]: I0130 17:19:39.047585 4712 scope.go:117] "RemoveContainer" containerID="269a14b461bfd1667f09af21e78ad8c2e1f32059864ce9bc861b02f84fb317e9" Jan 30 17:19:39 crc kubenswrapper[4712]: I0130 17:19:39.048225 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 17:19:39 crc kubenswrapper[4712]: E0130 17:19:39.068388 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"269a14b461bfd1667f09af21e78ad8c2e1f32059864ce9bc861b02f84fb317e9\": container with ID starting with 269a14b461bfd1667f09af21e78ad8c2e1f32059864ce9bc861b02f84fb317e9 not found: ID does not exist" containerID="269a14b461bfd1667f09af21e78ad8c2e1f32059864ce9bc861b02f84fb317e9" Jan 30 17:19:39 crc kubenswrapper[4712]: I0130 17:19:39.068433 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"269a14b461bfd1667f09af21e78ad8c2e1f32059864ce9bc861b02f84fb317e9"} err="failed to get container status \"269a14b461bfd1667f09af21e78ad8c2e1f32059864ce9bc861b02f84fb317e9\": rpc error: code = NotFound desc = could not find container \"269a14b461bfd1667f09af21e78ad8c2e1f32059864ce9bc861b02f84fb317e9\": container with ID starting with 269a14b461bfd1667f09af21e78ad8c2e1f32059864ce9bc861b02f84fb317e9 not found: ID does not exist" Jan 30 17:19:39 crc kubenswrapper[4712]: I0130 17:19:39.068473 4712 scope.go:117] "RemoveContainer" containerID="d9d7652389f1e79f8ffd10ed00af3a3cdf07d8021aba740623cea3e83c88a719" Jan 30 17:19:39 crc kubenswrapper[4712]: E0130 17:19:39.068936 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d9d7652389f1e79f8ffd10ed00af3a3cdf07d8021aba740623cea3e83c88a719\": container with ID starting with d9d7652389f1e79f8ffd10ed00af3a3cdf07d8021aba740623cea3e83c88a719 not found: ID does not exist" containerID="d9d7652389f1e79f8ffd10ed00af3a3cdf07d8021aba740623cea3e83c88a719" Jan 30 17:19:39 crc kubenswrapper[4712]: I0130 17:19:39.068958 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d9d7652389f1e79f8ffd10ed00af3a3cdf07d8021aba740623cea3e83c88a719"} err="failed to get container status \"d9d7652389f1e79f8ffd10ed00af3a3cdf07d8021aba740623cea3e83c88a719\": rpc error: code = NotFound desc = could not find container \"d9d7652389f1e79f8ffd10ed00af3a3cdf07d8021aba740623cea3e83c88a719\": container with ID starting with d9d7652389f1e79f8ffd10ed00af3a3cdf07d8021aba740623cea3e83c88a719 not found: ID does not exist" Jan 30 17:19:39 crc kubenswrapper[4712]: I0130 17:19:39.068976 4712 scope.go:117] "RemoveContainer" containerID="650b84eaccbf192d095e6589556511dfbdc3517713cedc7044c35fe01790d10c" Jan 30 17:19:39 crc kubenswrapper[4712]: E0130 17:19:39.069217 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"650b84eaccbf192d095e6589556511dfbdc3517713cedc7044c35fe01790d10c\": container with ID starting with 650b84eaccbf192d095e6589556511dfbdc3517713cedc7044c35fe01790d10c not found: ID does not exist" containerID="650b84eaccbf192d095e6589556511dfbdc3517713cedc7044c35fe01790d10c" Jan 30 17:19:39 crc kubenswrapper[4712]: I0130 17:19:39.069240 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"650b84eaccbf192d095e6589556511dfbdc3517713cedc7044c35fe01790d10c"} err="failed to get container status \"650b84eaccbf192d095e6589556511dfbdc3517713cedc7044c35fe01790d10c\": rpc error: code = NotFound desc = could not find container \"650b84eaccbf192d095e6589556511dfbdc3517713cedc7044c35fe01790d10c\": container with ID starting with 650b84eaccbf192d095e6589556511dfbdc3517713cedc7044c35fe01790d10c not found: ID does not exist" Jan 30 17:19:39 crc kubenswrapper[4712]: I0130 17:19:39.063788 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:19:39 crc kubenswrapper[4712]: I0130 17:19:39.176943 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-449vr"] Jan 30 17:19:39 crc kubenswrapper[4712]: I0130 17:19:39.201422 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-449vr"] Jan 30 17:19:39 crc kubenswrapper[4712]: I0130 17:19:39.344454 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:19:39 crc kubenswrapper[4712]: I0130 17:19:39.527518 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-x27hm" event={"ID":"f564ed01-d852-40b5-853f-f79a37a114dc","Type":"ContainerStarted","Data":"00765a0091717570fdc4d176c373c4eca61c407007da373392d6a2c9630ac64b"} Jan 30 17:19:39 crc kubenswrapper[4712]: I0130 17:19:39.527570 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-x27hm" event={"ID":"f564ed01-d852-40b5-853f-f79a37a114dc","Type":"ContainerStarted","Data":"cb76fc6d00ef597daaf13b08783f8a384e27a92e67d2dbd80cf90df86de3721b"} Jan 30 17:19:39 crc kubenswrapper[4712]: I0130 17:19:39.534783 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d9a352b7-2d57-417e-817f-391880103e98","Type":"ContainerStarted","Data":"e3c07b84dbcb104d42002b45f7a611b7078671d33d2356eb71766b54e24a3156"} Jan 30 17:19:39 crc kubenswrapper[4712]: I0130 17:19:39.536345 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"109193de-e767-47ce-a608-e0420d2d7a40","Type":"ContainerStarted","Data":"d82e3a90c087dc37ded8d8846a8a679793f2a8f21cd200eb0f87ff5a59af0dad"} Jan 30 17:19:39 crc kubenswrapper[4712]: I0130 17:19:39.549581 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f4f33da2-dc23-40f0-8a42-d9f557f63a5f","Type":"ContainerStarted","Data":"d8f6091436d49830e03cd0410a7bcd2b22e3fe6dc0ad430bf86fdf7d795b3970"} Jan 30 17:19:39 crc kubenswrapper[4712]: I0130 17:19:39.554106 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-x27hm" podStartSLOduration=2.554086806 podStartE2EDuration="2.554086806s" podCreationTimestamp="2026-01-30 17:19:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:19:39.55013124 +0000 UTC m=+1516.457140709" 
watchObservedRunningTime="2026-01-30 17:19:39.554086806 +0000 UTC m=+1516.461096275" Jan 30 17:19:39 crc kubenswrapper[4712]: I0130 17:19:39.575152 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"74969a69-d6be-4c12-9dd0-7a529e73737d","Type":"ContainerStarted","Data":"5274d9ccbc17089ed18833e83a0a70b8f5b150d4df89f4f4f9478011a9235c0c"} Jan 30 17:19:39 crc kubenswrapper[4712]: I0130 17:19:39.575948 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 17:19:39 crc kubenswrapper[4712]: I0130 17:19:39.646215 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.062707778 podStartE2EDuration="8.646195153s" podCreationTimestamp="2026-01-30 17:19:31 +0000 UTC" firstStartedPulling="2026-01-30 17:19:32.49057152 +0000 UTC m=+1509.397580989" lastFinishedPulling="2026-01-30 17:19:38.074058895 +0000 UTC m=+1514.981068364" observedRunningTime="2026-01-30 17:19:39.622754307 +0000 UTC m=+1516.529763796" watchObservedRunningTime="2026-01-30 17:19:39.646195153 +0000 UTC m=+1516.553204622" Jan 30 17:19:39 crc kubenswrapper[4712]: I0130 17:19:39.646824 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:19:39 crc kubenswrapper[4712]: I0130 17:19:39.686754 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-7b7cv"] Jan 30 17:19:39 crc kubenswrapper[4712]: I0130 17:19:39.834378 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92c3acde-acd8-4e20-ac11-5383b83fe945" path="/var/lib/kubelet/pods/92c3acde-acd8-4e20-ac11-5383b83fe945/volumes" Jan 30 17:19:39 crc kubenswrapper[4712]: I0130 17:19:39.883670 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5cwz8"] Jan 30 17:19:39 crc kubenswrapper[4712]: E0130 17:19:39.884202 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92c3acde-acd8-4e20-ac11-5383b83fe945" containerName="extract-content" Jan 30 17:19:39 crc kubenswrapper[4712]: I0130 17:19:39.884221 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="92c3acde-acd8-4e20-ac11-5383b83fe945" containerName="extract-content" Jan 30 17:19:39 crc kubenswrapper[4712]: E0130 17:19:39.884237 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92c3acde-acd8-4e20-ac11-5383b83fe945" containerName="registry-server" Jan 30 17:19:39 crc kubenswrapper[4712]: I0130 17:19:39.884244 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="92c3acde-acd8-4e20-ac11-5383b83fe945" containerName="registry-server" Jan 30 17:19:39 crc kubenswrapper[4712]: E0130 17:19:39.884307 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92c3acde-acd8-4e20-ac11-5383b83fe945" containerName="extract-utilities" Jan 30 17:19:39 crc kubenswrapper[4712]: I0130 17:19:39.884316 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="92c3acde-acd8-4e20-ac11-5383b83fe945" containerName="extract-utilities" Jan 30 17:19:39 crc kubenswrapper[4712]: I0130 17:19:39.884518 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="92c3acde-acd8-4e20-ac11-5383b83fe945" containerName="registry-server" Jan 30 17:19:39 crc kubenswrapper[4712]: I0130 17:19:39.885183 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5cwz8" Jan 30 17:19:39 crc kubenswrapper[4712]: I0130 17:19:39.887693 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 30 17:19:39 crc kubenswrapper[4712]: I0130 17:19:39.888144 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 30 17:19:39 crc kubenswrapper[4712]: I0130 17:19:39.928840 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5cwz8"] Jan 30 17:19:39 crc kubenswrapper[4712]: I0130 17:19:39.979506 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3709de8-50e0-480b-a152-ee1875e8ff4f-config-data\") pod \"nova-cell1-conductor-db-sync-5cwz8\" (UID: \"a3709de8-50e0-480b-a152-ee1875e8ff4f\") " pod="openstack/nova-cell1-conductor-db-sync-5cwz8" Jan 30 17:19:39 crc kubenswrapper[4712]: I0130 17:19:39.979674 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3709de8-50e0-480b-a152-ee1875e8ff4f-scripts\") pod \"nova-cell1-conductor-db-sync-5cwz8\" (UID: \"a3709de8-50e0-480b-a152-ee1875e8ff4f\") " pod="openstack/nova-cell1-conductor-db-sync-5cwz8" Jan 30 17:19:39 crc kubenswrapper[4712]: I0130 17:19:39.979716 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3709de8-50e0-480b-a152-ee1875e8ff4f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-5cwz8\" (UID: \"a3709de8-50e0-480b-a152-ee1875e8ff4f\") " pod="openstack/nova-cell1-conductor-db-sync-5cwz8" Jan 30 17:19:39 crc kubenswrapper[4712]: I0130 17:19:39.979836 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-skfsr\" (UniqueName: \"kubernetes.io/projected/a3709de8-50e0-480b-a152-ee1875e8ff4f-kube-api-access-skfsr\") pod \"nova-cell1-conductor-db-sync-5cwz8\" (UID: \"a3709de8-50e0-480b-a152-ee1875e8ff4f\") " pod="openstack/nova-cell1-conductor-db-sync-5cwz8" Jan 30 17:19:40 crc kubenswrapper[4712]: I0130 17:19:40.083227 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3709de8-50e0-480b-a152-ee1875e8ff4f-scripts\") pod \"nova-cell1-conductor-db-sync-5cwz8\" (UID: \"a3709de8-50e0-480b-a152-ee1875e8ff4f\") " pod="openstack/nova-cell1-conductor-db-sync-5cwz8" Jan 30 17:19:40 crc kubenswrapper[4712]: I0130 17:19:40.083291 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3709de8-50e0-480b-a152-ee1875e8ff4f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-5cwz8\" (UID: \"a3709de8-50e0-480b-a152-ee1875e8ff4f\") " pod="openstack/nova-cell1-conductor-db-sync-5cwz8" Jan 30 17:19:40 crc kubenswrapper[4712]: I0130 17:19:40.083401 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-skfsr\" (UniqueName: \"kubernetes.io/projected/a3709de8-50e0-480b-a152-ee1875e8ff4f-kube-api-access-skfsr\") pod \"nova-cell1-conductor-db-sync-5cwz8\" (UID: \"a3709de8-50e0-480b-a152-ee1875e8ff4f\") " pod="openstack/nova-cell1-conductor-db-sync-5cwz8" Jan 30 17:19:40 crc kubenswrapper[4712]: I0130 17:19:40.083504 4712 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3709de8-50e0-480b-a152-ee1875e8ff4f-config-data\") pod \"nova-cell1-conductor-db-sync-5cwz8\" (UID: \"a3709de8-50e0-480b-a152-ee1875e8ff4f\") " pod="openstack/nova-cell1-conductor-db-sync-5cwz8" Jan 30 17:19:40 crc kubenswrapper[4712]: I0130 17:19:40.090657 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3709de8-50e0-480b-a152-ee1875e8ff4f-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-5cwz8\" (UID: \"a3709de8-50e0-480b-a152-ee1875e8ff4f\") " pod="openstack/nova-cell1-conductor-db-sync-5cwz8" Jan 30 17:19:40 crc kubenswrapper[4712]: I0130 17:19:40.097398 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3709de8-50e0-480b-a152-ee1875e8ff4f-scripts\") pod \"nova-cell1-conductor-db-sync-5cwz8\" (UID: \"a3709de8-50e0-480b-a152-ee1875e8ff4f\") " pod="openstack/nova-cell1-conductor-db-sync-5cwz8" Jan 30 17:19:40 crc kubenswrapper[4712]: I0130 17:19:40.108861 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3709de8-50e0-480b-a152-ee1875e8ff4f-config-data\") pod \"nova-cell1-conductor-db-sync-5cwz8\" (UID: \"a3709de8-50e0-480b-a152-ee1875e8ff4f\") " pod="openstack/nova-cell1-conductor-db-sync-5cwz8" Jan 30 17:19:40 crc kubenswrapper[4712]: I0130 17:19:40.115331 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-skfsr\" (UniqueName: \"kubernetes.io/projected/a3709de8-50e0-480b-a152-ee1875e8ff4f-kube-api-access-skfsr\") pod \"nova-cell1-conductor-db-sync-5cwz8\" (UID: \"a3709de8-50e0-480b-a152-ee1875e8ff4f\") " pod="openstack/nova-cell1-conductor-db-sync-5cwz8" Jan 30 17:19:40 crc kubenswrapper[4712]: I0130 17:19:40.249349 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5cwz8" Jan 30 17:19:40 crc kubenswrapper[4712]: I0130 17:19:40.603223 4712 generic.go:334] "Generic (PLEG): container finished" podID="6964fb1d-a7f1-4719-a748-14639d6a771c" containerID="0283d97ca21689294cdb94dc91cf892fb5e87038cfc4771591e165d9e33aaff3" exitCode=0 Jan 30 17:19:40 crc kubenswrapper[4712]: I0130 17:19:40.603297 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-7b7cv" event={"ID":"6964fb1d-a7f1-4719-a748-14639d6a771c","Type":"ContainerDied","Data":"0283d97ca21689294cdb94dc91cf892fb5e87038cfc4771591e165d9e33aaff3"} Jan 30 17:19:40 crc kubenswrapper[4712]: I0130 17:19:40.603340 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-7b7cv" event={"ID":"6964fb1d-a7f1-4719-a748-14639d6a771c","Type":"ContainerStarted","Data":"0dad728da033c2ffbfea298eb5befbf47574bc0baff04df2cd839ef8a5060cd7"} Jan 30 17:19:40 crc kubenswrapper[4712]: I0130 17:19:40.614283 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"61452352-342c-4cba-8489-13d8a26ba14b","Type":"ContainerStarted","Data":"5913a9fa17896fe52affca089db206e8d32584614dafb93f55340b0993edbc68"} Jan 30 17:19:41 crc kubenswrapper[4712]: I0130 17:19:41.113009 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5cwz8"] Jan 30 17:19:41 crc kubenswrapper[4712]: I0130 17:19:41.655111 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-7b7cv" event={"ID":"6964fb1d-a7f1-4719-a748-14639d6a771c","Type":"ContainerStarted","Data":"04defd45460f80104ff8b937c03637087d21d2c8420a9aead154b75962cc56d8"} Jan 30 17:19:41 crc kubenswrapper[4712]: I0130 17:19:41.655639 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-9b86998b5-7b7cv" Jan 30 17:19:41 crc kubenswrapper[4712]: I0130 17:19:41.672602 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5cwz8" event={"ID":"a3709de8-50e0-480b-a152-ee1875e8ff4f","Type":"ContainerStarted","Data":"e71c4b093740466028b1e13c251aa69e9018237930891bb1b539769d945002f8"} Jan 30 17:19:41 crc kubenswrapper[4712]: I0130 17:19:41.686939 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-9b86998b5-7b7cv" podStartSLOduration=4.686923854 podStartE2EDuration="4.686923854s" podCreationTimestamp="2026-01-30 17:19:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:19:41.685584271 +0000 UTC m=+1518.592593740" watchObservedRunningTime="2026-01-30 17:19:41.686923854 +0000 UTC m=+1518.593933323" Jan 30 17:19:41 crc kubenswrapper[4712]: I0130 17:19:41.936565 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:19:42 crc kubenswrapper[4712]: I0130 17:19:42.005733 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 17:19:42 crc kubenswrapper[4712]: I0130 17:19:42.691417 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5cwz8" event={"ID":"a3709de8-50e0-480b-a152-ee1875e8ff4f","Type":"ContainerStarted","Data":"78c4784ad9e9faa1515784fcb4af25f8615ff854c01fa1f7e97e5a378b0ed106"} Jan 30 17:19:42 crc kubenswrapper[4712]: I0130 17:19:42.713465 4712 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-5cwz8" podStartSLOduration=3.713446528 podStartE2EDuration="3.713446528s" podCreationTimestamp="2026-01-30 17:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:19:42.710739952 +0000 UTC m=+1519.617749421" watchObservedRunningTime="2026-01-30 17:19:42.713446528 +0000 UTC m=+1519.620455997" Jan 30 17:19:45 crc kubenswrapper[4712]: I0130 17:19:45.072999 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-56f8b66d48-7wr47" Jan 30 17:19:45 crc kubenswrapper[4712]: I0130 17:19:45.073553 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-56f8b66d48-7wr47" Jan 30 17:19:45 crc kubenswrapper[4712]: I0130 17:19:45.075202 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-56f8b66d48-7wr47" podUID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused" Jan 30 17:19:45 crc kubenswrapper[4712]: I0130 17:19:45.352805 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-64655dbc44-pvj2c" Jan 30 17:19:45 crc kubenswrapper[4712]: I0130 17:19:45.352861 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-64655dbc44-pvj2c" Jan 30 17:19:46 crc kubenswrapper[4712]: I0130 17:19:46.788435 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"109193de-e767-47ce-a608-e0420d2d7a40","Type":"ContainerStarted","Data":"f93958dc3ffde5076a74e8d6438d320d844c685a6c3ccee66ea915ebaf59be6b"} Jan 30 17:19:46 crc kubenswrapper[4712]: I0130 17:19:46.794815 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f4f33da2-dc23-40f0-8a42-d9f557f63a5f","Type":"ContainerStarted","Data":"a1ca95485220e55be66ff4480ff388bc02477c10e90e1b56a1a433ab1d333b55"} Jan 30 17:19:46 crc kubenswrapper[4712]: I0130 17:19:46.794946 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="f4f33da2-dc23-40f0-8a42-d9f557f63a5f" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://a1ca95485220e55be66ff4480ff388bc02477c10e90e1b56a1a433ab1d333b55" gracePeriod=30 Jan 30 17:19:46 crc kubenswrapper[4712]: I0130 17:19:46.804479 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"61452352-342c-4cba-8489-13d8a26ba14b","Type":"ContainerStarted","Data":"3f32fd7a9a28218251c304278e16b0d20a29e5c08777326a48719eb9fc441c34"} Jan 30 17:19:46 crc kubenswrapper[4712]: I0130 17:19:46.811658 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d9a352b7-2d57-417e-817f-391880103e98","Type":"ContainerStarted","Data":"5d91a3ba3bc3265e46217bfaa257ebda47a2aad88cbb7ad16ea7ae91aef62df5"} Jan 30 17:19:46 crc kubenswrapper[4712]: I0130 17:19:46.827616 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.704924402 podStartE2EDuration="9.827590269s" podCreationTimestamp="2026-01-30 17:19:37 +0000 UTC" firstStartedPulling="2026-01-30 17:19:39.141848866 +0000 UTC m=+1516.048858335" lastFinishedPulling="2026-01-30 
17:19:46.264514733 +0000 UTC m=+1523.171524202" observedRunningTime="2026-01-30 17:19:46.816744858 +0000 UTC m=+1523.723754327" watchObservedRunningTime="2026-01-30 17:19:46.827590269 +0000 UTC m=+1523.734599738" Jan 30 17:19:46 crc kubenswrapper[4712]: I0130 17:19:46.842809 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.240796192 podStartE2EDuration="9.842778727s" podCreationTimestamp="2026-01-30 17:19:37 +0000 UTC" firstStartedPulling="2026-01-30 17:19:39.664580858 +0000 UTC m=+1516.571590327" lastFinishedPulling="2026-01-30 17:19:46.266563403 +0000 UTC m=+1523.173572862" observedRunningTime="2026-01-30 17:19:46.836846954 +0000 UTC m=+1523.743856453" watchObservedRunningTime="2026-01-30 17:19:46.842778727 +0000 UTC m=+1523.749788196" Jan 30 17:19:47 crc kubenswrapper[4712]: I0130 17:19:47.842269 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"109193de-e767-47ce-a608-e0420d2d7a40","Type":"ContainerStarted","Data":"da940598861ab69bd38ec701a31da285229381878517fefd5bf162c99cde6cbe"} Jan 30 17:19:47 crc kubenswrapper[4712]: I0130 17:19:47.850452 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d9a352b7-2d57-417e-817f-391880103e98" containerName="nova-metadata-log" containerID="cri-o://5d91a3ba3bc3265e46217bfaa257ebda47a2aad88cbb7ad16ea7ae91aef62df5" gracePeriod=30 Jan 30 17:19:47 crc kubenswrapper[4712]: I0130 17:19:47.850683 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="d9a352b7-2d57-417e-817f-391880103e98" containerName="nova-metadata-metadata" containerID="cri-o://26c8510808f030f86e0939e9519b06762ee897f457bcb5a9cfaf5af7250afd8a" gracePeriod=30 Jan 30 17:19:47 crc kubenswrapper[4712]: I0130 17:19:47.850897 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d9a352b7-2d57-417e-817f-391880103e98","Type":"ContainerStarted","Data":"26c8510808f030f86e0939e9519b06762ee897f457bcb5a9cfaf5af7250afd8a"} Jan 30 17:19:47 crc kubenswrapper[4712]: I0130 17:19:47.870684 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.714814054 podStartE2EDuration="10.870640253s" podCreationTimestamp="2026-01-30 17:19:37 +0000 UTC" firstStartedPulling="2026-01-30 17:19:39.196684192 +0000 UTC m=+1516.103693661" lastFinishedPulling="2026-01-30 17:19:46.352510391 +0000 UTC m=+1523.259519860" observedRunningTime="2026-01-30 17:19:47.86637522 +0000 UTC m=+1524.773384709" watchObservedRunningTime="2026-01-30 17:19:47.870640253 +0000 UTC m=+1524.777649722" Jan 30 17:19:47 crc kubenswrapper[4712]: I0130 17:19:47.879312 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 17:19:47 crc kubenswrapper[4712]: I0130 17:19:47.879533 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 17:19:47 crc kubenswrapper[4712]: I0130 17:19:47.899172 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.981474244 podStartE2EDuration="10.899157644s" podCreationTimestamp="2026-01-30 17:19:37 +0000 UTC" firstStartedPulling="2026-01-30 17:19:39.378703864 +0000 UTC m=+1516.285713333" lastFinishedPulling="2026-01-30 17:19:46.296387254 +0000 UTC m=+1523.203396733" 
observedRunningTime="2026-01-30 17:19:47.89447395 +0000 UTC m=+1524.801483419" watchObservedRunningTime="2026-01-30 17:19:47.899157644 +0000 UTC m=+1524.806167113" Jan 30 17:19:47 crc kubenswrapper[4712]: I0130 17:19:47.905971 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:19:48 crc kubenswrapper[4712]: I0130 17:19:48.077640 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 17:19:48 crc kubenswrapper[4712]: I0130 17:19:48.077913 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 17:19:48 crc kubenswrapper[4712]: I0130 17:19:48.106320 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 17:19:48 crc kubenswrapper[4712]: I0130 17:19:48.106371 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 17:19:48 crc kubenswrapper[4712]: I0130 17:19:48.168744 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 17:19:48 crc kubenswrapper[4712]: I0130 17:19:48.290954 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-9b86998b5-7b7cv" Jan 30 17:19:48 crc kubenswrapper[4712]: I0130 17:19:48.405560 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-2fsl2"] Jan 30 17:19:48 crc kubenswrapper[4712]: I0130 17:19:48.405866 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7756b9d78c-2fsl2" podUID="7e92eef8-fc7a-4b92-8a68-95d37b647aa4" containerName="dnsmasq-dns" containerID="cri-o://db7d4354619efe82d62cedb1e6502be85189d0715b9036669b99393e2d070b8c" gracePeriod=10 Jan 30 17:19:48 crc kubenswrapper[4712]: I0130 17:19:48.807663 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 17:19:48 crc kubenswrapper[4712]: I0130 17:19:48.892068 4712 generic.go:334] "Generic (PLEG): container finished" podID="7e92eef8-fc7a-4b92-8a68-95d37b647aa4" containerID="db7d4354619efe82d62cedb1e6502be85189d0715b9036669b99393e2d070b8c" exitCode=0 Jan 30 17:19:48 crc kubenswrapper[4712]: I0130 17:19:48.892154 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-2fsl2" event={"ID":"7e92eef8-fc7a-4b92-8a68-95d37b647aa4","Type":"ContainerDied","Data":"db7d4354619efe82d62cedb1e6502be85189d0715b9036669b99393e2d070b8c"} Jan 30 17:19:48 crc kubenswrapper[4712]: I0130 17:19:48.922075 4712 generic.go:334] "Generic (PLEG): container finished" podID="d9a352b7-2d57-417e-817f-391880103e98" containerID="26c8510808f030f86e0939e9519b06762ee897f457bcb5a9cfaf5af7250afd8a" exitCode=0 Jan 30 17:19:48 crc kubenswrapper[4712]: I0130 17:19:48.922116 4712 generic.go:334] "Generic (PLEG): container finished" podID="d9a352b7-2d57-417e-817f-391880103e98" containerID="5d91a3ba3bc3265e46217bfaa257ebda47a2aad88cbb7ad16ea7ae91aef62df5" exitCode=143 Jan 30 17:19:48 crc kubenswrapper[4712]: I0130 17:19:48.923063 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 17:19:48 crc kubenswrapper[4712]: I0130 17:19:48.923522 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d9a352b7-2d57-417e-817f-391880103e98","Type":"ContainerDied","Data":"26c8510808f030f86e0939e9519b06762ee897f457bcb5a9cfaf5af7250afd8a"} Jan 30 17:19:48 crc kubenswrapper[4712]: I0130 17:19:48.923554 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d9a352b7-2d57-417e-817f-391880103e98","Type":"ContainerDied","Data":"5d91a3ba3bc3265e46217bfaa257ebda47a2aad88cbb7ad16ea7ae91aef62df5"} Jan 30 17:19:48 crc kubenswrapper[4712]: I0130 17:19:48.923565 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d9a352b7-2d57-417e-817f-391880103e98","Type":"ContainerDied","Data":"e3c07b84dbcb104d42002b45f7a611b7078671d33d2356eb71766b54e24a3156"} Jan 30 17:19:48 crc kubenswrapper[4712]: I0130 17:19:48.923579 4712 scope.go:117] "RemoveContainer" containerID="26c8510808f030f86e0939e9519b06762ee897f457bcb5a9cfaf5af7250afd8a" Jan 30 17:19:48 crc kubenswrapper[4712]: I0130 17:19:48.927006 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9a352b7-2d57-417e-817f-391880103e98-logs\") pod \"d9a352b7-2d57-417e-817f-391880103e98\" (UID: \"d9a352b7-2d57-417e-817f-391880103e98\") " Jan 30 17:19:48 crc kubenswrapper[4712]: I0130 17:19:48.927135 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dj2zr\" (UniqueName: \"kubernetes.io/projected/d9a352b7-2d57-417e-817f-391880103e98-kube-api-access-dj2zr\") pod \"d9a352b7-2d57-417e-817f-391880103e98\" (UID: \"d9a352b7-2d57-417e-817f-391880103e98\") " Jan 30 17:19:48 crc kubenswrapper[4712]: I0130 17:19:48.927258 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9a352b7-2d57-417e-817f-391880103e98-combined-ca-bundle\") pod \"d9a352b7-2d57-417e-817f-391880103e98\" (UID: \"d9a352b7-2d57-417e-817f-391880103e98\") " Jan 30 17:19:48 crc kubenswrapper[4712]: I0130 17:19:48.927499 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9a352b7-2d57-417e-817f-391880103e98-config-data\") pod \"d9a352b7-2d57-417e-817f-391880103e98\" (UID: \"d9a352b7-2d57-417e-817f-391880103e98\") " Jan 30 17:19:48 crc kubenswrapper[4712]: I0130 17:19:48.927808 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9a352b7-2d57-417e-817f-391880103e98-logs" (OuterVolumeSpecName: "logs") pod "d9a352b7-2d57-417e-817f-391880103e98" (UID: "d9a352b7-2d57-417e-817f-391880103e98"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:19:48 crc kubenswrapper[4712]: I0130 17:19:48.938087 4712 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d9a352b7-2d57-417e-817f-391880103e98-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:19:48 crc kubenswrapper[4712]: I0130 17:19:48.969014 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="109193de-e767-47ce-a608-e0420d2d7a40" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.206:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:19:48 crc kubenswrapper[4712]: I0130 17:19:48.969028 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="109193de-e767-47ce-a608-e0420d2d7a40" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.206:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:19:48 crc kubenswrapper[4712]: I0130 17:19:48.978154 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9a352b7-2d57-417e-817f-391880103e98-kube-api-access-dj2zr" (OuterVolumeSpecName: "kube-api-access-dj2zr") pod "d9a352b7-2d57-417e-817f-391880103e98" (UID: "d9a352b7-2d57-417e-817f-391880103e98"). InnerVolumeSpecName "kube-api-access-dj2zr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.063767 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9a352b7-2d57-417e-817f-391880103e98-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d9a352b7-2d57-417e-817f-391880103e98" (UID: "d9a352b7-2d57-417e-817f-391880103e98"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.064190 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9a352b7-2d57-417e-817f-391880103e98-combined-ca-bundle\") pod \"d9a352b7-2d57-417e-817f-391880103e98\" (UID: \"d9a352b7-2d57-417e-817f-391880103e98\") " Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.064844 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dj2zr\" (UniqueName: \"kubernetes.io/projected/d9a352b7-2d57-417e-817f-391880103e98-kube-api-access-dj2zr\") on node \"crc\" DevicePath \"\"" Jan 30 17:19:49 crc kubenswrapper[4712]: W0130 17:19:49.064940 4712 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/d9a352b7-2d57-417e-817f-391880103e98/volumes/kubernetes.io~secret/combined-ca-bundle Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.064959 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9a352b7-2d57-417e-817f-391880103e98-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d9a352b7-2d57-417e-817f-391880103e98" (UID: "d9a352b7-2d57-417e-817f-391880103e98"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.112370 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9a352b7-2d57-417e-817f-391880103e98-config-data" (OuterVolumeSpecName: "config-data") pod "d9a352b7-2d57-417e-817f-391880103e98" (UID: "d9a352b7-2d57-417e-817f-391880103e98"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.114180 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.154948 4712 scope.go:117] "RemoveContainer" containerID="5d91a3ba3bc3265e46217bfaa257ebda47a2aad88cbb7ad16ea7ae91aef62df5" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.166155 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d9a352b7-2d57-417e-817f-391880103e98-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.166225 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9a352b7-2d57-417e-817f-391880103e98-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.198911 4712 scope.go:117] "RemoveContainer" containerID="26c8510808f030f86e0939e9519b06762ee897f457bcb5a9cfaf5af7250afd8a" Jan 30 17:19:49 crc kubenswrapper[4712]: E0130 17:19:49.218952 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26c8510808f030f86e0939e9519b06762ee897f457bcb5a9cfaf5af7250afd8a\": container with ID starting with 26c8510808f030f86e0939e9519b06762ee897f457bcb5a9cfaf5af7250afd8a not found: ID does not exist" containerID="26c8510808f030f86e0939e9519b06762ee897f457bcb5a9cfaf5af7250afd8a" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.218996 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26c8510808f030f86e0939e9519b06762ee897f457bcb5a9cfaf5af7250afd8a"} err="failed to get container status \"26c8510808f030f86e0939e9519b06762ee897f457bcb5a9cfaf5af7250afd8a\": rpc error: code = NotFound desc = could not find container \"26c8510808f030f86e0939e9519b06762ee897f457bcb5a9cfaf5af7250afd8a\": container with ID starting with 26c8510808f030f86e0939e9519b06762ee897f457bcb5a9cfaf5af7250afd8a not found: ID does not exist" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.219022 4712 scope.go:117] "RemoveContainer" containerID="5d91a3ba3bc3265e46217bfaa257ebda47a2aad88cbb7ad16ea7ae91aef62df5" Jan 30 17:19:49 crc kubenswrapper[4712]: E0130 17:19:49.225274 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5d91a3ba3bc3265e46217bfaa257ebda47a2aad88cbb7ad16ea7ae91aef62df5\": container with ID starting with 5d91a3ba3bc3265e46217bfaa257ebda47a2aad88cbb7ad16ea7ae91aef62df5 not found: ID does not exist" containerID="5d91a3ba3bc3265e46217bfaa257ebda47a2aad88cbb7ad16ea7ae91aef62df5" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.225327 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d91a3ba3bc3265e46217bfaa257ebda47a2aad88cbb7ad16ea7ae91aef62df5"} err="failed to get container status 
\"5d91a3ba3bc3265e46217bfaa257ebda47a2aad88cbb7ad16ea7ae91aef62df5\": rpc error: code = NotFound desc = could not find container \"5d91a3ba3bc3265e46217bfaa257ebda47a2aad88cbb7ad16ea7ae91aef62df5\": container with ID starting with 5d91a3ba3bc3265e46217bfaa257ebda47a2aad88cbb7ad16ea7ae91aef62df5 not found: ID does not exist" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.225352 4712 scope.go:117] "RemoveContainer" containerID="26c8510808f030f86e0939e9519b06762ee897f457bcb5a9cfaf5af7250afd8a" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.226193 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26c8510808f030f86e0939e9519b06762ee897f457bcb5a9cfaf5af7250afd8a"} err="failed to get container status \"26c8510808f030f86e0939e9519b06762ee897f457bcb5a9cfaf5af7250afd8a\": rpc error: code = NotFound desc = could not find container \"26c8510808f030f86e0939e9519b06762ee897f457bcb5a9cfaf5af7250afd8a\": container with ID starting with 26c8510808f030f86e0939e9519b06762ee897f457bcb5a9cfaf5af7250afd8a not found: ID does not exist" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.226228 4712 scope.go:117] "RemoveContainer" containerID="5d91a3ba3bc3265e46217bfaa257ebda47a2aad88cbb7ad16ea7ae91aef62df5" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.226570 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5d91a3ba3bc3265e46217bfaa257ebda47a2aad88cbb7ad16ea7ae91aef62df5"} err="failed to get container status \"5d91a3ba3bc3265e46217bfaa257ebda47a2aad88cbb7ad16ea7ae91aef62df5\": rpc error: code = NotFound desc = could not find container \"5d91a3ba3bc3265e46217bfaa257ebda47a2aad88cbb7ad16ea7ae91aef62df5\": container with ID starting with 5d91a3ba3bc3265e46217bfaa257ebda47a2aad88cbb7ad16ea7ae91aef62df5 not found: ID does not exist" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.280259 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.302033 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-2fsl2" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.307970 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.324878 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:19:49 crc kubenswrapper[4712]: E0130 17:19:49.325387 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9a352b7-2d57-417e-817f-391880103e98" containerName="nova-metadata-log" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.325414 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9a352b7-2d57-417e-817f-391880103e98" containerName="nova-metadata-log" Jan 30 17:19:49 crc kubenswrapper[4712]: E0130 17:19:49.325433 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e92eef8-fc7a-4b92-8a68-95d37b647aa4" containerName="init" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.325441 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e92eef8-fc7a-4b92-8a68-95d37b647aa4" containerName="init" Jan 30 17:19:49 crc kubenswrapper[4712]: E0130 17:19:49.325466 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e92eef8-fc7a-4b92-8a68-95d37b647aa4" containerName="dnsmasq-dns" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.325473 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e92eef8-fc7a-4b92-8a68-95d37b647aa4" containerName="dnsmasq-dns" Jan 30 17:19:49 crc kubenswrapper[4712]: E0130 17:19:49.325511 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9a352b7-2d57-417e-817f-391880103e98" containerName="nova-metadata-metadata" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.325521 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9a352b7-2d57-417e-817f-391880103e98" containerName="nova-metadata-metadata" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.325742 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9a352b7-2d57-417e-817f-391880103e98" containerName="nova-metadata-metadata" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.325768 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9a352b7-2d57-417e-817f-391880103e98" containerName="nova-metadata-log" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.325786 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e92eef8-fc7a-4b92-8a68-95d37b647aa4" containerName="dnsmasq-dns" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.326840 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.330279 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.330444 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.365066 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.478869 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e92eef8-fc7a-4b92-8a68-95d37b647aa4-config\") pod \"7e92eef8-fc7a-4b92-8a68-95d37b647aa4\" (UID: \"7e92eef8-fc7a-4b92-8a68-95d37b647aa4\") " Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.479200 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7e92eef8-fc7a-4b92-8a68-95d37b647aa4-dns-swift-storage-0\") pod \"7e92eef8-fc7a-4b92-8a68-95d37b647aa4\" (UID: \"7e92eef8-fc7a-4b92-8a68-95d37b647aa4\") " Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.479254 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e92eef8-fc7a-4b92-8a68-95d37b647aa4-ovsdbserver-sb\") pod \"7e92eef8-fc7a-4b92-8a68-95d37b647aa4\" (UID: \"7e92eef8-fc7a-4b92-8a68-95d37b647aa4\") " Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.479282 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqsqd\" (UniqueName: \"kubernetes.io/projected/7e92eef8-fc7a-4b92-8a68-95d37b647aa4-kube-api-access-pqsqd\") pod \"7e92eef8-fc7a-4b92-8a68-95d37b647aa4\" (UID: \"7e92eef8-fc7a-4b92-8a68-95d37b647aa4\") " Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.479304 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e92eef8-fc7a-4b92-8a68-95d37b647aa4-ovsdbserver-nb\") pod \"7e92eef8-fc7a-4b92-8a68-95d37b647aa4\" (UID: \"7e92eef8-fc7a-4b92-8a68-95d37b647aa4\") " Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.479346 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e92eef8-fc7a-4b92-8a68-95d37b647aa4-dns-svc\") pod \"7e92eef8-fc7a-4b92-8a68-95d37b647aa4\" (UID: \"7e92eef8-fc7a-4b92-8a68-95d37b647aa4\") " Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.479592 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcqrb\" (UniqueName: \"kubernetes.io/projected/f6843e22-e667-40c6-8d0e-e2484fcdff9a-kube-api-access-rcqrb\") pod \"nova-metadata-0\" (UID: \"f6843e22-e667-40c6-8d0e-e2484fcdff9a\") " pod="openstack/nova-metadata-0" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.479622 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6843e22-e667-40c6-8d0e-e2484fcdff9a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f6843e22-e667-40c6-8d0e-e2484fcdff9a\") " pod="openstack/nova-metadata-0" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.479692 4712 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6843e22-e667-40c6-8d0e-e2484fcdff9a-config-data\") pod \"nova-metadata-0\" (UID: \"f6843e22-e667-40c6-8d0e-e2484fcdff9a\") " pod="openstack/nova-metadata-0" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.479738 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6843e22-e667-40c6-8d0e-e2484fcdff9a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f6843e22-e667-40c6-8d0e-e2484fcdff9a\") " pod="openstack/nova-metadata-0" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.479773 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6843e22-e667-40c6-8d0e-e2484fcdff9a-logs\") pod \"nova-metadata-0\" (UID: \"f6843e22-e667-40c6-8d0e-e2484fcdff9a\") " pod="openstack/nova-metadata-0" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.504000 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e92eef8-fc7a-4b92-8a68-95d37b647aa4-kube-api-access-pqsqd" (OuterVolumeSpecName: "kube-api-access-pqsqd") pod "7e92eef8-fc7a-4b92-8a68-95d37b647aa4" (UID: "7e92eef8-fc7a-4b92-8a68-95d37b647aa4"). InnerVolumeSpecName "kube-api-access-pqsqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.581980 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcqrb\" (UniqueName: \"kubernetes.io/projected/f6843e22-e667-40c6-8d0e-e2484fcdff9a-kube-api-access-rcqrb\") pod \"nova-metadata-0\" (UID: \"f6843e22-e667-40c6-8d0e-e2484fcdff9a\") " pod="openstack/nova-metadata-0" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.582031 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6843e22-e667-40c6-8d0e-e2484fcdff9a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f6843e22-e667-40c6-8d0e-e2484fcdff9a\") " pod="openstack/nova-metadata-0" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.582101 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6843e22-e667-40c6-8d0e-e2484fcdff9a-config-data\") pod \"nova-metadata-0\" (UID: \"f6843e22-e667-40c6-8d0e-e2484fcdff9a\") " pod="openstack/nova-metadata-0" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.582146 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6843e22-e667-40c6-8d0e-e2484fcdff9a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f6843e22-e667-40c6-8d0e-e2484fcdff9a\") " pod="openstack/nova-metadata-0" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.582180 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6843e22-e667-40c6-8d0e-e2484fcdff9a-logs\") pod \"nova-metadata-0\" (UID: \"f6843e22-e667-40c6-8d0e-e2484fcdff9a\") " pod="openstack/nova-metadata-0" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.582533 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6843e22-e667-40c6-8d0e-e2484fcdff9a-logs\") pod 
\"nova-metadata-0\" (UID: \"f6843e22-e667-40c6-8d0e-e2484fcdff9a\") " pod="openstack/nova-metadata-0" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.582601 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqsqd\" (UniqueName: \"kubernetes.io/projected/7e92eef8-fc7a-4b92-8a68-95d37b647aa4-kube-api-access-pqsqd\") on node \"crc\" DevicePath \"\"" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.599472 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e92eef8-fc7a-4b92-8a68-95d37b647aa4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7e92eef8-fc7a-4b92-8a68-95d37b647aa4" (UID: "7e92eef8-fc7a-4b92-8a68-95d37b647aa4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.607291 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcqrb\" (UniqueName: \"kubernetes.io/projected/f6843e22-e667-40c6-8d0e-e2484fcdff9a-kube-api-access-rcqrb\") pod \"nova-metadata-0\" (UID: \"f6843e22-e667-40c6-8d0e-e2484fcdff9a\") " pod="openstack/nova-metadata-0" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.607407 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6843e22-e667-40c6-8d0e-e2484fcdff9a-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"f6843e22-e667-40c6-8d0e-e2484fcdff9a\") " pod="openstack/nova-metadata-0" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.614855 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6843e22-e667-40c6-8d0e-e2484fcdff9a-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"f6843e22-e667-40c6-8d0e-e2484fcdff9a\") " pod="openstack/nova-metadata-0" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.632035 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e92eef8-fc7a-4b92-8a68-95d37b647aa4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7e92eef8-fc7a-4b92-8a68-95d37b647aa4" (UID: "7e92eef8-fc7a-4b92-8a68-95d37b647aa4"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.632317 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6843e22-e667-40c6-8d0e-e2484fcdff9a-config-data\") pod \"nova-metadata-0\" (UID: \"f6843e22-e667-40c6-8d0e-e2484fcdff9a\") " pod="openstack/nova-metadata-0" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.654553 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e92eef8-fc7a-4b92-8a68-95d37b647aa4-config" (OuterVolumeSpecName: "config") pod "7e92eef8-fc7a-4b92-8a68-95d37b647aa4" (UID: "7e92eef8-fc7a-4b92-8a68-95d37b647aa4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.667368 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e92eef8-fc7a-4b92-8a68-95d37b647aa4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7e92eef8-fc7a-4b92-8a68-95d37b647aa4" (UID: "7e92eef8-fc7a-4b92-8a68-95d37b647aa4"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.677783 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.683931 4712 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e92eef8-fc7a-4b92-8a68-95d37b647aa4-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.683969 4712 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7e92eef8-fc7a-4b92-8a68-95d37b647aa4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.683982 4712 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7e92eef8-fc7a-4b92-8a68-95d37b647aa4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.683991 4712 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7e92eef8-fc7a-4b92-8a68-95d37b647aa4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.698020 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7e92eef8-fc7a-4b92-8a68-95d37b647aa4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7e92eef8-fc7a-4b92-8a68-95d37b647aa4" (UID: "7e92eef8-fc7a-4b92-8a68-95d37b647aa4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.794410 4712 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7e92eef8-fc7a-4b92-8a68-95d37b647aa4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.834864 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9a352b7-2d57-417e-817f-391880103e98" path="/var/lib/kubelet/pods/d9a352b7-2d57-417e-817f-391880103e98/volumes" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.957681 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7756b9d78c-2fsl2" event={"ID":"7e92eef8-fc7a-4b92-8a68-95d37b647aa4","Type":"ContainerDied","Data":"c62319af760d74be096684984ea62d2aae63effcf06380ed22977c7640d3b728"} Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.957742 4712 scope.go:117] "RemoveContainer" containerID="db7d4354619efe82d62cedb1e6502be85189d0715b9036669b99393e2d070b8c" Jan 30 17:19:49 crc kubenswrapper[4712]: I0130 17:19:49.957907 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7756b9d78c-2fsl2" Jan 30 17:19:50 crc kubenswrapper[4712]: I0130 17:19:50.002369 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-2fsl2"] Jan 30 17:19:50 crc kubenswrapper[4712]: I0130 17:19:50.028725 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7756b9d78c-2fsl2"] Jan 30 17:19:50 crc kubenswrapper[4712]: I0130 17:19:50.029035 4712 scope.go:117] "RemoveContainer" containerID="55f5e38662d9207fd042d24ee573ecd40ff09380de4b15a5e29ff9541f1211a0" Jan 30 17:19:50 crc kubenswrapper[4712]: I0130 17:19:50.057686 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:19:50 crc kubenswrapper[4712]: I0130 17:19:50.984218 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f6843e22-e667-40c6-8d0e-e2484fcdff9a","Type":"ContainerStarted","Data":"1a5be2447c5f8e1871cf33c0deb7f7fc3497f0421a20f7cbeff5ae9e2110d390"} Jan 30 17:19:50 crc kubenswrapper[4712]: I0130 17:19:50.984551 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f6843e22-e667-40c6-8d0e-e2484fcdff9a","Type":"ContainerStarted","Data":"06768615210ba6bfdb413cfc295ef737195ef11adc88154c988a7f12d5436a05"} Jan 30 17:19:50 crc kubenswrapper[4712]: I0130 17:19:50.984561 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f6843e22-e667-40c6-8d0e-e2484fcdff9a","Type":"ContainerStarted","Data":"58775ec4f5227c57e75b0868f461bf534e494994d2c723c449e1bf16aef237c4"} Jan 30 17:19:51 crc kubenswrapper[4712]: I0130 17:19:51.026374 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.026352898 podStartE2EDuration="2.026352898s" podCreationTimestamp="2026-01-30 17:19:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:19:51.014927372 +0000 UTC m=+1527.921936841" watchObservedRunningTime="2026-01-30 17:19:51.026352898 +0000 UTC m=+1527.933362367" Jan 30 17:19:51 crc kubenswrapper[4712]: I0130 17:19:51.814075 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e92eef8-fc7a-4b92-8a68-95d37b647aa4" path="/var/lib/kubelet/pods/7e92eef8-fc7a-4b92-8a68-95d37b647aa4/volumes" Jan 30 17:19:53 crc kubenswrapper[4712]: E0130 17:19:53.943039 4712 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf564ed01_d852_40b5_853f_f79a37a114dc.slice/crio-00765a0091717570fdc4d176c373c4eca61c407007da373392d6a2c9630ac64b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf564ed01_d852_40b5_853f_f79a37a114dc.slice/crio-conmon-00765a0091717570fdc4d176c373c4eca61c407007da373392d6a2c9630ac64b.scope\": RecentStats: unable to find data in memory cache]" Jan 30 17:19:54 crc kubenswrapper[4712]: I0130 17:19:54.026549 4712 generic.go:334] "Generic (PLEG): container finished" podID="f564ed01-d852-40b5-853f-f79a37a114dc" containerID="00765a0091717570fdc4d176c373c4eca61c407007da373392d6a2c9630ac64b" exitCode=0 Jan 30 17:19:54 crc kubenswrapper[4712]: I0130 17:19:54.026611 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-x27hm" 
event={"ID":"f564ed01-d852-40b5-853f-f79a37a114dc","Type":"ContainerDied","Data":"00765a0091717570fdc4d176c373c4eca61c407007da373392d6a2c9630ac64b"} Jan 30 17:19:54 crc kubenswrapper[4712]: I0130 17:19:54.679163 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 17:19:54 crc kubenswrapper[4712]: I0130 17:19:54.679525 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 17:19:55 crc kubenswrapper[4712]: I0130 17:19:55.073672 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-56f8b66d48-7wr47" podUID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused" Jan 30 17:19:55 crc kubenswrapper[4712]: I0130 17:19:55.356687 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-64655dbc44-pvj2c" podUID="6a28b495-ecf0-409e-9558-ee794a46dbd1" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.156:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.156:8443: connect: connection refused" Jan 30 17:19:55 crc kubenswrapper[4712]: I0130 17:19:55.490692 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-x27hm" Jan 30 17:19:55 crc kubenswrapper[4712]: I0130 17:19:55.625176 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f564ed01-d852-40b5-853f-f79a37a114dc-scripts\") pod \"f564ed01-d852-40b5-853f-f79a37a114dc\" (UID: \"f564ed01-d852-40b5-853f-f79a37a114dc\") " Jan 30 17:19:55 crc kubenswrapper[4712]: I0130 17:19:55.625493 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkzwp\" (UniqueName: \"kubernetes.io/projected/f564ed01-d852-40b5-853f-f79a37a114dc-kube-api-access-fkzwp\") pod \"f564ed01-d852-40b5-853f-f79a37a114dc\" (UID: \"f564ed01-d852-40b5-853f-f79a37a114dc\") " Jan 30 17:19:55 crc kubenswrapper[4712]: I0130 17:19:55.625583 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f564ed01-d852-40b5-853f-f79a37a114dc-config-data\") pod \"f564ed01-d852-40b5-853f-f79a37a114dc\" (UID: \"f564ed01-d852-40b5-853f-f79a37a114dc\") " Jan 30 17:19:55 crc kubenswrapper[4712]: I0130 17:19:55.625611 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f564ed01-d852-40b5-853f-f79a37a114dc-combined-ca-bundle\") pod \"f564ed01-d852-40b5-853f-f79a37a114dc\" (UID: \"f564ed01-d852-40b5-853f-f79a37a114dc\") " Jan 30 17:19:55 crc kubenswrapper[4712]: I0130 17:19:55.631767 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f564ed01-d852-40b5-853f-f79a37a114dc-scripts" (OuterVolumeSpecName: "scripts") pod "f564ed01-d852-40b5-853f-f79a37a114dc" (UID: "f564ed01-d852-40b5-853f-f79a37a114dc"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:19:55 crc kubenswrapper[4712]: I0130 17:19:55.644674 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f564ed01-d852-40b5-853f-f79a37a114dc-kube-api-access-fkzwp" (OuterVolumeSpecName: "kube-api-access-fkzwp") pod "f564ed01-d852-40b5-853f-f79a37a114dc" (UID: "f564ed01-d852-40b5-853f-f79a37a114dc"). InnerVolumeSpecName "kube-api-access-fkzwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:19:55 crc kubenswrapper[4712]: I0130 17:19:55.657674 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f564ed01-d852-40b5-853f-f79a37a114dc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f564ed01-d852-40b5-853f-f79a37a114dc" (UID: "f564ed01-d852-40b5-853f-f79a37a114dc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:19:55 crc kubenswrapper[4712]: I0130 17:19:55.663419 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f564ed01-d852-40b5-853f-f79a37a114dc-config-data" (OuterVolumeSpecName: "config-data") pod "f564ed01-d852-40b5-853f-f79a37a114dc" (UID: "f564ed01-d852-40b5-853f-f79a37a114dc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:19:55 crc kubenswrapper[4712]: I0130 17:19:55.728617 4712 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f564ed01-d852-40b5-853f-f79a37a114dc-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:19:55 crc kubenswrapper[4712]: I0130 17:19:55.728908 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkzwp\" (UniqueName: \"kubernetes.io/projected/f564ed01-d852-40b5-853f-f79a37a114dc-kube-api-access-fkzwp\") on node \"crc\" DevicePath \"\"" Jan 30 17:19:55 crc kubenswrapper[4712]: I0130 17:19:55.728921 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f564ed01-d852-40b5-853f-f79a37a114dc-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:19:55 crc kubenswrapper[4712]: I0130 17:19:55.728933 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f564ed01-d852-40b5-853f-f79a37a114dc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:19:56 crc kubenswrapper[4712]: I0130 17:19:56.047583 4712 generic.go:334] "Generic (PLEG): container finished" podID="a3709de8-50e0-480b-a152-ee1875e8ff4f" containerID="78c4784ad9e9faa1515784fcb4af25f8615ff854c01fa1f7e97e5a378b0ed106" exitCode=0 Jan 30 17:19:56 crc kubenswrapper[4712]: I0130 17:19:56.047652 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5cwz8" event={"ID":"a3709de8-50e0-480b-a152-ee1875e8ff4f","Type":"ContainerDied","Data":"78c4784ad9e9faa1515784fcb4af25f8615ff854c01fa1f7e97e5a378b0ed106"} Jan 30 17:19:56 crc kubenswrapper[4712]: I0130 17:19:56.050123 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-x27hm" event={"ID":"f564ed01-d852-40b5-853f-f79a37a114dc","Type":"ContainerDied","Data":"cb76fc6d00ef597daaf13b08783f8a384e27a92e67d2dbd80cf90df86de3721b"} Jan 30 17:19:56 crc kubenswrapper[4712]: I0130 17:19:56.050161 4712 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="cb76fc6d00ef597daaf13b08783f8a384e27a92e67d2dbd80cf90df86de3721b" Jan 30 17:19:56 crc kubenswrapper[4712]: I0130 17:19:56.050229 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-x27hm" Jan 30 17:19:56 crc kubenswrapper[4712]: I0130 17:19:56.245970 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:19:56 crc kubenswrapper[4712]: I0130 17:19:56.246361 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="109193de-e767-47ce-a608-e0420d2d7a40" containerName="nova-api-api" containerID="cri-o://da940598861ab69bd38ec701a31da285229381878517fefd5bf162c99cde6cbe" gracePeriod=30 Jan 30 17:19:56 crc kubenswrapper[4712]: I0130 17:19:56.246318 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="109193de-e767-47ce-a608-e0420d2d7a40" containerName="nova-api-log" containerID="cri-o://f93958dc3ffde5076a74e8d6438d320d844c685a6c3ccee66ea915ebaf59be6b" gracePeriod=30 Jan 30 17:19:56 crc kubenswrapper[4712]: I0130 17:19:56.292974 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:19:56 crc kubenswrapper[4712]: I0130 17:19:56.293191 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="61452352-342c-4cba-8489-13d8a26ba14b" containerName="nova-scheduler-scheduler" containerID="cri-o://3f32fd7a9a28218251c304278e16b0d20a29e5c08777326a48719eb9fc441c34" gracePeriod=30 Jan 30 17:19:56 crc kubenswrapper[4712]: I0130 17:19:56.344202 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:19:56 crc kubenswrapper[4712]: I0130 17:19:56.344695 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f6843e22-e667-40c6-8d0e-e2484fcdff9a" containerName="nova-metadata-log" containerID="cri-o://06768615210ba6bfdb413cfc295ef737195ef11adc88154c988a7f12d5436a05" gracePeriod=30 Jan 30 17:19:56 crc kubenswrapper[4712]: I0130 17:19:56.345114 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="f6843e22-e667-40c6-8d0e-e2484fcdff9a" containerName="nova-metadata-metadata" containerID="cri-o://1a5be2447c5f8e1871cf33c0deb7f7fc3497f0421a20f7cbeff5ae9e2110d390" gracePeriod=30 Jan 30 17:19:57 crc kubenswrapper[4712]: I0130 17:19:57.062973 4712 generic.go:334] "Generic (PLEG): container finished" podID="109193de-e767-47ce-a608-e0420d2d7a40" containerID="f93958dc3ffde5076a74e8d6438d320d844c685a6c3ccee66ea915ebaf59be6b" exitCode=143 Jan 30 17:19:57 crc kubenswrapper[4712]: I0130 17:19:57.063058 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"109193de-e767-47ce-a608-e0420d2d7a40","Type":"ContainerDied","Data":"f93958dc3ffde5076a74e8d6438d320d844c685a6c3ccee66ea915ebaf59be6b"} Jan 30 17:19:57 crc kubenswrapper[4712]: I0130 17:19:57.067604 4712 generic.go:334] "Generic (PLEG): container finished" podID="f6843e22-e667-40c6-8d0e-e2484fcdff9a" containerID="1a5be2447c5f8e1871cf33c0deb7f7fc3497f0421a20f7cbeff5ae9e2110d390" exitCode=0 Jan 30 17:19:57 crc kubenswrapper[4712]: I0130 17:19:57.067646 4712 generic.go:334] "Generic (PLEG): container finished" podID="f6843e22-e667-40c6-8d0e-e2484fcdff9a" containerID="06768615210ba6bfdb413cfc295ef737195ef11adc88154c988a7f12d5436a05" exitCode=143 Jan 30 17:19:57 
crc kubenswrapper[4712]: I0130 17:19:57.067931 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f6843e22-e667-40c6-8d0e-e2484fcdff9a","Type":"ContainerDied","Data":"1a5be2447c5f8e1871cf33c0deb7f7fc3497f0421a20f7cbeff5ae9e2110d390"} Jan 30 17:19:57 crc kubenswrapper[4712]: I0130 17:19:57.067985 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f6843e22-e667-40c6-8d0e-e2484fcdff9a","Type":"ContainerDied","Data":"06768615210ba6bfdb413cfc295ef737195ef11adc88154c988a7f12d5436a05"} Jan 30 17:19:57 crc kubenswrapper[4712]: I0130 17:19:57.467547 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 17:19:57 crc kubenswrapper[4712]: I0130 17:19:57.554550 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5cwz8" Jan 30 17:19:57 crc kubenswrapper[4712]: I0130 17:19:57.570173 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6843e22-e667-40c6-8d0e-e2484fcdff9a-combined-ca-bundle\") pod \"f6843e22-e667-40c6-8d0e-e2484fcdff9a\" (UID: \"f6843e22-e667-40c6-8d0e-e2484fcdff9a\") " Jan 30 17:19:57 crc kubenswrapper[4712]: I0130 17:19:57.570222 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6843e22-e667-40c6-8d0e-e2484fcdff9a-logs\") pod \"f6843e22-e667-40c6-8d0e-e2484fcdff9a\" (UID: \"f6843e22-e667-40c6-8d0e-e2484fcdff9a\") " Jan 30 17:19:57 crc kubenswrapper[4712]: I0130 17:19:57.570274 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcqrb\" (UniqueName: \"kubernetes.io/projected/f6843e22-e667-40c6-8d0e-e2484fcdff9a-kube-api-access-rcqrb\") pod \"f6843e22-e667-40c6-8d0e-e2484fcdff9a\" (UID: \"f6843e22-e667-40c6-8d0e-e2484fcdff9a\") " Jan 30 17:19:57 crc kubenswrapper[4712]: I0130 17:19:57.570424 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6843e22-e667-40c6-8d0e-e2484fcdff9a-config-data\") pod \"f6843e22-e667-40c6-8d0e-e2484fcdff9a\" (UID: \"f6843e22-e667-40c6-8d0e-e2484fcdff9a\") " Jan 30 17:19:57 crc kubenswrapper[4712]: I0130 17:19:57.570496 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6843e22-e667-40c6-8d0e-e2484fcdff9a-nova-metadata-tls-certs\") pod \"f6843e22-e667-40c6-8d0e-e2484fcdff9a\" (UID: \"f6843e22-e667-40c6-8d0e-e2484fcdff9a\") " Jan 30 17:19:57 crc kubenswrapper[4712]: I0130 17:19:57.571707 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6843e22-e667-40c6-8d0e-e2484fcdff9a-logs" (OuterVolumeSpecName: "logs") pod "f6843e22-e667-40c6-8d0e-e2484fcdff9a" (UID: "f6843e22-e667-40c6-8d0e-e2484fcdff9a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:19:57 crc kubenswrapper[4712]: I0130 17:19:57.600376 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6843e22-e667-40c6-8d0e-e2484fcdff9a-kube-api-access-rcqrb" (OuterVolumeSpecName: "kube-api-access-rcqrb") pod "f6843e22-e667-40c6-8d0e-e2484fcdff9a" (UID: "f6843e22-e667-40c6-8d0e-e2484fcdff9a"). InnerVolumeSpecName "kube-api-access-rcqrb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:19:57 crc kubenswrapper[4712]: I0130 17:19:57.661654 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6843e22-e667-40c6-8d0e-e2484fcdff9a-config-data" (OuterVolumeSpecName: "config-data") pod "f6843e22-e667-40c6-8d0e-e2484fcdff9a" (UID: "f6843e22-e667-40c6-8d0e-e2484fcdff9a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:19:57 crc kubenswrapper[4712]: I0130 17:19:57.661768 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6843e22-e667-40c6-8d0e-e2484fcdff9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f6843e22-e667-40c6-8d0e-e2484fcdff9a" (UID: "f6843e22-e667-40c6-8d0e-e2484fcdff9a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:19:57 crc kubenswrapper[4712]: I0130 17:19:57.671947 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3709de8-50e0-480b-a152-ee1875e8ff4f-scripts\") pod \"a3709de8-50e0-480b-a152-ee1875e8ff4f\" (UID: \"a3709de8-50e0-480b-a152-ee1875e8ff4f\") " Jan 30 17:19:57 crc kubenswrapper[4712]: I0130 17:19:57.672008 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3709de8-50e0-480b-a152-ee1875e8ff4f-config-data\") pod \"a3709de8-50e0-480b-a152-ee1875e8ff4f\" (UID: \"a3709de8-50e0-480b-a152-ee1875e8ff4f\") " Jan 30 17:19:57 crc kubenswrapper[4712]: I0130 17:19:57.672041 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-skfsr\" (UniqueName: \"kubernetes.io/projected/a3709de8-50e0-480b-a152-ee1875e8ff4f-kube-api-access-skfsr\") pod \"a3709de8-50e0-480b-a152-ee1875e8ff4f\" (UID: \"a3709de8-50e0-480b-a152-ee1875e8ff4f\") " Jan 30 17:19:57 crc kubenswrapper[4712]: I0130 17:19:57.672131 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3709de8-50e0-480b-a152-ee1875e8ff4f-combined-ca-bundle\") pod \"a3709de8-50e0-480b-a152-ee1875e8ff4f\" (UID: \"a3709de8-50e0-480b-a152-ee1875e8ff4f\") " Jan 30 17:19:57 crc kubenswrapper[4712]: I0130 17:19:57.672710 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6843e22-e667-40c6-8d0e-e2484fcdff9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:19:57 crc kubenswrapper[4712]: I0130 17:19:57.672722 4712 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f6843e22-e667-40c6-8d0e-e2484fcdff9a-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:19:57 crc kubenswrapper[4712]: I0130 17:19:57.672732 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcqrb\" (UniqueName: \"kubernetes.io/projected/f6843e22-e667-40c6-8d0e-e2484fcdff9a-kube-api-access-rcqrb\") on node \"crc\" DevicePath \"\"" Jan 30 17:19:57 crc kubenswrapper[4712]: I0130 17:19:57.672742 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6843e22-e667-40c6-8d0e-e2484fcdff9a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:19:57 crc kubenswrapper[4712]: I0130 17:19:57.697974 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/a3709de8-50e0-480b-a152-ee1875e8ff4f-scripts" (OuterVolumeSpecName: "scripts") pod "a3709de8-50e0-480b-a152-ee1875e8ff4f" (UID: "a3709de8-50e0-480b-a152-ee1875e8ff4f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:19:57 crc kubenswrapper[4712]: I0130 17:19:57.711075 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3709de8-50e0-480b-a152-ee1875e8ff4f-kube-api-access-skfsr" (OuterVolumeSpecName: "kube-api-access-skfsr") pod "a3709de8-50e0-480b-a152-ee1875e8ff4f" (UID: "a3709de8-50e0-480b-a152-ee1875e8ff4f"). InnerVolumeSpecName "kube-api-access-skfsr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:19:57 crc kubenswrapper[4712]: I0130 17:19:57.723042 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6843e22-e667-40c6-8d0e-e2484fcdff9a-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "f6843e22-e667-40c6-8d0e-e2484fcdff9a" (UID: "f6843e22-e667-40c6-8d0e-e2484fcdff9a"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:19:57 crc kubenswrapper[4712]: I0130 17:19:57.725294 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3709de8-50e0-480b-a152-ee1875e8ff4f-config-data" (OuterVolumeSpecName: "config-data") pod "a3709de8-50e0-480b-a152-ee1875e8ff4f" (UID: "a3709de8-50e0-480b-a152-ee1875e8ff4f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:19:57 crc kubenswrapper[4712]: I0130 17:19:57.778131 4712 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a3709de8-50e0-480b-a152-ee1875e8ff4f-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:19:57 crc kubenswrapper[4712]: I0130 17:19:57.778165 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a3709de8-50e0-480b-a152-ee1875e8ff4f-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:19:57 crc kubenswrapper[4712]: I0130 17:19:57.778175 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-skfsr\" (UniqueName: \"kubernetes.io/projected/a3709de8-50e0-480b-a152-ee1875e8ff4f-kube-api-access-skfsr\") on node \"crc\" DevicePath \"\"" Jan 30 17:19:57 crc kubenswrapper[4712]: I0130 17:19:57.778187 4712 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6843e22-e667-40c6-8d0e-e2484fcdff9a-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 17:19:57 crc kubenswrapper[4712]: I0130 17:19:57.786984 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a3709de8-50e0-480b-a152-ee1875e8ff4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a3709de8-50e0-480b-a152-ee1875e8ff4f" (UID: "a3709de8-50e0-480b-a152-ee1875e8ff4f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:19:57 crc kubenswrapper[4712]: I0130 17:19:57.879560 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a3709de8-50e0-480b-a152-ee1875e8ff4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.078813 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-5cwz8" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.078952 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-5cwz8" event={"ID":"a3709de8-50e0-480b-a152-ee1875e8ff4f","Type":"ContainerDied","Data":"e71c4b093740466028b1e13c251aa69e9018237930891bb1b539769d945002f8"} Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.079013 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e71c4b093740466028b1e13c251aa69e9018237930891bb1b539769d945002f8" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.081255 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"f6843e22-e667-40c6-8d0e-e2484fcdff9a","Type":"ContainerDied","Data":"58775ec4f5227c57e75b0868f461bf534e494994d2c723c449e1bf16aef237c4"} Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.081303 4712 scope.go:117] "RemoveContainer" containerID="1a5be2447c5f8e1871cf33c0deb7f7fc3497f0421a20f7cbeff5ae9e2110d390" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.081443 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.107922 4712 scope.go:117] "RemoveContainer" containerID="06768615210ba6bfdb413cfc295ef737195ef11adc88154c988a7f12d5436a05" Jan 30 17:19:58 crc kubenswrapper[4712]: E0130 17:19:58.111953 4712 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3f32fd7a9a28218251c304278e16b0d20a29e5c08777326a48719eb9fc441c34" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 17:19:58 crc kubenswrapper[4712]: E0130 17:19:58.116126 4712 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3f32fd7a9a28218251c304278e16b0d20a29e5c08777326a48719eb9fc441c34" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 17:19:58 crc kubenswrapper[4712]: E0130 17:19:58.122540 4712 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3f32fd7a9a28218251c304278e16b0d20a29e5c08777326a48719eb9fc441c34" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 17:19:58 crc kubenswrapper[4712]: E0130 17:19:58.122616 4712 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="61452352-342c-4cba-8489-13d8a26ba14b" containerName="nova-scheduler-scheduler" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.138918 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.152675 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.176936 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:19:58 crc kubenswrapper[4712]: E0130 17:19:58.177339 4712 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f564ed01-d852-40b5-853f-f79a37a114dc" containerName="nova-manage" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.177356 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="f564ed01-d852-40b5-853f-f79a37a114dc" containerName="nova-manage" Jan 30 17:19:58 crc kubenswrapper[4712]: E0130 17:19:58.177391 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6843e22-e667-40c6-8d0e-e2484fcdff9a" containerName="nova-metadata-log" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.177401 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6843e22-e667-40c6-8d0e-e2484fcdff9a" containerName="nova-metadata-log" Jan 30 17:19:58 crc kubenswrapper[4712]: E0130 17:19:58.177420 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3709de8-50e0-480b-a152-ee1875e8ff4f" containerName="nova-cell1-conductor-db-sync" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.177427 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3709de8-50e0-480b-a152-ee1875e8ff4f" containerName="nova-cell1-conductor-db-sync" Jan 30 17:19:58 crc kubenswrapper[4712]: E0130 17:19:58.177447 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6843e22-e667-40c6-8d0e-e2484fcdff9a" containerName="nova-metadata-metadata" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.177453 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6843e22-e667-40c6-8d0e-e2484fcdff9a" containerName="nova-metadata-metadata" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.177611 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="f564ed01-d852-40b5-853f-f79a37a114dc" containerName="nova-manage" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.177628 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6843e22-e667-40c6-8d0e-e2484fcdff9a" containerName="nova-metadata-log" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.177637 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6843e22-e667-40c6-8d0e-e2484fcdff9a" containerName="nova-metadata-metadata" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.177652 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3709de8-50e0-480b-a152-ee1875e8ff4f" containerName="nova-cell1-conductor-db-sync" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.178584 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.181314 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.181614 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.192243 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.205648 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.206979 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.211066 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.250244 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.300486 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c8f0931-676e-406e-92fd-d6d09a065cf9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3c8f0931-676e-406e-92fd-d6d09a065cf9\") " pod="openstack/nova-metadata-0" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.301045 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c8f0931-676e-406e-92fd-d6d09a065cf9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3c8f0931-676e-406e-92fd-d6d09a065cf9\") " pod="openstack/nova-metadata-0" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.301195 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ea9e5a8-2988-4fa3-b436-ef58de9d2fa6-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5ea9e5a8-2988-4fa3-b436-ef58de9d2fa6\") " pod="openstack/nova-cell1-conductor-0" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.301295 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ea9e5a8-2988-4fa3-b436-ef58de9d2fa6-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5ea9e5a8-2988-4fa3-b436-ef58de9d2fa6\") " pod="openstack/nova-cell1-conductor-0" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.302460 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c8f0931-676e-406e-92fd-d6d09a065cf9-config-data\") pod \"nova-metadata-0\" (UID: \"3c8f0931-676e-406e-92fd-d6d09a065cf9\") " pod="openstack/nova-metadata-0" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.302547 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c8f0931-676e-406e-92fd-d6d09a065cf9-logs\") pod \"nova-metadata-0\" (UID: \"3c8f0931-676e-406e-92fd-d6d09a065cf9\") " pod="openstack/nova-metadata-0" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.302828 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hn2xd\" (UniqueName: \"kubernetes.io/projected/3c8f0931-676e-406e-92fd-d6d09a065cf9-kube-api-access-hn2xd\") pod \"nova-metadata-0\" (UID: \"3c8f0931-676e-406e-92fd-d6d09a065cf9\") " pod="openstack/nova-metadata-0" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.303664 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h84d\" (UniqueName: \"kubernetes.io/projected/5ea9e5a8-2988-4fa3-b436-ef58de9d2fa6-kube-api-access-5h84d\") pod \"nova-cell1-conductor-0\" (UID: \"5ea9e5a8-2988-4fa3-b436-ef58de9d2fa6\") " pod="openstack/nova-cell1-conductor-0" Jan 30 17:19:58 crc kubenswrapper[4712]: 
I0130 17:19:58.405255 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ea9e5a8-2988-4fa3-b436-ef58de9d2fa6-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5ea9e5a8-2988-4fa3-b436-ef58de9d2fa6\") " pod="openstack/nova-cell1-conductor-0" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.405328 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ea9e5a8-2988-4fa3-b436-ef58de9d2fa6-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5ea9e5a8-2988-4fa3-b436-ef58de9d2fa6\") " pod="openstack/nova-cell1-conductor-0" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.405391 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c8f0931-676e-406e-92fd-d6d09a065cf9-config-data\") pod \"nova-metadata-0\" (UID: \"3c8f0931-676e-406e-92fd-d6d09a065cf9\") " pod="openstack/nova-metadata-0" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.405413 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c8f0931-676e-406e-92fd-d6d09a065cf9-logs\") pod \"nova-metadata-0\" (UID: \"3c8f0931-676e-406e-92fd-d6d09a065cf9\") " pod="openstack/nova-metadata-0" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.405467 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hn2xd\" (UniqueName: \"kubernetes.io/projected/3c8f0931-676e-406e-92fd-d6d09a065cf9-kube-api-access-hn2xd\") pod \"nova-metadata-0\" (UID: \"3c8f0931-676e-406e-92fd-d6d09a065cf9\") " pod="openstack/nova-metadata-0" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.405521 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5h84d\" (UniqueName: \"kubernetes.io/projected/5ea9e5a8-2988-4fa3-b436-ef58de9d2fa6-kube-api-access-5h84d\") pod \"nova-cell1-conductor-0\" (UID: \"5ea9e5a8-2988-4fa3-b436-ef58de9d2fa6\") " pod="openstack/nova-cell1-conductor-0" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.405578 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c8f0931-676e-406e-92fd-d6d09a065cf9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3c8f0931-676e-406e-92fd-d6d09a065cf9\") " pod="openstack/nova-metadata-0" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.405609 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c8f0931-676e-406e-92fd-d6d09a065cf9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3c8f0931-676e-406e-92fd-d6d09a065cf9\") " pod="openstack/nova-metadata-0" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.406113 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c8f0931-676e-406e-92fd-d6d09a065cf9-logs\") pod \"nova-metadata-0\" (UID: \"3c8f0931-676e-406e-92fd-d6d09a065cf9\") " pod="openstack/nova-metadata-0" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.411177 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c8f0931-676e-406e-92fd-d6d09a065cf9-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"3c8f0931-676e-406e-92fd-d6d09a065cf9\") " pod="openstack/nova-metadata-0" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.411779 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c8f0931-676e-406e-92fd-d6d09a065cf9-config-data\") pod \"nova-metadata-0\" (UID: \"3c8f0931-676e-406e-92fd-d6d09a065cf9\") " pod="openstack/nova-metadata-0" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.411916 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5ea9e5a8-2988-4fa3-b436-ef58de9d2fa6-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"5ea9e5a8-2988-4fa3-b436-ef58de9d2fa6\") " pod="openstack/nova-cell1-conductor-0" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.414478 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c8f0931-676e-406e-92fd-d6d09a065cf9-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"3c8f0931-676e-406e-92fd-d6d09a065cf9\") " pod="openstack/nova-metadata-0" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.426506 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5ea9e5a8-2988-4fa3-b436-ef58de9d2fa6-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"5ea9e5a8-2988-4fa3-b436-ef58de9d2fa6\") " pod="openstack/nova-cell1-conductor-0" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.428820 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hn2xd\" (UniqueName: \"kubernetes.io/projected/3c8f0931-676e-406e-92fd-d6d09a065cf9-kube-api-access-hn2xd\") pod \"nova-metadata-0\" (UID: \"3c8f0931-676e-406e-92fd-d6d09a065cf9\") " pod="openstack/nova-metadata-0" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.430550 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5h84d\" (UniqueName: \"kubernetes.io/projected/5ea9e5a8-2988-4fa3-b436-ef58de9d2fa6-kube-api-access-5h84d\") pod \"nova-cell1-conductor-0\" (UID: \"5ea9e5a8-2988-4fa3-b436-ef58de9d2fa6\") " pod="openstack/nova-cell1-conductor-0" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.537908 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 17:19:58 crc kubenswrapper[4712]: I0130 17:19:58.550112 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 30 17:19:59 crc kubenswrapper[4712]: I0130 17:19:59.152425 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 30 17:19:59 crc kubenswrapper[4712]: I0130 17:19:59.303580 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:19:59 crc kubenswrapper[4712]: I0130 17:19:59.828376 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6843e22-e667-40c6-8d0e-e2484fcdff9a" path="/var/lib/kubelet/pods/f6843e22-e667-40c6-8d0e-e2484fcdff9a/volumes" Jan 30 17:20:00 crc kubenswrapper[4712]: I0130 17:20:00.138284 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3c8f0931-676e-406e-92fd-d6d09a065cf9","Type":"ContainerStarted","Data":"7b009d3b03aa306ea07f88e85058719f6d6428984ab9604b838edc55f9caac0d"} Jan 30 17:20:00 crc kubenswrapper[4712]: I0130 17:20:00.138325 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3c8f0931-676e-406e-92fd-d6d09a065cf9","Type":"ContainerStarted","Data":"0630b320a5a3776024325401aeb687cdca25e5aa40f5315b83104471ff56069b"} Jan 30 17:20:00 crc kubenswrapper[4712]: I0130 17:20:00.138336 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3c8f0931-676e-406e-92fd-d6d09a065cf9","Type":"ContainerStarted","Data":"491bd1bf9d348930841048652e60d985c0c59150878fb31abe7c63c1b56283f4"} Jan 30 17:20:00 crc kubenswrapper[4712]: I0130 17:20:00.154371 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"5ea9e5a8-2988-4fa3-b436-ef58de9d2fa6","Type":"ContainerStarted","Data":"9d23fae194d1c68d44033dce1df5fb98af7d827b883be805543a14950e3d91f6"} Jan 30 17:20:00 crc kubenswrapper[4712]: I0130 17:20:00.154458 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"5ea9e5a8-2988-4fa3-b436-ef58de9d2fa6","Type":"ContainerStarted","Data":"5b5aab68f726a658f1ece1c46c5515b080d4a6c7fe60eaee16f7e9f55aecf57d"} Jan 30 17:20:00 crc kubenswrapper[4712]: I0130 17:20:00.154530 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 30 17:20:00 crc kubenswrapper[4712]: I0130 17:20:00.161399 4712 generic.go:334] "Generic (PLEG): container finished" podID="109193de-e767-47ce-a608-e0420d2d7a40" containerID="da940598861ab69bd38ec701a31da285229381878517fefd5bf162c99cde6cbe" exitCode=0 Jan 30 17:20:00 crc kubenswrapper[4712]: I0130 17:20:00.161466 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"109193de-e767-47ce-a608-e0420d2d7a40","Type":"ContainerDied","Data":"da940598861ab69bd38ec701a31da285229381878517fefd5bf162c99cde6cbe"} Jan 30 17:20:00 crc kubenswrapper[4712]: I0130 17:20:00.169761 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.169740122 podStartE2EDuration="2.169740122s" podCreationTimestamp="2026-01-30 17:19:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:20:00.165071939 +0000 UTC m=+1537.072081408" watchObservedRunningTime="2026-01-30 17:20:00.169740122 +0000 UTC m=+1537.076749591" Jan 30 17:20:00 crc kubenswrapper[4712]: I0130 17:20:00.189295 4712 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.189259514 podStartE2EDuration="2.189259514s" podCreationTimestamp="2026-01-30 17:19:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:20:00.184259433 +0000 UTC m=+1537.091268902" watchObservedRunningTime="2026-01-30 17:20:00.189259514 +0000 UTC m=+1537.096268993" Jan 30 17:20:00 crc kubenswrapper[4712]: I0130 17:20:00.218394 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 17:20:00 crc kubenswrapper[4712]: I0130 17:20:00.356882 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/109193de-e767-47ce-a608-e0420d2d7a40-logs\") pod \"109193de-e767-47ce-a608-e0420d2d7a40\" (UID: \"109193de-e767-47ce-a608-e0420d2d7a40\") " Jan 30 17:20:00 crc kubenswrapper[4712]: I0130 17:20:00.356979 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/109193de-e767-47ce-a608-e0420d2d7a40-config-data\") pod \"109193de-e767-47ce-a608-e0420d2d7a40\" (UID: \"109193de-e767-47ce-a608-e0420d2d7a40\") " Jan 30 17:20:00 crc kubenswrapper[4712]: I0130 17:20:00.357561 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/109193de-e767-47ce-a608-e0420d2d7a40-logs" (OuterVolumeSpecName: "logs") pod "109193de-e767-47ce-a608-e0420d2d7a40" (UID: "109193de-e767-47ce-a608-e0420d2d7a40"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:20:00 crc kubenswrapper[4712]: I0130 17:20:00.357069 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/109193de-e767-47ce-a608-e0420d2d7a40-combined-ca-bundle\") pod \"109193de-e767-47ce-a608-e0420d2d7a40\" (UID: \"109193de-e767-47ce-a608-e0420d2d7a40\") " Jan 30 17:20:00 crc kubenswrapper[4712]: I0130 17:20:00.357902 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6l8g\" (UniqueName: \"kubernetes.io/projected/109193de-e767-47ce-a608-e0420d2d7a40-kube-api-access-w6l8g\") pod \"109193de-e767-47ce-a608-e0420d2d7a40\" (UID: \"109193de-e767-47ce-a608-e0420d2d7a40\") " Jan 30 17:20:00 crc kubenswrapper[4712]: I0130 17:20:00.358723 4712 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/109193de-e767-47ce-a608-e0420d2d7a40-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:00 crc kubenswrapper[4712]: I0130 17:20:00.364391 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/109193de-e767-47ce-a608-e0420d2d7a40-kube-api-access-w6l8g" (OuterVolumeSpecName: "kube-api-access-w6l8g") pod "109193de-e767-47ce-a608-e0420d2d7a40" (UID: "109193de-e767-47ce-a608-e0420d2d7a40"). InnerVolumeSpecName "kube-api-access-w6l8g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:20:00 crc kubenswrapper[4712]: I0130 17:20:00.390068 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/109193de-e767-47ce-a608-e0420d2d7a40-config-data" (OuterVolumeSpecName: "config-data") pod "109193de-e767-47ce-a608-e0420d2d7a40" (UID: "109193de-e767-47ce-a608-e0420d2d7a40"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:20:00 crc kubenswrapper[4712]: I0130 17:20:00.397058 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/109193de-e767-47ce-a608-e0420d2d7a40-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "109193de-e767-47ce-a608-e0420d2d7a40" (UID: "109193de-e767-47ce-a608-e0420d2d7a40"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:20:00 crc kubenswrapper[4712]: I0130 17:20:00.460943 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/109193de-e767-47ce-a608-e0420d2d7a40-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:00 crc kubenswrapper[4712]: I0130 17:20:00.461212 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/109193de-e767-47ce-a608-e0420d2d7a40-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:00 crc kubenswrapper[4712]: I0130 17:20:00.461355 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6l8g\" (UniqueName: \"kubernetes.io/projected/109193de-e767-47ce-a608-e0420d2d7a40-kube-api-access-w6l8g\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:01 crc kubenswrapper[4712]: I0130 17:20:01.174350 4712 generic.go:334] "Generic (PLEG): container finished" podID="61452352-342c-4cba-8489-13d8a26ba14b" containerID="3f32fd7a9a28218251c304278e16b0d20a29e5c08777326a48719eb9fc441c34" exitCode=0 Jan 30 17:20:01 crc kubenswrapper[4712]: I0130 17:20:01.174430 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"61452352-342c-4cba-8489-13d8a26ba14b","Type":"ContainerDied","Data":"3f32fd7a9a28218251c304278e16b0d20a29e5c08777326a48719eb9fc441c34"} Jan 30 17:20:01 crc kubenswrapper[4712]: I0130 17:20:01.176635 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 17:20:01 crc kubenswrapper[4712]: I0130 17:20:01.177073 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"109193de-e767-47ce-a608-e0420d2d7a40","Type":"ContainerDied","Data":"d82e3a90c087dc37ded8d8846a8a679793f2a8f21cd200eb0f87ff5a59af0dad"} Jan 30 17:20:01 crc kubenswrapper[4712]: I0130 17:20:01.177106 4712 scope.go:117] "RemoveContainer" containerID="da940598861ab69bd38ec701a31da285229381878517fefd5bf162c99cde6cbe" Jan 30 17:20:01 crc kubenswrapper[4712]: I0130 17:20:01.217863 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:20:01 crc kubenswrapper[4712]: I0130 17:20:01.237264 4712 scope.go:117] "RemoveContainer" containerID="f93958dc3ffde5076a74e8d6438d320d844c685a6c3ccee66ea915ebaf59be6b" Jan 30 17:20:01 crc kubenswrapper[4712]: I0130 17:20:01.241040 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:20:01 crc kubenswrapper[4712]: I0130 17:20:01.257865 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 17:20:01 crc kubenswrapper[4712]: E0130 17:20:01.258368 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="109193de-e767-47ce-a608-e0420d2d7a40" containerName="nova-api-api" Jan 30 17:20:01 crc kubenswrapper[4712]: I0130 17:20:01.258394 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="109193de-e767-47ce-a608-e0420d2d7a40" containerName="nova-api-api" Jan 30 17:20:01 crc kubenswrapper[4712]: E0130 17:20:01.258410 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="109193de-e767-47ce-a608-e0420d2d7a40" containerName="nova-api-log" Jan 30 17:20:01 crc kubenswrapper[4712]: I0130 17:20:01.258419 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="109193de-e767-47ce-a608-e0420d2d7a40" containerName="nova-api-log" Jan 30 17:20:01 crc kubenswrapper[4712]: I0130 17:20:01.258627 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="109193de-e767-47ce-a608-e0420d2d7a40" containerName="nova-api-api" Jan 30 17:20:01 crc kubenswrapper[4712]: I0130 17:20:01.258653 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="109193de-e767-47ce-a608-e0420d2d7a40" containerName="nova-api-log" Jan 30 17:20:01 crc kubenswrapper[4712]: I0130 17:20:01.259896 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 17:20:01 crc kubenswrapper[4712]: I0130 17:20:01.264537 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 17:20:01 crc kubenswrapper[4712]: I0130 17:20:01.276935 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:20:01 crc kubenswrapper[4712]: I0130 17:20:01.388596 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdgxw\" (UniqueName: \"kubernetes.io/projected/26cd9519-8d6a-4475-ac46-6b107621f27e-kube-api-access-vdgxw\") pod \"nova-api-0\" (UID: \"26cd9519-8d6a-4475-ac46-6b107621f27e\") " pod="openstack/nova-api-0" Jan 30 17:20:01 crc kubenswrapper[4712]: I0130 17:20:01.388976 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26cd9519-8d6a-4475-ac46-6b107621f27e-config-data\") pod \"nova-api-0\" (UID: \"26cd9519-8d6a-4475-ac46-6b107621f27e\") " pod="openstack/nova-api-0" Jan 30 17:20:01 crc kubenswrapper[4712]: I0130 17:20:01.389001 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26cd9519-8d6a-4475-ac46-6b107621f27e-logs\") pod \"nova-api-0\" (UID: \"26cd9519-8d6a-4475-ac46-6b107621f27e\") " pod="openstack/nova-api-0" Jan 30 17:20:01 crc kubenswrapper[4712]: I0130 17:20:01.389065 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26cd9519-8d6a-4475-ac46-6b107621f27e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"26cd9519-8d6a-4475-ac46-6b107621f27e\") " pod="openstack/nova-api-0" Jan 30 17:20:01 crc kubenswrapper[4712]: I0130 17:20:01.490980 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26cd9519-8d6a-4475-ac46-6b107621f27e-config-data\") pod \"nova-api-0\" (UID: \"26cd9519-8d6a-4475-ac46-6b107621f27e\") " pod="openstack/nova-api-0" Jan 30 17:20:01 crc kubenswrapper[4712]: I0130 17:20:01.491027 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26cd9519-8d6a-4475-ac46-6b107621f27e-logs\") pod \"nova-api-0\" (UID: \"26cd9519-8d6a-4475-ac46-6b107621f27e\") " pod="openstack/nova-api-0" Jan 30 17:20:01 crc kubenswrapper[4712]: I0130 17:20:01.491105 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26cd9519-8d6a-4475-ac46-6b107621f27e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"26cd9519-8d6a-4475-ac46-6b107621f27e\") " pod="openstack/nova-api-0" Jan 30 17:20:01 crc kubenswrapper[4712]: I0130 17:20:01.491241 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdgxw\" (UniqueName: \"kubernetes.io/projected/26cd9519-8d6a-4475-ac46-6b107621f27e-kube-api-access-vdgxw\") pod \"nova-api-0\" (UID: \"26cd9519-8d6a-4475-ac46-6b107621f27e\") " pod="openstack/nova-api-0" Jan 30 17:20:01 crc kubenswrapper[4712]: I0130 17:20:01.491724 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26cd9519-8d6a-4475-ac46-6b107621f27e-logs\") pod \"nova-api-0\" (UID: \"26cd9519-8d6a-4475-ac46-6b107621f27e\") " 
pod="openstack/nova-api-0" Jan 30 17:20:01 crc kubenswrapper[4712]: I0130 17:20:01.498196 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26cd9519-8d6a-4475-ac46-6b107621f27e-config-data\") pod \"nova-api-0\" (UID: \"26cd9519-8d6a-4475-ac46-6b107621f27e\") " pod="openstack/nova-api-0" Jan 30 17:20:01 crc kubenswrapper[4712]: I0130 17:20:01.499279 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26cd9519-8d6a-4475-ac46-6b107621f27e-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"26cd9519-8d6a-4475-ac46-6b107621f27e\") " pod="openstack/nova-api-0" Jan 30 17:20:01 crc kubenswrapper[4712]: I0130 17:20:01.510961 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdgxw\" (UniqueName: \"kubernetes.io/projected/26cd9519-8d6a-4475-ac46-6b107621f27e-kube-api-access-vdgxw\") pod \"nova-api-0\" (UID: \"26cd9519-8d6a-4475-ac46-6b107621f27e\") " pod="openstack/nova-api-0" Jan 30 17:20:01 crc kubenswrapper[4712]: I0130 17:20:01.586111 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 17:20:01 crc kubenswrapper[4712]: I0130 17:20:01.596536 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 17:20:01 crc kubenswrapper[4712]: I0130 17:20:01.700442 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61452352-342c-4cba-8489-13d8a26ba14b-config-data\") pod \"61452352-342c-4cba-8489-13d8a26ba14b\" (UID: \"61452352-342c-4cba-8489-13d8a26ba14b\") " Jan 30 17:20:01 crc kubenswrapper[4712]: I0130 17:20:01.700528 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvhzg\" (UniqueName: \"kubernetes.io/projected/61452352-342c-4cba-8489-13d8a26ba14b-kube-api-access-nvhzg\") pod \"61452352-342c-4cba-8489-13d8a26ba14b\" (UID: \"61452352-342c-4cba-8489-13d8a26ba14b\") " Jan 30 17:20:01 crc kubenswrapper[4712]: I0130 17:20:01.700598 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61452352-342c-4cba-8489-13d8a26ba14b-combined-ca-bundle\") pod \"61452352-342c-4cba-8489-13d8a26ba14b\" (UID: \"61452352-342c-4cba-8489-13d8a26ba14b\") " Jan 30 17:20:01 crc kubenswrapper[4712]: I0130 17:20:01.718710 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61452352-342c-4cba-8489-13d8a26ba14b-kube-api-access-nvhzg" (OuterVolumeSpecName: "kube-api-access-nvhzg") pod "61452352-342c-4cba-8489-13d8a26ba14b" (UID: "61452352-342c-4cba-8489-13d8a26ba14b"). InnerVolumeSpecName "kube-api-access-nvhzg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:20:01 crc kubenswrapper[4712]: I0130 17:20:01.750080 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61452352-342c-4cba-8489-13d8a26ba14b-config-data" (OuterVolumeSpecName: "config-data") pod "61452352-342c-4cba-8489-13d8a26ba14b" (UID: "61452352-342c-4cba-8489-13d8a26ba14b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:20:01 crc kubenswrapper[4712]: I0130 17:20:01.767900 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61452352-342c-4cba-8489-13d8a26ba14b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "61452352-342c-4cba-8489-13d8a26ba14b" (UID: "61452352-342c-4cba-8489-13d8a26ba14b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:20:01 crc kubenswrapper[4712]: I0130 17:20:01.807397 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nvhzg\" (UniqueName: \"kubernetes.io/projected/61452352-342c-4cba-8489-13d8a26ba14b-kube-api-access-nvhzg\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:01 crc kubenswrapper[4712]: I0130 17:20:01.807443 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/61452352-342c-4cba-8489-13d8a26ba14b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:01 crc kubenswrapper[4712]: I0130 17:20:01.807479 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/61452352-342c-4cba-8489-13d8a26ba14b-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:01 crc kubenswrapper[4712]: I0130 17:20:01.826835 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="109193de-e767-47ce-a608-e0420d2d7a40" path="/var/lib/kubelet/pods/109193de-e767-47ce-a608-e0420d2d7a40/volumes" Jan 30 17:20:02 crc kubenswrapper[4712]: I0130 17:20:02.055220 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 17:20:02 crc kubenswrapper[4712]: I0130 17:20:02.199360 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 17:20:02 crc kubenswrapper[4712]: I0130 17:20:02.200109 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"61452352-342c-4cba-8489-13d8a26ba14b","Type":"ContainerDied","Data":"5913a9fa17896fe52affca089db206e8d32584614dafb93f55340b0993edbc68"} Jan 30 17:20:02 crc kubenswrapper[4712]: I0130 17:20:02.200143 4712 scope.go:117] "RemoveContainer" containerID="3f32fd7a9a28218251c304278e16b0d20a29e5c08777326a48719eb9fc441c34" Jan 30 17:20:02 crc kubenswrapper[4712]: I0130 17:20:02.235751 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:20:02 crc kubenswrapper[4712]: I0130 17:20:02.259719 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:20:02 crc kubenswrapper[4712]: I0130 17:20:02.296239 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:20:02 crc kubenswrapper[4712]: E0130 17:20:02.297216 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61452352-342c-4cba-8489-13d8a26ba14b" containerName="nova-scheduler-scheduler" Jan 30 17:20:02 crc kubenswrapper[4712]: I0130 17:20:02.297238 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="61452352-342c-4cba-8489-13d8a26ba14b" containerName="nova-scheduler-scheduler" Jan 30 17:20:02 crc kubenswrapper[4712]: I0130 17:20:02.297937 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="61452352-342c-4cba-8489-13d8a26ba14b" containerName="nova-scheduler-scheduler" Jan 30 17:20:02 crc kubenswrapper[4712]: I0130 17:20:02.299188 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 17:20:02 crc kubenswrapper[4712]: I0130 17:20:02.302581 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 17:20:02 crc kubenswrapper[4712]: I0130 17:20:02.337158 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:20:02 crc kubenswrapper[4712]: I0130 17:20:02.378611 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:20:02 crc kubenswrapper[4712]: I0130 17:20:02.440753 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6b1d7ef-cd70-40ba-a25a-7f80b07c16db-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d6b1d7ef-cd70-40ba-a25a-7f80b07c16db\") " pod="openstack/nova-scheduler-0" Jan 30 17:20:02 crc kubenswrapper[4712]: I0130 17:20:02.440817 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvdjj\" (UniqueName: \"kubernetes.io/projected/d6b1d7ef-cd70-40ba-a25a-7f80b07c16db-kube-api-access-mvdjj\") pod \"nova-scheduler-0\" (UID: \"d6b1d7ef-cd70-40ba-a25a-7f80b07c16db\") " pod="openstack/nova-scheduler-0" Jan 30 17:20:02 crc kubenswrapper[4712]: I0130 17:20:02.440894 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6b1d7ef-cd70-40ba-a25a-7f80b07c16db-config-data\") pod \"nova-scheduler-0\" (UID: \"d6b1d7ef-cd70-40ba-a25a-7f80b07c16db\") " pod="openstack/nova-scheduler-0" Jan 30 17:20:02 crc kubenswrapper[4712]: I0130 17:20:02.542542 4712 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6b1d7ef-cd70-40ba-a25a-7f80b07c16db-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d6b1d7ef-cd70-40ba-a25a-7f80b07c16db\") " pod="openstack/nova-scheduler-0" Jan 30 17:20:02 crc kubenswrapper[4712]: I0130 17:20:02.542596 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvdjj\" (UniqueName: \"kubernetes.io/projected/d6b1d7ef-cd70-40ba-a25a-7f80b07c16db-kube-api-access-mvdjj\") pod \"nova-scheduler-0\" (UID: \"d6b1d7ef-cd70-40ba-a25a-7f80b07c16db\") " pod="openstack/nova-scheduler-0" Jan 30 17:20:02 crc kubenswrapper[4712]: I0130 17:20:02.542659 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6b1d7ef-cd70-40ba-a25a-7f80b07c16db-config-data\") pod \"nova-scheduler-0\" (UID: \"d6b1d7ef-cd70-40ba-a25a-7f80b07c16db\") " pod="openstack/nova-scheduler-0" Jan 30 17:20:02 crc kubenswrapper[4712]: I0130 17:20:02.548508 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6b1d7ef-cd70-40ba-a25a-7f80b07c16db-config-data\") pod \"nova-scheduler-0\" (UID: \"d6b1d7ef-cd70-40ba-a25a-7f80b07c16db\") " pod="openstack/nova-scheduler-0" Jan 30 17:20:02 crc kubenswrapper[4712]: I0130 17:20:02.550524 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6b1d7ef-cd70-40ba-a25a-7f80b07c16db-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d6b1d7ef-cd70-40ba-a25a-7f80b07c16db\") " pod="openstack/nova-scheduler-0" Jan 30 17:20:02 crc kubenswrapper[4712]: I0130 17:20:02.575671 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvdjj\" (UniqueName: \"kubernetes.io/projected/d6b1d7ef-cd70-40ba-a25a-7f80b07c16db-kube-api-access-mvdjj\") pod \"nova-scheduler-0\" (UID: \"d6b1d7ef-cd70-40ba-a25a-7f80b07c16db\") " pod="openstack/nova-scheduler-0" Jan 30 17:20:02 crc kubenswrapper[4712]: I0130 17:20:02.643156 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 17:20:03 crc kubenswrapper[4712]: W0130 17:20:03.219422 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6b1d7ef_cd70_40ba_a25a_7f80b07c16db.slice/crio-fdb37de2a6e7ef20a85dcbce111ea76b72b4b1bafadd8bb9d48245dde33819a5 WatchSource:0}: Error finding container fdb37de2a6e7ef20a85dcbce111ea76b72b4b1bafadd8bb9d48245dde33819a5: Status 404 returned error can't find the container with id fdb37de2a6e7ef20a85dcbce111ea76b72b4b1bafadd8bb9d48245dde33819a5 Jan 30 17:20:03 crc kubenswrapper[4712]: I0130 17:20:03.225044 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:20:03 crc kubenswrapper[4712]: I0130 17:20:03.231082 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"26cd9519-8d6a-4475-ac46-6b107621f27e","Type":"ContainerStarted","Data":"01cb8803cdcf74180021750306e169b80ca846d59bcde4d5777e4c4d507bc47a"} Jan 30 17:20:03 crc kubenswrapper[4712]: I0130 17:20:03.231133 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"26cd9519-8d6a-4475-ac46-6b107621f27e","Type":"ContainerStarted","Data":"8de553ae43b8d1e381074669405f0124d58b54735c723d94e0f42f436534c897"} Jan 30 17:20:03 crc kubenswrapper[4712]: I0130 17:20:03.231145 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"26cd9519-8d6a-4475-ac46-6b107621f27e","Type":"ContainerStarted","Data":"a58b89a6b720194eb27b3ea7d3dae4136e9a9357868f4b4a6977ead2e780b201"} Jan 30 17:20:03 crc kubenswrapper[4712]: I0130 17:20:03.538386 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 17:20:03 crc kubenswrapper[4712]: I0130 17:20:03.538491 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 17:20:03 crc kubenswrapper[4712]: I0130 17:20:03.812656 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61452352-342c-4cba-8489-13d8a26ba14b" path="/var/lib/kubelet/pods/61452352-342c-4cba-8489-13d8a26ba14b/volumes" Jan 30 17:20:03 crc kubenswrapper[4712]: I0130 17:20:03.834012 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.833986034 podStartE2EDuration="2.833986034s" podCreationTimestamp="2026-01-30 17:20:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:20:03.254310256 +0000 UTC m=+1540.161319725" watchObservedRunningTime="2026-01-30 17:20:03.833986034 +0000 UTC m=+1540.740995503" Jan 30 17:20:04 crc kubenswrapper[4712]: I0130 17:20:04.247497 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d6b1d7ef-cd70-40ba-a25a-7f80b07c16db","Type":"ContainerStarted","Data":"6dc8db051c12cfb8daadc39e3ec213bb2eec4eaeb21c24374dd664e967c725ff"} Jan 30 17:20:04 crc kubenswrapper[4712]: I0130 17:20:04.247857 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d6b1d7ef-cd70-40ba-a25a-7f80b07c16db","Type":"ContainerStarted","Data":"fdb37de2a6e7ef20a85dcbce111ea76b72b4b1bafadd8bb9d48245dde33819a5"} Jan 30 17:20:05 crc kubenswrapper[4712]: I0130 17:20:05.073769 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-56f8b66d48-7wr47" 
podUID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused" Jan 30 17:20:05 crc kubenswrapper[4712]: I0130 17:20:05.073929 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-56f8b66d48-7wr47" Jan 30 17:20:05 crc kubenswrapper[4712]: I0130 17:20:05.075440 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"33da2560c2b92663910c7a5cee80606f93009c5b03eae1dcf70e4946299645fb"} pod="openstack/horizon-56f8b66d48-7wr47" containerMessage="Container horizon failed startup probe, will be restarted" Jan 30 17:20:05 crc kubenswrapper[4712]: I0130 17:20:05.075504 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-56f8b66d48-7wr47" podUID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerName="horizon" containerID="cri-o://33da2560c2b92663910c7a5cee80606f93009c5b03eae1dcf70e4946299645fb" gracePeriod=30 Jan 30 17:20:05 crc kubenswrapper[4712]: I0130 17:20:05.354341 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-64655dbc44-pvj2c" podUID="6a28b495-ecf0-409e-9558-ee794a46dbd1" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.156:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.156:8443: connect: connection refused" Jan 30 17:20:06 crc kubenswrapper[4712]: I0130 17:20:06.271161 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:20:06 crc kubenswrapper[4712]: I0130 17:20:06.271217 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:20:07 crc kubenswrapper[4712]: I0130 17:20:07.644070 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 17:20:08 crc kubenswrapper[4712]: I0130 17:20:08.538677 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 17:20:08 crc kubenswrapper[4712]: I0130 17:20:08.538725 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 17:20:08 crc kubenswrapper[4712]: I0130 17:20:08.577332 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 30 17:20:08 crc kubenswrapper[4712]: I0130 17:20:08.593613 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=6.593595445 podStartE2EDuration="6.593595445s" podCreationTimestamp="2026-01-30 17:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:20:04.287841399 +0000 UTC m=+1541.194850878" watchObservedRunningTime="2026-01-30 17:20:08.593595445 +0000 UTC m=+1545.500604914" Jan 30 17:20:09 crc 
Jan 30 17:20:09 crc kubenswrapper[4712]: I0130 17:20:09.552016 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3c8f0931-676e-406e-92fd-d6d09a065cf9" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.213:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 30 17:20:09 crc kubenswrapper[4712]: I0130 17:20:09.552275 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="3c8f0931-676e-406e-92fd-d6d09a065cf9" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.213:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 30 17:20:09 crc kubenswrapper[4712]: I0130 17:20:09.717341 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 30 17:20:09 crc kubenswrapper[4712]: I0130 17:20:09.717543 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="e88ea344-4eb8-4174-9ce7-855aa6afed59" containerName="kube-state-metrics" containerID="cri-o://d4d0184806d44cb107882cf97cfdd22f429f4ff19dc32d6419d8f4820d31d23f" gracePeriod=30
Jan 30 17:20:10 crc kubenswrapper[4712]: I0130 17:20:10.308416 4712 generic.go:334] "Generic (PLEG): container finished" podID="e88ea344-4eb8-4174-9ce7-855aa6afed59" containerID="d4d0184806d44cb107882cf97cfdd22f429f4ff19dc32d6419d8f4820d31d23f" exitCode=2
Jan 30 17:20:10 crc kubenswrapper[4712]: I0130 17:20:10.308758 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e88ea344-4eb8-4174-9ce7-855aa6afed59","Type":"ContainerDied","Data":"d4d0184806d44cb107882cf97cfdd22f429f4ff19dc32d6419d8f4820d31d23f"}
Jan 30 17:20:10 crc kubenswrapper[4712]: I0130 17:20:10.308814 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e88ea344-4eb8-4174-9ce7-855aa6afed59","Type":"ContainerDied","Data":"7a83ab6ca44a66e42f2ead6aa92abb7b4e0ccabf0c870763bda352e356a12d03"}
Jan 30 17:20:10 crc kubenswrapper[4712]: I0130 17:20:10.308829 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a83ab6ca44a66e42f2ead6aa92abb7b4e0ccabf0c870763bda352e356a12d03"
Jan 30 17:20:10 crc kubenswrapper[4712]: I0130 17:20:10.324483 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 30 17:20:10 crc kubenswrapper[4712]: I0130 17:20:10.439498 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxzzd\" (UniqueName: \"kubernetes.io/projected/e88ea344-4eb8-4174-9ce7-855aa6afed59-kube-api-access-gxzzd\") pod \"e88ea344-4eb8-4174-9ce7-855aa6afed59\" (UID: \"e88ea344-4eb8-4174-9ce7-855aa6afed59\") "
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:20:10 crc kubenswrapper[4712]: I0130 17:20:10.541986 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxzzd\" (UniqueName: \"kubernetes.io/projected/e88ea344-4eb8-4174-9ce7-855aa6afed59-kube-api-access-gxzzd\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:11 crc kubenswrapper[4712]: I0130 17:20:11.317547 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 17:20:11 crc kubenswrapper[4712]: I0130 17:20:11.352248 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 17:20:11 crc kubenswrapper[4712]: I0130 17:20:11.362882 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 17:20:11 crc kubenswrapper[4712]: I0130 17:20:11.383933 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 17:20:11 crc kubenswrapper[4712]: E0130 17:20:11.384401 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e88ea344-4eb8-4174-9ce7-855aa6afed59" containerName="kube-state-metrics" Jan 30 17:20:11 crc kubenswrapper[4712]: I0130 17:20:11.384417 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="e88ea344-4eb8-4174-9ce7-855aa6afed59" containerName="kube-state-metrics" Jan 30 17:20:11 crc kubenswrapper[4712]: I0130 17:20:11.384587 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="e88ea344-4eb8-4174-9ce7-855aa6afed59" containerName="kube-state-metrics" Jan 30 17:20:11 crc kubenswrapper[4712]: I0130 17:20:11.385223 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 17:20:11 crc kubenswrapper[4712]: I0130 17:20:11.387287 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 30 17:20:11 crc kubenswrapper[4712]: I0130 17:20:11.387856 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 30 17:20:11 crc kubenswrapper[4712]: I0130 17:20:11.437303 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 17:20:11 crc kubenswrapper[4712]: I0130 17:20:11.558450 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/19b27a49-3b3b-434e-b8c7-133e4e120569-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"19b27a49-3b3b-434e-b8c7-133e4e120569\") " pod="openstack/kube-state-metrics-0" Jan 30 17:20:11 crc kubenswrapper[4712]: I0130 17:20:11.558510 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mlkk\" (UniqueName: \"kubernetes.io/projected/19b27a49-3b3b-434e-b8c7-133e4e120569-kube-api-access-2mlkk\") pod \"kube-state-metrics-0\" (UID: \"19b27a49-3b3b-434e-b8c7-133e4e120569\") " pod="openstack/kube-state-metrics-0" Jan 30 17:20:11 crc kubenswrapper[4712]: I0130 17:20:11.558592 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/19b27a49-3b3b-434e-b8c7-133e4e120569-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"19b27a49-3b3b-434e-b8c7-133e4e120569\") " pod="openstack/kube-state-metrics-0" Jan 30 17:20:11 crc kubenswrapper[4712]: 
Jan 30 17:20:11 crc kubenswrapper[4712]: I0130 17:20:11.558728 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19b27a49-3b3b-434e-b8c7-133e4e120569-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"19b27a49-3b3b-434e-b8c7-133e4e120569\") " pod="openstack/kube-state-metrics-0"
Jan 30 17:20:11 crc kubenswrapper[4712]: I0130 17:20:11.598569 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 30 17:20:11 crc kubenswrapper[4712]: I0130 17:20:11.598618 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 30 17:20:11 crc kubenswrapper[4712]: I0130 17:20:11.661156 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/19b27a49-3b3b-434e-b8c7-133e4e120569-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"19b27a49-3b3b-434e-b8c7-133e4e120569\") " pod="openstack/kube-state-metrics-0"
Jan 30 17:20:11 crc kubenswrapper[4712]: I0130 17:20:11.661225 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19b27a49-3b3b-434e-b8c7-133e4e120569-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"19b27a49-3b3b-434e-b8c7-133e4e120569\") " pod="openstack/kube-state-metrics-0"
Jan 30 17:20:11 crc kubenswrapper[4712]: I0130 17:20:11.661434 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/19b27a49-3b3b-434e-b8c7-133e4e120569-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"19b27a49-3b3b-434e-b8c7-133e4e120569\") " pod="openstack/kube-state-metrics-0"
Jan 30 17:20:11 crc kubenswrapper[4712]: I0130 17:20:11.661473 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mlkk\" (UniqueName: \"kubernetes.io/projected/19b27a49-3b3b-434e-b8c7-133e4e120569-kube-api-access-2mlkk\") pod \"kube-state-metrics-0\" (UID: \"19b27a49-3b3b-434e-b8c7-133e4e120569\") " pod="openstack/kube-state-metrics-0"
Jan 30 17:20:11 crc kubenswrapper[4712]: I0130 17:20:11.669745 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/19b27a49-3b3b-434e-b8c7-133e4e120569-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"19b27a49-3b3b-434e-b8c7-133e4e120569\") " pod="openstack/kube-state-metrics-0"
Jan 30 17:20:11 crc kubenswrapper[4712]: I0130 17:20:11.674037 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/19b27a49-3b3b-434e-b8c7-133e4e120569-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"19b27a49-3b3b-434e-b8c7-133e4e120569\") " pod="openstack/kube-state-metrics-0"
Jan 30 17:20:11 crc kubenswrapper[4712]: I0130 17:20:11.687446 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/19b27a49-3b3b-434e-b8c7-133e4e120569-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"19b27a49-3b3b-434e-b8c7-133e4e120569\") " pod="openstack/kube-state-metrics-0"
\"kube-api-access-2mlkk\" (UniqueName: \"kubernetes.io/projected/19b27a49-3b3b-434e-b8c7-133e4e120569-kube-api-access-2mlkk\") pod \"kube-state-metrics-0\" (UID: \"19b27a49-3b3b-434e-b8c7-133e4e120569\") " pod="openstack/kube-state-metrics-0" Jan 30 17:20:11 crc kubenswrapper[4712]: I0130 17:20:11.716359 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 30 17:20:11 crc kubenswrapper[4712]: I0130 17:20:11.816047 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e88ea344-4eb8-4174-9ce7-855aa6afed59" path="/var/lib/kubelet/pods/e88ea344-4eb8-4174-9ce7-855aa6afed59/volumes" Jan 30 17:20:12 crc kubenswrapper[4712]: W0130 17:20:12.288955 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod19b27a49_3b3b_434e_b8c7_133e4e120569.slice/crio-7122617dc4a33cdeb588d0ae6d3f54d3d0cda3f9b7bb27b6a5a56cf96d55940f WatchSource:0}: Error finding container 7122617dc4a33cdeb588d0ae6d3f54d3d0cda3f9b7bb27b6a5a56cf96d55940f: Status 404 returned error can't find the container with id 7122617dc4a33cdeb588d0ae6d3f54d3d0cda3f9b7bb27b6a5a56cf96d55940f Jan 30 17:20:12 crc kubenswrapper[4712]: I0130 17:20:12.290694 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 30 17:20:12 crc kubenswrapper[4712]: I0130 17:20:12.332115 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"19b27a49-3b3b-434e-b8c7-133e4e120569","Type":"ContainerStarted","Data":"7122617dc4a33cdeb588d0ae6d3f54d3d0cda3f9b7bb27b6a5a56cf96d55940f"} Jan 30 17:20:12 crc kubenswrapper[4712]: I0130 17:20:12.643440 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 17:20:12 crc kubenswrapper[4712]: I0130 17:20:12.688986 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="26cd9519-8d6a-4475-ac46-6b107621f27e" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.215:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:20:12 crc kubenswrapper[4712]: I0130 17:20:12.689360 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="26cd9519-8d6a-4475-ac46-6b107621f27e" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.215:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:20:12 crc kubenswrapper[4712]: I0130 17:20:12.702002 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 17:20:12 crc kubenswrapper[4712]: I0130 17:20:12.838018 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 17:20:12 crc kubenswrapper[4712]: I0130 17:20:12.838283 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="74969a69-d6be-4c12-9dd0-7a529e73737d" containerName="ceilometer-central-agent" containerID="cri-o://f775a9c6952ba67f78241188286ce1364d6e91657d5332d876d10e4f4952e5b7" gracePeriod=30 Jan 30 17:20:12 crc kubenswrapper[4712]: I0130 17:20:12.838597 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="74969a69-d6be-4c12-9dd0-7a529e73737d" containerName="ceilometer-notification-agent" 
containerID="cri-o://ccce0aa1c2b9c2880dd944d1d8fc1f206bde307188936cae3089728759eb36af" gracePeriod=30 Jan 30 17:20:12 crc kubenswrapper[4712]: I0130 17:20:12.838667 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="74969a69-d6be-4c12-9dd0-7a529e73737d" containerName="sg-core" containerID="cri-o://966935acba9b313d521e4434365b75a25e106159cddfd5ec62ad99c7f4125185" gracePeriod=30 Jan 30 17:20:12 crc kubenswrapper[4712]: I0130 17:20:12.838484 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="74969a69-d6be-4c12-9dd0-7a529e73737d" containerName="proxy-httpd" containerID="cri-o://5274d9ccbc17089ed18833e83a0a70b8f5b150d4df89f4f4f9478011a9235c0c" gracePeriod=30 Jan 30 17:20:13 crc kubenswrapper[4712]: I0130 17:20:13.344017 4712 generic.go:334] "Generic (PLEG): container finished" podID="74969a69-d6be-4c12-9dd0-7a529e73737d" containerID="966935acba9b313d521e4434365b75a25e106159cddfd5ec62ad99c7f4125185" exitCode=2 Jan 30 17:20:13 crc kubenswrapper[4712]: I0130 17:20:13.344086 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"74969a69-d6be-4c12-9dd0-7a529e73737d","Type":"ContainerDied","Data":"966935acba9b313d521e4434365b75a25e106159cddfd5ec62ad99c7f4125185"} Jan 30 17:20:13 crc kubenswrapper[4712]: I0130 17:20:13.345993 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"19b27a49-3b3b-434e-b8c7-133e4e120569","Type":"ContainerStarted","Data":"d98254c582de2250997644f18ca519df6ac6e21363fbed125af4b578ba29ebba"} Jan 30 17:20:13 crc kubenswrapper[4712]: I0130 17:20:13.346259 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 30 17:20:13 crc kubenswrapper[4712]: I0130 17:20:13.377679 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.935478043 podStartE2EDuration="2.377660077s" podCreationTimestamp="2026-01-30 17:20:11 +0000 UTC" firstStartedPulling="2026-01-30 17:20:12.291176783 +0000 UTC m=+1549.198186252" lastFinishedPulling="2026-01-30 17:20:12.733358827 +0000 UTC m=+1549.640368286" observedRunningTime="2026-01-30 17:20:13.377655577 +0000 UTC m=+1550.284665046" watchObservedRunningTime="2026-01-30 17:20:13.377660077 +0000 UTC m=+1550.284669546" Jan 30 17:20:13 crc kubenswrapper[4712]: I0130 17:20:13.388220 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 17:20:14 crc kubenswrapper[4712]: I0130 17:20:14.359999 4712 generic.go:334] "Generic (PLEG): container finished" podID="74969a69-d6be-4c12-9dd0-7a529e73737d" containerID="5274d9ccbc17089ed18833e83a0a70b8f5b150d4df89f4f4f9478011a9235c0c" exitCode=0 Jan 30 17:20:14 crc kubenswrapper[4712]: I0130 17:20:14.360051 4712 generic.go:334] "Generic (PLEG): container finished" podID="74969a69-d6be-4c12-9dd0-7a529e73737d" containerID="f775a9c6952ba67f78241188286ce1364d6e91657d5332d876d10e4f4952e5b7" exitCode=0 Jan 30 17:20:14 crc kubenswrapper[4712]: I0130 17:20:14.360049 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"74969a69-d6be-4c12-9dd0-7a529e73737d","Type":"ContainerDied","Data":"5274d9ccbc17089ed18833e83a0a70b8f5b150d4df89f4f4f9478011a9235c0c"} Jan 30 17:20:14 crc kubenswrapper[4712]: I0130 17:20:14.360104 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"74969a69-d6be-4c12-9dd0-7a529e73737d","Type":"ContainerDied","Data":"f775a9c6952ba67f78241188286ce1364d6e91657d5332d876d10e4f4952e5b7"} Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.199000 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.276181 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74969a69-d6be-4c12-9dd0-7a529e73737d-run-httpd\") pod \"74969a69-d6be-4c12-9dd0-7a529e73737d\" (UID: \"74969a69-d6be-4c12-9dd0-7a529e73737d\") " Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.276221 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74969a69-d6be-4c12-9dd0-7a529e73737d-log-httpd\") pod \"74969a69-d6be-4c12-9dd0-7a529e73737d\" (UID: \"74969a69-d6be-4c12-9dd0-7a529e73737d\") " Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.276291 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74969a69-d6be-4c12-9dd0-7a529e73737d-config-data\") pod \"74969a69-d6be-4c12-9dd0-7a529e73737d\" (UID: \"74969a69-d6be-4c12-9dd0-7a529e73737d\") " Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.276322 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74969a69-d6be-4c12-9dd0-7a529e73737d-scripts\") pod \"74969a69-d6be-4c12-9dd0-7a529e73737d\" (UID: \"74969a69-d6be-4c12-9dd0-7a529e73737d\") " Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.276365 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74969a69-d6be-4c12-9dd0-7a529e73737d-combined-ca-bundle\") pod \"74969a69-d6be-4c12-9dd0-7a529e73737d\" (UID: \"74969a69-d6be-4c12-9dd0-7a529e73737d\") " Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.276525 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/74969a69-d6be-4c12-9dd0-7a529e73737d-sg-core-conf-yaml\") pod \"74969a69-d6be-4c12-9dd0-7a529e73737d\" (UID: \"74969a69-d6be-4c12-9dd0-7a529e73737d\") " Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.276566 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8qhn2\" (UniqueName: \"kubernetes.io/projected/74969a69-d6be-4c12-9dd0-7a529e73737d-kube-api-access-8qhn2\") pod \"74969a69-d6be-4c12-9dd0-7a529e73737d\" (UID: \"74969a69-d6be-4c12-9dd0-7a529e73737d\") " Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.277584 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.277711 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74969a69-d6be-4c12-9dd0-7a529e73737d-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "74969a69-d6be-4c12-9dd0-7a529e73737d" (UID: "74969a69-d6be-4c12-9dd0-7a529e73737d"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.278311 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/74969a69-d6be-4c12-9dd0-7a529e73737d-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "74969a69-d6be-4c12-9dd0-7a529e73737d" (UID: "74969a69-d6be-4c12-9dd0-7a529e73737d"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.284066 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74969a69-d6be-4c12-9dd0-7a529e73737d-scripts" (OuterVolumeSpecName: "scripts") pod "74969a69-d6be-4c12-9dd0-7a529e73737d" (UID: "74969a69-d6be-4c12-9dd0-7a529e73737d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.286246 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74969a69-d6be-4c12-9dd0-7a529e73737d-kube-api-access-8qhn2" (OuterVolumeSpecName: "kube-api-access-8qhn2") pod "74969a69-d6be-4c12-9dd0-7a529e73737d" (UID: "74969a69-d6be-4c12-9dd0-7a529e73737d"). InnerVolumeSpecName "kube-api-access-8qhn2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.374810 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74969a69-d6be-4c12-9dd0-7a529e73737d-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "74969a69-d6be-4c12-9dd0-7a529e73737d" (UID: "74969a69-d6be-4c12-9dd0-7a529e73737d"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.382557 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4f33da2-dc23-40f0-8a42-d9f557f63a5f-config-data\") pod \"f4f33da2-dc23-40f0-8a42-d9f557f63a5f\" (UID: \"f4f33da2-dc23-40f0-8a42-d9f557f63a5f\") " Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.382699 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4vd9\" (UniqueName: \"kubernetes.io/projected/f4f33da2-dc23-40f0-8a42-d9f557f63a5f-kube-api-access-n4vd9\") pod \"f4f33da2-dc23-40f0-8a42-d9f557f63a5f\" (UID: \"f4f33da2-dc23-40f0-8a42-d9f557f63a5f\") " Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.382835 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4f33da2-dc23-40f0-8a42-d9f557f63a5f-combined-ca-bundle\") pod \"f4f33da2-dc23-40f0-8a42-d9f557f63a5f\" (UID: \"f4f33da2-dc23-40f0-8a42-d9f557f63a5f\") " Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.383413 4712 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74969a69-d6be-4c12-9dd0-7a529e73737d-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.383463 4712 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/74969a69-d6be-4c12-9dd0-7a529e73737d-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.383478 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8qhn2\" 
(UniqueName: \"kubernetes.io/projected/74969a69-d6be-4c12-9dd0-7a529e73737d-kube-api-access-8qhn2\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.383490 4712 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74969a69-d6be-4c12-9dd0-7a529e73737d-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.383500 4712 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/74969a69-d6be-4c12-9dd0-7a529e73737d-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.418578 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4f33da2-dc23-40f0-8a42-d9f557f63a5f-kube-api-access-n4vd9" (OuterVolumeSpecName: "kube-api-access-n4vd9") pod "f4f33da2-dc23-40f0-8a42-d9f557f63a5f" (UID: "f4f33da2-dc23-40f0-8a42-d9f557f63a5f"). InnerVolumeSpecName "kube-api-access-n4vd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.423984 4712 generic.go:334] "Generic (PLEG): container finished" podID="f4f33da2-dc23-40f0-8a42-d9f557f63a5f" containerID="a1ca95485220e55be66ff4480ff388bc02477c10e90e1b56a1a433ab1d333b55" exitCode=137 Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.424075 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f4f33da2-dc23-40f0-8a42-d9f557f63a5f","Type":"ContainerDied","Data":"a1ca95485220e55be66ff4480ff388bc02477c10e90e1b56a1a433ab1d333b55"} Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.424109 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f4f33da2-dc23-40f0-8a42-d9f557f63a5f","Type":"ContainerDied","Data":"d8f6091436d49830e03cd0410a7bcd2b22e3fe6dc0ad430bf86fdf7d795b3970"} Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.424131 4712 scope.go:117] "RemoveContainer" containerID="a1ca95485220e55be66ff4480ff388bc02477c10e90e1b56a1a433ab1d333b55" Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.424284 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.432002 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4f33da2-dc23-40f0-8a42-d9f557f63a5f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4f33da2-dc23-40f0-8a42-d9f557f63a5f" (UID: "f4f33da2-dc23-40f0-8a42-d9f557f63a5f"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.453764 4712 generic.go:334] "Generic (PLEG): container finished" podID="74969a69-d6be-4c12-9dd0-7a529e73737d" containerID="ccce0aa1c2b9c2880dd944d1d8fc1f206bde307188936cae3089728759eb36af" exitCode=0 Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.453836 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"74969a69-d6be-4c12-9dd0-7a529e73737d","Type":"ContainerDied","Data":"ccce0aa1c2b9c2880dd944d1d8fc1f206bde307188936cae3089728759eb36af"} Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.453868 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"74969a69-d6be-4c12-9dd0-7a529e73737d","Type":"ContainerDied","Data":"d5081a015e2ccb071cb02e39e6d2ca9393babaa1e6920d731aafed1fbf581378"} Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.453952 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.455537 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74969a69-d6be-4c12-9dd0-7a529e73737d-config-data" (OuterVolumeSpecName: "config-data") pod "74969a69-d6be-4c12-9dd0-7a529e73737d" (UID: "74969a69-d6be-4c12-9dd0-7a529e73737d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.469351 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/74969a69-d6be-4c12-9dd0-7a529e73737d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "74969a69-d6be-4c12-9dd0-7a529e73737d" (UID: "74969a69-d6be-4c12-9dd0-7a529e73737d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.474169 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4f33da2-dc23-40f0-8a42-d9f557f63a5f-config-data" (OuterVolumeSpecName: "config-data") pod "f4f33da2-dc23-40f0-8a42-d9f557f63a5f" (UID: "f4f33da2-dc23-40f0-8a42-d9f557f63a5f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.485326 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4f33da2-dc23-40f0-8a42-d9f557f63a5f-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.485367 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4vd9\" (UniqueName: \"kubernetes.io/projected/f4f33da2-dc23-40f0-8a42-d9f557f63a5f-kube-api-access-n4vd9\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.485382 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4f33da2-dc23-40f0-8a42-d9f557f63a5f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.485394 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74969a69-d6be-4c12-9dd0-7a529e73737d-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.485405 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74969a69-d6be-4c12-9dd0-7a529e73737d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.588609 4712 scope.go:117] "RemoveContainer" containerID="a1ca95485220e55be66ff4480ff388bc02477c10e90e1b56a1a433ab1d333b55" Jan 30 17:20:17 crc kubenswrapper[4712]: E0130 17:20:17.589122 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1ca95485220e55be66ff4480ff388bc02477c10e90e1b56a1a433ab1d333b55\": container with ID starting with a1ca95485220e55be66ff4480ff388bc02477c10e90e1b56a1a433ab1d333b55 not found: ID does not exist" containerID="a1ca95485220e55be66ff4480ff388bc02477c10e90e1b56a1a433ab1d333b55" Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.589147 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1ca95485220e55be66ff4480ff388bc02477c10e90e1b56a1a433ab1d333b55"} err="failed to get container status \"a1ca95485220e55be66ff4480ff388bc02477c10e90e1b56a1a433ab1d333b55\": rpc error: code = NotFound desc = could not find container \"a1ca95485220e55be66ff4480ff388bc02477c10e90e1b56a1a433ab1d333b55\": container with ID starting with a1ca95485220e55be66ff4480ff388bc02477c10e90e1b56a1a433ab1d333b55 not found: ID does not exist" Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.589165 4712 scope.go:117] "RemoveContainer" containerID="5274d9ccbc17089ed18833e83a0a70b8f5b150d4df89f4f4f9478011a9235c0c" Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.615560 4712 scope.go:117] "RemoveContainer" containerID="966935acba9b313d521e4434365b75a25e106159cddfd5ec62ad99c7f4125185" Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.636291 4712 scope.go:117] "RemoveContainer" containerID="ccce0aa1c2b9c2880dd944d1d8fc1f206bde307188936cae3089728759eb36af" Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.657911 4712 scope.go:117] "RemoveContainer" containerID="f775a9c6952ba67f78241188286ce1364d6e91657d5332d876d10e4f4952e5b7" Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.678723 4712 scope.go:117] "RemoveContainer" containerID="5274d9ccbc17089ed18833e83a0a70b8f5b150d4df89f4f4f9478011a9235c0c" Jan 30 17:20:17 crc 
Jan 30 17:20:17 crc kubenswrapper[4712]: E0130 17:20:17.679278 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5274d9ccbc17089ed18833e83a0a70b8f5b150d4df89f4f4f9478011a9235c0c\": container with ID starting with 5274d9ccbc17089ed18833e83a0a70b8f5b150d4df89f4f4f9478011a9235c0c not found: ID does not exist" containerID="5274d9ccbc17089ed18833e83a0a70b8f5b150d4df89f4f4f9478011a9235c0c"
Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.679311 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5274d9ccbc17089ed18833e83a0a70b8f5b150d4df89f4f4f9478011a9235c0c"} err="failed to get container status \"5274d9ccbc17089ed18833e83a0a70b8f5b150d4df89f4f4f9478011a9235c0c\": rpc error: code = NotFound desc = could not find container \"5274d9ccbc17089ed18833e83a0a70b8f5b150d4df89f4f4f9478011a9235c0c\": container with ID starting with 5274d9ccbc17089ed18833e83a0a70b8f5b150d4df89f4f4f9478011a9235c0c not found: ID does not exist"
Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.679333 4712 scope.go:117] "RemoveContainer" containerID="966935acba9b313d521e4434365b75a25e106159cddfd5ec62ad99c7f4125185"
Jan 30 17:20:17 crc kubenswrapper[4712]: E0130 17:20:17.679645 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"966935acba9b313d521e4434365b75a25e106159cddfd5ec62ad99c7f4125185\": container with ID starting with 966935acba9b313d521e4434365b75a25e106159cddfd5ec62ad99c7f4125185 not found: ID does not exist" containerID="966935acba9b313d521e4434365b75a25e106159cddfd5ec62ad99c7f4125185"
Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.679710 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"966935acba9b313d521e4434365b75a25e106159cddfd5ec62ad99c7f4125185"} err="failed to get container status \"966935acba9b313d521e4434365b75a25e106159cddfd5ec62ad99c7f4125185\": rpc error: code = NotFound desc = could not find container \"966935acba9b313d521e4434365b75a25e106159cddfd5ec62ad99c7f4125185\": container with ID starting with 966935acba9b313d521e4434365b75a25e106159cddfd5ec62ad99c7f4125185 not found: ID does not exist"
Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.679744 4712 scope.go:117] "RemoveContainer" containerID="ccce0aa1c2b9c2880dd944d1d8fc1f206bde307188936cae3089728759eb36af"
Jan 30 17:20:17 crc kubenswrapper[4712]: E0130 17:20:17.680218 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccce0aa1c2b9c2880dd944d1d8fc1f206bde307188936cae3089728759eb36af\": container with ID starting with ccce0aa1c2b9c2880dd944d1d8fc1f206bde307188936cae3089728759eb36af not found: ID does not exist" containerID="ccce0aa1c2b9c2880dd944d1d8fc1f206bde307188936cae3089728759eb36af"
Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.680244 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccce0aa1c2b9c2880dd944d1d8fc1f206bde307188936cae3089728759eb36af"} err="failed to get container status \"ccce0aa1c2b9c2880dd944d1d8fc1f206bde307188936cae3089728759eb36af\": rpc error: code = NotFound desc = could not find container \"ccce0aa1c2b9c2880dd944d1d8fc1f206bde307188936cae3089728759eb36af\": container with ID starting with ccce0aa1c2b9c2880dd944d1d8fc1f206bde307188936cae3089728759eb36af not found: ID does not exist"
Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.680261 4712 scope.go:117] "RemoveContainer" containerID="f775a9c6952ba67f78241188286ce1364d6e91657d5332d876d10e4f4952e5b7"
Jan 30 17:20:17 crc kubenswrapper[4712]: E0130 17:20:17.680526 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f775a9c6952ba67f78241188286ce1364d6e91657d5332d876d10e4f4952e5b7\": container with ID starting with f775a9c6952ba67f78241188286ce1364d6e91657d5332d876d10e4f4952e5b7 not found: ID does not exist" containerID="f775a9c6952ba67f78241188286ce1364d6e91657d5332d876d10e4f4952e5b7"
Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.680562 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f775a9c6952ba67f78241188286ce1364d6e91657d5332d876d10e4f4952e5b7"} err="failed to get container status \"f775a9c6952ba67f78241188286ce1364d6e91657d5332d876d10e4f4952e5b7\": rpc error: code = NotFound desc = could not find container \"f775a9c6952ba67f78241188286ce1364d6e91657d5332d876d10e4f4952e5b7\": container with ID starting with f775a9c6952ba67f78241188286ce1364d6e91657d5332d876d10e4f4952e5b7 not found: ID does not exist"
Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.757899 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.769002 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.791121 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 17:20:17 crc kubenswrapper[4712]: E0130 17:20:17.791563 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74969a69-d6be-4c12-9dd0-7a529e73737d" containerName="sg-core"
Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.791579 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="74969a69-d6be-4c12-9dd0-7a529e73737d" containerName="sg-core"
Jan 30 17:20:17 crc kubenswrapper[4712]: E0130 17:20:17.791598 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4f33da2-dc23-40f0-8a42-d9f557f63a5f" containerName="nova-cell1-novncproxy-novncproxy"
Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.791604 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4f33da2-dc23-40f0-8a42-d9f557f63a5f" containerName="nova-cell1-novncproxy-novncproxy"
Jan 30 17:20:17 crc kubenswrapper[4712]: E0130 17:20:17.791614 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74969a69-d6be-4c12-9dd0-7a529e73737d" containerName="ceilometer-notification-agent"
Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.791621 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="74969a69-d6be-4c12-9dd0-7a529e73737d" containerName="ceilometer-notification-agent"
Jan 30 17:20:17 crc kubenswrapper[4712]: E0130 17:20:17.791669 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74969a69-d6be-4c12-9dd0-7a529e73737d" containerName="ceilometer-central-agent"
Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.791677 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="74969a69-d6be-4c12-9dd0-7a529e73737d" containerName="ceilometer-central-agent"
Jan 30 17:20:17 crc kubenswrapper[4712]: E0130 17:20:17.791694 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74969a69-d6be-4c12-9dd0-7a529e73737d" containerName="proxy-httpd"
Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.791704 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="74969a69-d6be-4c12-9dd0-7a529e73737d" containerName="proxy-httpd"
Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.791901 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="74969a69-d6be-4c12-9dd0-7a529e73737d" containerName="ceilometer-notification-agent"
Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.791912 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4f33da2-dc23-40f0-8a42-d9f557f63a5f" containerName="nova-cell1-novncproxy-novncproxy"
Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.791926 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="74969a69-d6be-4c12-9dd0-7a529e73737d" containerName="ceilometer-central-agent"
Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.791939 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="74969a69-d6be-4c12-9dd0-7a529e73737d" containerName="sg-core"
Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.791945 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="74969a69-d6be-4c12-9dd0-7a529e73737d" containerName="proxy-httpd"
Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.792561 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.795905 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc"
Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.796117 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt"
Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.796237 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.843774 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4f33da2-dc23-40f0-8a42-d9f557f63a5f" path="/var/lib/kubelet/pods/f4f33da2-dc23-40f0-8a42-d9f557f63a5f/volumes"
Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.844443 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.844472 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.844489 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.874477 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.886611 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.889356 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.889564 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.896630 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b7f0bd2-aace-43a5-9214-75d73cd3fbe1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b7f0bd2-aace-43a5-9214-75d73cd3fbe1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.896721 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc6b7\" (UniqueName: \"kubernetes.io/projected/6b7f0bd2-aace-43a5-9214-75d73cd3fbe1-kube-api-access-gc6b7\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b7f0bd2-aace-43a5-9214-75d73cd3fbe1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.896815 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b7f0bd2-aace-43a5-9214-75d73cd3fbe1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b7f0bd2-aace-43a5-9214-75d73cd3fbe1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.896893 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b7f0bd2-aace-43a5-9214-75d73cd3fbe1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b7f0bd2-aace-43a5-9214-75d73cd3fbe1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.897069 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b7f0bd2-aace-43a5-9214-75d73cd3fbe1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b7f0bd2-aace-43a5-9214-75d73cd3fbe1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.906034 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.998720 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b7f0bd2-aace-43a5-9214-75d73cd3fbe1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b7f0bd2-aace-43a5-9214-75d73cd3fbe1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.998839 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s9dk\" (UniqueName: \"kubernetes.io/projected/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-kube-api-access-4s9dk\") pod \"ceilometer-0\" (UID: \"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6\") " pod="openstack/ceilometer-0" Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 
Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.998873 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-scripts\") pod \"ceilometer-0\" (UID: \"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6\") " pod="openstack/ceilometer-0"
Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.998899 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-log-httpd\") pod \"ceilometer-0\" (UID: \"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6\") " pod="openstack/ceilometer-0"
Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.998924 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6\") " pod="openstack/ceilometer-0"
Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.998966 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b7f0bd2-aace-43a5-9214-75d73cd3fbe1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b7f0bd2-aace-43a5-9214-75d73cd3fbe1\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.998991 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-config-data\") pod \"ceilometer-0\" (UID: \"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6\") " pod="openstack/ceilometer-0"
Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.999037 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-run-httpd\") pod \"ceilometer-0\" (UID: \"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6\") " pod="openstack/ceilometer-0"
Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.999062 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6\") " pod="openstack/ceilometer-0"
Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.999103 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6\") " pod="openstack/ceilometer-0"
Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.999128 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b7f0bd2-aace-43a5-9214-75d73cd3fbe1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b7f0bd2-aace-43a5-9214-75d73cd3fbe1\") " pod="openstack/nova-cell1-novncproxy-0"
\"kubernetes.io/projected/6b7f0bd2-aace-43a5-9214-75d73cd3fbe1-kube-api-access-gc6b7\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b7f0bd2-aace-43a5-9214-75d73cd3fbe1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:20:17 crc kubenswrapper[4712]: I0130 17:20:17.999206 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b7f0bd2-aace-43a5-9214-75d73cd3fbe1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b7f0bd2-aace-43a5-9214-75d73cd3fbe1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:20:18 crc kubenswrapper[4712]: I0130 17:20:18.005522 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b7f0bd2-aace-43a5-9214-75d73cd3fbe1-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b7f0bd2-aace-43a5-9214-75d73cd3fbe1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:20:18 crc kubenswrapper[4712]: I0130 17:20:18.005568 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b7f0bd2-aace-43a5-9214-75d73cd3fbe1-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b7f0bd2-aace-43a5-9214-75d73cd3fbe1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:20:18 crc kubenswrapper[4712]: I0130 17:20:18.013637 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b7f0bd2-aace-43a5-9214-75d73cd3fbe1-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b7f0bd2-aace-43a5-9214-75d73cd3fbe1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:20:18 crc kubenswrapper[4712]: I0130 17:20:18.016570 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b7f0bd2-aace-43a5-9214-75d73cd3fbe1-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b7f0bd2-aace-43a5-9214-75d73cd3fbe1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:20:18 crc kubenswrapper[4712]: I0130 17:20:18.019980 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc6b7\" (UniqueName: \"kubernetes.io/projected/6b7f0bd2-aace-43a5-9214-75d73cd3fbe1-kube-api-access-gc6b7\") pod \"nova-cell1-novncproxy-0\" (UID: \"6b7f0bd2-aace-43a5-9214-75d73cd3fbe1\") " pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:20:18 crc kubenswrapper[4712]: I0130 17:20:18.100771 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s9dk\" (UniqueName: \"kubernetes.io/projected/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-kube-api-access-4s9dk\") pod \"ceilometer-0\" (UID: \"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6\") " pod="openstack/ceilometer-0" Jan 30 17:20:18 crc kubenswrapper[4712]: I0130 17:20:18.100852 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-scripts\") pod \"ceilometer-0\" (UID: \"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6\") " pod="openstack/ceilometer-0" Jan 30 17:20:18 crc kubenswrapper[4712]: I0130 17:20:18.100881 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-log-httpd\") pod \"ceilometer-0\" (UID: \"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6\") " pod="openstack/ceilometer-0" Jan 30 17:20:18 crc 
kubenswrapper[4712]: I0130 17:20:18.100909 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6\") " pod="openstack/ceilometer-0" Jan 30 17:20:18 crc kubenswrapper[4712]: I0130 17:20:18.100935 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-config-data\") pod \"ceilometer-0\" (UID: \"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6\") " pod="openstack/ceilometer-0" Jan 30 17:20:18 crc kubenswrapper[4712]: I0130 17:20:18.100975 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-run-httpd\") pod \"ceilometer-0\" (UID: \"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6\") " pod="openstack/ceilometer-0" Jan 30 17:20:18 crc kubenswrapper[4712]: I0130 17:20:18.101001 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6\") " pod="openstack/ceilometer-0" Jan 30 17:20:18 crc kubenswrapper[4712]: I0130 17:20:18.101061 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6\") " pod="openstack/ceilometer-0" Jan 30 17:20:18 crc kubenswrapper[4712]: I0130 17:20:18.101397 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-log-httpd\") pod \"ceilometer-0\" (UID: \"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6\") " pod="openstack/ceilometer-0" Jan 30 17:20:18 crc kubenswrapper[4712]: I0130 17:20:18.101665 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-run-httpd\") pod \"ceilometer-0\" (UID: \"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6\") " pod="openstack/ceilometer-0" Jan 30 17:20:18 crc kubenswrapper[4712]: I0130 17:20:18.105752 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6\") " pod="openstack/ceilometer-0" Jan 30 17:20:18 crc kubenswrapper[4712]: I0130 17:20:18.106347 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6\") " pod="openstack/ceilometer-0" Jan 30 17:20:18 crc kubenswrapper[4712]: I0130 17:20:18.107171 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-scripts\") pod \"ceilometer-0\" (UID: \"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6\") " pod="openstack/ceilometer-0" Jan 30 17:20:18 crc kubenswrapper[4712]: I0130 17:20:18.107860 4712 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-config-data\") pod \"ceilometer-0\" (UID: \"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6\") " pod="openstack/ceilometer-0" Jan 30 17:20:18 crc kubenswrapper[4712]: I0130 17:20:18.112674 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6\") " pod="openstack/ceilometer-0" Jan 30 17:20:18 crc kubenswrapper[4712]: I0130 17:20:18.119908 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s9dk\" (UniqueName: \"kubernetes.io/projected/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-kube-api-access-4s9dk\") pod \"ceilometer-0\" (UID: \"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6\") " pod="openstack/ceilometer-0" Jan 30 17:20:18 crc kubenswrapper[4712]: I0130 17:20:18.194959 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:20:18 crc kubenswrapper[4712]: I0130 17:20:18.221674 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 17:20:18 crc kubenswrapper[4712]: I0130 17:20:18.543708 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 17:20:18 crc kubenswrapper[4712]: I0130 17:20:18.552483 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 17:20:18 crc kubenswrapper[4712]: I0130 17:20:18.567129 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 17:20:18 crc kubenswrapper[4712]: I0130 17:20:18.823850 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 17:20:18 crc kubenswrapper[4712]: I0130 17:20:18.831683 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 30 17:20:18 crc kubenswrapper[4712]: W0130 17:20:18.858094 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b7f0bd2_aace_43a5_9214_75d73cd3fbe1.slice/crio-ea4e32d76e380d1ef11f173fa0b14c471a261f5fe5663f136fccf41427f272d8 WatchSource:0}: Error finding container ea4e32d76e380d1ef11f173fa0b14c471a261f5fe5663f136fccf41427f272d8: Status 404 returned error can't find the container with id ea4e32d76e380d1ef11f173fa0b14c471a261f5fe5663f136fccf41427f272d8 Jan 30 17:20:19 crc kubenswrapper[4712]: I0130 17:20:19.486487 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6b7f0bd2-aace-43a5-9214-75d73cd3fbe1","Type":"ContainerStarted","Data":"c75a8a38d60b3ffea9049ad54696a9a016b4b4ff4cd13a58cac59be6d68d7553"} Jan 30 17:20:19 crc kubenswrapper[4712]: I0130 17:20:19.487086 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"6b7f0bd2-aace-43a5-9214-75d73cd3fbe1","Type":"ContainerStarted","Data":"ea4e32d76e380d1ef11f173fa0b14c471a261f5fe5663f136fccf41427f272d8"} Jan 30 17:20:19 crc kubenswrapper[4712]: I0130 17:20:19.488365 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6","Type":"ContainerStarted","Data":"4bb83581a1d2614910ef4e8fab9322bef7192632a26d3a3d1f19583f9393eb18"} Jan 30 17:20:19 crc kubenswrapper[4712]: I0130 17:20:19.488423 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6","Type":"ContainerStarted","Data":"3edcb034b1e95d8088e0b320ab6a736f70ad99e66e948b14e46b43903da74e7a"} Jan 30 17:20:19 crc kubenswrapper[4712]: I0130 17:20:19.496776 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 17:20:19 crc kubenswrapper[4712]: I0130 17:20:19.510426 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.510405365 podStartE2EDuration="2.510405365s" podCreationTimestamp="2026-01-30 17:20:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:20:19.504989804 +0000 UTC m=+1556.411999273" watchObservedRunningTime="2026-01-30 17:20:19.510405365 +0000 UTC m=+1556.417414834" Jan 30 17:20:19 crc kubenswrapper[4712]: I0130 17:20:19.810709 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74969a69-d6be-4c12-9dd0-7a529e73737d" path="/var/lib/kubelet/pods/74969a69-d6be-4c12-9dd0-7a529e73737d/volumes" Jan 30 17:20:20 crc kubenswrapper[4712]: I0130 17:20:20.358018 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-64655dbc44-pvj2c" podUID="6a28b495-ecf0-409e-9558-ee794a46dbd1" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.156:8443/dashboard/auth/login/?next=/dashboard/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 17:20:20 crc kubenswrapper[4712]: I0130 17:20:20.358385 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-64655dbc44-pvj2c" Jan 30 17:20:20 crc kubenswrapper[4712]: I0130 17:20:20.359298 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="horizon" containerStatusID={"Type":"cri-o","ID":"03c2090f070ef32f4daf04d3eeaf131ceca5e16369eb0275c32dfc9aaf604b1e"} pod="openstack/horizon-64655dbc44-pvj2c" containerMessage="Container horizon failed startup probe, will be restarted" Jan 30 17:20:20 crc kubenswrapper[4712]: I0130 17:20:20.359351 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-64655dbc44-pvj2c" podUID="6a28b495-ecf0-409e-9558-ee794a46dbd1" containerName="horizon" containerID="cri-o://03c2090f070ef32f4daf04d3eeaf131ceca5e16369eb0275c32dfc9aaf604b1e" gracePeriod=30 Jan 30 17:20:20 crc kubenswrapper[4712]: I0130 17:20:20.505558 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6","Type":"ContainerStarted","Data":"dc8c1657f0e6eb393fa5465fb2fcc5791ef27bc1037302e094a2392431da2adf"} Jan 30 17:20:21 crc kubenswrapper[4712]: I0130 17:20:21.518631 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6","Type":"ContainerStarted","Data":"4034f571d41bb51708666b45d8ddf1abd0be7fdbf7e3e8cc0a06d879ac3353d9"} Jan 30 17:20:21 crc kubenswrapper[4712]: I0130 17:20:21.606172 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 17:20:21 crc 
Jan 30 17:20:21 crc kubenswrapper[4712]: I0130 17:20:21.606343 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 30 17:20:21 crc kubenswrapper[4712]: I0130 17:20:21.606712 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 30 17:20:21 crc kubenswrapper[4712]: I0130 17:20:21.613301 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 30 17:20:21 crc kubenswrapper[4712]: I0130 17:20:21.735975 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0"
Jan 30 17:20:22 crc kubenswrapper[4712]: I0130 17:20:22.526486 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 30 17:20:22 crc kubenswrapper[4712]: I0130 17:20:22.535259 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 30 17:20:22 crc kubenswrapper[4712]: I0130 17:20:22.762014 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-dtb9l"]
Jan 30 17:20:22 crc kubenswrapper[4712]: I0130 17:20:22.765550 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-dtb9l"
Jan 30 17:20:22 crc kubenswrapper[4712]: I0130 17:20:22.781364 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-dtb9l"]
Jan 30 17:20:22 crc kubenswrapper[4712]: I0130 17:20:22.892075 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fec70295-88d5-49c5-9d39-e9bee0a17010-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-dtb9l\" (UID: \"fec70295-88d5-49c5-9d39-e9bee0a17010\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dtb9l"
Jan 30 17:20:22 crc kubenswrapper[4712]: I0130 17:20:22.892190 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fec70295-88d5-49c5-9d39-e9bee0a17010-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-dtb9l\" (UID: \"fec70295-88d5-49c5-9d39-e9bee0a17010\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dtb9l"
Jan 30 17:20:22 crc kubenswrapper[4712]: I0130 17:20:22.892218 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fec70295-88d5-49c5-9d39-e9bee0a17010-config\") pod \"dnsmasq-dns-6b7bbf7cf9-dtb9l\" (UID: \"fec70295-88d5-49c5-9d39-e9bee0a17010\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dtb9l"
Jan 30 17:20:22 crc kubenswrapper[4712]: I0130 17:20:22.892239 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fec70295-88d5-49c5-9d39-e9bee0a17010-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-dtb9l\" (UID: \"fec70295-88d5-49c5-9d39-e9bee0a17010\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dtb9l"
Jan 30 17:20:22 crc kubenswrapper[4712]: I0130 17:20:22.892273 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fec70295-88d5-49c5-9d39-e9bee0a17010-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-dtb9l\" (UID: \"fec70295-88d5-49c5-9d39-e9bee0a17010\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dtb9l"
Jan 30 17:20:22 crc kubenswrapper[4712]: I0130
17:20:22.892343 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74ktx\" (UniqueName: \"kubernetes.io/projected/fec70295-88d5-49c5-9d39-e9bee0a17010-kube-api-access-74ktx\") pod \"dnsmasq-dns-6b7bbf7cf9-dtb9l\" (UID: \"fec70295-88d5-49c5-9d39-e9bee0a17010\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dtb9l" Jan 30 17:20:22 crc kubenswrapper[4712]: I0130 17:20:22.993741 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fec70295-88d5-49c5-9d39-e9bee0a17010-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-dtb9l\" (UID: \"fec70295-88d5-49c5-9d39-e9bee0a17010\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dtb9l" Jan 30 17:20:22 crc kubenswrapper[4712]: I0130 17:20:22.993840 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fec70295-88d5-49c5-9d39-e9bee0a17010-config\") pod \"dnsmasq-dns-6b7bbf7cf9-dtb9l\" (UID: \"fec70295-88d5-49c5-9d39-e9bee0a17010\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dtb9l" Jan 30 17:20:22 crc kubenswrapper[4712]: I0130 17:20:22.993897 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fec70295-88d5-49c5-9d39-e9bee0a17010-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-dtb9l\" (UID: \"fec70295-88d5-49c5-9d39-e9bee0a17010\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dtb9l" Jan 30 17:20:22 crc kubenswrapper[4712]: I0130 17:20:22.993933 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fec70295-88d5-49c5-9d39-e9bee0a17010-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-dtb9l\" (UID: \"fec70295-88d5-49c5-9d39-e9bee0a17010\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dtb9l" Jan 30 17:20:22 crc kubenswrapper[4712]: I0130 17:20:22.994036 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74ktx\" (UniqueName: \"kubernetes.io/projected/fec70295-88d5-49c5-9d39-e9bee0a17010-kube-api-access-74ktx\") pod \"dnsmasq-dns-6b7bbf7cf9-dtb9l\" (UID: \"fec70295-88d5-49c5-9d39-e9bee0a17010\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dtb9l" Jan 30 17:20:22 crc kubenswrapper[4712]: I0130 17:20:22.994070 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fec70295-88d5-49c5-9d39-e9bee0a17010-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-dtb9l\" (UID: \"fec70295-88d5-49c5-9d39-e9bee0a17010\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dtb9l" Jan 30 17:20:22 crc kubenswrapper[4712]: I0130 17:20:22.995136 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fec70295-88d5-49c5-9d39-e9bee0a17010-ovsdbserver-sb\") pod \"dnsmasq-dns-6b7bbf7cf9-dtb9l\" (UID: \"fec70295-88d5-49c5-9d39-e9bee0a17010\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dtb9l" Jan 30 17:20:22 crc kubenswrapper[4712]: I0130 17:20:22.995786 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fec70295-88d5-49c5-9d39-e9bee0a17010-dns-swift-storage-0\") pod \"dnsmasq-dns-6b7bbf7cf9-dtb9l\" (UID: \"fec70295-88d5-49c5-9d39-e9bee0a17010\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dtb9l" Jan 30 17:20:22 crc kubenswrapper[4712]: I0130 17:20:22.996317 4712 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fec70295-88d5-49c5-9d39-e9bee0a17010-config\") pod \"dnsmasq-dns-6b7bbf7cf9-dtb9l\" (UID: \"fec70295-88d5-49c5-9d39-e9bee0a17010\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dtb9l" Jan 30 17:20:22 crc kubenswrapper[4712]: I0130 17:20:22.996882 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fec70295-88d5-49c5-9d39-e9bee0a17010-dns-svc\") pod \"dnsmasq-dns-6b7bbf7cf9-dtb9l\" (UID: \"fec70295-88d5-49c5-9d39-e9bee0a17010\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dtb9l" Jan 30 17:20:22 crc kubenswrapper[4712]: I0130 17:20:22.997486 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fec70295-88d5-49c5-9d39-e9bee0a17010-ovsdbserver-nb\") pod \"dnsmasq-dns-6b7bbf7cf9-dtb9l\" (UID: \"fec70295-88d5-49c5-9d39-e9bee0a17010\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dtb9l" Jan 30 17:20:23 crc kubenswrapper[4712]: I0130 17:20:23.039075 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74ktx\" (UniqueName: \"kubernetes.io/projected/fec70295-88d5-49c5-9d39-e9bee0a17010-kube-api-access-74ktx\") pod \"dnsmasq-dns-6b7bbf7cf9-dtb9l\" (UID: \"fec70295-88d5-49c5-9d39-e9bee0a17010\") " pod="openstack/dnsmasq-dns-6b7bbf7cf9-dtb9l" Jan 30 17:20:23 crc kubenswrapper[4712]: I0130 17:20:23.082865 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-dtb9l" Jan 30 17:20:23 crc kubenswrapper[4712]: I0130 17:20:23.195357 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:20:24 crc kubenswrapper[4712]: I0130 17:20:23.785697 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-dtb9l"] Jan 30 17:20:24 crc kubenswrapper[4712]: I0130 17:20:24.550271 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6","Type":"ContainerStarted","Data":"efa8f423c92b2bf581e646157f666a01e4f5bb6574a16246f4bd5ec0db475800"} Jan 30 17:20:24 crc kubenswrapper[4712]: I0130 17:20:24.551051 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 17:20:24 crc kubenswrapper[4712]: I0130 17:20:24.552851 4712 generic.go:334] "Generic (PLEG): container finished" podID="fec70295-88d5-49c5-9d39-e9bee0a17010" containerID="be14d4d1f73c9215675c5638c5633048e4931006e15fe5c4bf57f153fcf59399" exitCode=0 Jan 30 17:20:24 crc kubenswrapper[4712]: I0130 17:20:24.552976 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-dtb9l" event={"ID":"fec70295-88d5-49c5-9d39-e9bee0a17010","Type":"ContainerDied","Data":"be14d4d1f73c9215675c5638c5633048e4931006e15fe5c4bf57f153fcf59399"} Jan 30 17:20:24 crc kubenswrapper[4712]: I0130 17:20:24.553003 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-dtb9l" event={"ID":"fec70295-88d5-49c5-9d39-e9bee0a17010","Type":"ContainerStarted","Data":"8e29e263e528b3cf8d926d20f09e7d2e0aedb8e04ffcde16a5f312e4ebd0839f"} Jan 30 17:20:24 crc kubenswrapper[4712]: I0130 17:20:24.636363 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.876538328 podStartE2EDuration="7.636339695s" 
podCreationTimestamp="2026-01-30 17:20:17 +0000 UTC" firstStartedPulling="2026-01-30 17:20:18.848685432 +0000 UTC m=+1555.755694901" lastFinishedPulling="2026-01-30 17:20:23.608486799 +0000 UTC m=+1560.515496268" observedRunningTime="2026-01-30 17:20:24.572687366 +0000 UTC m=+1561.479696835" watchObservedRunningTime="2026-01-30 17:20:24.636339695 +0000 UTC m=+1561.543349164"
Jan 30 17:20:25 crc kubenswrapper[4712]: I0130 17:20:25.494319 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 30 17:20:25 crc kubenswrapper[4712]: I0130 17:20:25.576179 4712 generic.go:334] "Generic (PLEG): container finished" podID="6a28b495-ecf0-409e-9558-ee794a46dbd1" containerID="03c2090f070ef32f4daf04d3eeaf131ceca5e16369eb0275c32dfc9aaf604b1e" exitCode=0
Jan 30 17:20:25 crc kubenswrapper[4712]: I0130 17:20:25.576257 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64655dbc44-pvj2c" event={"ID":"6a28b495-ecf0-409e-9558-ee794a46dbd1","Type":"ContainerDied","Data":"03c2090f070ef32f4daf04d3eeaf131ceca5e16369eb0275c32dfc9aaf604b1e"}
Jan 30 17:20:25 crc kubenswrapper[4712]: I0130 17:20:25.576334 4712 scope.go:117] "RemoveContainer" containerID="81106e51e98ee42b57283673e3cf02537243b70df68ffb3d9849db1d90c861a3"
Jan 30 17:20:25 crc kubenswrapper[4712]: I0130 17:20:25.581076 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-dtb9l" event={"ID":"fec70295-88d5-49c5-9d39-e9bee0a17010","Type":"ContainerStarted","Data":"d3cd43e559578a4aba8f3735361377917d67f663e7e17a8721fef53a4b01643e"}
Jan 30 17:20:25 crc kubenswrapper[4712]: I0130 17:20:25.581260 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="26cd9519-8d6a-4475-ac46-6b107621f27e" containerName="nova-api-log" containerID="cri-o://8de553ae43b8d1e381074669405f0124d58b54735c723d94e0f42f436534c897" gracePeriod=30
Jan 30 17:20:25 crc kubenswrapper[4712]: I0130 17:20:25.581521 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6b7bbf7cf9-dtb9l"
Jan 30 17:20:25 crc kubenswrapper[4712]: I0130 17:20:25.581587 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="26cd9519-8d6a-4475-ac46-6b107621f27e" containerName="nova-api-api" containerID="cri-o://01cb8803cdcf74180021750306e169b80ca846d59bcde4d5777e4c4d507bc47a" gracePeriod=30
Jan 30 17:20:25 crc kubenswrapper[4712]: I0130 17:20:25.614078 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6b7bbf7cf9-dtb9l" podStartSLOduration=3.614052339 podStartE2EDuration="3.614052339s" podCreationTimestamp="2026-01-30 17:20:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:20:25.608521625 +0000 UTC m=+1562.515531094" watchObservedRunningTime="2026-01-30 17:20:25.614052339 +0000 UTC m=+1562.521061808"
Jan 30 17:20:26 crc kubenswrapper[4712]: I0130 17:20:26.590540 4712 generic.go:334] "Generic (PLEG): container finished" podID="26cd9519-8d6a-4475-ac46-6b107621f27e" containerID="8de553ae43b8d1e381074669405f0124d58b54735c723d94e0f42f436534c897" exitCode=143
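The exitCode=143 just above is the conventional 128+signal encoding: the kubelet stopped nova-api-log with SIGTERM (15) inside the 30-second grace period, so the runtime reports 128+15=143; a SIGKILL after the grace period expires would surface as 137, while exitCode=0, as with horizon earlier, is a clean exit. A small sketch of that decoding, offered as a reading aid rather than anything the kubelet itself runs:

    package main

    import "fmt"

    // signalFromExitCode applies the shell-style 128+signal convention used by
    // container runtimes when a process dies from a signal.
    func signalFromExitCode(code int) (sig int, killedBySignal bool) {
            if code > 128 {
                    return code - 128, true
            }
            return 0, false
    }

    func main() {
            for _, code := range []int{143, 137, 0} {
                    if sig, ok := signalFromExitCode(code); ok {
                            fmt.Printf("exitCode=%d => terminated by signal %d\n", code, sig)
                    } else {
                            fmt.Printf("exitCode=%d => normal exit\n", code)
                    }
            }
    }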
event={"ID":"26cd9519-8d6a-4475-ac46-6b107621f27e","Type":"ContainerDied","Data":"8de553ae43b8d1e381074669405f0124d58b54735c723d94e0f42f436534c897"} Jan 30 17:20:26 crc kubenswrapper[4712]: I0130 17:20:26.594316 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-64655dbc44-pvj2c" event={"ID":"6a28b495-ecf0-409e-9558-ee794a46dbd1","Type":"ContainerStarted","Data":"1d590dded68820b11c29a1b0790d0078259fa99bd5401b808a03f2a0d0d56bbf"} Jan 30 17:20:28 crc kubenswrapper[4712]: I0130 17:20:28.053677 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 17:20:28 crc kubenswrapper[4712]: I0130 17:20:28.054268 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="405dc96e-1f7f-4707-8fa1-bcada3bdf2f6" containerName="ceilometer-central-agent" containerID="cri-o://4bb83581a1d2614910ef4e8fab9322bef7192632a26d3a3d1f19583f9393eb18" gracePeriod=30 Jan 30 17:20:28 crc kubenswrapper[4712]: I0130 17:20:28.054347 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="405dc96e-1f7f-4707-8fa1-bcada3bdf2f6" containerName="sg-core" containerID="cri-o://4034f571d41bb51708666b45d8ddf1abd0be7fdbf7e3e8cc0a06d879ac3353d9" gracePeriod=30 Jan 30 17:20:28 crc kubenswrapper[4712]: I0130 17:20:28.054364 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="405dc96e-1f7f-4707-8fa1-bcada3bdf2f6" containerName="ceilometer-notification-agent" containerID="cri-o://dc8c1657f0e6eb393fa5465fb2fcc5791ef27bc1037302e094a2392431da2adf" gracePeriod=30 Jan 30 17:20:28 crc kubenswrapper[4712]: I0130 17:20:28.054488 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="405dc96e-1f7f-4707-8fa1-bcada3bdf2f6" containerName="proxy-httpd" containerID="cri-o://efa8f423c92b2bf581e646157f666a01e4f5bb6574a16246f4bd5ec0db475800" gracePeriod=30 Jan 30 17:20:28 crc kubenswrapper[4712]: I0130 17:20:28.195193 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:20:28 crc kubenswrapper[4712]: I0130 17:20:28.220812 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:20:28 crc kubenswrapper[4712]: I0130 17:20:28.633905 4712 generic.go:334] "Generic (PLEG): container finished" podID="405dc96e-1f7f-4707-8fa1-bcada3bdf2f6" containerID="efa8f423c92b2bf581e646157f666a01e4f5bb6574a16246f4bd5ec0db475800" exitCode=0 Jan 30 17:20:28 crc kubenswrapper[4712]: I0130 17:20:28.633936 4712 generic.go:334] "Generic (PLEG): container finished" podID="405dc96e-1f7f-4707-8fa1-bcada3bdf2f6" containerID="4034f571d41bb51708666b45d8ddf1abd0be7fdbf7e3e8cc0a06d879ac3353d9" exitCode=2 Jan 30 17:20:28 crc kubenswrapper[4712]: I0130 17:20:28.633944 4712 generic.go:334] "Generic (PLEG): container finished" podID="405dc96e-1f7f-4707-8fa1-bcada3bdf2f6" containerID="dc8c1657f0e6eb393fa5465fb2fcc5791ef27bc1037302e094a2392431da2adf" exitCode=0 Jan 30 17:20:28 crc kubenswrapper[4712]: I0130 17:20:28.635117 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6","Type":"ContainerDied","Data":"efa8f423c92b2bf581e646157f666a01e4f5bb6574a16246f4bd5ec0db475800"} Jan 30 17:20:28 crc kubenswrapper[4712]: I0130 17:20:28.635189 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6","Type":"ContainerDied","Data":"4034f571d41bb51708666b45d8ddf1abd0be7fdbf7e3e8cc0a06d879ac3353d9"} Jan 30 17:20:28 crc kubenswrapper[4712]: I0130 17:20:28.635210 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6","Type":"ContainerDied","Data":"dc8c1657f0e6eb393fa5465fb2fcc5791ef27bc1037302e094a2392431da2adf"} Jan 30 17:20:28 crc kubenswrapper[4712]: I0130 17:20:28.648693 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 30 17:20:28 crc kubenswrapper[4712]: I0130 17:20:28.875543 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-tm2l4"] Jan 30 17:20:28 crc kubenswrapper[4712]: I0130 17:20:28.877171 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-tm2l4" Jan 30 17:20:28 crc kubenswrapper[4712]: I0130 17:20:28.879513 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 30 17:20:28 crc kubenswrapper[4712]: I0130 17:20:28.883751 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 30 17:20:28 crc kubenswrapper[4712]: I0130 17:20:28.886949 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-tm2l4"] Jan 30 17:20:28 crc kubenswrapper[4712]: I0130 17:20:28.996718 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w294k\" (UniqueName: \"kubernetes.io/projected/07e1f6ad-a075-4777-a81a-d021d3b25b37-kube-api-access-w294k\") pod \"nova-cell1-cell-mapping-tm2l4\" (UID: \"07e1f6ad-a075-4777-a81a-d021d3b25b37\") " pod="openstack/nova-cell1-cell-mapping-tm2l4" Jan 30 17:20:28 crc kubenswrapper[4712]: I0130 17:20:28.996863 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e1f6ad-a075-4777-a81a-d021d3b25b37-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-tm2l4\" (UID: \"07e1f6ad-a075-4777-a81a-d021d3b25b37\") " pod="openstack/nova-cell1-cell-mapping-tm2l4" Jan 30 17:20:28 crc kubenswrapper[4712]: I0130 17:20:28.996913 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07e1f6ad-a075-4777-a81a-d021d3b25b37-config-data\") pod \"nova-cell1-cell-mapping-tm2l4\" (UID: \"07e1f6ad-a075-4777-a81a-d021d3b25b37\") " pod="openstack/nova-cell1-cell-mapping-tm2l4" Jan 30 17:20:28 crc kubenswrapper[4712]: I0130 17:20:28.996960 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07e1f6ad-a075-4777-a81a-d021d3b25b37-scripts\") pod \"nova-cell1-cell-mapping-tm2l4\" (UID: \"07e1f6ad-a075-4777-a81a-d021d3b25b37\") " pod="openstack/nova-cell1-cell-mapping-tm2l4" Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.100297 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w294k\" (UniqueName: \"kubernetes.io/projected/07e1f6ad-a075-4777-a81a-d021d3b25b37-kube-api-access-w294k\") pod \"nova-cell1-cell-mapping-tm2l4\" (UID: \"07e1f6ad-a075-4777-a81a-d021d3b25b37\") " pod="openstack/nova-cell1-cell-mapping-tm2l4" Jan 30 
17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.100397 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e1f6ad-a075-4777-a81a-d021d3b25b37-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-tm2l4\" (UID: \"07e1f6ad-a075-4777-a81a-d021d3b25b37\") " pod="openstack/nova-cell1-cell-mapping-tm2l4"
Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.100442 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07e1f6ad-a075-4777-a81a-d021d3b25b37-config-data\") pod \"nova-cell1-cell-mapping-tm2l4\" (UID: \"07e1f6ad-a075-4777-a81a-d021d3b25b37\") " pod="openstack/nova-cell1-cell-mapping-tm2l4"
Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.100482 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07e1f6ad-a075-4777-a81a-d021d3b25b37-scripts\") pod \"nova-cell1-cell-mapping-tm2l4\" (UID: \"07e1f6ad-a075-4777-a81a-d021d3b25b37\") " pod="openstack/nova-cell1-cell-mapping-tm2l4"
Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.108048 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e1f6ad-a075-4777-a81a-d021d3b25b37-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-tm2l4\" (UID: \"07e1f6ad-a075-4777-a81a-d021d3b25b37\") " pod="openstack/nova-cell1-cell-mapping-tm2l4"
Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.108047 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07e1f6ad-a075-4777-a81a-d021d3b25b37-config-data\") pod \"nova-cell1-cell-mapping-tm2l4\" (UID: \"07e1f6ad-a075-4777-a81a-d021d3b25b37\") " pod="openstack/nova-cell1-cell-mapping-tm2l4"
Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.125934 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07e1f6ad-a075-4777-a81a-d021d3b25b37-scripts\") pod \"nova-cell1-cell-mapping-tm2l4\" (UID: \"07e1f6ad-a075-4777-a81a-d021d3b25b37\") " pod="openstack/nova-cell1-cell-mapping-tm2l4"
Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.148423 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w294k\" (UniqueName: \"kubernetes.io/projected/07e1f6ad-a075-4777-a81a-d021d3b25b37-kube-api-access-w294k\") pod \"nova-cell1-cell-mapping-tm2l4\" (UID: \"07e1f6ad-a075-4777-a81a-d021d3b25b37\") " pod="openstack/nova-cell1-cell-mapping-tm2l4"
Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.252191 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-tm2l4"
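Two near-identical messages recur in this stretch of the log: util.go:30 ("No sandbox for pod can be found") for pods starting from scratch, like nova-cell1-cell-mapping-tm2l4 above, and util.go:48 just below ("No ready sandbox for pod can be found") for nova-api-0, whose old sandbox still exists but is no longer usable after the delete/recreate. A minimal sketch of that two-way decision, with a hypothetical sandbox type (the kubelet's real check also weighs attempt counts and network state):

    package main

    import "fmt"

    // Hypothetical stand-in for a CRI pod sandbox status.
    type sandbox struct{ ready bool }

    // needsNewSandbox mirrors the two util.go messages: no sandbox at all, or a
    // sandbox that exists but is not ready, both force a fresh one.
    func needsNewSandbox(existing []sandbox) (bool, string) {
            if len(existing) == 0 {
                    return true, "No sandbox for pod can be found. Need to start a new one"
            }
            if !existing[0].ready {
                    return true, "No ready sandbox for pod can be found. Need to start a new one"
            }
            return false, ""
    }

    func main() {
            if ok, msg := needsNewSandbox(nil); ok {
                    fmt.Println(msg) // fresh pod, e.g. nova-cell1-cell-mapping-tm2l4
            }
            if ok, msg := needsNewSandbox([]sandbox{{ready: false}}); ok {
                    fmt.Println(msg) // recreated pod, e.g. nova-api-0
            }
    }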
Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.407116 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.507526 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vdgxw\" (UniqueName: \"kubernetes.io/projected/26cd9519-8d6a-4475-ac46-6b107621f27e-kube-api-access-vdgxw\") pod \"26cd9519-8d6a-4475-ac46-6b107621f27e\" (UID: \"26cd9519-8d6a-4475-ac46-6b107621f27e\") "
Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.507582 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26cd9519-8d6a-4475-ac46-6b107621f27e-logs\") pod \"26cd9519-8d6a-4475-ac46-6b107621f27e\" (UID: \"26cd9519-8d6a-4475-ac46-6b107621f27e\") "
Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.507646 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26cd9519-8d6a-4475-ac46-6b107621f27e-combined-ca-bundle\") pod \"26cd9519-8d6a-4475-ac46-6b107621f27e\" (UID: \"26cd9519-8d6a-4475-ac46-6b107621f27e\") "
Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.507665 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26cd9519-8d6a-4475-ac46-6b107621f27e-config-data\") pod \"26cd9519-8d6a-4475-ac46-6b107621f27e\" (UID: \"26cd9519-8d6a-4475-ac46-6b107621f27e\") "
Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.509431 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26cd9519-8d6a-4475-ac46-6b107621f27e-logs" (OuterVolumeSpecName: "logs") pod "26cd9519-8d6a-4475-ac46-6b107621f27e" (UID: "26cd9519-8d6a-4475-ac46-6b107621f27e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.509828 4712 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/26cd9519-8d6a-4475-ac46-6b107621f27e-logs\") on node \"crc\" DevicePath \"\""
Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.527514 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26cd9519-8d6a-4475-ac46-6b107621f27e-kube-api-access-vdgxw" (OuterVolumeSpecName: "kube-api-access-vdgxw") pod "26cd9519-8d6a-4475-ac46-6b107621f27e" (UID: "26cd9519-8d6a-4475-ac46-6b107621f27e"). InnerVolumeSpecName "kube-api-access-vdgxw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.574715 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26cd9519-8d6a-4475-ac46-6b107621f27e-config-data" (OuterVolumeSpecName: "config-data") pod "26cd9519-8d6a-4475-ac46-6b107621f27e" (UID: "26cd9519-8d6a-4475-ac46-6b107621f27e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.580515 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26cd9519-8d6a-4475-ac46-6b107621f27e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "26cd9519-8d6a-4475-ac46-6b107621f27e" (UID: "26cd9519-8d6a-4475-ac46-6b107621f27e"). InnerVolumeSpecName "combined-ca-bundle".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.611920 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vdgxw\" (UniqueName: \"kubernetes.io/projected/26cd9519-8d6a-4475-ac46-6b107621f27e-kube-api-access-vdgxw\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.611955 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/26cd9519-8d6a-4475-ac46-6b107621f27e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.611969 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26cd9519-8d6a-4475-ac46-6b107621f27e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.666534 4712 generic.go:334] "Generic (PLEG): container finished" podID="26cd9519-8d6a-4475-ac46-6b107621f27e" containerID="01cb8803cdcf74180021750306e169b80ca846d59bcde4d5777e4c4d507bc47a" exitCode=0 Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.667434 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.672472 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"26cd9519-8d6a-4475-ac46-6b107621f27e","Type":"ContainerDied","Data":"01cb8803cdcf74180021750306e169b80ca846d59bcde4d5777e4c4d507bc47a"} Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.672508 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"26cd9519-8d6a-4475-ac46-6b107621f27e","Type":"ContainerDied","Data":"a58b89a6b720194eb27b3ea7d3dae4136e9a9357868f4b4a6977ead2e780b201"} Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.672525 4712 scope.go:117] "RemoveContainer" containerID="01cb8803cdcf74180021750306e169b80ca846d59bcde4d5777e4c4d507bc47a" Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.725654 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.739169 4712 scope.go:117] "RemoveContainer" containerID="8de553ae43b8d1e381074669405f0124d58b54735c723d94e0f42f436534c897" Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.759005 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.769460 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 17:20:29 crc kubenswrapper[4712]: E0130 17:20:29.769961 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26cd9519-8d6a-4475-ac46-6b107621f27e" containerName="nova-api-log" Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.769973 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="26cd9519-8d6a-4475-ac46-6b107621f27e" containerName="nova-api-log" Jan 30 17:20:29 crc kubenswrapper[4712]: E0130 17:20:29.769980 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26cd9519-8d6a-4475-ac46-6b107621f27e" containerName="nova-api-api" Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.769986 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="26cd9519-8d6a-4475-ac46-6b107621f27e" containerName="nova-api-api" Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.770174 4712 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="26cd9519-8d6a-4475-ac46-6b107621f27e" containerName="nova-api-api" Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.770192 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="26cd9519-8d6a-4475-ac46-6b107621f27e" containerName="nova-api-log" Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.771193 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.777174 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.777323 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.777378 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.778228 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.842300 4712 scope.go:117] "RemoveContainer" containerID="01cb8803cdcf74180021750306e169b80ca846d59bcde4d5777e4c4d507bc47a" Jan 30 17:20:29 crc kubenswrapper[4712]: E0130 17:20:29.845859 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01cb8803cdcf74180021750306e169b80ca846d59bcde4d5777e4c4d507bc47a\": container with ID starting with 01cb8803cdcf74180021750306e169b80ca846d59bcde4d5777e4c4d507bc47a not found: ID does not exist" containerID="01cb8803cdcf74180021750306e169b80ca846d59bcde4d5777e4c4d507bc47a" Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.845903 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01cb8803cdcf74180021750306e169b80ca846d59bcde4d5777e4c4d507bc47a"} err="failed to get container status \"01cb8803cdcf74180021750306e169b80ca846d59bcde4d5777e4c4d507bc47a\": rpc error: code = NotFound desc = could not find container \"01cb8803cdcf74180021750306e169b80ca846d59bcde4d5777e4c4d507bc47a\": container with ID starting with 01cb8803cdcf74180021750306e169b80ca846d59bcde4d5777e4c4d507bc47a not found: ID does not exist" Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.845927 4712 scope.go:117] "RemoveContainer" containerID="8de553ae43b8d1e381074669405f0124d58b54735c723d94e0f42f436534c897" Jan 30 17:20:29 crc kubenswrapper[4712]: E0130 17:20:29.848948 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8de553ae43b8d1e381074669405f0124d58b54735c723d94e0f42f436534c897\": container with ID starting with 8de553ae43b8d1e381074669405f0124d58b54735c723d94e0f42f436534c897 not found: ID does not exist" containerID="8de553ae43b8d1e381074669405f0124d58b54735c723d94e0f42f436534c897" Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.848974 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8de553ae43b8d1e381074669405f0124d58b54735c723d94e0f42f436534c897"} err="failed to get container status \"8de553ae43b8d1e381074669405f0124d58b54735c723d94e0f42f436534c897\": rpc error: code = NotFound desc = could not find container \"8de553ae43b8d1e381074669405f0124d58b54735c723d94e0f42f436534c897\": container with ID starting with 
Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.848974 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8de553ae43b8d1e381074669405f0124d58b54735c723d94e0f42f436534c897"} err="failed to get container status \"8de553ae43b8d1e381074669405f0124d58b54735c723d94e0f42f436534c897\": rpc error: code = NotFound desc = could not find container \"8de553ae43b8d1e381074669405f0124d58b54735c723d94e0f42f436534c897\": container with ID starting with 8de553ae43b8d1e381074669405f0124d58b54735c723d94e0f42f436534c897 not found: ID does not exist"
Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.857315 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26cd9519-8d6a-4475-ac46-6b107621f27e" path="/var/lib/kubelet/pods/26cd9519-8d6a-4475-ac46-6b107621f27e/volumes"
Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.892764 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-tm2l4"]
Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.917731 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b4fb242-8e9a-4863-97d6-4de76d132964-public-tls-certs\") pod \"nova-api-0\" (UID: \"9b4fb242-8e9a-4863-97d6-4de76d132964\") " pod="openstack/nova-api-0"
Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.918224 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b4fb242-8e9a-4863-97d6-4de76d132964-config-data\") pod \"nova-api-0\" (UID: \"9b4fb242-8e9a-4863-97d6-4de76d132964\") " pod="openstack/nova-api-0"
Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.918306 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b4fb242-8e9a-4863-97d6-4de76d132964-logs\") pod \"nova-api-0\" (UID: \"9b4fb242-8e9a-4863-97d6-4de76d132964\") " pod="openstack/nova-api-0"
Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.918356 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b4fb242-8e9a-4863-97d6-4de76d132964-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9b4fb242-8e9a-4863-97d6-4de76d132964\") " pod="openstack/nova-api-0"
Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.918391 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp497\" (UniqueName: \"kubernetes.io/projected/9b4fb242-8e9a-4863-97d6-4de76d132964-kube-api-access-cp497\") pod \"nova-api-0\" (UID: \"9b4fb242-8e9a-4863-97d6-4de76d132964\") " pod="openstack/nova-api-0"
Jan 30 17:20:29 crc kubenswrapper[4712]: I0130 17:20:29.918470 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b4fb242-8e9a-4863-97d6-4de76d132964-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9b4fb242-8e9a-4863-97d6-4de76d132964\") " pod="openstack/nova-api-0"
Jan 30 17:20:30 crc kubenswrapper[4712]: I0130 17:20:30.020479 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b4fb242-8e9a-4863-97d6-4de76d132964-public-tls-certs\") pod \"nova-api-0\" (UID: \"9b4fb242-8e9a-4863-97d6-4de76d132964\") " pod="openstack/nova-api-0"
Jan 30 17:20:30 crc kubenswrapper[4712]: I0130 17:20:30.020625 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b4fb242-8e9a-4863-97d6-4de76d132964-config-data\") pod \"nova-api-0\" (UID: \"9b4fb242-8e9a-4863-97d6-4de76d132964\") " pod="openstack/nova-api-0"
Jan 30 17:20:30 crc kubenswrapper[4712]: I0130 17:20:30.020698 4712 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b4fb242-8e9a-4863-97d6-4de76d132964-logs\") pod \"nova-api-0\" (UID: \"9b4fb242-8e9a-4863-97d6-4de76d132964\") " pod="openstack/nova-api-0" Jan 30 17:20:30 crc kubenswrapper[4712]: I0130 17:20:30.020725 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b4fb242-8e9a-4863-97d6-4de76d132964-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9b4fb242-8e9a-4863-97d6-4de76d132964\") " pod="openstack/nova-api-0" Jan 30 17:20:30 crc kubenswrapper[4712]: I0130 17:20:30.020754 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp497\" (UniqueName: \"kubernetes.io/projected/9b4fb242-8e9a-4863-97d6-4de76d132964-kube-api-access-cp497\") pod \"nova-api-0\" (UID: \"9b4fb242-8e9a-4863-97d6-4de76d132964\") " pod="openstack/nova-api-0" Jan 30 17:20:30 crc kubenswrapper[4712]: I0130 17:20:30.020854 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b4fb242-8e9a-4863-97d6-4de76d132964-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9b4fb242-8e9a-4863-97d6-4de76d132964\") " pod="openstack/nova-api-0" Jan 30 17:20:30 crc kubenswrapper[4712]: I0130 17:20:30.021177 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b4fb242-8e9a-4863-97d6-4de76d132964-logs\") pod \"nova-api-0\" (UID: \"9b4fb242-8e9a-4863-97d6-4de76d132964\") " pod="openstack/nova-api-0" Jan 30 17:20:30 crc kubenswrapper[4712]: I0130 17:20:30.026403 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b4fb242-8e9a-4863-97d6-4de76d132964-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9b4fb242-8e9a-4863-97d6-4de76d132964\") " pod="openstack/nova-api-0" Jan 30 17:20:30 crc kubenswrapper[4712]: I0130 17:20:30.028280 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b4fb242-8e9a-4863-97d6-4de76d132964-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9b4fb242-8e9a-4863-97d6-4de76d132964\") " pod="openstack/nova-api-0" Jan 30 17:20:30 crc kubenswrapper[4712]: I0130 17:20:30.033165 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b4fb242-8e9a-4863-97d6-4de76d132964-public-tls-certs\") pod \"nova-api-0\" (UID: \"9b4fb242-8e9a-4863-97d6-4de76d132964\") " pod="openstack/nova-api-0" Jan 30 17:20:30 crc kubenswrapper[4712]: I0130 17:20:30.033705 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b4fb242-8e9a-4863-97d6-4de76d132964-config-data\") pod \"nova-api-0\" (UID: \"9b4fb242-8e9a-4863-97d6-4de76d132964\") " pod="openstack/nova-api-0" Jan 30 17:20:30 crc kubenswrapper[4712]: I0130 17:20:30.047339 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp497\" (UniqueName: \"kubernetes.io/projected/9b4fb242-8e9a-4863-97d6-4de76d132964-kube-api-access-cp497\") pod \"nova-api-0\" (UID: \"9b4fb242-8e9a-4863-97d6-4de76d132964\") " pod="openstack/nova-api-0" Jan 30 17:20:30 crc kubenswrapper[4712]: I0130 17:20:30.137173 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 17:20:30 crc kubenswrapper[4712]: I0130 17:20:30.689288 4712 generic.go:334] "Generic (PLEG): container finished" podID="405dc96e-1f7f-4707-8fa1-bcada3bdf2f6" containerID="4bb83581a1d2614910ef4e8fab9322bef7192632a26d3a3d1f19583f9393eb18" exitCode=0 Jan 30 17:20:30 crc kubenswrapper[4712]: I0130 17:20:30.689716 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6","Type":"ContainerDied","Data":"4bb83581a1d2614910ef4e8fab9322bef7192632a26d3a3d1f19583f9393eb18"} Jan 30 17:20:30 crc kubenswrapper[4712]: I0130 17:20:30.694550 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-tm2l4" event={"ID":"07e1f6ad-a075-4777-a81a-d021d3b25b37","Type":"ContainerStarted","Data":"86f837b53bb45f244a56c9f0b76e59de96863d52e2babb4ea69db0df5bbb6e1c"} Jan 30 17:20:30 crc kubenswrapper[4712]: I0130 17:20:30.694624 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-tm2l4" event={"ID":"07e1f6ad-a075-4777-a81a-d021d3b25b37","Type":"ContainerStarted","Data":"495967ac7c656d55f0163bddef9548badc7d081b845ff65b98a79914246fd1b7"} Jan 30 17:20:30 crc kubenswrapper[4712]: I0130 17:20:30.728249 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-tm2l4" podStartSLOduration=2.7282249050000003 podStartE2EDuration="2.728224905s" podCreationTimestamp="2026-01-30 17:20:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:20:30.723956111 +0000 UTC m=+1567.630965600" watchObservedRunningTime="2026-01-30 17:20:30.728224905 +0000 UTC m=+1567.635234364" Jan 30 17:20:30 crc kubenswrapper[4712]: I0130 17:20:30.778640 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:20:30 crc kubenswrapper[4712]: I0130 17:20:30.945549 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.041762 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-sg-core-conf-yaml\") pod \"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6\" (UID: \"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6\") " Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.042062 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-log-httpd\") pod \"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6\" (UID: \"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6\") " Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.042185 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-combined-ca-bundle\") pod \"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6\" (UID: \"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6\") " Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.042378 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4s9dk\" (UniqueName: \"kubernetes.io/projected/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-kube-api-access-4s9dk\") pod \"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6\" (UID: \"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6\") " Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.042521 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-scripts\") pod \"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6\" (UID: \"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6\") " Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.042779 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-run-httpd\") pod \"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6\" (UID: \"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6\") " Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.042975 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-config-data\") pod \"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6\" (UID: \"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6\") " Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.043080 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-ceilometer-tls-certs\") pod \"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6\" (UID: \"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6\") " Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.043273 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "405dc96e-1f7f-4707-8fa1-bcada3bdf2f6" (UID: "405dc96e-1f7f-4707-8fa1-bcada3bdf2f6"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.043286 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "405dc96e-1f7f-4707-8fa1-bcada3bdf2f6" (UID: "405dc96e-1f7f-4707-8fa1-bcada3bdf2f6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.049052 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-kube-api-access-4s9dk" (OuterVolumeSpecName: "kube-api-access-4s9dk") pod "405dc96e-1f7f-4707-8fa1-bcada3bdf2f6" (UID: "405dc96e-1f7f-4707-8fa1-bcada3bdf2f6"). InnerVolumeSpecName "kube-api-access-4s9dk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.049431 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-scripts" (OuterVolumeSpecName: "scripts") pod "405dc96e-1f7f-4707-8fa1-bcada3bdf2f6" (UID: "405dc96e-1f7f-4707-8fa1-bcada3bdf2f6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.088043 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "405dc96e-1f7f-4707-8fa1-bcada3bdf2f6" (UID: "405dc96e-1f7f-4707-8fa1-bcada3bdf2f6"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.116996 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "405dc96e-1f7f-4707-8fa1-bcada3bdf2f6" (UID: "405dc96e-1f7f-4707-8fa1-bcada3bdf2f6"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.150904 4712 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.150941 4712 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.150956 4712 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.150969 4712 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.150982 4712 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.150995 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4s9dk\" (UniqueName: \"kubernetes.io/projected/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-kube-api-access-4s9dk\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.151094 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "405dc96e-1f7f-4707-8fa1-bcada3bdf2f6" (UID: "405dc96e-1f7f-4707-8fa1-bcada3bdf2f6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.180125 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-config-data" (OuterVolumeSpecName: "config-data") pod "405dc96e-1f7f-4707-8fa1-bcada3bdf2f6" (UID: "405dc96e-1f7f-4707-8fa1-bcada3bdf2f6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.253222 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.253365 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.712908 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9b4fb242-8e9a-4863-97d6-4de76d132964","Type":"ContainerStarted","Data":"1fc6316ade875bca7ba884d55d08298ffc715a9ede9ca6dec581b1b6533abac2"} Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.713158 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9b4fb242-8e9a-4863-97d6-4de76d132964","Type":"ContainerStarted","Data":"df82836d120a963a281ee5368c1816463f82bc51ea2def2566f1354c548a3364"} Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.713169 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9b4fb242-8e9a-4863-97d6-4de76d132964","Type":"ContainerStarted","Data":"a54f6d45154d62d69f7c00475ebb668fc3c1f2fdf9edf17b2eae2fd79382c6aa"} Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.718836 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"405dc96e-1f7f-4707-8fa1-bcada3bdf2f6","Type":"ContainerDied","Data":"3edcb034b1e95d8088e0b320ab6a736f70ad99e66e948b14e46b43903da74e7a"} Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.718895 4712 scope.go:117] "RemoveContainer" containerID="efa8f423c92b2bf581e646157f666a01e4f5bb6574a16246f4bd5ec0db475800" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.718852 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.750135 4712 scope.go:117] "RemoveContainer" containerID="4034f571d41bb51708666b45d8ddf1abd0be7fdbf7e3e8cc0a06d879ac3353d9" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.755150 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.755123368 podStartE2EDuration="2.755123368s" podCreationTimestamp="2026-01-30 17:20:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:20:31.735346079 +0000 UTC m=+1568.642355568" watchObservedRunningTime="2026-01-30 17:20:31.755123368 +0000 UTC m=+1568.662132837" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.771302 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.772047 4712 scope.go:117] "RemoveContainer" containerID="dc8c1657f0e6eb393fa5465fb2fcc5791ef27bc1037302e094a2392431da2adf" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.782295 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.801550 4712 scope.go:117] "RemoveContainer" containerID="4bb83581a1d2614910ef4e8fab9322bef7192632a26d3a3d1f19583f9393eb18" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.832171 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="405dc96e-1f7f-4707-8fa1-bcada3bdf2f6" path="/var/lib/kubelet/pods/405dc96e-1f7f-4707-8fa1-bcada3bdf2f6/volumes" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.836717 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 30 17:20:31 crc kubenswrapper[4712]: E0130 17:20:31.839537 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="405dc96e-1f7f-4707-8fa1-bcada3bdf2f6" containerName="ceilometer-notification-agent" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.839573 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="405dc96e-1f7f-4707-8fa1-bcada3bdf2f6" containerName="ceilometer-notification-agent" Jan 30 17:20:31 crc kubenswrapper[4712]: E0130 17:20:31.839585 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="405dc96e-1f7f-4707-8fa1-bcada3bdf2f6" containerName="proxy-httpd" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.839594 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="405dc96e-1f7f-4707-8fa1-bcada3bdf2f6" containerName="proxy-httpd" Jan 30 17:20:31 crc kubenswrapper[4712]: E0130 17:20:31.839614 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="405dc96e-1f7f-4707-8fa1-bcada3bdf2f6" containerName="ceilometer-central-agent" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.839623 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="405dc96e-1f7f-4707-8fa1-bcada3bdf2f6" containerName="ceilometer-central-agent" Jan 30 17:20:31 crc kubenswrapper[4712]: E0130 17:20:31.839650 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="405dc96e-1f7f-4707-8fa1-bcada3bdf2f6" containerName="sg-core" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.839661 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="405dc96e-1f7f-4707-8fa1-bcada3bdf2f6" containerName="sg-core" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.840136 4712 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="405dc96e-1f7f-4707-8fa1-bcada3bdf2f6" containerName="ceilometer-notification-agent" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.840164 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="405dc96e-1f7f-4707-8fa1-bcada3bdf2f6" containerName="ceilometer-central-agent" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.840181 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="405dc96e-1f7f-4707-8fa1-bcada3bdf2f6" containerName="sg-core" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.840200 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="405dc96e-1f7f-4707-8fa1-bcada3bdf2f6" containerName="proxy-httpd" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.843914 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.844037 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.851256 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.851812 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.852170 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.967666 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9z5f\" (UniqueName: \"kubernetes.io/projected/776ccbe0-fd71-4c0d-877e-f0178e4c1262-kube-api-access-p9z5f\") pod \"ceilometer-0\" (UID: \"776ccbe0-fd71-4c0d-877e-f0178e4c1262\") " pod="openstack/ceilometer-0" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.967749 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/776ccbe0-fd71-4c0d-877e-f0178e4c1262-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"776ccbe0-fd71-4c0d-877e-f0178e4c1262\") " pod="openstack/ceilometer-0" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.967780 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/776ccbe0-fd71-4c0d-877e-f0178e4c1262-scripts\") pod \"ceilometer-0\" (UID: \"776ccbe0-fd71-4c0d-877e-f0178e4c1262\") " pod="openstack/ceilometer-0" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.967813 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/776ccbe0-fd71-4c0d-877e-f0178e4c1262-config-data\") pod \"ceilometer-0\" (UID: \"776ccbe0-fd71-4c0d-877e-f0178e4c1262\") " pod="openstack/ceilometer-0" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.967907 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/776ccbe0-fd71-4c0d-877e-f0178e4c1262-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"776ccbe0-fd71-4c0d-877e-f0178e4c1262\") " pod="openstack/ceilometer-0" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.967968 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/776ccbe0-fd71-4c0d-877e-f0178e4c1262-run-httpd\") pod \"ceilometer-0\" (UID: \"776ccbe0-fd71-4c0d-877e-f0178e4c1262\") " pod="openstack/ceilometer-0" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.968008 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/776ccbe0-fd71-4c0d-877e-f0178e4c1262-log-httpd\") pod \"ceilometer-0\" (UID: \"776ccbe0-fd71-4c0d-877e-f0178e4c1262\") " pod="openstack/ceilometer-0" Jan 30 17:20:31 crc kubenswrapper[4712]: I0130 17:20:31.968025 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/776ccbe0-fd71-4c0d-877e-f0178e4c1262-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"776ccbe0-fd71-4c0d-877e-f0178e4c1262\") " pod="openstack/ceilometer-0" Jan 30 17:20:32 crc kubenswrapper[4712]: I0130 17:20:32.069741 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9z5f\" (UniqueName: \"kubernetes.io/projected/776ccbe0-fd71-4c0d-877e-f0178e4c1262-kube-api-access-p9z5f\") pod \"ceilometer-0\" (UID: \"776ccbe0-fd71-4c0d-877e-f0178e4c1262\") " pod="openstack/ceilometer-0" Jan 30 17:20:32 crc kubenswrapper[4712]: I0130 17:20:32.069847 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/776ccbe0-fd71-4c0d-877e-f0178e4c1262-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"776ccbe0-fd71-4c0d-877e-f0178e4c1262\") " pod="openstack/ceilometer-0" Jan 30 17:20:32 crc kubenswrapper[4712]: I0130 17:20:32.069870 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/776ccbe0-fd71-4c0d-877e-f0178e4c1262-scripts\") pod \"ceilometer-0\" (UID: \"776ccbe0-fd71-4c0d-877e-f0178e4c1262\") " pod="openstack/ceilometer-0" Jan 30 17:20:32 crc kubenswrapper[4712]: I0130 17:20:32.069886 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/776ccbe0-fd71-4c0d-877e-f0178e4c1262-config-data\") pod \"ceilometer-0\" (UID: \"776ccbe0-fd71-4c0d-877e-f0178e4c1262\") " pod="openstack/ceilometer-0" Jan 30 17:20:32 crc kubenswrapper[4712]: I0130 17:20:32.069943 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/776ccbe0-fd71-4c0d-877e-f0178e4c1262-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"776ccbe0-fd71-4c0d-877e-f0178e4c1262\") " pod="openstack/ceilometer-0" Jan 30 17:20:32 crc kubenswrapper[4712]: I0130 17:20:32.069980 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/776ccbe0-fd71-4c0d-877e-f0178e4c1262-run-httpd\") pod \"ceilometer-0\" (UID: \"776ccbe0-fd71-4c0d-877e-f0178e4c1262\") " pod="openstack/ceilometer-0" Jan 30 17:20:32 crc kubenswrapper[4712]: I0130 17:20:32.070010 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/776ccbe0-fd71-4c0d-877e-f0178e4c1262-log-httpd\") pod \"ceilometer-0\" (UID: \"776ccbe0-fd71-4c0d-877e-f0178e4c1262\") " pod="openstack/ceilometer-0" Jan 30 17:20:32 crc kubenswrapper[4712]: I0130 17:20:32.070024 4712 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/776ccbe0-fd71-4c0d-877e-f0178e4c1262-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"776ccbe0-fd71-4c0d-877e-f0178e4c1262\") " pod="openstack/ceilometer-0" Jan 30 17:20:32 crc kubenswrapper[4712]: I0130 17:20:32.071045 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/776ccbe0-fd71-4c0d-877e-f0178e4c1262-run-httpd\") pod \"ceilometer-0\" (UID: \"776ccbe0-fd71-4c0d-877e-f0178e4c1262\") " pod="openstack/ceilometer-0" Jan 30 17:20:32 crc kubenswrapper[4712]: I0130 17:20:32.071099 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/776ccbe0-fd71-4c0d-877e-f0178e4c1262-log-httpd\") pod \"ceilometer-0\" (UID: \"776ccbe0-fd71-4c0d-877e-f0178e4c1262\") " pod="openstack/ceilometer-0" Jan 30 17:20:32 crc kubenswrapper[4712]: I0130 17:20:32.075502 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/776ccbe0-fd71-4c0d-877e-f0178e4c1262-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"776ccbe0-fd71-4c0d-877e-f0178e4c1262\") " pod="openstack/ceilometer-0" Jan 30 17:20:32 crc kubenswrapper[4712]: I0130 17:20:32.075666 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/776ccbe0-fd71-4c0d-877e-f0178e4c1262-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"776ccbe0-fd71-4c0d-877e-f0178e4c1262\") " pod="openstack/ceilometer-0" Jan 30 17:20:32 crc kubenswrapper[4712]: I0130 17:20:32.077450 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/776ccbe0-fd71-4c0d-877e-f0178e4c1262-scripts\") pod \"ceilometer-0\" (UID: \"776ccbe0-fd71-4c0d-877e-f0178e4c1262\") " pod="openstack/ceilometer-0" Jan 30 17:20:32 crc kubenswrapper[4712]: I0130 17:20:32.084083 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/776ccbe0-fd71-4c0d-877e-f0178e4c1262-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"776ccbe0-fd71-4c0d-877e-f0178e4c1262\") " pod="openstack/ceilometer-0" Jan 30 17:20:32 crc kubenswrapper[4712]: I0130 17:20:32.084338 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/776ccbe0-fd71-4c0d-877e-f0178e4c1262-config-data\") pod \"ceilometer-0\" (UID: \"776ccbe0-fd71-4c0d-877e-f0178e4c1262\") " pod="openstack/ceilometer-0" Jan 30 17:20:32 crc kubenswrapper[4712]: I0130 17:20:32.088561 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9z5f\" (UniqueName: \"kubernetes.io/projected/776ccbe0-fd71-4c0d-877e-f0178e4c1262-kube-api-access-p9z5f\") pod \"ceilometer-0\" (UID: \"776ccbe0-fd71-4c0d-877e-f0178e4c1262\") " pod="openstack/ceilometer-0" Jan 30 17:20:32 crc kubenswrapper[4712]: I0130 17:20:32.172932 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 30 17:20:32 crc kubenswrapper[4712]: I0130 17:20:32.654777 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 30 17:20:32 crc kubenswrapper[4712]: W0130 17:20:32.662989 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod776ccbe0_fd71_4c0d_877e_f0178e4c1262.slice/crio-70a7c822f558fc1e4c0d67cad600c35a67fdbeba5f78d3d89ab7684face1ed99 WatchSource:0}: Error finding container 70a7c822f558fc1e4c0d67cad600c35a67fdbeba5f78d3d89ab7684face1ed99: Status 404 returned error can't find the container with id 70a7c822f558fc1e4c0d67cad600c35a67fdbeba5f78d3d89ab7684face1ed99 Jan 30 17:20:32 crc kubenswrapper[4712]: I0130 17:20:32.730890 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"776ccbe0-fd71-4c0d-877e-f0178e4c1262","Type":"ContainerStarted","Data":"70a7c822f558fc1e4c0d67cad600c35a67fdbeba5f78d3d89ab7684face1ed99"} Jan 30 17:20:33 crc kubenswrapper[4712]: I0130 17:20:33.085951 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6b7bbf7cf9-dtb9l" Jan 30 17:20:33 crc kubenswrapper[4712]: I0130 17:20:33.177014 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-7b7cv"] Jan 30 17:20:33 crc kubenswrapper[4712]: I0130 17:20:33.177629 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-9b86998b5-7b7cv" podUID="6964fb1d-a7f1-4719-a748-14639d6a771c" containerName="dnsmasq-dns" containerID="cri-o://04defd45460f80104ff8b937c03637087d21d2c8420a9aead154b75962cc56d8" gracePeriod=10 Jan 30 17:20:33 crc kubenswrapper[4712]: I0130 17:20:33.305397 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-9b86998b5-7b7cv" podUID="6964fb1d-a7f1-4719-a748-14639d6a771c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.210:5353: connect: connection refused" Jan 30 17:20:33 crc kubenswrapper[4712]: I0130 17:20:33.762758 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"776ccbe0-fd71-4c0d-877e-f0178e4c1262","Type":"ContainerStarted","Data":"d7c2847e6873da314843f10f5a1edc47d102f60f3f89eab53cd78ef02a17e642"} Jan 30 17:20:33 crc kubenswrapper[4712]: I0130 17:20:33.774230 4712 generic.go:334] "Generic (PLEG): container finished" podID="6964fb1d-a7f1-4719-a748-14639d6a771c" containerID="04defd45460f80104ff8b937c03637087d21d2c8420a9aead154b75962cc56d8" exitCode=0 Jan 30 17:20:33 crc kubenswrapper[4712]: I0130 17:20:33.774275 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-7b7cv" event={"ID":"6964fb1d-a7f1-4719-a748-14639d6a771c","Type":"ContainerDied","Data":"04defd45460f80104ff8b937c03637087d21d2c8420a9aead154b75962cc56d8"} Jan 30 17:20:33 crc kubenswrapper[4712]: I0130 17:20:33.774317 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-9b86998b5-7b7cv" event={"ID":"6964fb1d-a7f1-4719-a748-14639d6a771c","Type":"ContainerDied","Data":"0dad728da033c2ffbfea298eb5befbf47574bc0baff04df2cd839ef8a5060cd7"} Jan 30 17:20:33 crc kubenswrapper[4712]: I0130 17:20:33.774328 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0dad728da033c2ffbfea298eb5befbf47574bc0baff04df2cd839ef8a5060cd7" Jan 30 17:20:33 crc kubenswrapper[4712]: I0130 17:20:33.791236 4712 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-7b7cv" Jan 30 17:20:33 crc kubenswrapper[4712]: I0130 17:20:33.954919 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6964fb1d-a7f1-4719-a748-14639d6a771c-dns-swift-storage-0\") pod \"6964fb1d-a7f1-4719-a748-14639d6a771c\" (UID: \"6964fb1d-a7f1-4719-a748-14639d6a771c\") " Jan 30 17:20:33 crc kubenswrapper[4712]: I0130 17:20:33.955124 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6964fb1d-a7f1-4719-a748-14639d6a771c-ovsdbserver-sb\") pod \"6964fb1d-a7f1-4719-a748-14639d6a771c\" (UID: \"6964fb1d-a7f1-4719-a748-14639d6a771c\") " Jan 30 17:20:33 crc kubenswrapper[4712]: I0130 17:20:33.955220 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6964fb1d-a7f1-4719-a748-14639d6a771c-dns-svc\") pod \"6964fb1d-a7f1-4719-a748-14639d6a771c\" (UID: \"6964fb1d-a7f1-4719-a748-14639d6a771c\") " Jan 30 17:20:33 crc kubenswrapper[4712]: I0130 17:20:33.955310 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6964fb1d-a7f1-4719-a748-14639d6a771c-config\") pod \"6964fb1d-a7f1-4719-a748-14639d6a771c\" (UID: \"6964fb1d-a7f1-4719-a748-14639d6a771c\") " Jan 30 17:20:33 crc kubenswrapper[4712]: I0130 17:20:33.955374 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2nsds\" (UniqueName: \"kubernetes.io/projected/6964fb1d-a7f1-4719-a748-14639d6a771c-kube-api-access-2nsds\") pod \"6964fb1d-a7f1-4719-a748-14639d6a771c\" (UID: \"6964fb1d-a7f1-4719-a748-14639d6a771c\") " Jan 30 17:20:33 crc kubenswrapper[4712]: I0130 17:20:33.955422 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6964fb1d-a7f1-4719-a748-14639d6a771c-ovsdbserver-nb\") pod \"6964fb1d-a7f1-4719-a748-14639d6a771c\" (UID: \"6964fb1d-a7f1-4719-a748-14639d6a771c\") " Jan 30 17:20:33 crc kubenswrapper[4712]: I0130 17:20:33.990969 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6964fb1d-a7f1-4719-a748-14639d6a771c-kube-api-access-2nsds" (OuterVolumeSpecName: "kube-api-access-2nsds") pod "6964fb1d-a7f1-4719-a748-14639d6a771c" (UID: "6964fb1d-a7f1-4719-a748-14639d6a771c"). InnerVolumeSpecName "kube-api-access-2nsds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:20:34 crc kubenswrapper[4712]: I0130 17:20:34.039484 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6964fb1d-a7f1-4719-a748-14639d6a771c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "6964fb1d-a7f1-4719-a748-14639d6a771c" (UID: "6964fb1d-a7f1-4719-a748-14639d6a771c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:20:34 crc kubenswrapper[4712]: I0130 17:20:34.047994 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6964fb1d-a7f1-4719-a748-14639d6a771c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "6964fb1d-a7f1-4719-a748-14639d6a771c" (UID: "6964fb1d-a7f1-4719-a748-14639d6a771c"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:20:34 crc kubenswrapper[4712]: I0130 17:20:34.059025 4712 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/6964fb1d-a7f1-4719-a748-14639d6a771c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:34 crc kubenswrapper[4712]: I0130 17:20:34.059058 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2nsds\" (UniqueName: \"kubernetes.io/projected/6964fb1d-a7f1-4719-a748-14639d6a771c-kube-api-access-2nsds\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:34 crc kubenswrapper[4712]: I0130 17:20:34.059068 4712 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/6964fb1d-a7f1-4719-a748-14639d6a771c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:34 crc kubenswrapper[4712]: I0130 17:20:34.065810 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6964fb1d-a7f1-4719-a748-14639d6a771c-config" (OuterVolumeSpecName: "config") pod "6964fb1d-a7f1-4719-a748-14639d6a771c" (UID: "6964fb1d-a7f1-4719-a748-14639d6a771c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:20:34 crc kubenswrapper[4712]: I0130 17:20:34.072452 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6964fb1d-a7f1-4719-a748-14639d6a771c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "6964fb1d-a7f1-4719-a748-14639d6a771c" (UID: "6964fb1d-a7f1-4719-a748-14639d6a771c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:20:34 crc kubenswrapper[4712]: I0130 17:20:34.072647 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6964fb1d-a7f1-4719-a748-14639d6a771c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "6964fb1d-a7f1-4719-a748-14639d6a771c" (UID: "6964fb1d-a7f1-4719-a748-14639d6a771c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:20:34 crc kubenswrapper[4712]: I0130 17:20:34.161041 4712 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/6964fb1d-a7f1-4719-a748-14639d6a771c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:34 crc kubenswrapper[4712]: I0130 17:20:34.161067 4712 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/6964fb1d-a7f1-4719-a748-14639d6a771c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:34 crc kubenswrapper[4712]: I0130 17:20:34.161076 4712 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6964fb1d-a7f1-4719-a748-14639d6a771c-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:34 crc kubenswrapper[4712]: I0130 17:20:34.786293 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"776ccbe0-fd71-4c0d-877e-f0178e4c1262","Type":"ContainerStarted","Data":"f55f13c0d18cd219a7583bffee8540f878e6bdf852ba9f3550b2b5613ac4c69f"} Jan 30 17:20:34 crc kubenswrapper[4712]: I0130 17:20:34.786328 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-9b86998b5-7b7cv" Jan 30 17:20:34 crc kubenswrapper[4712]: I0130 17:20:34.836870 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-7b7cv"] Jan 30 17:20:34 crc kubenswrapper[4712]: I0130 17:20:34.844629 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-9b86998b5-7b7cv"] Jan 30 17:20:35 crc kubenswrapper[4712]: I0130 17:20:35.257223 4712 scope.go:117] "RemoveContainer" containerID="be0fb1fbda6d0f9e95cae83778f43fa6053be8953acb630f5bfdf0b3314d29af" Jan 30 17:20:35 crc kubenswrapper[4712]: I0130 17:20:35.353297 4712 scope.go:117] "RemoveContainer" containerID="0b8da8be5294af16dc372027943eb73c6f0adbfba94a362c430a5c105cb8ce35" Jan 30 17:20:35 crc kubenswrapper[4712]: I0130 17:20:35.353318 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-64655dbc44-pvj2c" Jan 30 17:20:35 crc kubenswrapper[4712]: I0130 17:20:35.353497 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-64655dbc44-pvj2c" Jan 30 17:20:35 crc kubenswrapper[4712]: I0130 17:20:35.354772 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-64655dbc44-pvj2c" podUID="6a28b495-ecf0-409e-9558-ee794a46dbd1" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.156:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.156:8443: connect: connection refused" Jan 30 17:20:35 crc kubenswrapper[4712]: I0130 17:20:35.427255 4712 scope.go:117] "RemoveContainer" containerID="d4d0184806d44cb107882cf97cfdd22f429f4ff19dc32d6419d8f4820d31d23f" Jan 30 17:20:35 crc kubenswrapper[4712]: I0130 17:20:35.812122 4712 generic.go:334] "Generic (PLEG): container finished" podID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerID="33da2560c2b92663910c7a5cee80606f93009c5b03eae1dcf70e4946299645fb" exitCode=137 Jan 30 17:20:35 crc kubenswrapper[4712]: I0130 17:20:35.813519 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6964fb1d-a7f1-4719-a748-14639d6a771c" path="/var/lib/kubelet/pods/6964fb1d-a7f1-4719-a748-14639d6a771c/volumes" Jan 30 17:20:35 crc kubenswrapper[4712]: I0130 17:20:35.814251 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56f8b66d48-7wr47" event={"ID":"70154dd8-9d42-4a12-af9b-1be723ef892e","Type":"ContainerDied","Data":"33da2560c2b92663910c7a5cee80606f93009c5b03eae1dcf70e4946299645fb"} Jan 30 17:20:35 crc kubenswrapper[4712]: I0130 17:20:35.814360 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56f8b66d48-7wr47" event={"ID":"70154dd8-9d42-4a12-af9b-1be723ef892e","Type":"ContainerStarted","Data":"4b3189dc0e5e95f56ff7a7ab4af993cd6a3c5a0280c5d94b9bafcc777d386ef8"} Jan 30 17:20:35 crc kubenswrapper[4712]: I0130 17:20:35.814387 4712 scope.go:117] "RemoveContainer" containerID="7ea359681383c8315f1de54dfb90a6308c6bf781f9821a74bce1f1dbcac99cce" Jan 30 17:20:35 crc kubenswrapper[4712]: I0130 17:20:35.820310 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"776ccbe0-fd71-4c0d-877e-f0178e4c1262","Type":"ContainerStarted","Data":"eabf7bf98471e0c77ef14ff722d51aad209fee815776510511dbfcf2d5c658f0"} Jan 30 17:20:36 crc kubenswrapper[4712]: I0130 17:20:36.271419 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:20:36 crc kubenswrapper[4712]: I0130 17:20:36.271493 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:20:37 crc kubenswrapper[4712]: I0130 17:20:37.861458 4712 generic.go:334] "Generic (PLEG): container finished" podID="07e1f6ad-a075-4777-a81a-d021d3b25b37" containerID="86f837b53bb45f244a56c9f0b76e59de96863d52e2babb4ea69db0df5bbb6e1c" exitCode=0 Jan 30 17:20:37 crc kubenswrapper[4712]: I0130 17:20:37.861760 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-tm2l4" event={"ID":"07e1f6ad-a075-4777-a81a-d021d3b25b37","Type":"ContainerDied","Data":"86f837b53bb45f244a56c9f0b76e59de96863d52e2babb4ea69db0df5bbb6e1c"} Jan 30 17:20:39 crc kubenswrapper[4712]: I0130 17:20:39.434598 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-tm2l4" Jan 30 17:20:39 crc kubenswrapper[4712]: I0130 17:20:39.588149 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w294k\" (UniqueName: \"kubernetes.io/projected/07e1f6ad-a075-4777-a81a-d021d3b25b37-kube-api-access-w294k\") pod \"07e1f6ad-a075-4777-a81a-d021d3b25b37\" (UID: \"07e1f6ad-a075-4777-a81a-d021d3b25b37\") " Jan 30 17:20:39 crc kubenswrapper[4712]: I0130 17:20:39.588303 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07e1f6ad-a075-4777-a81a-d021d3b25b37-scripts\") pod \"07e1f6ad-a075-4777-a81a-d021d3b25b37\" (UID: \"07e1f6ad-a075-4777-a81a-d021d3b25b37\") " Jan 30 17:20:39 crc kubenswrapper[4712]: I0130 17:20:39.588348 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e1f6ad-a075-4777-a81a-d021d3b25b37-combined-ca-bundle\") pod \"07e1f6ad-a075-4777-a81a-d021d3b25b37\" (UID: \"07e1f6ad-a075-4777-a81a-d021d3b25b37\") " Jan 30 17:20:39 crc kubenswrapper[4712]: I0130 17:20:39.588455 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07e1f6ad-a075-4777-a81a-d021d3b25b37-config-data\") pod \"07e1f6ad-a075-4777-a81a-d021d3b25b37\" (UID: \"07e1f6ad-a075-4777-a81a-d021d3b25b37\") " Jan 30 17:20:39 crc kubenswrapper[4712]: I0130 17:20:39.616990 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07e1f6ad-a075-4777-a81a-d021d3b25b37-kube-api-access-w294k" (OuterVolumeSpecName: "kube-api-access-w294k") pod "07e1f6ad-a075-4777-a81a-d021d3b25b37" (UID: "07e1f6ad-a075-4777-a81a-d021d3b25b37"). InnerVolumeSpecName "kube-api-access-w294k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:20:39 crc kubenswrapper[4712]: I0130 17:20:39.627016 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07e1f6ad-a075-4777-a81a-d021d3b25b37-scripts" (OuterVolumeSpecName: "scripts") pod "07e1f6ad-a075-4777-a81a-d021d3b25b37" (UID: "07e1f6ad-a075-4777-a81a-d021d3b25b37"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:20:39 crc kubenswrapper[4712]: I0130 17:20:39.648119 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07e1f6ad-a075-4777-a81a-d021d3b25b37-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "07e1f6ad-a075-4777-a81a-d021d3b25b37" (UID: "07e1f6ad-a075-4777-a81a-d021d3b25b37"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:20:39 crc kubenswrapper[4712]: I0130 17:20:39.648247 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07e1f6ad-a075-4777-a81a-d021d3b25b37-config-data" (OuterVolumeSpecName: "config-data") pod "07e1f6ad-a075-4777-a81a-d021d3b25b37" (UID: "07e1f6ad-a075-4777-a81a-d021d3b25b37"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:20:39 crc kubenswrapper[4712]: I0130 17:20:39.690742 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w294k\" (UniqueName: \"kubernetes.io/projected/07e1f6ad-a075-4777-a81a-d021d3b25b37-kube-api-access-w294k\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:39 crc kubenswrapper[4712]: I0130 17:20:39.690777 4712 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07e1f6ad-a075-4777-a81a-d021d3b25b37-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:39 crc kubenswrapper[4712]: I0130 17:20:39.690805 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07e1f6ad-a075-4777-a81a-d021d3b25b37-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:39 crc kubenswrapper[4712]: I0130 17:20:39.690818 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07e1f6ad-a075-4777-a81a-d021d3b25b37-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:39 crc kubenswrapper[4712]: I0130 17:20:39.882866 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"776ccbe0-fd71-4c0d-877e-f0178e4c1262","Type":"ContainerStarted","Data":"281a470f955e4312b3cfb290e1593f67506ddd67b7553ed2dbf3fd11ddfab11a"} Jan 30 17:20:39 crc kubenswrapper[4712]: I0130 17:20:39.883826 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 30 17:20:39 crc kubenswrapper[4712]: I0130 17:20:39.885557 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-tm2l4" Jan 30 17:20:39 crc kubenswrapper[4712]: I0130 17:20:39.885231 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-tm2l4" event={"ID":"07e1f6ad-a075-4777-a81a-d021d3b25b37","Type":"ContainerDied","Data":"495967ac7c656d55f0163bddef9548badc7d081b845ff65b98a79914246fd1b7"} Jan 30 17:20:39 crc kubenswrapper[4712]: I0130 17:20:39.888836 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="495967ac7c656d55f0163bddef9548badc7d081b845ff65b98a79914246fd1b7" Jan 30 17:20:39 crc kubenswrapper[4712]: I0130 17:20:39.931524 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.363018611 podStartE2EDuration="8.931506366s" podCreationTimestamp="2026-01-30 17:20:31 +0000 UTC" firstStartedPulling="2026-01-30 17:20:32.677932674 +0000 UTC m=+1569.584942143" lastFinishedPulling="2026-01-30 17:20:39.246420429 +0000 UTC m=+1576.153429898" observedRunningTime="2026-01-30 17:20:39.912821375 +0000 UTC m=+1576.819830844" watchObservedRunningTime="2026-01-30 17:20:39.931506366 +0000 UTC m=+1576.838515835" Jan 30 17:20:40 crc kubenswrapper[4712]: I0130 17:20:40.080511 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:20:40 crc kubenswrapper[4712]: I0130 17:20:40.081133 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9b4fb242-8e9a-4863-97d6-4de76d132964" containerName="nova-api-log" containerID="cri-o://df82836d120a963a281ee5368c1816463f82bc51ea2def2566f1354c548a3364" gracePeriod=30 Jan 30 17:20:40 crc kubenswrapper[4712]: I0130 17:20:40.081662 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9b4fb242-8e9a-4863-97d6-4de76d132964" containerName="nova-api-api" containerID="cri-o://1fc6316ade875bca7ba884d55d08298ffc715a9ede9ca6dec581b1b6533abac2" gracePeriod=30 Jan 30 17:20:40 crc kubenswrapper[4712]: I0130 17:20:40.118536 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:20:40 crc kubenswrapper[4712]: I0130 17:20:40.118876 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="d6b1d7ef-cd70-40ba-a25a-7f80b07c16db" containerName="nova-scheduler-scheduler" containerID="cri-o://6dc8db051c12cfb8daadc39e3ec213bb2eec4eaeb21c24374dd664e967c725ff" gracePeriod=30 Jan 30 17:20:40 crc kubenswrapper[4712]: I0130 17:20:40.192427 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:20:40 crc kubenswrapper[4712]: I0130 17:20:40.192647 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3c8f0931-676e-406e-92fd-d6d09a065cf9" containerName="nova-metadata-log" containerID="cri-o://0630b320a5a3776024325401aeb687cdca25e5aa40f5315b83104471ff56069b" gracePeriod=30 Jan 30 17:20:40 crc kubenswrapper[4712]: I0130 17:20:40.193059 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3c8f0931-676e-406e-92fd-d6d09a065cf9" containerName="nova-metadata-metadata" containerID="cri-o://7b009d3b03aa306ea07f88e85058719f6d6428984ab9604b838edc55f9caac0d" gracePeriod=30 Jan 30 17:20:40 crc kubenswrapper[4712]: I0130 17:20:40.911609 4712 generic.go:334] "Generic (PLEG): container finished" 
podID="9b4fb242-8e9a-4863-97d6-4de76d132964" containerID="1fc6316ade875bca7ba884d55d08298ffc715a9ede9ca6dec581b1b6533abac2" exitCode=0 Jan 30 17:20:40 crc kubenswrapper[4712]: I0130 17:20:40.911640 4712 generic.go:334] "Generic (PLEG): container finished" podID="9b4fb242-8e9a-4863-97d6-4de76d132964" containerID="df82836d120a963a281ee5368c1816463f82bc51ea2def2566f1354c548a3364" exitCode=143 Jan 30 17:20:40 crc kubenswrapper[4712]: I0130 17:20:40.911686 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9b4fb242-8e9a-4863-97d6-4de76d132964","Type":"ContainerDied","Data":"1fc6316ade875bca7ba884d55d08298ffc715a9ede9ca6dec581b1b6533abac2"} Jan 30 17:20:40 crc kubenswrapper[4712]: I0130 17:20:40.911712 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9b4fb242-8e9a-4863-97d6-4de76d132964","Type":"ContainerDied","Data":"df82836d120a963a281ee5368c1816463f82bc51ea2def2566f1354c548a3364"} Jan 30 17:20:40 crc kubenswrapper[4712]: I0130 17:20:40.921328 4712 generic.go:334] "Generic (PLEG): container finished" podID="3c8f0931-676e-406e-92fd-d6d09a065cf9" containerID="0630b320a5a3776024325401aeb687cdca25e5aa40f5315b83104471ff56069b" exitCode=143 Jan 30 17:20:40 crc kubenswrapper[4712]: I0130 17:20:40.921720 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3c8f0931-676e-406e-92fd-d6d09a065cf9","Type":"ContainerDied","Data":"0630b320a5a3776024325401aeb687cdca25e5aa40f5315b83104471ff56069b"} Jan 30 17:20:41 crc kubenswrapper[4712]: I0130 17:20:41.279681 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 17:20:41 crc kubenswrapper[4712]: I0130 17:20:41.428892 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b4fb242-8e9a-4863-97d6-4de76d132964-config-data\") pod \"9b4fb242-8e9a-4863-97d6-4de76d132964\" (UID: \"9b4fb242-8e9a-4863-97d6-4de76d132964\") " Jan 30 17:20:41 crc kubenswrapper[4712]: I0130 17:20:41.428974 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b4fb242-8e9a-4863-97d6-4de76d132964-logs\") pod \"9b4fb242-8e9a-4863-97d6-4de76d132964\" (UID: \"9b4fb242-8e9a-4863-97d6-4de76d132964\") " Jan 30 17:20:41 crc kubenswrapper[4712]: I0130 17:20:41.429015 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b4fb242-8e9a-4863-97d6-4de76d132964-combined-ca-bundle\") pod \"9b4fb242-8e9a-4863-97d6-4de76d132964\" (UID: \"9b4fb242-8e9a-4863-97d6-4de76d132964\") " Jan 30 17:20:41 crc kubenswrapper[4712]: I0130 17:20:41.429112 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b4fb242-8e9a-4863-97d6-4de76d132964-internal-tls-certs\") pod \"9b4fb242-8e9a-4863-97d6-4de76d132964\" (UID: \"9b4fb242-8e9a-4863-97d6-4de76d132964\") " Jan 30 17:20:41 crc kubenswrapper[4712]: I0130 17:20:41.429162 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b4fb242-8e9a-4863-97d6-4de76d132964-public-tls-certs\") pod \"9b4fb242-8e9a-4863-97d6-4de76d132964\" (UID: \"9b4fb242-8e9a-4863-97d6-4de76d132964\") " Jan 30 17:20:41 crc kubenswrapper[4712]: I0130 17:20:41.429189 4712 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp497\" (UniqueName: \"kubernetes.io/projected/9b4fb242-8e9a-4863-97d6-4de76d132964-kube-api-access-cp497\") pod \"9b4fb242-8e9a-4863-97d6-4de76d132964\" (UID: \"9b4fb242-8e9a-4863-97d6-4de76d132964\") " Jan 30 17:20:41 crc kubenswrapper[4712]: I0130 17:20:41.429294 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b4fb242-8e9a-4863-97d6-4de76d132964-logs" (OuterVolumeSpecName: "logs") pod "9b4fb242-8e9a-4863-97d6-4de76d132964" (UID: "9b4fb242-8e9a-4863-97d6-4de76d132964"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:20:41 crc kubenswrapper[4712]: I0130 17:20:41.429653 4712 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9b4fb242-8e9a-4863-97d6-4de76d132964-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:41 crc kubenswrapper[4712]: I0130 17:20:41.447967 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b4fb242-8e9a-4863-97d6-4de76d132964-kube-api-access-cp497" (OuterVolumeSpecName: "kube-api-access-cp497") pod "9b4fb242-8e9a-4863-97d6-4de76d132964" (UID: "9b4fb242-8e9a-4863-97d6-4de76d132964"). InnerVolumeSpecName "kube-api-access-cp497". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:20:41 crc kubenswrapper[4712]: I0130 17:20:41.463334 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b4fb242-8e9a-4863-97d6-4de76d132964-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9b4fb242-8e9a-4863-97d6-4de76d132964" (UID: "9b4fb242-8e9a-4863-97d6-4de76d132964"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:20:41 crc kubenswrapper[4712]: I0130 17:20:41.482915 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b4fb242-8e9a-4863-97d6-4de76d132964-config-data" (OuterVolumeSpecName: "config-data") pod "9b4fb242-8e9a-4863-97d6-4de76d132964" (UID: "9b4fb242-8e9a-4863-97d6-4de76d132964"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:20:41 crc kubenswrapper[4712]: I0130 17:20:41.505547 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b4fb242-8e9a-4863-97d6-4de76d132964-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "9b4fb242-8e9a-4863-97d6-4de76d132964" (UID: "9b4fb242-8e9a-4863-97d6-4de76d132964"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:20:41 crc kubenswrapper[4712]: I0130 17:20:41.506236 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b4fb242-8e9a-4863-97d6-4de76d132964-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "9b4fb242-8e9a-4863-97d6-4de76d132964" (UID: "9b4fb242-8e9a-4863-97d6-4de76d132964"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:20:41 crc kubenswrapper[4712]: I0130 17:20:41.531622 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9b4fb242-8e9a-4863-97d6-4de76d132964-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:41 crc kubenswrapper[4712]: I0130 17:20:41.531668 4712 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b4fb242-8e9a-4863-97d6-4de76d132964-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:41 crc kubenswrapper[4712]: I0130 17:20:41.531680 4712 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9b4fb242-8e9a-4863-97d6-4de76d132964-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:41 crc kubenswrapper[4712]: I0130 17:20:41.531691 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cp497\" (UniqueName: \"kubernetes.io/projected/9b4fb242-8e9a-4863-97d6-4de76d132964-kube-api-access-cp497\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:41 crc kubenswrapper[4712]: I0130 17:20:41.531703 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9b4fb242-8e9a-4863-97d6-4de76d132964-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:41 crc kubenswrapper[4712]: I0130 17:20:41.934756 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9b4fb242-8e9a-4863-97d6-4de76d132964","Type":"ContainerDied","Data":"a54f6d45154d62d69f7c00475ebb668fc3c1f2fdf9edf17b2eae2fd79382c6aa"} Jan 30 17:20:41 crc kubenswrapper[4712]: I0130 17:20:41.934813 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 30 17:20:41 crc kubenswrapper[4712]: I0130 17:20:41.934824 4712 scope.go:117] "RemoveContainer" containerID="1fc6316ade875bca7ba884d55d08298ffc715a9ede9ca6dec581b1b6533abac2" Jan 30 17:20:41 crc kubenswrapper[4712]: I0130 17:20:41.965148 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:20:41 crc kubenswrapper[4712]: I0130 17:20:41.969501 4712 scope.go:117] "RemoveContainer" containerID="df82836d120a963a281ee5368c1816463f82bc51ea2def2566f1354c548a3364" Jan 30 17:20:41 crc kubenswrapper[4712]: I0130 17:20:41.975186 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:20:41 crc kubenswrapper[4712]: I0130 17:20:41.995127 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 30 17:20:41 crc kubenswrapper[4712]: E0130 17:20:41.995521 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b4fb242-8e9a-4863-97d6-4de76d132964" containerName="nova-api-api" Jan 30 17:20:41 crc kubenswrapper[4712]: I0130 17:20:41.995537 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b4fb242-8e9a-4863-97d6-4de76d132964" containerName="nova-api-api" Jan 30 17:20:41 crc kubenswrapper[4712]: E0130 17:20:41.995551 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6964fb1d-a7f1-4719-a748-14639d6a771c" containerName="init" Jan 30 17:20:41 crc kubenswrapper[4712]: I0130 17:20:41.995557 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="6964fb1d-a7f1-4719-a748-14639d6a771c" containerName="init" Jan 30 17:20:41 crc kubenswrapper[4712]: E0130 17:20:41.995579 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07e1f6ad-a075-4777-a81a-d021d3b25b37" containerName="nova-manage" Jan 30 17:20:41 crc kubenswrapper[4712]: I0130 17:20:41.995587 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="07e1f6ad-a075-4777-a81a-d021d3b25b37" containerName="nova-manage" Jan 30 17:20:41 crc kubenswrapper[4712]: E0130 17:20:41.995602 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b4fb242-8e9a-4863-97d6-4de76d132964" containerName="nova-api-log" Jan 30 17:20:41 crc kubenswrapper[4712]: I0130 17:20:41.995608 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b4fb242-8e9a-4863-97d6-4de76d132964" containerName="nova-api-log" Jan 30 17:20:41 crc kubenswrapper[4712]: E0130 17:20:41.995626 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6964fb1d-a7f1-4719-a748-14639d6a771c" containerName="dnsmasq-dns" Jan 30 17:20:41 crc kubenswrapper[4712]: I0130 17:20:41.995632 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="6964fb1d-a7f1-4719-a748-14639d6a771c" containerName="dnsmasq-dns" Jan 30 17:20:41 crc kubenswrapper[4712]: I0130 17:20:41.995788 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="07e1f6ad-a075-4777-a81a-d021d3b25b37" containerName="nova-manage" Jan 30 17:20:41 crc kubenswrapper[4712]: I0130 17:20:41.995815 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b4fb242-8e9a-4863-97d6-4de76d132964" containerName="nova-api-api" Jan 30 17:20:41 crc kubenswrapper[4712]: I0130 17:20:41.995826 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b4fb242-8e9a-4863-97d6-4de76d132964" containerName="nova-api-log" Jan 30 17:20:41 crc kubenswrapper[4712]: I0130 17:20:41.995839 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="6964fb1d-a7f1-4719-a748-14639d6a771c" 
containerName="dnsmasq-dns" Jan 30 17:20:42 crc kubenswrapper[4712]: I0130 17:20:41.997164 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 17:20:42 crc kubenswrapper[4712]: I0130 17:20:42.009266 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:20:42 crc kubenswrapper[4712]: I0130 17:20:42.014198 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 30 17:20:42 crc kubenswrapper[4712]: I0130 17:20:42.014400 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 30 17:20:42 crc kubenswrapper[4712]: I0130 17:20:42.014515 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 30 17:20:42 crc kubenswrapper[4712]: I0130 17:20:42.144627 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdc1ab7c-d592-4e45-8bbc-1ecc967bad26-public-tls-certs\") pod \"nova-api-0\" (UID: \"fdc1ab7c-d592-4e45-8bbc-1ecc967bad26\") " pod="openstack/nova-api-0" Jan 30 17:20:42 crc kubenswrapper[4712]: I0130 17:20:42.144703 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdc1ab7c-d592-4e45-8bbc-1ecc967bad26-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fdc1ab7c-d592-4e45-8bbc-1ecc967bad26\") " pod="openstack/nova-api-0" Jan 30 17:20:42 crc kubenswrapper[4712]: I0130 17:20:42.144768 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz4zm\" (UniqueName: \"kubernetes.io/projected/fdc1ab7c-d592-4e45-8bbc-1ecc967bad26-kube-api-access-gz4zm\") pod \"nova-api-0\" (UID: \"fdc1ab7c-d592-4e45-8bbc-1ecc967bad26\") " pod="openstack/nova-api-0" Jan 30 17:20:42 crc kubenswrapper[4712]: I0130 17:20:42.144871 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdc1ab7c-d592-4e45-8bbc-1ecc967bad26-config-data\") pod \"nova-api-0\" (UID: \"fdc1ab7c-d592-4e45-8bbc-1ecc967bad26\") " pod="openstack/nova-api-0" Jan 30 17:20:42 crc kubenswrapper[4712]: I0130 17:20:42.145003 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc1ab7c-d592-4e45-8bbc-1ecc967bad26-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fdc1ab7c-d592-4e45-8bbc-1ecc967bad26\") " pod="openstack/nova-api-0" Jan 30 17:20:42 crc kubenswrapper[4712]: I0130 17:20:42.145061 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdc1ab7c-d592-4e45-8bbc-1ecc967bad26-logs\") pod \"nova-api-0\" (UID: \"fdc1ab7c-d592-4e45-8bbc-1ecc967bad26\") " pod="openstack/nova-api-0" Jan 30 17:20:42 crc kubenswrapper[4712]: I0130 17:20:42.248201 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc1ab7c-d592-4e45-8bbc-1ecc967bad26-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fdc1ab7c-d592-4e45-8bbc-1ecc967bad26\") " pod="openstack/nova-api-0" Jan 30 17:20:42 crc kubenswrapper[4712]: I0130 17:20:42.248567 4712 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdc1ab7c-d592-4e45-8bbc-1ecc967bad26-logs\") pod \"nova-api-0\" (UID: \"fdc1ab7c-d592-4e45-8bbc-1ecc967bad26\") " pod="openstack/nova-api-0" Jan 30 17:20:42 crc kubenswrapper[4712]: I0130 17:20:42.248735 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdc1ab7c-d592-4e45-8bbc-1ecc967bad26-public-tls-certs\") pod \"nova-api-0\" (UID: \"fdc1ab7c-d592-4e45-8bbc-1ecc967bad26\") " pod="openstack/nova-api-0" Jan 30 17:20:42 crc kubenswrapper[4712]: I0130 17:20:42.248877 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdc1ab7c-d592-4e45-8bbc-1ecc967bad26-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fdc1ab7c-d592-4e45-8bbc-1ecc967bad26\") " pod="openstack/nova-api-0" Jan 30 17:20:42 crc kubenswrapper[4712]: I0130 17:20:42.249013 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz4zm\" (UniqueName: \"kubernetes.io/projected/fdc1ab7c-d592-4e45-8bbc-1ecc967bad26-kube-api-access-gz4zm\") pod \"nova-api-0\" (UID: \"fdc1ab7c-d592-4e45-8bbc-1ecc967bad26\") " pod="openstack/nova-api-0" Jan 30 17:20:42 crc kubenswrapper[4712]: I0130 17:20:42.249205 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdc1ab7c-d592-4e45-8bbc-1ecc967bad26-config-data\") pod \"nova-api-0\" (UID: \"fdc1ab7c-d592-4e45-8bbc-1ecc967bad26\") " pod="openstack/nova-api-0" Jan 30 17:20:42 crc kubenswrapper[4712]: I0130 17:20:42.249023 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fdc1ab7c-d592-4e45-8bbc-1ecc967bad26-logs\") pod \"nova-api-0\" (UID: \"fdc1ab7c-d592-4e45-8bbc-1ecc967bad26\") " pod="openstack/nova-api-0" Jan 30 17:20:42 crc kubenswrapper[4712]: I0130 17:20:42.252779 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdc1ab7c-d592-4e45-8bbc-1ecc967bad26-internal-tls-certs\") pod \"nova-api-0\" (UID: \"fdc1ab7c-d592-4e45-8bbc-1ecc967bad26\") " pod="openstack/nova-api-0" Jan 30 17:20:42 crc kubenswrapper[4712]: I0130 17:20:42.253264 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fdc1ab7c-d592-4e45-8bbc-1ecc967bad26-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fdc1ab7c-d592-4e45-8bbc-1ecc967bad26\") " pod="openstack/nova-api-0" Jan 30 17:20:42 crc kubenswrapper[4712]: I0130 17:20:42.253906 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fdc1ab7c-d592-4e45-8bbc-1ecc967bad26-public-tls-certs\") pod \"nova-api-0\" (UID: \"fdc1ab7c-d592-4e45-8bbc-1ecc967bad26\") " pod="openstack/nova-api-0" Jan 30 17:20:42 crc kubenswrapper[4712]: I0130 17:20:42.271242 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fdc1ab7c-d592-4e45-8bbc-1ecc967bad26-config-data\") pod \"nova-api-0\" (UID: \"fdc1ab7c-d592-4e45-8bbc-1ecc967bad26\") " pod="openstack/nova-api-0" Jan 30 17:20:42 crc kubenswrapper[4712]: I0130 17:20:42.282477 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz4zm\" (UniqueName: 
\"kubernetes.io/projected/fdc1ab7c-d592-4e45-8bbc-1ecc967bad26-kube-api-access-gz4zm\") pod \"nova-api-0\" (UID: \"fdc1ab7c-d592-4e45-8bbc-1ecc967bad26\") " pod="openstack/nova-api-0" Jan 30 17:20:42 crc kubenswrapper[4712]: I0130 17:20:42.315994 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 30 17:20:42 crc kubenswrapper[4712]: E0130 17:20:42.652223 4712 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6dc8db051c12cfb8daadc39e3ec213bb2eec4eaeb21c24374dd664e967c725ff" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 17:20:42 crc kubenswrapper[4712]: E0130 17:20:42.654831 4712 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6dc8db051c12cfb8daadc39e3ec213bb2eec4eaeb21c24374dd664e967c725ff" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 17:20:42 crc kubenswrapper[4712]: E0130 17:20:42.656835 4712 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="6dc8db051c12cfb8daadc39e3ec213bb2eec4eaeb21c24374dd664e967c725ff" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 30 17:20:42 crc kubenswrapper[4712]: E0130 17:20:42.656886 4712 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="d6b1d7ef-cd70-40ba-a25a-7f80b07c16db" containerName="nova-scheduler-scheduler" Jan 30 17:20:42 crc kubenswrapper[4712]: I0130 17:20:42.848385 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 30 17:20:42 crc kubenswrapper[4712]: I0130 17:20:42.949316 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fdc1ab7c-d592-4e45-8bbc-1ecc967bad26","Type":"ContainerStarted","Data":"8c256aeb60abc416cbf746f260e90ea07c377898b147693104919b4065721630"} Jan 30 17:20:43 crc kubenswrapper[4712]: I0130 17:20:43.634566 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="3c8f0931-676e-406e-92fd-d6d09a065cf9" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.213:8775/\": read tcp 10.217.0.2:37836->10.217.0.213:8775: read: connection reset by peer" Jan 30 17:20:43 crc kubenswrapper[4712]: I0130 17:20:43.634607 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="3c8f0931-676e-406e-92fd-d6d09a065cf9" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.213:8775/\": read tcp 10.217.0.2:37848->10.217.0.213:8775: read: connection reset by peer" Jan 30 17:20:43 crc kubenswrapper[4712]: I0130 17:20:43.762030 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 17:20:43 crc kubenswrapper[4712]: I0130 17:20:43.822958 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6b1d7ef-cd70-40ba-a25a-7f80b07c16db-combined-ca-bundle\") pod \"d6b1d7ef-cd70-40ba-a25a-7f80b07c16db\" (UID: \"d6b1d7ef-cd70-40ba-a25a-7f80b07c16db\") " Jan 30 17:20:43 crc kubenswrapper[4712]: I0130 17:20:43.823010 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvdjj\" (UniqueName: \"kubernetes.io/projected/d6b1d7ef-cd70-40ba-a25a-7f80b07c16db-kube-api-access-mvdjj\") pod \"d6b1d7ef-cd70-40ba-a25a-7f80b07c16db\" (UID: \"d6b1d7ef-cd70-40ba-a25a-7f80b07c16db\") " Jan 30 17:20:43 crc kubenswrapper[4712]: I0130 17:20:43.823109 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6b1d7ef-cd70-40ba-a25a-7f80b07c16db-config-data\") pod \"d6b1d7ef-cd70-40ba-a25a-7f80b07c16db\" (UID: \"d6b1d7ef-cd70-40ba-a25a-7f80b07c16db\") " Jan 30 17:20:43 crc kubenswrapper[4712]: I0130 17:20:43.851488 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6b1d7ef-cd70-40ba-a25a-7f80b07c16db-kube-api-access-mvdjj" (OuterVolumeSpecName: "kube-api-access-mvdjj") pod "d6b1d7ef-cd70-40ba-a25a-7f80b07c16db" (UID: "d6b1d7ef-cd70-40ba-a25a-7f80b07c16db"). InnerVolumeSpecName "kube-api-access-mvdjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:20:43 crc kubenswrapper[4712]: I0130 17:20:43.881700 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b4fb242-8e9a-4863-97d6-4de76d132964" path="/var/lib/kubelet/pods/9b4fb242-8e9a-4863-97d6-4de76d132964/volumes" Jan 30 17:20:43 crc kubenswrapper[4712]: I0130 17:20:43.926053 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6b1d7ef-cd70-40ba-a25a-7f80b07c16db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d6b1d7ef-cd70-40ba-a25a-7f80b07c16db" (UID: "d6b1d7ef-cd70-40ba-a25a-7f80b07c16db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:20:43 crc kubenswrapper[4712]: I0130 17:20:43.932770 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6b1d7ef-cd70-40ba-a25a-7f80b07c16db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:43 crc kubenswrapper[4712]: I0130 17:20:43.932837 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvdjj\" (UniqueName: \"kubernetes.io/projected/d6b1d7ef-cd70-40ba-a25a-7f80b07c16db-kube-api-access-mvdjj\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:43 crc kubenswrapper[4712]: I0130 17:20:43.932990 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6b1d7ef-cd70-40ba-a25a-7f80b07c16db-config-data" (OuterVolumeSpecName: "config-data") pod "d6b1d7ef-cd70-40ba-a25a-7f80b07c16db" (UID: "d6b1d7ef-cd70-40ba-a25a-7f80b07c16db"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:20:43 crc kubenswrapper[4712]: I0130 17:20:43.996807 4712 generic.go:334] "Generic (PLEG): container finished" podID="d6b1d7ef-cd70-40ba-a25a-7f80b07c16db" containerID="6dc8db051c12cfb8daadc39e3ec213bb2eec4eaeb21c24374dd664e967c725ff" exitCode=0 Jan 30 17:20:43 crc kubenswrapper[4712]: I0130 17:20:43.996920 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.012950 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d6b1d7ef-cd70-40ba-a25a-7f80b07c16db","Type":"ContainerDied","Data":"6dc8db051c12cfb8daadc39e3ec213bb2eec4eaeb21c24374dd664e967c725ff"} Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.013021 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d6b1d7ef-cd70-40ba-a25a-7f80b07c16db","Type":"ContainerDied","Data":"fdb37de2a6e7ef20a85dcbce111ea76b72b4b1bafadd8bb9d48245dde33819a5"} Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.013043 4712 scope.go:117] "RemoveContainer" containerID="6dc8db051c12cfb8daadc39e3ec213bb2eec4eaeb21c24374dd664e967c725ff" Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.016856 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fdc1ab7c-d592-4e45-8bbc-1ecc967bad26","Type":"ContainerStarted","Data":"3854d1556c0b0cec583164619ddf040fa8cdf9293b295970b41b8525f2521ae0"} Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.016903 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fdc1ab7c-d592-4e45-8bbc-1ecc967bad26","Type":"ContainerStarted","Data":"bb410f803a99f9ad610ba2ff135958560cc4d4961b0ce996d93b31c94d8f7e66"} Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.028042 4712 generic.go:334] "Generic (PLEG): container finished" podID="3c8f0931-676e-406e-92fd-d6d09a065cf9" containerID="7b009d3b03aa306ea07f88e85058719f6d6428984ab9604b838edc55f9caac0d" exitCode=0 Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.028359 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3c8f0931-676e-406e-92fd-d6d09a065cf9","Type":"ContainerDied","Data":"7b009d3b03aa306ea07f88e85058719f6d6428984ab9604b838edc55f9caac0d"} Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.034432 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6b1d7ef-cd70-40ba-a25a-7f80b07c16db-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.046986 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.04696529 podStartE2EDuration="3.04696529s" podCreationTimestamp="2026-01-30 17:20:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:20:44.040448163 +0000 UTC m=+1580.947457622" watchObservedRunningTime="2026-01-30 17:20:44.04696529 +0000 UTC m=+1580.953974759" Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.081397 4712 scope.go:117] "RemoveContainer" containerID="6dc8db051c12cfb8daadc39e3ec213bb2eec4eaeb21c24374dd664e967c725ff" Jan 30 17:20:44 crc kubenswrapper[4712]: E0130 17:20:44.082198 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"6dc8db051c12cfb8daadc39e3ec213bb2eec4eaeb21c24374dd664e967c725ff\": container with ID starting with 6dc8db051c12cfb8daadc39e3ec213bb2eec4eaeb21c24374dd664e967c725ff not found: ID does not exist" containerID="6dc8db051c12cfb8daadc39e3ec213bb2eec4eaeb21c24374dd664e967c725ff" Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.082309 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6dc8db051c12cfb8daadc39e3ec213bb2eec4eaeb21c24374dd664e967c725ff"} err="failed to get container status \"6dc8db051c12cfb8daadc39e3ec213bb2eec4eaeb21c24374dd664e967c725ff\": rpc error: code = NotFound desc = could not find container \"6dc8db051c12cfb8daadc39e3ec213bb2eec4eaeb21c24374dd664e967c725ff\": container with ID starting with 6dc8db051c12cfb8daadc39e3ec213bb2eec4eaeb21c24374dd664e967c725ff not found: ID does not exist" Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.103840 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.116875 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.145369 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:20:44 crc kubenswrapper[4712]: E0130 17:20:44.149309 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6b1d7ef-cd70-40ba-a25a-7f80b07c16db" containerName="nova-scheduler-scheduler" Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.149427 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6b1d7ef-cd70-40ba-a25a-7f80b07c16db" containerName="nova-scheduler-scheduler" Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.149767 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6b1d7ef-cd70-40ba-a25a-7f80b07c16db" containerName="nova-scheduler-scheduler" Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.151078 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.153836 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.159198 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.165555 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.242486 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c8f0931-676e-406e-92fd-d6d09a065cf9-nova-metadata-tls-certs\") pod \"3c8f0931-676e-406e-92fd-d6d09a065cf9\" (UID: \"3c8f0931-676e-406e-92fd-d6d09a065cf9\") " Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.242557 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c8f0931-676e-406e-92fd-d6d09a065cf9-logs\") pod \"3c8f0931-676e-406e-92fd-d6d09a065cf9\" (UID: \"3c8f0931-676e-406e-92fd-d6d09a065cf9\") " Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.242586 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c8f0931-676e-406e-92fd-d6d09a065cf9-combined-ca-bundle\") pod \"3c8f0931-676e-406e-92fd-d6d09a065cf9\" (UID: \"3c8f0931-676e-406e-92fd-d6d09a065cf9\") " Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.242647 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c8f0931-676e-406e-92fd-d6d09a065cf9-config-data\") pod \"3c8f0931-676e-406e-92fd-d6d09a065cf9\" (UID: \"3c8f0931-676e-406e-92fd-d6d09a065cf9\") " Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.242689 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hn2xd\" (UniqueName: \"kubernetes.io/projected/3c8f0931-676e-406e-92fd-d6d09a065cf9-kube-api-access-hn2xd\") pod \"3c8f0931-676e-406e-92fd-d6d09a065cf9\" (UID: \"3c8f0931-676e-406e-92fd-d6d09a065cf9\") " Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.243020 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/503aad53-052a-4eab-b8b9-ceb01fda3dc7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"503aad53-052a-4eab-b8b9-ceb01fda3dc7\") " pod="openstack/nova-scheduler-0" Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.243082 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/503aad53-052a-4eab-b8b9-ceb01fda3dc7-config-data\") pod \"nova-scheduler-0\" (UID: \"503aad53-052a-4eab-b8b9-ceb01fda3dc7\") " pod="openstack/nova-scheduler-0" Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.243131 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-292pt\" (UniqueName: \"kubernetes.io/projected/503aad53-052a-4eab-b8b9-ceb01fda3dc7-kube-api-access-292pt\") pod \"nova-scheduler-0\" (UID: \"503aad53-052a-4eab-b8b9-ceb01fda3dc7\") " pod="openstack/nova-scheduler-0" Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.244214 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3c8f0931-676e-406e-92fd-d6d09a065cf9-logs" (OuterVolumeSpecName: "logs") pod "3c8f0931-676e-406e-92fd-d6d09a065cf9" (UID: "3c8f0931-676e-406e-92fd-d6d09a065cf9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.246716 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c8f0931-676e-406e-92fd-d6d09a065cf9-kube-api-access-hn2xd" (OuterVolumeSpecName: "kube-api-access-hn2xd") pod "3c8f0931-676e-406e-92fd-d6d09a065cf9" (UID: "3c8f0931-676e-406e-92fd-d6d09a065cf9"). InnerVolumeSpecName "kube-api-access-hn2xd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.308106 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c8f0931-676e-406e-92fd-d6d09a065cf9-config-data" (OuterVolumeSpecName: "config-data") pod "3c8f0931-676e-406e-92fd-d6d09a065cf9" (UID: "3c8f0931-676e-406e-92fd-d6d09a065cf9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.317505 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c8f0931-676e-406e-92fd-d6d09a065cf9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3c8f0931-676e-406e-92fd-d6d09a065cf9" (UID: "3c8f0931-676e-406e-92fd-d6d09a065cf9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.344531 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/503aad53-052a-4eab-b8b9-ceb01fda3dc7-config-data\") pod \"nova-scheduler-0\" (UID: \"503aad53-052a-4eab-b8b9-ceb01fda3dc7\") " pod="openstack/nova-scheduler-0" Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.344629 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-292pt\" (UniqueName: \"kubernetes.io/projected/503aad53-052a-4eab-b8b9-ceb01fda3dc7-kube-api-access-292pt\") pod \"nova-scheduler-0\" (UID: \"503aad53-052a-4eab-b8b9-ceb01fda3dc7\") " pod="openstack/nova-scheduler-0" Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.344775 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/503aad53-052a-4eab-b8b9-ceb01fda3dc7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"503aad53-052a-4eab-b8b9-ceb01fda3dc7\") " pod="openstack/nova-scheduler-0" Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.344881 4712 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3c8f0931-676e-406e-92fd-d6d09a065cf9-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.344898 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c8f0931-676e-406e-92fd-d6d09a065cf9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.344912 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3c8f0931-676e-406e-92fd-d6d09a065cf9-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.344923 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hn2xd\" (UniqueName: \"kubernetes.io/projected/3c8f0931-676e-406e-92fd-d6d09a065cf9-kube-api-access-hn2xd\") on node \"crc\" DevicePath 
\"\"" Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.350411 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/503aad53-052a-4eab-b8b9-ceb01fda3dc7-config-data\") pod \"nova-scheduler-0\" (UID: \"503aad53-052a-4eab-b8b9-ceb01fda3dc7\") " pod="openstack/nova-scheduler-0" Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.353504 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/503aad53-052a-4eab-b8b9-ceb01fda3dc7-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"503aad53-052a-4eab-b8b9-ceb01fda3dc7\") " pod="openstack/nova-scheduler-0" Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.372668 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-292pt\" (UniqueName: \"kubernetes.io/projected/503aad53-052a-4eab-b8b9-ceb01fda3dc7-kube-api-access-292pt\") pod \"nova-scheduler-0\" (UID: \"503aad53-052a-4eab-b8b9-ceb01fda3dc7\") " pod="openstack/nova-scheduler-0" Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.372880 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c8f0931-676e-406e-92fd-d6d09a065cf9-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "3c8f0931-676e-406e-92fd-d6d09a065cf9" (UID: "3c8f0931-676e-406e-92fd-d6d09a065cf9"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.447985 4712 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/3c8f0931-676e-406e-92fd-d6d09a065cf9-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 17:20:44 crc kubenswrapper[4712]: I0130 17:20:44.488287 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.038719 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3c8f0931-676e-406e-92fd-d6d09a065cf9","Type":"ContainerDied","Data":"491bd1bf9d348930841048652e60d985c0c59150878fb31abe7c63c1b56283f4"} Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.039088 4712 scope.go:117] "RemoveContainer" containerID="7b009d3b03aa306ea07f88e85058719f6d6428984ab9604b838edc55f9caac0d" Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.038926 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.076906 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-56f8b66d48-7wr47" podUID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused" Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.079626 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-56f8b66d48-7wr47" Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.079706 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-56f8b66d48-7wr47" Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.082320 4712 scope.go:117] "RemoveContainer" containerID="0630b320a5a3776024325401aeb687cdca25e5aa40f5315b83104471ff56069b" Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.087953 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.115880 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.131861 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:20:45 crc kubenswrapper[4712]: E0130 17:20:45.132332 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c8f0931-676e-406e-92fd-d6d09a065cf9" containerName="nova-metadata-metadata" Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.132352 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c8f0931-676e-406e-92fd-d6d09a065cf9" containerName="nova-metadata-metadata" Jan 30 17:20:45 crc kubenswrapper[4712]: E0130 17:20:45.132392 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c8f0931-676e-406e-92fd-d6d09a065cf9" containerName="nova-metadata-log" Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.132400 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c8f0931-676e-406e-92fd-d6d09a065cf9" containerName="nova-metadata-log" Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.132653 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c8f0931-676e-406e-92fd-d6d09a065cf9" containerName="nova-metadata-metadata" Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.132670 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c8f0931-676e-406e-92fd-d6d09a065cf9" containerName="nova-metadata-log" Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.134052 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.138905 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.141209 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.160723 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.178862 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9df1a77-0933-4439-9ee1-a3f4414eca71-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c9df1a77-0933-4439-9ee1-a3f4414eca71\") " pod="openstack/nova-metadata-0" Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.178964 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9df1a77-0933-4439-9ee1-a3f4414eca71-logs\") pod \"nova-metadata-0\" (UID: \"c9df1a77-0933-4439-9ee1-a3f4414eca71\") " pod="openstack/nova-metadata-0" Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.179034 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9df1a77-0933-4439-9ee1-a3f4414eca71-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c9df1a77-0933-4439-9ee1-a3f4414eca71\") " pod="openstack/nova-metadata-0" Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.179111 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhlz7\" (UniqueName: \"kubernetes.io/projected/c9df1a77-0933-4439-9ee1-a3f4414eca71-kube-api-access-rhlz7\") pod \"nova-metadata-0\" (UID: \"c9df1a77-0933-4439-9ee1-a3f4414eca71\") " pod="openstack/nova-metadata-0" Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.179161 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9df1a77-0933-4439-9ee1-a3f4414eca71-config-data\") pod \"nova-metadata-0\" (UID: \"c9df1a77-0933-4439-9ee1-a3f4414eca71\") " pod="openstack/nova-metadata-0" Jan 30 17:20:45 crc kubenswrapper[4712]: W0130 17:20:45.198446 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod503aad53_052a_4eab_b8b9_ceb01fda3dc7.slice/crio-776af70269a3866116d69fccdfbc49d1b04d8f1018dee865b822fb21121bb282 WatchSource:0}: Error finding container 776af70269a3866116d69fccdfbc49d1b04d8f1018dee865b822fb21121bb282: Status 404 returned error can't find the container with id 776af70269a3866116d69fccdfbc49d1b04d8f1018dee865b822fb21121bb282 Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.198520 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.282027 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9df1a77-0933-4439-9ee1-a3f4414eca71-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c9df1a77-0933-4439-9ee1-a3f4414eca71\") " pod="openstack/nova-metadata-0" Jan 30 17:20:45 crc 
kubenswrapper[4712]: I0130 17:20:45.282162 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhlz7\" (UniqueName: \"kubernetes.io/projected/c9df1a77-0933-4439-9ee1-a3f4414eca71-kube-api-access-rhlz7\") pod \"nova-metadata-0\" (UID: \"c9df1a77-0933-4439-9ee1-a3f4414eca71\") " pod="openstack/nova-metadata-0" Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.282219 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9df1a77-0933-4439-9ee1-a3f4414eca71-config-data\") pod \"nova-metadata-0\" (UID: \"c9df1a77-0933-4439-9ee1-a3f4414eca71\") " pod="openstack/nova-metadata-0" Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.282292 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9df1a77-0933-4439-9ee1-a3f4414eca71-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c9df1a77-0933-4439-9ee1-a3f4414eca71\") " pod="openstack/nova-metadata-0" Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.282370 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9df1a77-0933-4439-9ee1-a3f4414eca71-logs\") pod \"nova-metadata-0\" (UID: \"c9df1a77-0933-4439-9ee1-a3f4414eca71\") " pod="openstack/nova-metadata-0" Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.282910 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c9df1a77-0933-4439-9ee1-a3f4414eca71-logs\") pod \"nova-metadata-0\" (UID: \"c9df1a77-0933-4439-9ee1-a3f4414eca71\") " pod="openstack/nova-metadata-0" Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.289370 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c9df1a77-0933-4439-9ee1-a3f4414eca71-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c9df1a77-0933-4439-9ee1-a3f4414eca71\") " pod="openstack/nova-metadata-0" Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.291527 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c9df1a77-0933-4439-9ee1-a3f4414eca71-config-data\") pod \"nova-metadata-0\" (UID: \"c9df1a77-0933-4439-9ee1-a3f4414eca71\") " pod="openstack/nova-metadata-0" Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.293295 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/c9df1a77-0933-4439-9ee1-a3f4414eca71-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"c9df1a77-0933-4439-9ee1-a3f4414eca71\") " pod="openstack/nova-metadata-0" Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.299060 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhlz7\" (UniqueName: \"kubernetes.io/projected/c9df1a77-0933-4439-9ee1-a3f4414eca71-kube-api-access-rhlz7\") pod \"nova-metadata-0\" (UID: \"c9df1a77-0933-4439-9ee1-a3f4414eca71\") " pod="openstack/nova-metadata-0" Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.353510 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-64655dbc44-pvj2c" podUID="6a28b495-ecf0-409e-9558-ee794a46dbd1" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.156:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 
10.217.0.156:8443: connect: connection refused" Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.464353 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.598309 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7np8f"] Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.601486 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7np8f" Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.632753 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7np8f"] Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.705025 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f3a4682-2cab-4d13-99c5-04ff2a844831-utilities\") pod \"redhat-marketplace-7np8f\" (UID: \"5f3a4682-2cab-4d13-99c5-04ff2a844831\") " pod="openshift-marketplace/redhat-marketplace-7np8f" Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.705114 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f3a4682-2cab-4d13-99c5-04ff2a844831-catalog-content\") pod \"redhat-marketplace-7np8f\" (UID: \"5f3a4682-2cab-4d13-99c5-04ff2a844831\") " pod="openshift-marketplace/redhat-marketplace-7np8f" Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.705254 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5r2s\" (UniqueName: \"kubernetes.io/projected/5f3a4682-2cab-4d13-99c5-04ff2a844831-kube-api-access-f5r2s\") pod \"redhat-marketplace-7np8f\" (UID: \"5f3a4682-2cab-4d13-99c5-04ff2a844831\") " pod="openshift-marketplace/redhat-marketplace-7np8f" Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.806841 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f3a4682-2cab-4d13-99c5-04ff2a844831-utilities\") pod \"redhat-marketplace-7np8f\" (UID: \"5f3a4682-2cab-4d13-99c5-04ff2a844831\") " pod="openshift-marketplace/redhat-marketplace-7np8f" Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.806915 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f3a4682-2cab-4d13-99c5-04ff2a844831-catalog-content\") pod \"redhat-marketplace-7np8f\" (UID: \"5f3a4682-2cab-4d13-99c5-04ff2a844831\") " pod="openshift-marketplace/redhat-marketplace-7np8f" Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.806986 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5r2s\" (UniqueName: \"kubernetes.io/projected/5f3a4682-2cab-4d13-99c5-04ff2a844831-kube-api-access-f5r2s\") pod \"redhat-marketplace-7np8f\" (UID: \"5f3a4682-2cab-4d13-99c5-04ff2a844831\") " pod="openshift-marketplace/redhat-marketplace-7np8f" Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.807958 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f3a4682-2cab-4d13-99c5-04ff2a844831-catalog-content\") pod \"redhat-marketplace-7np8f\" (UID: \"5f3a4682-2cab-4d13-99c5-04ff2a844831\") " 
pod="openshift-marketplace/redhat-marketplace-7np8f" Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.810716 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f3a4682-2cab-4d13-99c5-04ff2a844831-utilities\") pod \"redhat-marketplace-7np8f\" (UID: \"5f3a4682-2cab-4d13-99c5-04ff2a844831\") " pod="openshift-marketplace/redhat-marketplace-7np8f" Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.825894 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c8f0931-676e-406e-92fd-d6d09a065cf9" path="/var/lib/kubelet/pods/3c8f0931-676e-406e-92fd-d6d09a065cf9/volumes" Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.826579 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5r2s\" (UniqueName: \"kubernetes.io/projected/5f3a4682-2cab-4d13-99c5-04ff2a844831-kube-api-access-f5r2s\") pod \"redhat-marketplace-7np8f\" (UID: \"5f3a4682-2cab-4d13-99c5-04ff2a844831\") " pod="openshift-marketplace/redhat-marketplace-7np8f" Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.827278 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6b1d7ef-cd70-40ba-a25a-7f80b07c16db" path="/var/lib/kubelet/pods/d6b1d7ef-cd70-40ba-a25a-7f80b07c16db/volumes" Jan 30 17:20:45 crc kubenswrapper[4712]: I0130 17:20:45.931086 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7np8f" Jan 30 17:20:46 crc kubenswrapper[4712]: I0130 17:20:46.070252 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"503aad53-052a-4eab-b8b9-ceb01fda3dc7","Type":"ContainerStarted","Data":"2b332f9596e498f59ab82fc033d94530c223aa1ab6f11812e30b6685d4040a47"} Jan 30 17:20:46 crc kubenswrapper[4712]: I0130 17:20:46.070303 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"503aad53-052a-4eab-b8b9-ceb01fda3dc7","Type":"ContainerStarted","Data":"776af70269a3866116d69fccdfbc49d1b04d8f1018dee865b822fb21121bb282"} Jan 30 17:20:46 crc kubenswrapper[4712]: I0130 17:20:46.093661 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.093640215 podStartE2EDuration="2.093640215s" podCreationTimestamp="2026-01-30 17:20:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:20:46.090679963 +0000 UTC m=+1582.997689432" watchObservedRunningTime="2026-01-30 17:20:46.093640215 +0000 UTC m=+1583.000649684" Jan 30 17:20:46 crc kubenswrapper[4712]: I0130 17:20:46.153754 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 30 17:20:46 crc kubenswrapper[4712]: I0130 17:20:46.492401 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7np8f"] Jan 30 17:20:46 crc kubenswrapper[4712]: W0130 17:20:46.492711 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f3a4682_2cab_4d13_99c5_04ff2a844831.slice/crio-f359ab4e26b09699cc8d43acbbb58238bd2f23674f5c159a1a63854d96608a6b WatchSource:0}: Error finding container f359ab4e26b09699cc8d43acbbb58238bd2f23674f5c159a1a63854d96608a6b: Status 404 returned error can't find the container with id f359ab4e26b09699cc8d43acbbb58238bd2f23674f5c159a1a63854d96608a6b Jan 30 
17:20:47 crc kubenswrapper[4712]: I0130 17:20:47.083619 4712 generic.go:334] "Generic (PLEG): container finished" podID="5f3a4682-2cab-4d13-99c5-04ff2a844831" containerID="3f47ee6d771c88b9d7402277fff6d4088fc30aa8e2434cdc6e2f92950a1e0dad" exitCode=0 Jan 30 17:20:47 crc kubenswrapper[4712]: I0130 17:20:47.083656 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7np8f" event={"ID":"5f3a4682-2cab-4d13-99c5-04ff2a844831","Type":"ContainerDied","Data":"3f47ee6d771c88b9d7402277fff6d4088fc30aa8e2434cdc6e2f92950a1e0dad"} Jan 30 17:20:47 crc kubenswrapper[4712]: I0130 17:20:47.084737 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7np8f" event={"ID":"5f3a4682-2cab-4d13-99c5-04ff2a844831","Type":"ContainerStarted","Data":"f359ab4e26b09699cc8d43acbbb58238bd2f23674f5c159a1a63854d96608a6b"} Jan 30 17:20:47 crc kubenswrapper[4712]: I0130 17:20:47.088910 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c9df1a77-0933-4439-9ee1-a3f4414eca71","Type":"ContainerStarted","Data":"612ff168277049b0a39d3560bfc7838718f9cd54037a7084d969b36bd1e46fc8"} Jan 30 17:20:47 crc kubenswrapper[4712]: I0130 17:20:47.088948 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c9df1a77-0933-4439-9ee1-a3f4414eca71","Type":"ContainerStarted","Data":"8e0be5d0d6646a929fae41ee4dce962aa6eaddb8f2a829a4d30917394711a24a"} Jan 30 17:20:47 crc kubenswrapper[4712]: I0130 17:20:47.088958 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c9df1a77-0933-4439-9ee1-a3f4414eca71","Type":"ContainerStarted","Data":"351627db33ffc60ad1fc602bbaf396c8e90b0a58d1867b4a8df12e334d659ad5"} Jan 30 17:20:47 crc kubenswrapper[4712]: I0130 17:20:47.142587 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.142572211 podStartE2EDuration="2.142572211s" podCreationTimestamp="2026-01-30 17:20:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:20:47.138128553 +0000 UTC m=+1584.045138032" watchObservedRunningTime="2026-01-30 17:20:47.142572211 +0000 UTC m=+1584.049581680" Jan 30 17:20:49 crc kubenswrapper[4712]: I0130 17:20:49.109297 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7np8f" event={"ID":"5f3a4682-2cab-4d13-99c5-04ff2a844831","Type":"ContainerStarted","Data":"8484f3be742153102e191796c358db66ed03a51ffac48b2692b963a118dfb945"} Jan 30 17:20:49 crc kubenswrapper[4712]: I0130 17:20:49.489667 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 30 17:20:50 crc kubenswrapper[4712]: I0130 17:20:50.465685 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 17:20:50 crc kubenswrapper[4712]: I0130 17:20:50.466081 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 30 17:20:52 crc kubenswrapper[4712]: I0130 17:20:52.138024 4712 generic.go:334] "Generic (PLEG): container finished" podID="5f3a4682-2cab-4d13-99c5-04ff2a844831" containerID="8484f3be742153102e191796c358db66ed03a51ffac48b2692b963a118dfb945" exitCode=0 Jan 30 17:20:52 crc kubenswrapper[4712]: I0130 17:20:52.138094 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-7np8f" event={"ID":"5f3a4682-2cab-4d13-99c5-04ff2a844831","Type":"ContainerDied","Data":"8484f3be742153102e191796c358db66ed03a51ffac48b2692b963a118dfb945"} Jan 30 17:20:52 crc kubenswrapper[4712]: I0130 17:20:52.316470 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 17:20:52 crc kubenswrapper[4712]: I0130 17:20:52.316524 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 30 17:20:53 crc kubenswrapper[4712]: I0130 17:20:53.158745 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7np8f" event={"ID":"5f3a4682-2cab-4d13-99c5-04ff2a844831","Type":"ContainerStarted","Data":"1c21fe3222d92c069f490f4e9c92ba00f9f4b35d85c429e4d38acbd4f73d4f9f"} Jan 30 17:20:53 crc kubenswrapper[4712]: I0130 17:20:53.193947 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7np8f" podStartSLOduration=2.5329064900000002 podStartE2EDuration="8.19392855s" podCreationTimestamp="2026-01-30 17:20:45 +0000 UTC" firstStartedPulling="2026-01-30 17:20:47.085490141 +0000 UTC m=+1583.992499610" lastFinishedPulling="2026-01-30 17:20:52.746512201 +0000 UTC m=+1589.653521670" observedRunningTime="2026-01-30 17:20:53.18232749 +0000 UTC m=+1590.089336969" watchObservedRunningTime="2026-01-30 17:20:53.19392855 +0000 UTC m=+1590.100938019" Jan 30 17:20:53 crc kubenswrapper[4712]: I0130 17:20:53.333008 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fdc1ab7c-d592-4e45-8bbc-1ecc967bad26" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.224:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 17:20:53 crc kubenswrapper[4712]: I0130 17:20:53.333012 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fdc1ab7c-d592-4e45-8bbc-1ecc967bad26" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.224:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 17:20:54 crc kubenswrapper[4712]: I0130 17:20:54.489454 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 30 17:20:54 crc kubenswrapper[4712]: I0130 17:20:54.523635 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 30 17:20:55 crc kubenswrapper[4712]: I0130 17:20:55.072780 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-56f8b66d48-7wr47" podUID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.155:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.155:8443: connect: connection refused" Jan 30 17:20:55 crc kubenswrapper[4712]: I0130 17:20:55.208358 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 30 17:20:55 crc kubenswrapper[4712]: I0130 17:20:55.465387 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 17:20:55 crc kubenswrapper[4712]: I0130 17:20:55.465427 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 30 17:20:55 crc kubenswrapper[4712]: I0130 17:20:55.931887 4712 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7np8f" Jan 30 17:20:55 crc kubenswrapper[4712]: I0130 17:20:55.932216 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7np8f" Jan 30 17:20:55 crc kubenswrapper[4712]: I0130 17:20:55.998820 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7np8f" Jan 30 17:20:56 crc kubenswrapper[4712]: I0130 17:20:56.480052 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c9df1a77-0933-4439-9ee1-a3f4414eca71" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.226:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 17:20:56 crc kubenswrapper[4712]: I0130 17:20:56.480052 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="c9df1a77-0933-4439-9ee1-a3f4414eca71" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.226:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 17:20:57 crc kubenswrapper[4712]: I0130 17:20:57.897194 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-64655dbc44-pvj2c" Jan 30 17:20:59 crc kubenswrapper[4712]: I0130 17:20:59.855759 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-64655dbc44-pvj2c" Jan 30 17:20:59 crc kubenswrapper[4712]: I0130 17:20:59.955788 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-56f8b66d48-7wr47"] Jan 30 17:20:59 crc kubenswrapper[4712]: I0130 17:20:59.956012 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-56f8b66d48-7wr47" podUID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerName="horizon-log" containerID="cri-o://e7f65e9725996b5430c165272394642af4b0191e34340a9577ad618356814e4b" gracePeriod=30 Jan 30 17:20:59 crc kubenswrapper[4712]: I0130 17:20:59.956392 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-56f8b66d48-7wr47" podUID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerName="horizon" containerID="cri-o://4b3189dc0e5e95f56ff7a7ab4af993cd6a3c5a0280c5d94b9bafcc777d386ef8" gracePeriod=30 Jan 30 17:21:02 crc kubenswrapper[4712]: I0130 17:21:02.192985 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 30 17:21:02 crc kubenswrapper[4712]: I0130 17:21:02.325365 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 17:21:02 crc kubenswrapper[4712]: I0130 17:21:02.325962 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 17:21:02 crc kubenswrapper[4712]: I0130 17:21:02.327091 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 30 17:21:02 crc kubenswrapper[4712]: I0130 17:21:02.334074 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 30 17:21:03 crc kubenswrapper[4712]: I0130 17:21:03.260002 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 30 17:21:03 crc kubenswrapper[4712]: I0130 17:21:03.266536 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/nova-api-0" Jan 30 17:21:05 crc kubenswrapper[4712]: I0130 17:21:05.481281 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 17:21:05 crc kubenswrapper[4712]: I0130 17:21:05.486951 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 17:21:05 crc kubenswrapper[4712]: I0130 17:21:05.487571 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 30 17:21:05 crc kubenswrapper[4712]: I0130 17:21:05.985629 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7np8f" Jan 30 17:21:06 crc kubenswrapper[4712]: I0130 17:21:06.063112 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7np8f"] Jan 30 17:21:06 crc kubenswrapper[4712]: I0130 17:21:06.271260 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:21:06 crc kubenswrapper[4712]: I0130 17:21:06.271343 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:21:06 crc kubenswrapper[4712]: I0130 17:21:06.271397 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" Jan 30 17:21:06 crc kubenswrapper[4712]: I0130 17:21:06.272298 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"261650ec2eba774e22792202e452b7b775a9451e27eb25c95a23e9f7a7f63330"} pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 17:21:06 crc kubenswrapper[4712]: I0130 17:21:06.272376 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" containerID="cri-o://261650ec2eba774e22792202e452b7b775a9451e27eb25c95a23e9f7a7f63330" gracePeriod=600 Jan 30 17:21:06 crc kubenswrapper[4712]: I0130 17:21:06.285375 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7np8f" podUID="5f3a4682-2cab-4d13-99c5-04ff2a844831" containerName="registry-server" containerID="cri-o://1c21fe3222d92c069f490f4e9c92ba00f9f4b35d85c429e4d38acbd4f73d4f9f" gracePeriod=2 Jan 30 17:21:06 crc kubenswrapper[4712]: I0130 17:21:06.293574 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 30 17:21:06 crc kubenswrapper[4712]: E0130 17:21:06.517876 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:21:07 crc kubenswrapper[4712]: I0130 17:21:07.035016 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7np8f" Jan 30 17:21:07 crc kubenswrapper[4712]: I0130 17:21:07.191224 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f3a4682-2cab-4d13-99c5-04ff2a844831-catalog-content\") pod \"5f3a4682-2cab-4d13-99c5-04ff2a844831\" (UID: \"5f3a4682-2cab-4d13-99c5-04ff2a844831\") " Jan 30 17:21:07 crc kubenswrapper[4712]: I0130 17:21:07.191393 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f3a4682-2cab-4d13-99c5-04ff2a844831-utilities\") pod \"5f3a4682-2cab-4d13-99c5-04ff2a844831\" (UID: \"5f3a4682-2cab-4d13-99c5-04ff2a844831\") " Jan 30 17:21:07 crc kubenswrapper[4712]: I0130 17:21:07.191555 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5r2s\" (UniqueName: \"kubernetes.io/projected/5f3a4682-2cab-4d13-99c5-04ff2a844831-kube-api-access-f5r2s\") pod \"5f3a4682-2cab-4d13-99c5-04ff2a844831\" (UID: \"5f3a4682-2cab-4d13-99c5-04ff2a844831\") " Jan 30 17:21:07 crc kubenswrapper[4712]: I0130 17:21:07.192576 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f3a4682-2cab-4d13-99c5-04ff2a844831-utilities" (OuterVolumeSpecName: "utilities") pod "5f3a4682-2cab-4d13-99c5-04ff2a844831" (UID: "5f3a4682-2cab-4d13-99c5-04ff2a844831"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:21:07 crc kubenswrapper[4712]: I0130 17:21:07.208722 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f3a4682-2cab-4d13-99c5-04ff2a844831-kube-api-access-f5r2s" (OuterVolumeSpecName: "kube-api-access-f5r2s") pod "5f3a4682-2cab-4d13-99c5-04ff2a844831" (UID: "5f3a4682-2cab-4d13-99c5-04ff2a844831"). InnerVolumeSpecName "kube-api-access-f5r2s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:21:07 crc kubenswrapper[4712]: I0130 17:21:07.223827 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f3a4682-2cab-4d13-99c5-04ff2a844831-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f3a4682-2cab-4d13-99c5-04ff2a844831" (UID: "5f3a4682-2cab-4d13-99c5-04ff2a844831"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:21:07 crc kubenswrapper[4712]: I0130 17:21:07.293883 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f3a4682-2cab-4d13-99c5-04ff2a844831-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:07 crc kubenswrapper[4712]: I0130 17:21:07.294220 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5r2s\" (UniqueName: \"kubernetes.io/projected/5f3a4682-2cab-4d13-99c5-04ff2a844831-kube-api-access-f5r2s\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:07 crc kubenswrapper[4712]: I0130 17:21:07.294231 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f3a4682-2cab-4d13-99c5-04ff2a844831-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:07 crc kubenswrapper[4712]: I0130 17:21:07.296520 4712 generic.go:334] "Generic (PLEG): container finished" podID="5f3a4682-2cab-4d13-99c5-04ff2a844831" containerID="1c21fe3222d92c069f490f4e9c92ba00f9f4b35d85c429e4d38acbd4f73d4f9f" exitCode=0 Jan 30 17:21:07 crc kubenswrapper[4712]: I0130 17:21:07.296580 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7np8f" event={"ID":"5f3a4682-2cab-4d13-99c5-04ff2a844831","Type":"ContainerDied","Data":"1c21fe3222d92c069f490f4e9c92ba00f9f4b35d85c429e4d38acbd4f73d4f9f"} Jan 30 17:21:07 crc kubenswrapper[4712]: I0130 17:21:07.296610 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7np8f" event={"ID":"5f3a4682-2cab-4d13-99c5-04ff2a844831","Type":"ContainerDied","Data":"f359ab4e26b09699cc8d43acbbb58238bd2f23674f5c159a1a63854d96608a6b"} Jan 30 17:21:07 crc kubenswrapper[4712]: I0130 17:21:07.296629 4712 scope.go:117] "RemoveContainer" containerID="1c21fe3222d92c069f490f4e9c92ba00f9f4b35d85c429e4d38acbd4f73d4f9f" Jan 30 17:21:07 crc kubenswrapper[4712]: I0130 17:21:07.296751 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7np8f" Jan 30 17:21:07 crc kubenswrapper[4712]: I0130 17:21:07.303848 4712 generic.go:334] "Generic (PLEG): container finished" podID="75ff6334-72a0-4748-bba6-0efb493c8033" containerID="261650ec2eba774e22792202e452b7b775a9451e27eb25c95a23e9f7a7f63330" exitCode=0 Jan 30 17:21:07 crc kubenswrapper[4712]: I0130 17:21:07.304850 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerDied","Data":"261650ec2eba774e22792202e452b7b775a9451e27eb25c95a23e9f7a7f63330"} Jan 30 17:21:07 crc kubenswrapper[4712]: I0130 17:21:07.305147 4712 scope.go:117] "RemoveContainer" containerID="261650ec2eba774e22792202e452b7b775a9451e27eb25c95a23e9f7a7f63330" Jan 30 17:21:07 crc kubenswrapper[4712]: E0130 17:21:07.305356 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:21:07 crc kubenswrapper[4712]: I0130 17:21:07.338160 4712 scope.go:117] "RemoveContainer" containerID="8484f3be742153102e191796c358db66ed03a51ffac48b2692b963a118dfb945" Jan 30 17:21:07 crc kubenswrapper[4712]: I0130 17:21:07.373300 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7np8f"] Jan 30 17:21:07 crc kubenswrapper[4712]: I0130 17:21:07.412728 4712 scope.go:117] "RemoveContainer" containerID="3f47ee6d771c88b9d7402277fff6d4088fc30aa8e2434cdc6e2f92950a1e0dad" Jan 30 17:21:07 crc kubenswrapper[4712]: I0130 17:21:07.462214 4712 scope.go:117] "RemoveContainer" containerID="1c21fe3222d92c069f490f4e9c92ba00f9f4b35d85c429e4d38acbd4f73d4f9f" Jan 30 17:21:07 crc kubenswrapper[4712]: E0130 17:21:07.462909 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c21fe3222d92c069f490f4e9c92ba00f9f4b35d85c429e4d38acbd4f73d4f9f\": container with ID starting with 1c21fe3222d92c069f490f4e9c92ba00f9f4b35d85c429e4d38acbd4f73d4f9f not found: ID does not exist" containerID="1c21fe3222d92c069f490f4e9c92ba00f9f4b35d85c429e4d38acbd4f73d4f9f" Jan 30 17:21:07 crc kubenswrapper[4712]: I0130 17:21:07.463047 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c21fe3222d92c069f490f4e9c92ba00f9f4b35d85c429e4d38acbd4f73d4f9f"} err="failed to get container status \"1c21fe3222d92c069f490f4e9c92ba00f9f4b35d85c429e4d38acbd4f73d4f9f\": rpc error: code = NotFound desc = could not find container \"1c21fe3222d92c069f490f4e9c92ba00f9f4b35d85c429e4d38acbd4f73d4f9f\": container with ID starting with 1c21fe3222d92c069f490f4e9c92ba00f9f4b35d85c429e4d38acbd4f73d4f9f not found: ID does not exist" Jan 30 17:21:07 crc kubenswrapper[4712]: I0130 17:21:07.463127 4712 scope.go:117] "RemoveContainer" containerID="8484f3be742153102e191796c358db66ed03a51ffac48b2692b963a118dfb945" Jan 30 17:21:07 crc kubenswrapper[4712]: E0130 17:21:07.463611 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8484f3be742153102e191796c358db66ed03a51ffac48b2692b963a118dfb945\": 
container with ID starting with 8484f3be742153102e191796c358db66ed03a51ffac48b2692b963a118dfb945 not found: ID does not exist" containerID="8484f3be742153102e191796c358db66ed03a51ffac48b2692b963a118dfb945" Jan 30 17:21:07 crc kubenswrapper[4712]: I0130 17:21:07.463652 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8484f3be742153102e191796c358db66ed03a51ffac48b2692b963a118dfb945"} err="failed to get container status \"8484f3be742153102e191796c358db66ed03a51ffac48b2692b963a118dfb945\": rpc error: code = NotFound desc = could not find container \"8484f3be742153102e191796c358db66ed03a51ffac48b2692b963a118dfb945\": container with ID starting with 8484f3be742153102e191796c358db66ed03a51ffac48b2692b963a118dfb945 not found: ID does not exist" Jan 30 17:21:07 crc kubenswrapper[4712]: I0130 17:21:07.463686 4712 scope.go:117] "RemoveContainer" containerID="3f47ee6d771c88b9d7402277fff6d4088fc30aa8e2434cdc6e2f92950a1e0dad" Jan 30 17:21:07 crc kubenswrapper[4712]: E0130 17:21:07.464120 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f47ee6d771c88b9d7402277fff6d4088fc30aa8e2434cdc6e2f92950a1e0dad\": container with ID starting with 3f47ee6d771c88b9d7402277fff6d4088fc30aa8e2434cdc6e2f92950a1e0dad not found: ID does not exist" containerID="3f47ee6d771c88b9d7402277fff6d4088fc30aa8e2434cdc6e2f92950a1e0dad" Jan 30 17:21:07 crc kubenswrapper[4712]: I0130 17:21:07.464153 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f47ee6d771c88b9d7402277fff6d4088fc30aa8e2434cdc6e2f92950a1e0dad"} err="failed to get container status \"3f47ee6d771c88b9d7402277fff6d4088fc30aa8e2434cdc6e2f92950a1e0dad\": rpc error: code = NotFound desc = could not find container \"3f47ee6d771c88b9d7402277fff6d4088fc30aa8e2434cdc6e2f92950a1e0dad\": container with ID starting with 3f47ee6d771c88b9d7402277fff6d4088fc30aa8e2434cdc6e2f92950a1e0dad not found: ID does not exist" Jan 30 17:21:07 crc kubenswrapper[4712]: I0130 17:21:07.464176 4712 scope.go:117] "RemoveContainer" containerID="2b2080500e3e21108518c785b6a9d42dc4c1501c9ea170a8ffe8ca230910ec5c" Jan 30 17:21:07 crc kubenswrapper[4712]: I0130 17:21:07.470875 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7np8f"] Jan 30 17:21:07 crc kubenswrapper[4712]: I0130 17:21:07.820024 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f3a4682-2cab-4d13-99c5-04ff2a844831" path="/var/lib/kubelet/pods/5f3a4682-2cab-4d13-99c5-04ff2a844831/volumes" Jan 30 17:21:15 crc kubenswrapper[4712]: I0130 17:21:15.234559 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 17:21:16 crc kubenswrapper[4712]: I0130 17:21:16.005288 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 17:21:17 crc kubenswrapper[4712]: I0130 17:21:17.109106 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-nlns9"] Jan 30 17:21:17 crc kubenswrapper[4712]: E0130 17:21:17.109736 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f3a4682-2cab-4d13-99c5-04ff2a844831" containerName="extract-utilities" Jan 30 17:21:17 crc kubenswrapper[4712]: I0130 17:21:17.109749 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f3a4682-2cab-4d13-99c5-04ff2a844831" containerName="extract-utilities" Jan 30 17:21:17 crc 
kubenswrapper[4712]: E0130 17:21:17.109759 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f3a4682-2cab-4d13-99c5-04ff2a844831" containerName="extract-content" Jan 30 17:21:17 crc kubenswrapper[4712]: I0130 17:21:17.109766 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f3a4682-2cab-4d13-99c5-04ff2a844831" containerName="extract-content" Jan 30 17:21:17 crc kubenswrapper[4712]: E0130 17:21:17.109777 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f3a4682-2cab-4d13-99c5-04ff2a844831" containerName="registry-server" Jan 30 17:21:17 crc kubenswrapper[4712]: I0130 17:21:17.109783 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f3a4682-2cab-4d13-99c5-04ff2a844831" containerName="registry-server" Jan 30 17:21:17 crc kubenswrapper[4712]: I0130 17:21:17.109992 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f3a4682-2cab-4d13-99c5-04ff2a844831" containerName="registry-server" Jan 30 17:21:17 crc kubenswrapper[4712]: I0130 17:21:17.111353 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nlns9" Jan 30 17:21:17 crc kubenswrapper[4712]: I0130 17:21:17.152278 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nlns9"] Jan 30 17:21:17 crc kubenswrapper[4712]: I0130 17:21:17.212623 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9be8718d-8f88-4536-a5d3-29cdd21959b9-catalog-content\") pod \"community-operators-nlns9\" (UID: \"9be8718d-8f88-4536-a5d3-29cdd21959b9\") " pod="openshift-marketplace/community-operators-nlns9" Jan 30 17:21:17 crc kubenswrapper[4712]: I0130 17:21:17.212669 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkk7s\" (UniqueName: \"kubernetes.io/projected/9be8718d-8f88-4536-a5d3-29cdd21959b9-kube-api-access-dkk7s\") pod \"community-operators-nlns9\" (UID: \"9be8718d-8f88-4536-a5d3-29cdd21959b9\") " pod="openshift-marketplace/community-operators-nlns9" Jan 30 17:21:17 crc kubenswrapper[4712]: I0130 17:21:17.212927 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9be8718d-8f88-4536-a5d3-29cdd21959b9-utilities\") pod \"community-operators-nlns9\" (UID: \"9be8718d-8f88-4536-a5d3-29cdd21959b9\") " pod="openshift-marketplace/community-operators-nlns9" Jan 30 17:21:17 crc kubenswrapper[4712]: I0130 17:21:17.314136 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9be8718d-8f88-4536-a5d3-29cdd21959b9-utilities\") pod \"community-operators-nlns9\" (UID: \"9be8718d-8f88-4536-a5d3-29cdd21959b9\") " pod="openshift-marketplace/community-operators-nlns9" Jan 30 17:21:17 crc kubenswrapper[4712]: I0130 17:21:17.314251 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9be8718d-8f88-4536-a5d3-29cdd21959b9-catalog-content\") pod \"community-operators-nlns9\" (UID: \"9be8718d-8f88-4536-a5d3-29cdd21959b9\") " pod="openshift-marketplace/community-operators-nlns9" Jan 30 17:21:17 crc kubenswrapper[4712]: I0130 17:21:17.314281 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkk7s\" 
(UniqueName: \"kubernetes.io/projected/9be8718d-8f88-4536-a5d3-29cdd21959b9-kube-api-access-dkk7s\") pod \"community-operators-nlns9\" (UID: \"9be8718d-8f88-4536-a5d3-29cdd21959b9\") " pod="openshift-marketplace/community-operators-nlns9" Jan 30 17:21:17 crc kubenswrapper[4712]: I0130 17:21:17.315083 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9be8718d-8f88-4536-a5d3-29cdd21959b9-utilities\") pod \"community-operators-nlns9\" (UID: \"9be8718d-8f88-4536-a5d3-29cdd21959b9\") " pod="openshift-marketplace/community-operators-nlns9" Jan 30 17:21:17 crc kubenswrapper[4712]: I0130 17:21:17.315238 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9be8718d-8f88-4536-a5d3-29cdd21959b9-catalog-content\") pod \"community-operators-nlns9\" (UID: \"9be8718d-8f88-4536-a5d3-29cdd21959b9\") " pod="openshift-marketplace/community-operators-nlns9" Jan 30 17:21:17 crc kubenswrapper[4712]: I0130 17:21:17.339151 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkk7s\" (UniqueName: \"kubernetes.io/projected/9be8718d-8f88-4536-a5d3-29cdd21959b9-kube-api-access-dkk7s\") pod \"community-operators-nlns9\" (UID: \"9be8718d-8f88-4536-a5d3-29cdd21959b9\") " pod="openshift-marketplace/community-operators-nlns9" Jan 30 17:21:17 crc kubenswrapper[4712]: I0130 17:21:17.436477 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nlns9" Jan 30 17:21:18 crc kubenswrapper[4712]: I0130 17:21:18.067084 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nlns9"] Jan 30 17:21:18 crc kubenswrapper[4712]: I0130 17:21:18.401539 4712 generic.go:334] "Generic (PLEG): container finished" podID="9be8718d-8f88-4536-a5d3-29cdd21959b9" containerID="03bb7ed7b38cc376fa14d718b99371bbc394710bd3b8394c553640ed56cc1b7d" exitCode=0 Jan 30 17:21:18 crc kubenswrapper[4712]: I0130 17:21:18.401580 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlns9" event={"ID":"9be8718d-8f88-4536-a5d3-29cdd21959b9","Type":"ContainerDied","Data":"03bb7ed7b38cc376fa14d718b99371bbc394710bd3b8394c553640ed56cc1b7d"} Jan 30 17:21:18 crc kubenswrapper[4712]: I0130 17:21:18.401887 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlns9" event={"ID":"9be8718d-8f88-4536-a5d3-29cdd21959b9","Type":"ContainerStarted","Data":"30464b5d53e7578415eaa4f7c2f4551c5489f5578a1bba3432d17e4713d51269"} Jan 30 17:21:20 crc kubenswrapper[4712]: I0130 17:21:20.574827 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="01b5b85b-caea-4f70-a61f-875ed30f9e64" containerName="rabbitmq" containerID="cri-o://ee45677930b012a8b24aca70da595e9ecab6ea6d65563bcf3b42bf277ddc1042" gracePeriod=604795 Jan 30 17:21:21 crc kubenswrapper[4712]: I0130 17:21:21.450527 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlns9" event={"ID":"9be8718d-8f88-4536-a5d3-29cdd21959b9","Type":"ContainerStarted","Data":"0968b5e95029dfbb7f4b7569c147b93418a9138e4f4658e6b0aa217166bb3061"} Jan 30 17:21:21 crc kubenswrapper[4712]: I0130 17:21:21.690624 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" 
podUID="d5b67399-3a53-4694-8f1c-c04592426dcd" containerName="rabbitmq" containerID="cri-o://30a870e41b1135bc49ebd6559cdc528cc6a15945f64888aa69b8f30394d40c77" gracePeriod=604795 Jan 30 17:21:21 crc kubenswrapper[4712]: I0130 17:21:21.800493 4712 scope.go:117] "RemoveContainer" containerID="261650ec2eba774e22792202e452b7b775a9451e27eb25c95a23e9f7a7f63330" Jan 30 17:21:21 crc kubenswrapper[4712]: E0130 17:21:21.801123 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:21:22 crc kubenswrapper[4712]: I0130 17:21:22.461526 4712 generic.go:334] "Generic (PLEG): container finished" podID="9be8718d-8f88-4536-a5d3-29cdd21959b9" containerID="0968b5e95029dfbb7f4b7569c147b93418a9138e4f4658e6b0aa217166bb3061" exitCode=0 Jan 30 17:21:22 crc kubenswrapper[4712]: I0130 17:21:22.461554 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlns9" event={"ID":"9be8718d-8f88-4536-a5d3-29cdd21959b9","Type":"ContainerDied","Data":"0968b5e95029dfbb7f4b7569c147b93418a9138e4f4658e6b0aa217166bb3061"} Jan 30 17:21:23 crc kubenswrapper[4712]: I0130 17:21:23.474674 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlns9" event={"ID":"9be8718d-8f88-4536-a5d3-29cdd21959b9","Type":"ContainerStarted","Data":"458cb8b388352a473fdcbf7687dcc68a672a0e5367941a0771f3d477e78387fb"} Jan 30 17:21:23 crc kubenswrapper[4712]: I0130 17:21:23.503026 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nlns9" podStartSLOduration=1.802375705 podStartE2EDuration="6.50300143s" podCreationTimestamp="2026-01-30 17:21:17 +0000 UTC" firstStartedPulling="2026-01-30 17:21:18.403869829 +0000 UTC m=+1615.310879298" lastFinishedPulling="2026-01-30 17:21:23.104495554 +0000 UTC m=+1620.011505023" observedRunningTime="2026-01-30 17:21:23.499085616 +0000 UTC m=+1620.406095095" watchObservedRunningTime="2026-01-30 17:21:23.50300143 +0000 UTC m=+1620.410010899" Jan 30 17:21:25 crc kubenswrapper[4712]: I0130 17:21:25.820000 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="01b5b85b-caea-4f70-a61f-875ed30f9e64" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.98:5671: connect: connection refused" Jan 30 17:21:26 crc kubenswrapper[4712]: I0130 17:21:26.220222 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="d5b67399-3a53-4694-8f1c-c04592426dcd" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.99:5671: connect: connection refused" Jan 30 17:21:27 crc kubenswrapper[4712]: I0130 17:21:27.437274 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nlns9" Jan 30 17:21:27 crc kubenswrapper[4712]: I0130 17:21:27.438055 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nlns9" Jan 30 17:21:27 crc kubenswrapper[4712]: I0130 17:21:27.516438 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-nlns9" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.360572 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.380653 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kwkbg\" (UniqueName: \"kubernetes.io/projected/01b5b85b-caea-4f70-a61f-875ed30f9e64-kube-api-access-kwkbg\") pod \"01b5b85b-caea-4f70-a61f-875ed30f9e64\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.380702 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/01b5b85b-caea-4f70-a61f-875ed30f9e64-rabbitmq-confd\") pod \"01b5b85b-caea-4f70-a61f-875ed30f9e64\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.397133 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01b5b85b-caea-4f70-a61f-875ed30f9e64-kube-api-access-kwkbg" (OuterVolumeSpecName: "kube-api-access-kwkbg") pod "01b5b85b-caea-4f70-a61f-875ed30f9e64" (UID: "01b5b85b-caea-4f70-a61f-875ed30f9e64"). InnerVolumeSpecName "kube-api-access-kwkbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.486881 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/01b5b85b-caea-4f70-a61f-875ed30f9e64-rabbitmq-tls\") pod \"01b5b85b-caea-4f70-a61f-875ed30f9e64\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.486943 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/01b5b85b-caea-4f70-a61f-875ed30f9e64-server-conf\") pod \"01b5b85b-caea-4f70-a61f-875ed30f9e64\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.486962 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/01b5b85b-caea-4f70-a61f-875ed30f9e64-plugins-conf\") pod \"01b5b85b-caea-4f70-a61f-875ed30f9e64\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.486981 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/01b5b85b-caea-4f70-a61f-875ed30f9e64-erlang-cookie-secret\") pod \"01b5b85b-caea-4f70-a61f-875ed30f9e64\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.509111 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/01b5b85b-caea-4f70-a61f-875ed30f9e64-rabbitmq-plugins\") pod \"01b5b85b-caea-4f70-a61f-875ed30f9e64\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.509242 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"01b5b85b-caea-4f70-a61f-875ed30f9e64\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " Jan 30 17:21:28 crc kubenswrapper[4712]: 
I0130 17:21:28.509357 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/01b5b85b-caea-4f70-a61f-875ed30f9e64-rabbitmq-erlang-cookie\") pod \"01b5b85b-caea-4f70-a61f-875ed30f9e64\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.509471 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/01b5b85b-caea-4f70-a61f-875ed30f9e64-pod-info\") pod \"01b5b85b-caea-4f70-a61f-875ed30f9e64\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.509619 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/01b5b85b-caea-4f70-a61f-875ed30f9e64-config-data\") pod \"01b5b85b-caea-4f70-a61f-875ed30f9e64\" (UID: \"01b5b85b-caea-4f70-a61f-875ed30f9e64\") " Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.511282 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01b5b85b-caea-4f70-a61f-875ed30f9e64-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "01b5b85b-caea-4f70-a61f-875ed30f9e64" (UID: "01b5b85b-caea-4f70-a61f-875ed30f9e64"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.511854 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01b5b85b-caea-4f70-a61f-875ed30f9e64-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "01b5b85b-caea-4f70-a61f-875ed30f9e64" (UID: "01b5b85b-caea-4f70-a61f-875ed30f9e64"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.513887 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01b5b85b-caea-4f70-a61f-875ed30f9e64-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "01b5b85b-caea-4f70-a61f-875ed30f9e64" (UID: "01b5b85b-caea-4f70-a61f-875ed30f9e64"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.515604 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01b5b85b-caea-4f70-a61f-875ed30f9e64-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "01b5b85b-caea-4f70-a61f-875ed30f9e64" (UID: "01b5b85b-caea-4f70-a61f-875ed30f9e64"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.516625 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kwkbg\" (UniqueName: \"kubernetes.io/projected/01b5b85b-caea-4f70-a61f-875ed30f9e64-kube-api-access-kwkbg\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.516737 4712 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/01b5b85b-caea-4f70-a61f-875ed30f9e64-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.516985 4712 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/01b5b85b-caea-4f70-a61f-875ed30f9e64-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.517071 4712 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/01b5b85b-caea-4f70-a61f-875ed30f9e64-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.517159 4712 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/01b5b85b-caea-4f70-a61f-875ed30f9e64-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.517248 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01b5b85b-caea-4f70-a61f-875ed30f9e64-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "01b5b85b-caea-4f70-a61f-875ed30f9e64" (UID: "01b5b85b-caea-4f70-a61f-875ed30f9e64"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.531694 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/01b5b85b-caea-4f70-a61f-875ed30f9e64-pod-info" (OuterVolumeSpecName: "pod-info") pod "01b5b85b-caea-4f70-a61f-875ed30f9e64" (UID: "01b5b85b-caea-4f70-a61f-875ed30f9e64"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.540217 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "persistence") pod "01b5b85b-caea-4f70-a61f-875ed30f9e64" (UID: "01b5b85b-caea-4f70-a61f-875ed30f9e64"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.567986 4712 generic.go:334] "Generic (PLEG): container finished" podID="01b5b85b-caea-4f70-a61f-875ed30f9e64" containerID="ee45677930b012a8b24aca70da595e9ecab6ea6d65563bcf3b42bf277ddc1042" exitCode=0 Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.568026 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"01b5b85b-caea-4f70-a61f-875ed30f9e64","Type":"ContainerDied","Data":"ee45677930b012a8b24aca70da595e9ecab6ea6d65563bcf3b42bf277ddc1042"} Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.568051 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"01b5b85b-caea-4f70-a61f-875ed30f9e64","Type":"ContainerDied","Data":"21010972aa5303b9a366e69c6e6e1728053fded5bbf267f87481311791f0248d"} Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.568067 4712 scope.go:117] "RemoveContainer" containerID="ee45677930b012a8b24aca70da595e9ecab6ea6d65563bcf3b42bf277ddc1042" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.568323 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.575264 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01b5b85b-caea-4f70-a61f-875ed30f9e64-config-data" (OuterVolumeSpecName: "config-data") pod "01b5b85b-caea-4f70-a61f-875ed30f9e64" (UID: "01b5b85b-caea-4f70-a61f-875ed30f9e64"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.618835 4712 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/01b5b85b-caea-4f70-a61f-875ed30f9e64-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.618875 4712 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.618884 4712 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/01b5b85b-caea-4f70-a61f-875ed30f9e64-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.618892 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/01b5b85b-caea-4f70-a61f-875ed30f9e64-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.650040 4712 scope.go:117] "RemoveContainer" containerID="2c33cef250b494d1f9745250b3e4f91a559a0867e0967b581569893e497b3935" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.650915 4712 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.661097 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01b5b85b-caea-4f70-a61f-875ed30f9e64-server-conf" (OuterVolumeSpecName: "server-conf") pod "01b5b85b-caea-4f70-a61f-875ed30f9e64" (UID: "01b5b85b-caea-4f70-a61f-875ed30f9e64"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.666138 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01b5b85b-caea-4f70-a61f-875ed30f9e64-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "01b5b85b-caea-4f70-a61f-875ed30f9e64" (UID: "01b5b85b-caea-4f70-a61f-875ed30f9e64"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.696956 4712 scope.go:117] "RemoveContainer" containerID="ee45677930b012a8b24aca70da595e9ecab6ea6d65563bcf3b42bf277ddc1042" Jan 30 17:21:28 crc kubenswrapper[4712]: E0130 17:21:28.697384 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee45677930b012a8b24aca70da595e9ecab6ea6d65563bcf3b42bf277ddc1042\": container with ID starting with ee45677930b012a8b24aca70da595e9ecab6ea6d65563bcf3b42bf277ddc1042 not found: ID does not exist" containerID="ee45677930b012a8b24aca70da595e9ecab6ea6d65563bcf3b42bf277ddc1042" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.697438 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee45677930b012a8b24aca70da595e9ecab6ea6d65563bcf3b42bf277ddc1042"} err="failed to get container status \"ee45677930b012a8b24aca70da595e9ecab6ea6d65563bcf3b42bf277ddc1042\": rpc error: code = NotFound desc = could not find container \"ee45677930b012a8b24aca70da595e9ecab6ea6d65563bcf3b42bf277ddc1042\": container with ID starting with ee45677930b012a8b24aca70da595e9ecab6ea6d65563bcf3b42bf277ddc1042 not found: ID does not exist" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.697470 4712 scope.go:117] "RemoveContainer" containerID="2c33cef250b494d1f9745250b3e4f91a559a0867e0967b581569893e497b3935" Jan 30 17:21:28 crc kubenswrapper[4712]: E0130 17:21:28.701312 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c33cef250b494d1f9745250b3e4f91a559a0867e0967b581569893e497b3935\": container with ID starting with 2c33cef250b494d1f9745250b3e4f91a559a0867e0967b581569893e497b3935 not found: ID does not exist" containerID="2c33cef250b494d1f9745250b3e4f91a559a0867e0967b581569893e497b3935" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.701346 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c33cef250b494d1f9745250b3e4f91a559a0867e0967b581569893e497b3935"} err="failed to get container status \"2c33cef250b494d1f9745250b3e4f91a559a0867e0967b581569893e497b3935\": rpc error: code = NotFound desc = could not find container \"2c33cef250b494d1f9745250b3e4f91a559a0867e0967b581569893e497b3935\": container with ID starting with 2c33cef250b494d1f9745250b3e4f91a559a0867e0967b581569893e497b3935 not found: ID does not exist" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.720348 4712 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/01b5b85b-caea-4f70-a61f-875ed30f9e64-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.720375 4712 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.720384 4712 
reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/01b5b85b-caea-4f70-a61f-875ed30f9e64-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.899648 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.913128 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.953647 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 17:21:28 crc kubenswrapper[4712]: E0130 17:21:28.966035 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01b5b85b-caea-4f70-a61f-875ed30f9e64" containerName="rabbitmq" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.966061 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="01b5b85b-caea-4f70-a61f-875ed30f9e64" containerName="rabbitmq" Jan 30 17:21:28 crc kubenswrapper[4712]: E0130 17:21:28.966094 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01b5b85b-caea-4f70-a61f-875ed30f9e64" containerName="setup-container" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.966102 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="01b5b85b-caea-4f70-a61f-875ed30f9e64" containerName="setup-container" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.966265 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="01b5b85b-caea-4f70-a61f-875ed30f9e64" containerName="rabbitmq" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.967246 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.974140 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.974869 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.975210 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.975401 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-hdm8z" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.975515 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.975645 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.975788 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 30 17:21:28 crc kubenswrapper[4712]: I0130 17:21:28.976363 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.128634 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3dfaa353-4f23-4dab-a7c5-6156924b9350-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3dfaa353-4f23-4dab-a7c5-6156924b9350\") " pod="openstack/rabbitmq-server-0" Jan 30 17:21:29 crc 
kubenswrapper[4712]: I0130 17:21:29.128925 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3dfaa353-4f23-4dab-a7c5-6156924b9350-config-data\") pod \"rabbitmq-server-0\" (UID: \"3dfaa353-4f23-4dab-a7c5-6156924b9350\") " pod="openstack/rabbitmq-server-0" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.128952 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cqpp\" (UniqueName: \"kubernetes.io/projected/3dfaa353-4f23-4dab-a7c5-6156924b9350-kube-api-access-4cqpp\") pod \"rabbitmq-server-0\" (UID: \"3dfaa353-4f23-4dab-a7c5-6156924b9350\") " pod="openstack/rabbitmq-server-0" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.128979 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"3dfaa353-4f23-4dab-a7c5-6156924b9350\") " pod="openstack/rabbitmq-server-0" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.129011 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3dfaa353-4f23-4dab-a7c5-6156924b9350-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3dfaa353-4f23-4dab-a7c5-6156924b9350\") " pod="openstack/rabbitmq-server-0" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.129048 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3dfaa353-4f23-4dab-a7c5-6156924b9350-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3dfaa353-4f23-4dab-a7c5-6156924b9350\") " pod="openstack/rabbitmq-server-0" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.129069 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3dfaa353-4f23-4dab-a7c5-6156924b9350-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3dfaa353-4f23-4dab-a7c5-6156924b9350\") " pod="openstack/rabbitmq-server-0" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.129267 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3dfaa353-4f23-4dab-a7c5-6156924b9350-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3dfaa353-4f23-4dab-a7c5-6156924b9350\") " pod="openstack/rabbitmq-server-0" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.129308 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3dfaa353-4f23-4dab-a7c5-6156924b9350-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3dfaa353-4f23-4dab-a7c5-6156924b9350\") " pod="openstack/rabbitmq-server-0" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.129327 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3dfaa353-4f23-4dab-a7c5-6156924b9350-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3dfaa353-4f23-4dab-a7c5-6156924b9350\") " pod="openstack/rabbitmq-server-0" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.129351 4712 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3dfaa353-4f23-4dab-a7c5-6156924b9350-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3dfaa353-4f23-4dab-a7c5-6156924b9350\") " pod="openstack/rabbitmq-server-0" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.233568 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3dfaa353-4f23-4dab-a7c5-6156924b9350-config-data\") pod \"rabbitmq-server-0\" (UID: \"3dfaa353-4f23-4dab-a7c5-6156924b9350\") " pod="openstack/rabbitmq-server-0" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.233611 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cqpp\" (UniqueName: \"kubernetes.io/projected/3dfaa353-4f23-4dab-a7c5-6156924b9350-kube-api-access-4cqpp\") pod \"rabbitmq-server-0\" (UID: \"3dfaa353-4f23-4dab-a7c5-6156924b9350\") " pod="openstack/rabbitmq-server-0" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.233639 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"3dfaa353-4f23-4dab-a7c5-6156924b9350\") " pod="openstack/rabbitmq-server-0" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.233655 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3dfaa353-4f23-4dab-a7c5-6156924b9350-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3dfaa353-4f23-4dab-a7c5-6156924b9350\") " pod="openstack/rabbitmq-server-0" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.233696 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3dfaa353-4f23-4dab-a7c5-6156924b9350-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3dfaa353-4f23-4dab-a7c5-6156924b9350\") " pod="openstack/rabbitmq-server-0" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.233714 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3dfaa353-4f23-4dab-a7c5-6156924b9350-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3dfaa353-4f23-4dab-a7c5-6156924b9350\") " pod="openstack/rabbitmq-server-0" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.233744 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3dfaa353-4f23-4dab-a7c5-6156924b9350-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3dfaa353-4f23-4dab-a7c5-6156924b9350\") " pod="openstack/rabbitmq-server-0" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.233775 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3dfaa353-4f23-4dab-a7c5-6156924b9350-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3dfaa353-4f23-4dab-a7c5-6156924b9350\") " pod="openstack/rabbitmq-server-0" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.233819 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3dfaa353-4f23-4dab-a7c5-6156924b9350-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3dfaa353-4f23-4dab-a7c5-6156924b9350\") " 
pod="openstack/rabbitmq-server-0" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.233842 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3dfaa353-4f23-4dab-a7c5-6156924b9350-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3dfaa353-4f23-4dab-a7c5-6156924b9350\") " pod="openstack/rabbitmq-server-0" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.233892 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3dfaa353-4f23-4dab-a7c5-6156924b9350-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3dfaa353-4f23-4dab-a7c5-6156924b9350\") " pod="openstack/rabbitmq-server-0" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.235321 4712 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"3dfaa353-4f23-4dab-a7c5-6156924b9350\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/rabbitmq-server-0" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.242002 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3dfaa353-4f23-4dab-a7c5-6156924b9350-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3dfaa353-4f23-4dab-a7c5-6156924b9350\") " pod="openstack/rabbitmq-server-0" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.242878 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3dfaa353-4f23-4dab-a7c5-6156924b9350-config-data\") pod \"rabbitmq-server-0\" (UID: \"3dfaa353-4f23-4dab-a7c5-6156924b9350\") " pod="openstack/rabbitmq-server-0" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.242991 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3dfaa353-4f23-4dab-a7c5-6156924b9350-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3dfaa353-4f23-4dab-a7c5-6156924b9350\") " pod="openstack/rabbitmq-server-0" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.246373 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3dfaa353-4f23-4dab-a7c5-6156924b9350-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3dfaa353-4f23-4dab-a7c5-6156924b9350\") " pod="openstack/rabbitmq-server-0" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.246938 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3dfaa353-4f23-4dab-a7c5-6156924b9350-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3dfaa353-4f23-4dab-a7c5-6156924b9350\") " pod="openstack/rabbitmq-server-0" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.247870 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3dfaa353-4f23-4dab-a7c5-6156924b9350-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3dfaa353-4f23-4dab-a7c5-6156924b9350\") " pod="openstack/rabbitmq-server-0" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.248526 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/3dfaa353-4f23-4dab-a7c5-6156924b9350-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3dfaa353-4f23-4dab-a7c5-6156924b9350\") " pod="openstack/rabbitmq-server-0" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.251493 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3dfaa353-4f23-4dab-a7c5-6156924b9350-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3dfaa353-4f23-4dab-a7c5-6156924b9350\") " pod="openstack/rabbitmq-server-0" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.254431 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3dfaa353-4f23-4dab-a7c5-6156924b9350-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3dfaa353-4f23-4dab-a7c5-6156924b9350\") " pod="openstack/rabbitmq-server-0" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.257071 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cqpp\" (UniqueName: \"kubernetes.io/projected/3dfaa353-4f23-4dab-a7c5-6156924b9350-kube-api-access-4cqpp\") pod \"rabbitmq-server-0\" (UID: \"3dfaa353-4f23-4dab-a7c5-6156924b9350\") " pod="openstack/rabbitmq-server-0" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.363138 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"rabbitmq-server-0\" (UID: \"3dfaa353-4f23-4dab-a7c5-6156924b9350\") " pod="openstack/rabbitmq-server-0" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.366598 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.564414 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.659214 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d5b67399-3a53-4694-8f1c-c04592426dcd-rabbitmq-tls\") pod \"d5b67399-3a53-4694-8f1c-c04592426dcd\" (UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") " Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.659354 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d5b67399-3a53-4694-8f1c-c04592426dcd-rabbitmq-erlang-cookie\") pod \"d5b67399-3a53-4694-8f1c-c04592426dcd\" (UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") " Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.659389 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vl69k\" (UniqueName: \"kubernetes.io/projected/d5b67399-3a53-4694-8f1c-c04592426dcd-kube-api-access-vl69k\") pod \"d5b67399-3a53-4694-8f1c-c04592426dcd\" (UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") " Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.659424 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d5b67399-3a53-4694-8f1c-c04592426dcd-pod-info\") pod \"d5b67399-3a53-4694-8f1c-c04592426dcd\" (UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") " Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.659452 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d5b67399-3a53-4694-8f1c-c04592426dcd-plugins-conf\") pod \"d5b67399-3a53-4694-8f1c-c04592426dcd\" (UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") " Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.659528 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d5b67399-3a53-4694-8f1c-c04592426dcd-config-data\") pod \"d5b67399-3a53-4694-8f1c-c04592426dcd\" (UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") " Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.659607 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"d5b67399-3a53-4694-8f1c-c04592426dcd\" (UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") " Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.659702 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d5b67399-3a53-4694-8f1c-c04592426dcd-rabbitmq-confd\") pod \"d5b67399-3a53-4694-8f1c-c04592426dcd\" (UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") " Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.659727 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d5b67399-3a53-4694-8f1c-c04592426dcd-erlang-cookie-secret\") pod \"d5b67399-3a53-4694-8f1c-c04592426dcd\" (UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") " Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.659774 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d5b67399-3a53-4694-8f1c-c04592426dcd-server-conf\") pod \"d5b67399-3a53-4694-8f1c-c04592426dcd\" (UID: 
\"d5b67399-3a53-4694-8f1c-c04592426dcd\") " Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.659827 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d5b67399-3a53-4694-8f1c-c04592426dcd-rabbitmq-plugins\") pod \"d5b67399-3a53-4694-8f1c-c04592426dcd\" (UID: \"d5b67399-3a53-4694-8f1c-c04592426dcd\") " Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.661006 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5b67399-3a53-4694-8f1c-c04592426dcd-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "d5b67399-3a53-4694-8f1c-c04592426dcd" (UID: "d5b67399-3a53-4694-8f1c-c04592426dcd"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.661183 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5b67399-3a53-4694-8f1c-c04592426dcd-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "d5b67399-3a53-4694-8f1c-c04592426dcd" (UID: "d5b67399-3a53-4694-8f1c-c04592426dcd"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.665706 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5b67399-3a53-4694-8f1c-c04592426dcd-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "d5b67399-3a53-4694-8f1c-c04592426dcd" (UID: "d5b67399-3a53-4694-8f1c-c04592426dcd"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.669378 4712 generic.go:334] "Generic (PLEG): container finished" podID="d5b67399-3a53-4694-8f1c-c04592426dcd" containerID="30a870e41b1135bc49ebd6559cdc528cc6a15945f64888aa69b8f30394d40c77" exitCode=0 Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.669417 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d5b67399-3a53-4694-8f1c-c04592426dcd","Type":"ContainerDied","Data":"30a870e41b1135bc49ebd6559cdc528cc6a15945f64888aa69b8f30394d40c77"} Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.669437 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"d5b67399-3a53-4694-8f1c-c04592426dcd","Type":"ContainerDied","Data":"dc3b4d3cd874796ccf961be5cb1179023d612a40c798e1eea1488a66b4d39742"} Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.669455 4712 scope.go:117] "RemoveContainer" containerID="30a870e41b1135bc49ebd6559cdc528cc6a15945f64888aa69b8f30394d40c77" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.669579 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.681108 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5b67399-3a53-4694-8f1c-c04592426dcd-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "d5b67399-3a53-4694-8f1c-c04592426dcd" (UID: "d5b67399-3a53-4694-8f1c-c04592426dcd"). InnerVolumeSpecName "rabbitmq-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.689764 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "persistence") pod "d5b67399-3a53-4694-8f1c-c04592426dcd" (UID: "d5b67399-3a53-4694-8f1c-c04592426dcd"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.697533 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5b67399-3a53-4694-8f1c-c04592426dcd-kube-api-access-vl69k" (OuterVolumeSpecName: "kube-api-access-vl69k") pod "d5b67399-3a53-4694-8f1c-c04592426dcd" (UID: "d5b67399-3a53-4694-8f1c-c04592426dcd"). InnerVolumeSpecName "kube-api-access-vl69k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.698162 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5b67399-3a53-4694-8f1c-c04592426dcd-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "d5b67399-3a53-4694-8f1c-c04592426dcd" (UID: "d5b67399-3a53-4694-8f1c-c04592426dcd"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.701894 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/d5b67399-3a53-4694-8f1c-c04592426dcd-pod-info" (OuterVolumeSpecName: "pod-info") pod "d5b67399-3a53-4694-8f1c-c04592426dcd" (UID: "d5b67399-3a53-4694-8f1c-c04592426dcd"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.752519 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5b67399-3a53-4694-8f1c-c04592426dcd-config-data" (OuterVolumeSpecName: "config-data") pod "d5b67399-3a53-4694-8f1c-c04592426dcd" (UID: "d5b67399-3a53-4694-8f1c-c04592426dcd"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.762110 4712 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/d5b67399-3a53-4694-8f1c-c04592426dcd-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.762131 4712 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/d5b67399-3a53-4694-8f1c-c04592426dcd-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.762141 4712 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/d5b67399-3a53-4694-8f1c-c04592426dcd-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.762149 4712 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/d5b67399-3a53-4694-8f1c-c04592426dcd-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.762158 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vl69k\" (UniqueName: \"kubernetes.io/projected/d5b67399-3a53-4694-8f1c-c04592426dcd-kube-api-access-vl69k\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.762166 4712 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/d5b67399-3a53-4694-8f1c-c04592426dcd-pod-info\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.762175 4712 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/d5b67399-3a53-4694-8f1c-c04592426dcd-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.762182 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d5b67399-3a53-4694-8f1c-c04592426dcd-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.762208 4712 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.792090 4712 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.823226 4712 scope.go:117] "RemoveContainer" containerID="a54f2f1b1572ac7848902c6c2afb8f7c794bf2545a7e8d5ffe8bb69d2425625c" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.836923 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01b5b85b-caea-4f70-a61f-875ed30f9e64" path="/var/lib/kubelet/pods/01b5b85b-caea-4f70-a61f-875ed30f9e64/volumes" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.865370 4712 scope.go:117] "RemoveContainer" containerID="30a870e41b1135bc49ebd6559cdc528cc6a15945f64888aa69b8f30394d40c77" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.865582 4712 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" 
Jan 30 17:21:29 crc kubenswrapper[4712]: E0130 17:21:29.865898 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30a870e41b1135bc49ebd6559cdc528cc6a15945f64888aa69b8f30394d40c77\": container with ID starting with 30a870e41b1135bc49ebd6559cdc528cc6a15945f64888aa69b8f30394d40c77 not found: ID does not exist" containerID="30a870e41b1135bc49ebd6559cdc528cc6a15945f64888aa69b8f30394d40c77" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.865931 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30a870e41b1135bc49ebd6559cdc528cc6a15945f64888aa69b8f30394d40c77"} err="failed to get container status \"30a870e41b1135bc49ebd6559cdc528cc6a15945f64888aa69b8f30394d40c77\": rpc error: code = NotFound desc = could not find container \"30a870e41b1135bc49ebd6559cdc528cc6a15945f64888aa69b8f30394d40c77\": container with ID starting with 30a870e41b1135bc49ebd6559cdc528cc6a15945f64888aa69b8f30394d40c77 not found: ID does not exist" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.865949 4712 scope.go:117] "RemoveContainer" containerID="a54f2f1b1572ac7848902c6c2afb8f7c794bf2545a7e8d5ffe8bb69d2425625c" Jan 30 17:21:29 crc kubenswrapper[4712]: E0130 17:21:29.866181 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a54f2f1b1572ac7848902c6c2afb8f7c794bf2545a7e8d5ffe8bb69d2425625c\": container with ID starting with a54f2f1b1572ac7848902c6c2afb8f7c794bf2545a7e8d5ffe8bb69d2425625c not found: ID does not exist" containerID="a54f2f1b1572ac7848902c6c2afb8f7c794bf2545a7e8d5ffe8bb69d2425625c" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.866216 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a54f2f1b1572ac7848902c6c2afb8f7c794bf2545a7e8d5ffe8bb69d2425625c"} err="failed to get container status \"a54f2f1b1572ac7848902c6c2afb8f7c794bf2545a7e8d5ffe8bb69d2425625c\": rpc error: code = NotFound desc = could not find container \"a54f2f1b1572ac7848902c6c2afb8f7c794bf2545a7e8d5ffe8bb69d2425625c\": container with ID starting with a54f2f1b1572ac7848902c6c2afb8f7c794bf2545a7e8d5ffe8bb69d2425625c not found: ID does not exist" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.868740 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5b67399-3a53-4694-8f1c-c04592426dcd-server-conf" (OuterVolumeSpecName: "server-conf") pod "d5b67399-3a53-4694-8f1c-c04592426dcd" (UID: "d5b67399-3a53-4694-8f1c-c04592426dcd"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.941327 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5b67399-3a53-4694-8f1c-c04592426dcd-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "d5b67399-3a53-4694-8f1c-c04592426dcd" (UID: "d5b67399-3a53-4694-8f1c-c04592426dcd"). InnerVolumeSpecName "rabbitmq-confd". 
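
The two "ContainerStatus from runtime service failed ... NotFound" errors above are benign: the containers were already removed, and the follow-up status query races with the deletion. The usual client-side pattern treats NotFound as success for delete-style operations, since the desired state ("container absent") already holds. A minimal sketch of that pattern against a gRPC-style error (the removeContainer stub is assumed, not a real CRI client):

    package main

    import (
        "fmt"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // removeContainer stands in for a CRI call; it simulates the race seen
    // in the log, where the container is gone by the time the call lands.
    func removeContainer(id string) error {
        return status.Error(codes.NotFound, "could not find container "+id)
    }

    // deleteIdempotent treats NotFound as success: there is nothing left
    // to do, exactly as the kubelet proceeds above after logging the error.
    func deleteIdempotent(id string) error {
        err := removeContainer(id)
        if status.Code(err) == codes.NotFound {
            return nil // already deleted: benign
        }
        return err
    }

    func main() {
        if err := deleteIdempotent("30a870e41b1135bc49ebd6559cdc528cc6a15945f64888aa69b8f30394d40c77"); err != nil {
            fmt.Println("delete failed:", err)
            return
        }
        fmt.Println("container absent, as desired")
    }
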
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.967676 4712 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/d5b67399-3a53-4694-8f1c-c04592426dcd-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:29 crc kubenswrapper[4712]: I0130 17:21:29.967712 4712 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/d5b67399-3a53-4694-8f1c-c04592426dcd-server-conf\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.016750 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.046485 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.072301 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 17:21:30 crc kubenswrapper[4712]: E0130 17:21:30.072714 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5b67399-3a53-4694-8f1c-c04592426dcd" containerName="setup-container" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.072731 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5b67399-3a53-4694-8f1c-c04592426dcd" containerName="setup-container" Jan 30 17:21:30 crc kubenswrapper[4712]: E0130 17:21:30.072819 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5b67399-3a53-4694-8f1c-c04592426dcd" containerName="rabbitmq" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.072827 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5b67399-3a53-4694-8f1c-c04592426dcd" containerName="rabbitmq" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.073016 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5b67399-3a53-4694-8f1c-c04592426dcd" containerName="rabbitmq" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.073984 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.088057 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.090975 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.091181 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.091371 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.091422 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.091560 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.091661 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-rj892" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.092168 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.115110 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.171999 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czff6\" (UniqueName: \"kubernetes.io/projected/f7ee8a13-933e-462b-956a-0dae66b09f01-kube-api-access-czff6\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7ee8a13-933e-462b-956a-0dae66b09f01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.172097 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f7ee8a13-933e-462b-956a-0dae66b09f01-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7ee8a13-933e-462b-956a-0dae66b09f01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.172146 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f7ee8a13-933e-462b-956a-0dae66b09f01-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7ee8a13-933e-462b-956a-0dae66b09f01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.172173 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f7ee8a13-933e-462b-956a-0dae66b09f01-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7ee8a13-933e-462b-956a-0dae66b09f01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.172245 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f7ee8a13-933e-462b-956a-0dae66b09f01-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7ee8a13-933e-462b-956a-0dae66b09f01\") " 
pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.172263 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f7ee8a13-933e-462b-956a-0dae66b09f01-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7ee8a13-933e-462b-956a-0dae66b09f01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.172331 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f7ee8a13-933e-462b-956a-0dae66b09f01-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7ee8a13-933e-462b-956a-0dae66b09f01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.172384 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7ee8a13-933e-462b-956a-0dae66b09f01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.172408 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f7ee8a13-933e-462b-956a-0dae66b09f01-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7ee8a13-933e-462b-956a-0dae66b09f01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.172482 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7ee8a13-933e-462b-956a-0dae66b09f01-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7ee8a13-933e-462b-956a-0dae66b09f01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.172533 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f7ee8a13-933e-462b-956a-0dae66b09f01-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7ee8a13-933e-462b-956a-0dae66b09f01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.257655 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-vlrn2"] Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.259520 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-vlrn2" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.263855 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.278350 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-vlrn2"] Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.285825 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f7ee8a13-933e-462b-956a-0dae66b09f01-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7ee8a13-933e-462b-956a-0dae66b09f01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.287328 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f7ee8a13-933e-462b-956a-0dae66b09f01-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7ee8a13-933e-462b-956a-0dae66b09f01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.287558 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f7ee8a13-933e-462b-956a-0dae66b09f01-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7ee8a13-933e-462b-956a-0dae66b09f01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.288205 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f7ee8a13-933e-462b-956a-0dae66b09f01-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7ee8a13-933e-462b-956a-0dae66b09f01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.288342 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f7ee8a13-933e-462b-956a-0dae66b09f01-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7ee8a13-933e-462b-956a-0dae66b09f01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.288408 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f7ee8a13-933e-462b-956a-0dae66b09f01-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7ee8a13-933e-462b-956a-0dae66b09f01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.288545 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f7ee8a13-933e-462b-956a-0dae66b09f01-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7ee8a13-933e-462b-956a-0dae66b09f01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.288652 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7ee8a13-933e-462b-956a-0dae66b09f01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.288729 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" 
(UniqueName: \"kubernetes.io/empty-dir/f7ee8a13-933e-462b-956a-0dae66b09f01-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7ee8a13-933e-462b-956a-0dae66b09f01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.288892 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7ee8a13-933e-462b-956a-0dae66b09f01-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7ee8a13-933e-462b-956a-0dae66b09f01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.288983 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f7ee8a13-933e-462b-956a-0dae66b09f01-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7ee8a13-933e-462b-956a-0dae66b09f01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.289047 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czff6\" (UniqueName: \"kubernetes.io/projected/f7ee8a13-933e-462b-956a-0dae66b09f01-kube-api-access-czff6\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7ee8a13-933e-462b-956a-0dae66b09f01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.289168 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f7ee8a13-933e-462b-956a-0dae66b09f01-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7ee8a13-933e-462b-956a-0dae66b09f01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.290298 4712 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7ee8a13-933e-462b-956a-0dae66b09f01\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.293197 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f7ee8a13-933e-462b-956a-0dae66b09f01-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7ee8a13-933e-462b-956a-0dae66b09f01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.294179 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f7ee8a13-933e-462b-956a-0dae66b09f01-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7ee8a13-933e-462b-956a-0dae66b09f01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.294581 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f7ee8a13-933e-462b-956a-0dae66b09f01-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7ee8a13-933e-462b-956a-0dae66b09f01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.316506 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f7ee8a13-933e-462b-956a-0dae66b09f01-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"f7ee8a13-933e-462b-956a-0dae66b09f01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.316913 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7ee8a13-933e-462b-956a-0dae66b09f01-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7ee8a13-933e-462b-956a-0dae66b09f01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.328455 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f7ee8a13-933e-462b-956a-0dae66b09f01-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7ee8a13-933e-462b-956a-0dae66b09f01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.328819 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f7ee8a13-933e-462b-956a-0dae66b09f01-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7ee8a13-933e-462b-956a-0dae66b09f01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.333560 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czff6\" (UniqueName: \"kubernetes.io/projected/f7ee8a13-933e-462b-956a-0dae66b09f01-kube-api-access-czff6\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7ee8a13-933e-462b-956a-0dae66b09f01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.368868 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"f7ee8a13-933e-462b-956a-0dae66b09f01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.391251 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59b0372a-c2ab-4955-96db-f7918c018f59-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-vlrn2\" (UID: \"59b0372a-c2ab-4955-96db-f7918c018f59\") " pod="openstack/dnsmasq-dns-7d84b4d45c-vlrn2" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.391345 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59b0372a-c2ab-4955-96db-f7918c018f59-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-vlrn2\" (UID: \"59b0372a-c2ab-4955-96db-f7918c018f59\") " pod="openstack/dnsmasq-dns-7d84b4d45c-vlrn2" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.391373 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59b0372a-c2ab-4955-96db-f7918c018f59-config\") pod \"dnsmasq-dns-7d84b4d45c-vlrn2\" (UID: \"59b0372a-c2ab-4955-96db-f7918c018f59\") " pod="openstack/dnsmasq-dns-7d84b4d45c-vlrn2" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.391428 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48wzb\" (UniqueName: \"kubernetes.io/projected/59b0372a-c2ab-4955-96db-f7918c018f59-kube-api-access-48wzb\") pod \"dnsmasq-dns-7d84b4d45c-vlrn2\" (UID: \"59b0372a-c2ab-4955-96db-f7918c018f59\") " pod="openstack/dnsmasq-dns-7d84b4d45c-vlrn2" Jan 30 
17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.391460 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59b0372a-c2ab-4955-96db-f7918c018f59-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-vlrn2\" (UID: \"59b0372a-c2ab-4955-96db-f7918c018f59\") " pod="openstack/dnsmasq-dns-7d84b4d45c-vlrn2" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.391501 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/59b0372a-c2ab-4955-96db-f7918c018f59-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-vlrn2\" (UID: \"59b0372a-c2ab-4955-96db-f7918c018f59\") " pod="openstack/dnsmasq-dns-7d84b4d45c-vlrn2" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.391634 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59b0372a-c2ab-4955-96db-f7918c018f59-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-vlrn2\" (UID: \"59b0372a-c2ab-4955-96db-f7918c018f59\") " pod="openstack/dnsmasq-dns-7d84b4d45c-vlrn2" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.454515 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.493047 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/59b0372a-c2ab-4955-96db-f7918c018f59-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-vlrn2\" (UID: \"59b0372a-c2ab-4955-96db-f7918c018f59\") " pod="openstack/dnsmasq-dns-7d84b4d45c-vlrn2" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.493134 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59b0372a-c2ab-4955-96db-f7918c018f59-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-vlrn2\" (UID: \"59b0372a-c2ab-4955-96db-f7918c018f59\") " pod="openstack/dnsmasq-dns-7d84b4d45c-vlrn2" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.493209 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59b0372a-c2ab-4955-96db-f7918c018f59-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-vlrn2\" (UID: \"59b0372a-c2ab-4955-96db-f7918c018f59\") " pod="openstack/dnsmasq-dns-7d84b4d45c-vlrn2" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.493240 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59b0372a-c2ab-4955-96db-f7918c018f59-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-vlrn2\" (UID: \"59b0372a-c2ab-4955-96db-f7918c018f59\") " pod="openstack/dnsmasq-dns-7d84b4d45c-vlrn2" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.493261 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59b0372a-c2ab-4955-96db-f7918c018f59-config\") pod \"dnsmasq-dns-7d84b4d45c-vlrn2\" (UID: \"59b0372a-c2ab-4955-96db-f7918c018f59\") " pod="openstack/dnsmasq-dns-7d84b4d45c-vlrn2" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.493310 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48wzb\" (UniqueName: 
\"kubernetes.io/projected/59b0372a-c2ab-4955-96db-f7918c018f59-kube-api-access-48wzb\") pod \"dnsmasq-dns-7d84b4d45c-vlrn2\" (UID: \"59b0372a-c2ab-4955-96db-f7918c018f59\") " pod="openstack/dnsmasq-dns-7d84b4d45c-vlrn2" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.493339 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59b0372a-c2ab-4955-96db-f7918c018f59-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-vlrn2\" (UID: \"59b0372a-c2ab-4955-96db-f7918c018f59\") " pod="openstack/dnsmasq-dns-7d84b4d45c-vlrn2" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.494762 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/59b0372a-c2ab-4955-96db-f7918c018f59-openstack-edpm-ipam\") pod \"dnsmasq-dns-7d84b4d45c-vlrn2\" (UID: \"59b0372a-c2ab-4955-96db-f7918c018f59\") " pod="openstack/dnsmasq-dns-7d84b4d45c-vlrn2" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.499604 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59b0372a-c2ab-4955-96db-f7918c018f59-config\") pod \"dnsmasq-dns-7d84b4d45c-vlrn2\" (UID: \"59b0372a-c2ab-4955-96db-f7918c018f59\") " pod="openstack/dnsmasq-dns-7d84b4d45c-vlrn2" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.499775 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59b0372a-c2ab-4955-96db-f7918c018f59-dns-swift-storage-0\") pod \"dnsmasq-dns-7d84b4d45c-vlrn2\" (UID: \"59b0372a-c2ab-4955-96db-f7918c018f59\") " pod="openstack/dnsmasq-dns-7d84b4d45c-vlrn2" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.500180 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59b0372a-c2ab-4955-96db-f7918c018f59-ovsdbserver-sb\") pod \"dnsmasq-dns-7d84b4d45c-vlrn2\" (UID: \"59b0372a-c2ab-4955-96db-f7918c018f59\") " pod="openstack/dnsmasq-dns-7d84b4d45c-vlrn2" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.500805 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59b0372a-c2ab-4955-96db-f7918c018f59-dns-svc\") pod \"dnsmasq-dns-7d84b4d45c-vlrn2\" (UID: \"59b0372a-c2ab-4955-96db-f7918c018f59\") " pod="openstack/dnsmasq-dns-7d84b4d45c-vlrn2" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.501736 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59b0372a-c2ab-4955-96db-f7918c018f59-ovsdbserver-nb\") pod \"dnsmasq-dns-7d84b4d45c-vlrn2\" (UID: \"59b0372a-c2ab-4955-96db-f7918c018f59\") " pod="openstack/dnsmasq-dns-7d84b4d45c-vlrn2" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.515420 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48wzb\" (UniqueName: \"kubernetes.io/projected/59b0372a-c2ab-4955-96db-f7918c018f59-kube-api-access-48wzb\") pod \"dnsmasq-dns-7d84b4d45c-vlrn2\" (UID: \"59b0372a-c2ab-4955-96db-f7918c018f59\") " pod="openstack/dnsmasq-dns-7d84b4d45c-vlrn2" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.684201 4712 generic.go:334] "Generic (PLEG): container finished" podID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerID="4b3189dc0e5e95f56ff7a7ab4af993cd6a3c5a0280c5d94b9bafcc777d386ef8" exitCode=137 Jan 30 17:21:30 
crc kubenswrapper[4712]: I0130 17:21:30.684450 4712 generic.go:334] "Generic (PLEG): container finished" podID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerID="e7f65e9725996b5430c165272394642af4b0191e34340a9577ad618356814e4b" exitCode=137 Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.684386 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56f8b66d48-7wr47" event={"ID":"70154dd8-9d42-4a12-af9b-1be723ef892e","Type":"ContainerDied","Data":"4b3189dc0e5e95f56ff7a7ab4af993cd6a3c5a0280c5d94b9bafcc777d386ef8"} Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.684504 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56f8b66d48-7wr47" event={"ID":"70154dd8-9d42-4a12-af9b-1be723ef892e","Type":"ContainerDied","Data":"e7f65e9725996b5430c165272394642af4b0191e34340a9577ad618356814e4b"} Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.684520 4712 scope.go:117] "RemoveContainer" containerID="33da2560c2b92663910c7a5cee80606f93009c5b03eae1dcf70e4946299645fb" Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.686432 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3dfaa353-4f23-4dab-a7c5-6156924b9350","Type":"ContainerStarted","Data":"3e43267be598f5c466c113b64ef5419b32afca75e1fc1819623c589a12eb4e95"} Jan 30 17:21:30 crc kubenswrapper[4712]: I0130 17:21:30.733638 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-vlrn2" Jan 30 17:21:31 crc kubenswrapper[4712]: I0130 17:21:31.047019 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 30 17:21:31 crc kubenswrapper[4712]: W0130 17:21:31.056882 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7ee8a13_933e_462b_956a_0dae66b09f01.slice/crio-7a74b4174435f9197c79658c86986b003502f21bdbe8b592c4289bb13dcfdc2d WatchSource:0}: Error finding container 7a74b4174435f9197c79658c86986b003502f21bdbe8b592c4289bb13dcfdc2d: Status 404 returned error can't find the container with id 7a74b4174435f9197c79658c86986b003502f21bdbe8b592c4289bb13dcfdc2d Jan 30 17:21:31 crc kubenswrapper[4712]: I0130 17:21:31.121733 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-56f8b66d48-7wr47" Jan 30 17:21:31 crc kubenswrapper[4712]: I0130 17:21:31.209514 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/70154dd8-9d42-4a12-af9b-1be723ef892e-scripts\") pod \"70154dd8-9d42-4a12-af9b-1be723ef892e\" (UID: \"70154dd8-9d42-4a12-af9b-1be723ef892e\") " Jan 30 17:21:31 crc kubenswrapper[4712]: I0130 17:21:31.209764 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70154dd8-9d42-4a12-af9b-1be723ef892e-combined-ca-bundle\") pod \"70154dd8-9d42-4a12-af9b-1be723ef892e\" (UID: \"70154dd8-9d42-4a12-af9b-1be723ef892e\") " Jan 30 17:21:31 crc kubenswrapper[4712]: I0130 17:21:31.209932 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/70154dd8-9d42-4a12-af9b-1be723ef892e-config-data\") pod \"70154dd8-9d42-4a12-af9b-1be723ef892e\" (UID: \"70154dd8-9d42-4a12-af9b-1be723ef892e\") " Jan 30 17:21:31 crc kubenswrapper[4712]: I0130 17:21:31.209965 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/70154dd8-9d42-4a12-af9b-1be723ef892e-horizon-secret-key\") pod \"70154dd8-9d42-4a12-af9b-1be723ef892e\" (UID: \"70154dd8-9d42-4a12-af9b-1be723ef892e\") " Jan 30 17:21:31 crc kubenswrapper[4712]: I0130 17:21:31.210065 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nz64\" (UniqueName: \"kubernetes.io/projected/70154dd8-9d42-4a12-af9b-1be723ef892e-kube-api-access-4nz64\") pod \"70154dd8-9d42-4a12-af9b-1be723ef892e\" (UID: \"70154dd8-9d42-4a12-af9b-1be723ef892e\") " Jan 30 17:21:31 crc kubenswrapper[4712]: I0130 17:21:31.210089 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/70154dd8-9d42-4a12-af9b-1be723ef892e-horizon-tls-certs\") pod \"70154dd8-9d42-4a12-af9b-1be723ef892e\" (UID: \"70154dd8-9d42-4a12-af9b-1be723ef892e\") " Jan 30 17:21:31 crc kubenswrapper[4712]: I0130 17:21:31.210123 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70154dd8-9d42-4a12-af9b-1be723ef892e-logs\") pod \"70154dd8-9d42-4a12-af9b-1be723ef892e\" (UID: \"70154dd8-9d42-4a12-af9b-1be723ef892e\") " Jan 30 17:21:31 crc kubenswrapper[4712]: I0130 17:21:31.211630 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70154dd8-9d42-4a12-af9b-1be723ef892e-logs" (OuterVolumeSpecName: "logs") pod "70154dd8-9d42-4a12-af9b-1be723ef892e" (UID: "70154dd8-9d42-4a12-af9b-1be723ef892e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:21:31 crc kubenswrapper[4712]: I0130 17:21:31.263920 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70154dd8-9d42-4a12-af9b-1be723ef892e-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "70154dd8-9d42-4a12-af9b-1be723ef892e" (UID: "70154dd8-9d42-4a12-af9b-1be723ef892e"). InnerVolumeSpecName "horizon-secret-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:21:31 crc kubenswrapper[4712]: I0130 17:21:31.312482 4712 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/70154dd8-9d42-4a12-af9b-1be723ef892e-logs\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:31 crc kubenswrapper[4712]: I0130 17:21:31.312516 4712 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/70154dd8-9d42-4a12-af9b-1be723ef892e-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:31 crc kubenswrapper[4712]: I0130 17:21:31.359764 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70154dd8-9d42-4a12-af9b-1be723ef892e-kube-api-access-4nz64" (OuterVolumeSpecName: "kube-api-access-4nz64") pod "70154dd8-9d42-4a12-af9b-1be723ef892e" (UID: "70154dd8-9d42-4a12-af9b-1be723ef892e"). InnerVolumeSpecName "kube-api-access-4nz64". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:21:31 crc kubenswrapper[4712]: I0130 17:21:31.416599 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nz64\" (UniqueName: \"kubernetes.io/projected/70154dd8-9d42-4a12-af9b-1be723ef892e-kube-api-access-4nz64\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:31 crc kubenswrapper[4712]: I0130 17:21:31.580728 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70154dd8-9d42-4a12-af9b-1be723ef892e-scripts" (OuterVolumeSpecName: "scripts") pod "70154dd8-9d42-4a12-af9b-1be723ef892e" (UID: "70154dd8-9d42-4a12-af9b-1be723ef892e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:21:31 crc kubenswrapper[4712]: I0130 17:21:31.620353 4712 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/70154dd8-9d42-4a12-af9b-1be723ef892e-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:31 crc kubenswrapper[4712]: I0130 17:21:31.629397 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-vlrn2"] Jan 30 17:21:31 crc kubenswrapper[4712]: I0130 17:21:31.700451 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f7ee8a13-933e-462b-956a-0dae66b09f01","Type":"ContainerStarted","Data":"7a74b4174435f9197c79658c86986b003502f21bdbe8b592c4289bb13dcfdc2d"} Jan 30 17:21:31 crc kubenswrapper[4712]: I0130 17:21:31.702346 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-vlrn2" event={"ID":"59b0372a-c2ab-4955-96db-f7918c018f59","Type":"ContainerStarted","Data":"f205c806fdfecef3fee920bde5b3dd59b08d1c80d6abcc848da0a005327f554e"} Jan 30 17:21:31 crc kubenswrapper[4712]: I0130 17:21:31.705158 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-56f8b66d48-7wr47" Jan 30 17:21:31 crc kubenswrapper[4712]: I0130 17:21:31.705166 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-56f8b66d48-7wr47" event={"ID":"70154dd8-9d42-4a12-af9b-1be723ef892e","Type":"ContainerDied","Data":"83e39abd704b4fd2a6badab202bb020c12313733ad1995a8eaa85b2d67860e22"} Jan 30 17:21:31 crc kubenswrapper[4712]: I0130 17:21:31.705329 4712 scope.go:117] "RemoveContainer" containerID="4b3189dc0e5e95f56ff7a7ab4af993cd6a3c5a0280c5d94b9bafcc777d386ef8" Jan 30 17:21:31 crc kubenswrapper[4712]: I0130 17:21:31.812656 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5b67399-3a53-4694-8f1c-c04592426dcd" path="/var/lib/kubelet/pods/d5b67399-3a53-4694-8f1c-c04592426dcd/volumes" Jan 30 17:21:31 crc kubenswrapper[4712]: I0130 17:21:31.884331 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/70154dd8-9d42-4a12-af9b-1be723ef892e-config-data" (OuterVolumeSpecName: "config-data") pod "70154dd8-9d42-4a12-af9b-1be723ef892e" (UID: "70154dd8-9d42-4a12-af9b-1be723ef892e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:21:31 crc kubenswrapper[4712]: I0130 17:21:31.885972 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70154dd8-9d42-4a12-af9b-1be723ef892e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "70154dd8-9d42-4a12-af9b-1be723ef892e" (UID: "70154dd8-9d42-4a12-af9b-1be723ef892e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:21:31 crc kubenswrapper[4712]: I0130 17:21:31.908388 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/70154dd8-9d42-4a12-af9b-1be723ef892e-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "70154dd8-9d42-4a12-af9b-1be723ef892e" (UID: "70154dd8-9d42-4a12-af9b-1be723ef892e"). InnerVolumeSpecName "horizon-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:21:31 crc kubenswrapper[4712]: I0130 17:21:31.926567 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/70154dd8-9d42-4a12-af9b-1be723ef892e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:31 crc kubenswrapper[4712]: I0130 17:21:31.926613 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/70154dd8-9d42-4a12-af9b-1be723ef892e-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:31 crc kubenswrapper[4712]: I0130 17:21:31.926626 4712 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/70154dd8-9d42-4a12-af9b-1be723ef892e-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:32 crc kubenswrapper[4712]: I0130 17:21:32.042048 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-56f8b66d48-7wr47"] Jan 30 17:21:32 crc kubenswrapper[4712]: I0130 17:21:32.052990 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-56f8b66d48-7wr47"] Jan 30 17:21:32 crc kubenswrapper[4712]: I0130 17:21:32.138058 4712 scope.go:117] "RemoveContainer" containerID="e7f65e9725996b5430c165272394642af4b0191e34340a9577ad618356814e4b" Jan 30 17:21:32 crc kubenswrapper[4712]: I0130 17:21:32.715965 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3dfaa353-4f23-4dab-a7c5-6156924b9350","Type":"ContainerStarted","Data":"677d9527eb09131fcb6d9f2f7faaa8904e71915a2b50e45403289c920b3bec8b"} Jan 30 17:21:32 crc kubenswrapper[4712]: I0130 17:21:32.718683 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f7ee8a13-933e-462b-956a-0dae66b09f01","Type":"ContainerStarted","Data":"e0157aebbc7ca7beecf673c336fc0f3661cd75d149b6f9bf52df201e0c50617e"} Jan 30 17:21:32 crc kubenswrapper[4712]: I0130 17:21:32.722041 4712 generic.go:334] "Generic (PLEG): container finished" podID="59b0372a-c2ab-4955-96db-f7918c018f59" containerID="ba2fcc528d4f6477fd2f65af3d2139bf1929db5f315fd8f4a1d5dd26f9b269cc" exitCode=0 Jan 30 17:21:32 crc kubenswrapper[4712]: I0130 17:21:32.722071 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-vlrn2" event={"ID":"59b0372a-c2ab-4955-96db-f7918c018f59","Type":"ContainerDied","Data":"ba2fcc528d4f6477fd2f65af3d2139bf1929db5f315fd8f4a1d5dd26f9b269cc"} Jan 30 17:21:33 crc kubenswrapper[4712]: I0130 17:21:33.736412 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-vlrn2" event={"ID":"59b0372a-c2ab-4955-96db-f7918c018f59","Type":"ContainerStarted","Data":"30a5a8b4625b954fe579308f1c8795608735bb280550a03b26c206d48d828790"} Jan 30 17:21:33 crc kubenswrapper[4712]: I0130 17:21:33.762838 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7d84b4d45c-vlrn2" podStartSLOduration=3.762789272 podStartE2EDuration="3.762789272s" podCreationTimestamp="2026-01-30 17:21:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:21:33.759457071 +0000 UTC m=+1630.666466550" watchObservedRunningTime="2026-01-30 17:21:33.762789272 +0000 UTC m=+1630.669798751" Jan 30 17:21:33 crc kubenswrapper[4712]: I0130 17:21:33.815449 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="70154dd8-9d42-4a12-af9b-1be723ef892e" path="/var/lib/kubelet/pods/70154dd8-9d42-4a12-af9b-1be723ef892e/volumes" Jan 30 17:21:34 crc kubenswrapper[4712]: I0130 17:21:34.761904 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7d84b4d45c-vlrn2" Jan 30 17:21:35 crc kubenswrapper[4712]: I0130 17:21:35.799573 4712 scope.go:117] "RemoveContainer" containerID="261650ec2eba774e22792202e452b7b775a9451e27eb25c95a23e9f7a7f63330" Jan 30 17:21:35 crc kubenswrapper[4712]: E0130 17:21:35.799958 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:21:35 crc kubenswrapper[4712]: I0130 17:21:35.924199 4712 scope.go:117] "RemoveContainer" containerID="203f921b777a111d157536b392e36f4480f132934bed3c262a83b3ae4fa5fbe2" Jan 30 17:21:35 crc kubenswrapper[4712]: I0130 17:21:35.954312 4712 scope.go:117] "RemoveContainer" containerID="6b1338b18a4ad5f3e9405fd9439035f013b0591a978c5fddbbb84b304a0b47e1" Jan 30 17:21:36 crc kubenswrapper[4712]: I0130 17:21:36.042693 4712 scope.go:117] "RemoveContainer" containerID="694c98386931ad5c85c548dbbce61d4788ebed927a8acb5d0982c0fe4719f188" Jan 30 17:21:36 crc kubenswrapper[4712]: I0130 17:21:36.073651 4712 scope.go:117] "RemoveContainer" containerID="1ede7b2f9b14ef955d37db9dfacc2cbd61eb73decc65aee52e83fe5bc65c747e" Jan 30 17:21:36 crc kubenswrapper[4712]: I0130 17:21:36.114573 4712 scope.go:117] "RemoveContainer" containerID="8558901fcf7ccc5260b00e0f57c2e854f5c9fcb998f2224c46191b11d94562a5" Jan 30 17:21:37 crc kubenswrapper[4712]: I0130 17:21:37.482868 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nlns9" Jan 30 17:21:37 crc kubenswrapper[4712]: I0130 17:21:37.538874 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nlns9"] Jan 30 17:21:37 crc kubenswrapper[4712]: I0130 17:21:37.794355 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nlns9" podUID="9be8718d-8f88-4536-a5d3-29cdd21959b9" containerName="registry-server" containerID="cri-o://458cb8b388352a473fdcbf7687dcc68a672a0e5367941a0771f3d477e78387fb" gracePeriod=2 Jan 30 17:21:38 crc kubenswrapper[4712]: I0130 17:21:38.271609 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-nlns9" Jan 30 17:21:38 crc kubenswrapper[4712]: I0130 17:21:38.363915 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9be8718d-8f88-4536-a5d3-29cdd21959b9-utilities\") pod \"9be8718d-8f88-4536-a5d3-29cdd21959b9\" (UID: \"9be8718d-8f88-4536-a5d3-29cdd21959b9\") " Jan 30 17:21:38 crc kubenswrapper[4712]: I0130 17:21:38.364041 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dkk7s\" (UniqueName: \"kubernetes.io/projected/9be8718d-8f88-4536-a5d3-29cdd21959b9-kube-api-access-dkk7s\") pod \"9be8718d-8f88-4536-a5d3-29cdd21959b9\" (UID: \"9be8718d-8f88-4536-a5d3-29cdd21959b9\") " Jan 30 17:21:38 crc kubenswrapper[4712]: I0130 17:21:38.364212 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9be8718d-8f88-4536-a5d3-29cdd21959b9-catalog-content\") pod \"9be8718d-8f88-4536-a5d3-29cdd21959b9\" (UID: \"9be8718d-8f88-4536-a5d3-29cdd21959b9\") " Jan 30 17:21:38 crc kubenswrapper[4712]: I0130 17:21:38.365895 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9be8718d-8f88-4536-a5d3-29cdd21959b9-utilities" (OuterVolumeSpecName: "utilities") pod "9be8718d-8f88-4536-a5d3-29cdd21959b9" (UID: "9be8718d-8f88-4536-a5d3-29cdd21959b9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:21:38 crc kubenswrapper[4712]: I0130 17:21:38.371776 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9be8718d-8f88-4536-a5d3-29cdd21959b9-kube-api-access-dkk7s" (OuterVolumeSpecName: "kube-api-access-dkk7s") pod "9be8718d-8f88-4536-a5d3-29cdd21959b9" (UID: "9be8718d-8f88-4536-a5d3-29cdd21959b9"). InnerVolumeSpecName "kube-api-access-dkk7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:21:38 crc kubenswrapper[4712]: I0130 17:21:38.428176 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9be8718d-8f88-4536-a5d3-29cdd21959b9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9be8718d-8f88-4536-a5d3-29cdd21959b9" (UID: "9be8718d-8f88-4536-a5d3-29cdd21959b9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:21:38 crc kubenswrapper[4712]: I0130 17:21:38.466404 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9be8718d-8f88-4536-a5d3-29cdd21959b9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:38 crc kubenswrapper[4712]: I0130 17:21:38.466450 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9be8718d-8f88-4536-a5d3-29cdd21959b9-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:38 crc kubenswrapper[4712]: I0130 17:21:38.466461 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dkk7s\" (UniqueName: \"kubernetes.io/projected/9be8718d-8f88-4536-a5d3-29cdd21959b9-kube-api-access-dkk7s\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:38 crc kubenswrapper[4712]: I0130 17:21:38.808757 4712 generic.go:334] "Generic (PLEG): container finished" podID="9be8718d-8f88-4536-a5d3-29cdd21959b9" containerID="458cb8b388352a473fdcbf7687dcc68a672a0e5367941a0771f3d477e78387fb" exitCode=0 Jan 30 17:21:38 crc kubenswrapper[4712]: I0130 17:21:38.808848 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlns9" event={"ID":"9be8718d-8f88-4536-a5d3-29cdd21959b9","Type":"ContainerDied","Data":"458cb8b388352a473fdcbf7687dcc68a672a0e5367941a0771f3d477e78387fb"} Jan 30 17:21:38 crc kubenswrapper[4712]: I0130 17:21:38.808945 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nlns9" Jan 30 17:21:38 crc kubenswrapper[4712]: I0130 17:21:38.809011 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nlns9" event={"ID":"9be8718d-8f88-4536-a5d3-29cdd21959b9","Type":"ContainerDied","Data":"30464b5d53e7578415eaa4f7c2f4551c5489f5578a1bba3432d17e4713d51269"} Jan 30 17:21:38 crc kubenswrapper[4712]: I0130 17:21:38.809067 4712 scope.go:117] "RemoveContainer" containerID="458cb8b388352a473fdcbf7687dcc68a672a0e5367941a0771f3d477e78387fb" Jan 30 17:21:38 crc kubenswrapper[4712]: I0130 17:21:38.832346 4712 scope.go:117] "RemoveContainer" containerID="0968b5e95029dfbb7f4b7569c147b93418a9138e4f4658e6b0aa217166bb3061" Jan 30 17:21:38 crc kubenswrapper[4712]: I0130 17:21:38.869845 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nlns9"] Jan 30 17:21:38 crc kubenswrapper[4712]: I0130 17:21:38.872216 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nlns9"] Jan 30 17:21:38 crc kubenswrapper[4712]: I0130 17:21:38.872706 4712 scope.go:117] "RemoveContainer" containerID="03bb7ed7b38cc376fa14d718b99371bbc394710bd3b8394c553640ed56cc1b7d" Jan 30 17:21:38 crc kubenswrapper[4712]: I0130 17:21:38.930953 4712 scope.go:117] "RemoveContainer" containerID="458cb8b388352a473fdcbf7687dcc68a672a0e5367941a0771f3d477e78387fb" Jan 30 17:21:38 crc kubenswrapper[4712]: E0130 17:21:38.931410 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"458cb8b388352a473fdcbf7687dcc68a672a0e5367941a0771f3d477e78387fb\": container with ID starting with 458cb8b388352a473fdcbf7687dcc68a672a0e5367941a0771f3d477e78387fb not found: ID does not exist" containerID="458cb8b388352a473fdcbf7687dcc68a672a0e5367941a0771f3d477e78387fb" Jan 30 17:21:38 crc kubenswrapper[4712]: I0130 17:21:38.931488 
4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"458cb8b388352a473fdcbf7687dcc68a672a0e5367941a0771f3d477e78387fb"} err="failed to get container status \"458cb8b388352a473fdcbf7687dcc68a672a0e5367941a0771f3d477e78387fb\": rpc error: code = NotFound desc = could not find container \"458cb8b388352a473fdcbf7687dcc68a672a0e5367941a0771f3d477e78387fb\": container with ID starting with 458cb8b388352a473fdcbf7687dcc68a672a0e5367941a0771f3d477e78387fb not found: ID does not exist" Jan 30 17:21:38 crc kubenswrapper[4712]: I0130 17:21:38.931527 4712 scope.go:117] "RemoveContainer" containerID="0968b5e95029dfbb7f4b7569c147b93418a9138e4f4658e6b0aa217166bb3061" Jan 30 17:21:38 crc kubenswrapper[4712]: E0130 17:21:38.932187 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0968b5e95029dfbb7f4b7569c147b93418a9138e4f4658e6b0aa217166bb3061\": container with ID starting with 0968b5e95029dfbb7f4b7569c147b93418a9138e4f4658e6b0aa217166bb3061 not found: ID does not exist" containerID="0968b5e95029dfbb7f4b7569c147b93418a9138e4f4658e6b0aa217166bb3061" Jan 30 17:21:38 crc kubenswrapper[4712]: I0130 17:21:38.932251 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0968b5e95029dfbb7f4b7569c147b93418a9138e4f4658e6b0aa217166bb3061"} err="failed to get container status \"0968b5e95029dfbb7f4b7569c147b93418a9138e4f4658e6b0aa217166bb3061\": rpc error: code = NotFound desc = could not find container \"0968b5e95029dfbb7f4b7569c147b93418a9138e4f4658e6b0aa217166bb3061\": container with ID starting with 0968b5e95029dfbb7f4b7569c147b93418a9138e4f4658e6b0aa217166bb3061 not found: ID does not exist" Jan 30 17:21:38 crc kubenswrapper[4712]: I0130 17:21:38.932290 4712 scope.go:117] "RemoveContainer" containerID="03bb7ed7b38cc376fa14d718b99371bbc394710bd3b8394c553640ed56cc1b7d" Jan 30 17:21:38 crc kubenswrapper[4712]: E0130 17:21:38.932813 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"03bb7ed7b38cc376fa14d718b99371bbc394710bd3b8394c553640ed56cc1b7d\": container with ID starting with 03bb7ed7b38cc376fa14d718b99371bbc394710bd3b8394c553640ed56cc1b7d not found: ID does not exist" containerID="03bb7ed7b38cc376fa14d718b99371bbc394710bd3b8394c553640ed56cc1b7d" Jan 30 17:21:38 crc kubenswrapper[4712]: I0130 17:21:38.932843 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"03bb7ed7b38cc376fa14d718b99371bbc394710bd3b8394c553640ed56cc1b7d"} err="failed to get container status \"03bb7ed7b38cc376fa14d718b99371bbc394710bd3b8394c553640ed56cc1b7d\": rpc error: code = NotFound desc = could not find container \"03bb7ed7b38cc376fa14d718b99371bbc394710bd3b8394c553640ed56cc1b7d\": container with ID starting with 03bb7ed7b38cc376fa14d718b99371bbc394710bd3b8394c553640ed56cc1b7d not found: ID does not exist" Jan 30 17:21:39 crc kubenswrapper[4712]: I0130 17:21:39.811426 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9be8718d-8f88-4536-a5d3-29cdd21959b9" path="/var/lib/kubelet/pods/9be8718d-8f88-4536-a5d3-29cdd21959b9/volumes" Jan 30 17:21:40 crc kubenswrapper[4712]: I0130 17:21:40.738001 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7d84b4d45c-vlrn2" Jan 30 17:21:40 crc kubenswrapper[4712]: I0130 17:21:40.824037 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-6b7bbf7cf9-dtb9l"] Jan 30 17:21:40 crc kubenswrapper[4712]: I0130 17:21:40.824946 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6b7bbf7cf9-dtb9l" podUID="fec70295-88d5-49c5-9d39-e9bee0a17010" containerName="dnsmasq-dns" containerID="cri-o://d3cd43e559578a4aba8f3735361377917d67f663e7e17a8721fef53a4b01643e" gracePeriod=10 Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 17:21:41.065694 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-66498674f5-zng48"] Jan 30 17:21:41 crc kubenswrapper[4712]: E0130 17:21:41.066381 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9be8718d-8f88-4536-a5d3-29cdd21959b9" containerName="registry-server" Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 17:21:41.066398 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="9be8718d-8f88-4536-a5d3-29cdd21959b9" containerName="registry-server" Jan 30 17:21:41 crc kubenswrapper[4712]: E0130 17:21:41.066409 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerName="horizon" Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 17:21:41.066417 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerName="horizon" Jan 30 17:21:41 crc kubenswrapper[4712]: E0130 17:21:41.066428 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerName="horizon-log" Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 17:21:41.066435 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerName="horizon-log" Jan 30 17:21:41 crc kubenswrapper[4712]: E0130 17:21:41.066450 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9be8718d-8f88-4536-a5d3-29cdd21959b9" containerName="extract-utilities" Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 17:21:41.066456 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="9be8718d-8f88-4536-a5d3-29cdd21959b9" containerName="extract-utilities" Jan 30 17:21:41 crc kubenswrapper[4712]: E0130 17:21:41.066479 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerName="horizon" Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 17:21:41.066485 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerName="horizon" Jan 30 17:21:41 crc kubenswrapper[4712]: E0130 17:21:41.066491 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerName="horizon" Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 17:21:41.066497 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerName="horizon" Jan 30 17:21:41 crc kubenswrapper[4712]: E0130 17:21:41.066507 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerName="horizon" Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 17:21:41.066512 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerName="horizon" Jan 30 17:21:41 crc kubenswrapper[4712]: E0130 17:21:41.066520 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9be8718d-8f88-4536-a5d3-29cdd21959b9" containerName="extract-content" Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 17:21:41.066526 4712 
state_mem.go:107] "Deleted CPUSet assignment" podUID="9be8718d-8f88-4536-a5d3-29cdd21959b9" containerName="extract-content" Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 17:21:41.069125 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerName="horizon" Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 17:21:41.069153 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerName="horizon" Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 17:21:41.069206 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerName="horizon" Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 17:21:41.069217 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerName="horizon-log" Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 17:21:41.069226 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerName="horizon" Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 17:21:41.069241 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="9be8718d-8f88-4536-a5d3-29cdd21959b9" containerName="registry-server" Jan 30 17:21:41 crc kubenswrapper[4712]: E0130 17:21:41.070253 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerName="horizon" Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 17:21:41.070272 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerName="horizon" Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 17:21:41.070459 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="70154dd8-9d42-4a12-af9b-1be723ef892e" containerName="horizon" Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 17:21:41.071113 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-66498674f5-zng48" Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 17:21:41.095830 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66498674f5-zng48"] Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 17:21:41.119341 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/63fe393d-be88-472a-8f77-0c395d5fdf6b-dns-swift-storage-0\") pod \"dnsmasq-dns-66498674f5-zng48\" (UID: \"63fe393d-be88-472a-8f77-0c395d5fdf6b\") " pod="openstack/dnsmasq-dns-66498674f5-zng48" Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 17:21:41.119378 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63fe393d-be88-472a-8f77-0c395d5fdf6b-ovsdbserver-nb\") pod \"dnsmasq-dns-66498674f5-zng48\" (UID: \"63fe393d-be88-472a-8f77-0c395d5fdf6b\") " pod="openstack/dnsmasq-dns-66498674f5-zng48" Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 17:21:41.119405 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7xgpm\" (UniqueName: \"kubernetes.io/projected/63fe393d-be88-472a-8f77-0c395d5fdf6b-kube-api-access-7xgpm\") pod \"dnsmasq-dns-66498674f5-zng48\" (UID: \"63fe393d-be88-472a-8f77-0c395d5fdf6b\") " pod="openstack/dnsmasq-dns-66498674f5-zng48" Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 17:21:41.119437 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63fe393d-be88-472a-8f77-0c395d5fdf6b-config\") pod \"dnsmasq-dns-66498674f5-zng48\" (UID: \"63fe393d-be88-472a-8f77-0c395d5fdf6b\") " pod="openstack/dnsmasq-dns-66498674f5-zng48" Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 17:21:41.119458 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63fe393d-be88-472a-8f77-0c395d5fdf6b-dns-svc\") pod \"dnsmasq-dns-66498674f5-zng48\" (UID: \"63fe393d-be88-472a-8f77-0c395d5fdf6b\") " pod="openstack/dnsmasq-dns-66498674f5-zng48" Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 17:21:41.119494 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63fe393d-be88-472a-8f77-0c395d5fdf6b-ovsdbserver-sb\") pod \"dnsmasq-dns-66498674f5-zng48\" (UID: \"63fe393d-be88-472a-8f77-0c395d5fdf6b\") " pod="openstack/dnsmasq-dns-66498674f5-zng48" Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 17:21:41.119523 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/63fe393d-be88-472a-8f77-0c395d5fdf6b-openstack-edpm-ipam\") pod \"dnsmasq-dns-66498674f5-zng48\" (UID: \"63fe393d-be88-472a-8f77-0c395d5fdf6b\") " pod="openstack/dnsmasq-dns-66498674f5-zng48" Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 17:21:41.221460 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/63fe393d-be88-472a-8f77-0c395d5fdf6b-dns-swift-storage-0\") pod \"dnsmasq-dns-66498674f5-zng48\" (UID: \"63fe393d-be88-472a-8f77-0c395d5fdf6b\") " pod="openstack/dnsmasq-dns-66498674f5-zng48" Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 
17:21:41.221784 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63fe393d-be88-472a-8f77-0c395d5fdf6b-ovsdbserver-nb\") pod \"dnsmasq-dns-66498674f5-zng48\" (UID: \"63fe393d-be88-472a-8f77-0c395d5fdf6b\") " pod="openstack/dnsmasq-dns-66498674f5-zng48" Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 17:21:41.221890 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7xgpm\" (UniqueName: \"kubernetes.io/projected/63fe393d-be88-472a-8f77-0c395d5fdf6b-kube-api-access-7xgpm\") pod \"dnsmasq-dns-66498674f5-zng48\" (UID: \"63fe393d-be88-472a-8f77-0c395d5fdf6b\") " pod="openstack/dnsmasq-dns-66498674f5-zng48" Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 17:21:41.221983 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63fe393d-be88-472a-8f77-0c395d5fdf6b-config\") pod \"dnsmasq-dns-66498674f5-zng48\" (UID: \"63fe393d-be88-472a-8f77-0c395d5fdf6b\") " pod="openstack/dnsmasq-dns-66498674f5-zng48" Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 17:21:41.222083 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63fe393d-be88-472a-8f77-0c395d5fdf6b-dns-svc\") pod \"dnsmasq-dns-66498674f5-zng48\" (UID: \"63fe393d-be88-472a-8f77-0c395d5fdf6b\") " pod="openstack/dnsmasq-dns-66498674f5-zng48" Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 17:21:41.222172 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63fe393d-be88-472a-8f77-0c395d5fdf6b-ovsdbserver-sb\") pod \"dnsmasq-dns-66498674f5-zng48\" (UID: \"63fe393d-be88-472a-8f77-0c395d5fdf6b\") " pod="openstack/dnsmasq-dns-66498674f5-zng48" Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 17:21:41.222257 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/63fe393d-be88-472a-8f77-0c395d5fdf6b-openstack-edpm-ipam\") pod \"dnsmasq-dns-66498674f5-zng48\" (UID: \"63fe393d-be88-472a-8f77-0c395d5fdf6b\") " pod="openstack/dnsmasq-dns-66498674f5-zng48" Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 17:21:41.222415 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/63fe393d-be88-472a-8f77-0c395d5fdf6b-dns-swift-storage-0\") pod \"dnsmasq-dns-66498674f5-zng48\" (UID: \"63fe393d-be88-472a-8f77-0c395d5fdf6b\") " pod="openstack/dnsmasq-dns-66498674f5-zng48" Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 17:21:41.223032 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63fe393d-be88-472a-8f77-0c395d5fdf6b-config\") pod \"dnsmasq-dns-66498674f5-zng48\" (UID: \"63fe393d-be88-472a-8f77-0c395d5fdf6b\") " pod="openstack/dnsmasq-dns-66498674f5-zng48" Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 17:21:41.223254 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/63fe393d-be88-472a-8f77-0c395d5fdf6b-openstack-edpm-ipam\") pod \"dnsmasq-dns-66498674f5-zng48\" (UID: \"63fe393d-be88-472a-8f77-0c395d5fdf6b\") " pod="openstack/dnsmasq-dns-66498674f5-zng48" Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 17:21:41.223698 4712 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63fe393d-be88-472a-8f77-0c395d5fdf6b-ovsdbserver-nb\") pod \"dnsmasq-dns-66498674f5-zng48\" (UID: \"63fe393d-be88-472a-8f77-0c395d5fdf6b\") " pod="openstack/dnsmasq-dns-66498674f5-zng48" Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 17:21:41.223920 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63fe393d-be88-472a-8f77-0c395d5fdf6b-dns-svc\") pod \"dnsmasq-dns-66498674f5-zng48\" (UID: \"63fe393d-be88-472a-8f77-0c395d5fdf6b\") " pod="openstack/dnsmasq-dns-66498674f5-zng48" Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 17:21:41.224463 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63fe393d-be88-472a-8f77-0c395d5fdf6b-ovsdbserver-sb\") pod \"dnsmasq-dns-66498674f5-zng48\" (UID: \"63fe393d-be88-472a-8f77-0c395d5fdf6b\") " pod="openstack/dnsmasq-dns-66498674f5-zng48" Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 17:21:41.247934 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7xgpm\" (UniqueName: \"kubernetes.io/projected/63fe393d-be88-472a-8f77-0c395d5fdf6b-kube-api-access-7xgpm\") pod \"dnsmasq-dns-66498674f5-zng48\" (UID: \"63fe393d-be88-472a-8f77-0c395d5fdf6b\") " pod="openstack/dnsmasq-dns-66498674f5-zng48" Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 17:21:41.423393 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-66498674f5-zng48" Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 17:21:41.862879 4712 generic.go:334] "Generic (PLEG): container finished" podID="fec70295-88d5-49c5-9d39-e9bee0a17010" containerID="d3cd43e559578a4aba8f3735361377917d67f663e7e17a8721fef53a4b01643e" exitCode=0 Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 17:21:41.863391 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-dtb9l" event={"ID":"fec70295-88d5-49c5-9d39-e9bee0a17010","Type":"ContainerDied","Data":"d3cd43e559578a4aba8f3735361377917d67f663e7e17a8721fef53a4b01643e"} Jan 30 17:21:41 crc kubenswrapper[4712]: I0130 17:21:41.918986 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-66498674f5-zng48"] Jan 30 17:21:41 crc kubenswrapper[4712]: W0130 17:21:41.927021 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod63fe393d_be88_472a_8f77_0c395d5fdf6b.slice/crio-981112941e919d67693e4a85ca35af68b25feafdce014bd5bed9e79f992959b9 WatchSource:0}: Error finding container 981112941e919d67693e4a85ca35af68b25feafdce014bd5bed9e79f992959b9: Status 404 returned error can't find the container with id 981112941e919d67693e4a85ca35af68b25feafdce014bd5bed9e79f992959b9 Jan 30 17:21:42 crc kubenswrapper[4712]: I0130 17:21:42.136986 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-dtb9l" Jan 30 17:21:42 crc kubenswrapper[4712]: I0130 17:21:42.250505 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fec70295-88d5-49c5-9d39-e9bee0a17010-dns-svc\") pod \"fec70295-88d5-49c5-9d39-e9bee0a17010\" (UID: \"fec70295-88d5-49c5-9d39-e9bee0a17010\") " Jan 30 17:21:42 crc kubenswrapper[4712]: I0130 17:21:42.250562 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fec70295-88d5-49c5-9d39-e9bee0a17010-ovsdbserver-sb\") pod \"fec70295-88d5-49c5-9d39-e9bee0a17010\" (UID: \"fec70295-88d5-49c5-9d39-e9bee0a17010\") " Jan 30 17:21:42 crc kubenswrapper[4712]: I0130 17:21:42.250636 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74ktx\" (UniqueName: \"kubernetes.io/projected/fec70295-88d5-49c5-9d39-e9bee0a17010-kube-api-access-74ktx\") pod \"fec70295-88d5-49c5-9d39-e9bee0a17010\" (UID: \"fec70295-88d5-49c5-9d39-e9bee0a17010\") " Jan 30 17:21:42 crc kubenswrapper[4712]: I0130 17:21:42.250675 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fec70295-88d5-49c5-9d39-e9bee0a17010-dns-swift-storage-0\") pod \"fec70295-88d5-49c5-9d39-e9bee0a17010\" (UID: \"fec70295-88d5-49c5-9d39-e9bee0a17010\") " Jan 30 17:21:42 crc kubenswrapper[4712]: I0130 17:21:42.251903 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fec70295-88d5-49c5-9d39-e9bee0a17010-ovsdbserver-nb\") pod \"fec70295-88d5-49c5-9d39-e9bee0a17010\" (UID: \"fec70295-88d5-49c5-9d39-e9bee0a17010\") " Jan 30 17:21:42 crc kubenswrapper[4712]: I0130 17:21:42.251999 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fec70295-88d5-49c5-9d39-e9bee0a17010-config\") pod \"fec70295-88d5-49c5-9d39-e9bee0a17010\" (UID: \"fec70295-88d5-49c5-9d39-e9bee0a17010\") " Jan 30 17:21:42 crc kubenswrapper[4712]: I0130 17:21:42.259434 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fec70295-88d5-49c5-9d39-e9bee0a17010-kube-api-access-74ktx" (OuterVolumeSpecName: "kube-api-access-74ktx") pod "fec70295-88d5-49c5-9d39-e9bee0a17010" (UID: "fec70295-88d5-49c5-9d39-e9bee0a17010"). InnerVolumeSpecName "kube-api-access-74ktx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:21:42 crc kubenswrapper[4712]: I0130 17:21:42.304204 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fec70295-88d5-49c5-9d39-e9bee0a17010-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fec70295-88d5-49c5-9d39-e9bee0a17010" (UID: "fec70295-88d5-49c5-9d39-e9bee0a17010"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:21:42 crc kubenswrapper[4712]: I0130 17:21:42.305093 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fec70295-88d5-49c5-9d39-e9bee0a17010-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fec70295-88d5-49c5-9d39-e9bee0a17010" (UID: "fec70295-88d5-49c5-9d39-e9bee0a17010"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:21:42 crc kubenswrapper[4712]: I0130 17:21:42.307891 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fec70295-88d5-49c5-9d39-e9bee0a17010-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fec70295-88d5-49c5-9d39-e9bee0a17010" (UID: "fec70295-88d5-49c5-9d39-e9bee0a17010"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:21:42 crc kubenswrapper[4712]: I0130 17:21:42.313448 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fec70295-88d5-49c5-9d39-e9bee0a17010-config" (OuterVolumeSpecName: "config") pod "fec70295-88d5-49c5-9d39-e9bee0a17010" (UID: "fec70295-88d5-49c5-9d39-e9bee0a17010"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:21:42 crc kubenswrapper[4712]: I0130 17:21:42.325921 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fec70295-88d5-49c5-9d39-e9bee0a17010-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "fec70295-88d5-49c5-9d39-e9bee0a17010" (UID: "fec70295-88d5-49c5-9d39-e9bee0a17010"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:21:42 crc kubenswrapper[4712]: I0130 17:21:42.354756 4712 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fec70295-88d5-49c5-9d39-e9bee0a17010-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:42 crc kubenswrapper[4712]: I0130 17:21:42.354781 4712 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fec70295-88d5-49c5-9d39-e9bee0a17010-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:42 crc kubenswrapper[4712]: I0130 17:21:42.354804 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74ktx\" (UniqueName: \"kubernetes.io/projected/fec70295-88d5-49c5-9d39-e9bee0a17010-kube-api-access-74ktx\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:42 crc kubenswrapper[4712]: I0130 17:21:42.354816 4712 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/fec70295-88d5-49c5-9d39-e9bee0a17010-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:42 crc kubenswrapper[4712]: I0130 17:21:42.354831 4712 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fec70295-88d5-49c5-9d39-e9bee0a17010-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:42 crc kubenswrapper[4712]: I0130 17:21:42.354845 4712 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fec70295-88d5-49c5-9d39-e9bee0a17010-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:42 crc kubenswrapper[4712]: I0130 17:21:42.874069 4712 generic.go:334] "Generic (PLEG): container finished" podID="63fe393d-be88-472a-8f77-0c395d5fdf6b" containerID="e30b07bbff9411a76c7d8384fd2e6fcdf5e7e5ea5a9ddc7f9157891411736537" exitCode=0 Jan 30 17:21:42 crc kubenswrapper[4712]: I0130 17:21:42.874155 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66498674f5-zng48" event={"ID":"63fe393d-be88-472a-8f77-0c395d5fdf6b","Type":"ContainerDied","Data":"e30b07bbff9411a76c7d8384fd2e6fcdf5e7e5ea5a9ddc7f9157891411736537"} Jan 30 
17:21:42 crc kubenswrapper[4712]: I0130 17:21:42.874209 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66498674f5-zng48" event={"ID":"63fe393d-be88-472a-8f77-0c395d5fdf6b","Type":"ContainerStarted","Data":"981112941e919d67693e4a85ca35af68b25feafdce014bd5bed9e79f992959b9"} Jan 30 17:21:42 crc kubenswrapper[4712]: I0130 17:21:42.876397 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6b7bbf7cf9-dtb9l" event={"ID":"fec70295-88d5-49c5-9d39-e9bee0a17010","Type":"ContainerDied","Data":"8e29e263e528b3cf8d926d20f09e7d2e0aedb8e04ffcde16a5f312e4ebd0839f"} Jan 30 17:21:42 crc kubenswrapper[4712]: I0130 17:21:42.876450 4712 scope.go:117] "RemoveContainer" containerID="d3cd43e559578a4aba8f3735361377917d67f663e7e17a8721fef53a4b01643e" Jan 30 17:21:42 crc kubenswrapper[4712]: I0130 17:21:42.876654 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6b7bbf7cf9-dtb9l" Jan 30 17:21:42 crc kubenswrapper[4712]: I0130 17:21:42.918028 4712 scope.go:117] "RemoveContainer" containerID="be14d4d1f73c9215675c5638c5633048e4931006e15fe5c4bf57f153fcf59399" Jan 30 17:21:42 crc kubenswrapper[4712]: I0130 17:21:42.921908 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-dtb9l"] Jan 30 17:21:42 crc kubenswrapper[4712]: I0130 17:21:42.932757 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6b7bbf7cf9-dtb9l"] Jan 30 17:21:43 crc kubenswrapper[4712]: I0130 17:21:43.810956 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fec70295-88d5-49c5-9d39-e9bee0a17010" path="/var/lib/kubelet/pods/fec70295-88d5-49c5-9d39-e9bee0a17010/volumes" Jan 30 17:21:43 crc kubenswrapper[4712]: I0130 17:21:43.886069 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-66498674f5-zng48" event={"ID":"63fe393d-be88-472a-8f77-0c395d5fdf6b","Type":"ContainerStarted","Data":"4f2a1392a62c5f9d0301fcc39bfde563bc65dfad92aa18eefa25009e9a7fc99d"} Jan 30 17:21:43 crc kubenswrapper[4712]: I0130 17:21:43.886159 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-66498674f5-zng48" Jan 30 17:21:43 crc kubenswrapper[4712]: I0130 17:21:43.907674 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-66498674f5-zng48" podStartSLOduration=2.9076549739999997 podStartE2EDuration="2.907654974s" podCreationTimestamp="2026-01-30 17:21:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:21:43.905997294 +0000 UTC m=+1640.813006793" watchObservedRunningTime="2026-01-30 17:21:43.907654974 +0000 UTC m=+1640.814664443" Jan 30 17:21:50 crc kubenswrapper[4712]: I0130 17:21:50.799898 4712 scope.go:117] "RemoveContainer" containerID="261650ec2eba774e22792202e452b7b775a9451e27eb25c95a23e9f7a7f63330" Jan 30 17:21:50 crc kubenswrapper[4712]: E0130 17:21:50.800651 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:21:51 crc kubenswrapper[4712]: I0130 17:21:51.425113 4712 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-66498674f5-zng48" Jan 30 17:21:51 crc kubenswrapper[4712]: I0130 17:21:51.554054 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-vlrn2"] Jan 30 17:21:51 crc kubenswrapper[4712]: I0130 17:21:51.554313 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7d84b4d45c-vlrn2" podUID="59b0372a-c2ab-4955-96db-f7918c018f59" containerName="dnsmasq-dns" containerID="cri-o://30a5a8b4625b954fe579308f1c8795608735bb280550a03b26c206d48d828790" gracePeriod=10 Jan 30 17:21:51 crc kubenswrapper[4712]: I0130 17:21:51.988581 4712 generic.go:334] "Generic (PLEG): container finished" podID="59b0372a-c2ab-4955-96db-f7918c018f59" containerID="30a5a8b4625b954fe579308f1c8795608735bb280550a03b26c206d48d828790" exitCode=0 Jan 30 17:21:51 crc kubenswrapper[4712]: I0130 17:21:51.988844 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-vlrn2" event={"ID":"59b0372a-c2ab-4955-96db-f7918c018f59","Type":"ContainerDied","Data":"30a5a8b4625b954fe579308f1c8795608735bb280550a03b26c206d48d828790"} Jan 30 17:21:52 crc kubenswrapper[4712]: I0130 17:21:52.188878 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-vlrn2" Jan 30 17:21:52 crc kubenswrapper[4712]: I0130 17:21:52.346999 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59b0372a-c2ab-4955-96db-f7918c018f59-dns-swift-storage-0\") pod \"59b0372a-c2ab-4955-96db-f7918c018f59\" (UID: \"59b0372a-c2ab-4955-96db-f7918c018f59\") " Jan 30 17:21:52 crc kubenswrapper[4712]: I0130 17:21:52.347130 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59b0372a-c2ab-4955-96db-f7918c018f59-ovsdbserver-sb\") pod \"59b0372a-c2ab-4955-96db-f7918c018f59\" (UID: \"59b0372a-c2ab-4955-96db-f7918c018f59\") " Jan 30 17:21:52 crc kubenswrapper[4712]: I0130 17:21:52.347153 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/59b0372a-c2ab-4955-96db-f7918c018f59-openstack-edpm-ipam\") pod \"59b0372a-c2ab-4955-96db-f7918c018f59\" (UID: \"59b0372a-c2ab-4955-96db-f7918c018f59\") " Jan 30 17:21:52 crc kubenswrapper[4712]: I0130 17:21:52.347200 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59b0372a-c2ab-4955-96db-f7918c018f59-config\") pod \"59b0372a-c2ab-4955-96db-f7918c018f59\" (UID: \"59b0372a-c2ab-4955-96db-f7918c018f59\") " Jan 30 17:21:52 crc kubenswrapper[4712]: I0130 17:21:52.347253 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59b0372a-c2ab-4955-96db-f7918c018f59-dns-svc\") pod \"59b0372a-c2ab-4955-96db-f7918c018f59\" (UID: \"59b0372a-c2ab-4955-96db-f7918c018f59\") " Jan 30 17:21:52 crc kubenswrapper[4712]: I0130 17:21:52.347290 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48wzb\" (UniqueName: \"kubernetes.io/projected/59b0372a-c2ab-4955-96db-f7918c018f59-kube-api-access-48wzb\") pod \"59b0372a-c2ab-4955-96db-f7918c018f59\" (UID: \"59b0372a-c2ab-4955-96db-f7918c018f59\") " Jan 30 17:21:52 crc 
kubenswrapper[4712]: I0130 17:21:52.347406 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59b0372a-c2ab-4955-96db-f7918c018f59-ovsdbserver-nb\") pod \"59b0372a-c2ab-4955-96db-f7918c018f59\" (UID: \"59b0372a-c2ab-4955-96db-f7918c018f59\") " Jan 30 17:21:52 crc kubenswrapper[4712]: I0130 17:21:52.358167 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59b0372a-c2ab-4955-96db-f7918c018f59-kube-api-access-48wzb" (OuterVolumeSpecName: "kube-api-access-48wzb") pod "59b0372a-c2ab-4955-96db-f7918c018f59" (UID: "59b0372a-c2ab-4955-96db-f7918c018f59"). InnerVolumeSpecName "kube-api-access-48wzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:21:52 crc kubenswrapper[4712]: I0130 17:21:52.419547 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59b0372a-c2ab-4955-96db-f7918c018f59-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "59b0372a-c2ab-4955-96db-f7918c018f59" (UID: "59b0372a-c2ab-4955-96db-f7918c018f59"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:21:52 crc kubenswrapper[4712]: I0130 17:21:52.420473 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59b0372a-c2ab-4955-96db-f7918c018f59-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "59b0372a-c2ab-4955-96db-f7918c018f59" (UID: "59b0372a-c2ab-4955-96db-f7918c018f59"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:21:52 crc kubenswrapper[4712]: I0130 17:21:52.438286 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59b0372a-c2ab-4955-96db-f7918c018f59-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "59b0372a-c2ab-4955-96db-f7918c018f59" (UID: "59b0372a-c2ab-4955-96db-f7918c018f59"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:21:52 crc kubenswrapper[4712]: I0130 17:21:52.459543 4712 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/59b0372a-c2ab-4955-96db-f7918c018f59-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:52 crc kubenswrapper[4712]: I0130 17:21:52.459586 4712 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/59b0372a-c2ab-4955-96db-f7918c018f59-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:52 crc kubenswrapper[4712]: I0130 17:21:52.459600 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48wzb\" (UniqueName: \"kubernetes.io/projected/59b0372a-c2ab-4955-96db-f7918c018f59-kube-api-access-48wzb\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:52 crc kubenswrapper[4712]: I0130 17:21:52.459613 4712 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/59b0372a-c2ab-4955-96db-f7918c018f59-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:52 crc kubenswrapper[4712]: I0130 17:21:52.470413 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59b0372a-c2ab-4955-96db-f7918c018f59-config" (OuterVolumeSpecName: "config") pod "59b0372a-c2ab-4955-96db-f7918c018f59" (UID: "59b0372a-c2ab-4955-96db-f7918c018f59"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:21:52 crc kubenswrapper[4712]: I0130 17:21:52.472110 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59b0372a-c2ab-4955-96db-f7918c018f59-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "59b0372a-c2ab-4955-96db-f7918c018f59" (UID: "59b0372a-c2ab-4955-96db-f7918c018f59"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:21:52 crc kubenswrapper[4712]: I0130 17:21:52.476810 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59b0372a-c2ab-4955-96db-f7918c018f59-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "59b0372a-c2ab-4955-96db-f7918c018f59" (UID: "59b0372a-c2ab-4955-96db-f7918c018f59"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 17:21:52 crc kubenswrapper[4712]: I0130 17:21:52.567208 4712 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59b0372a-c2ab-4955-96db-f7918c018f59-config\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:52 crc kubenswrapper[4712]: I0130 17:21:52.567250 4712 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/59b0372a-c2ab-4955-96db-f7918c018f59-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:52 crc kubenswrapper[4712]: I0130 17:21:52.567268 4712 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/59b0372a-c2ab-4955-96db-f7918c018f59-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 30 17:21:53 crc kubenswrapper[4712]: I0130 17:21:52.999678 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7d84b4d45c-vlrn2" event={"ID":"59b0372a-c2ab-4955-96db-f7918c018f59","Type":"ContainerDied","Data":"f205c806fdfecef3fee920bde5b3dd59b08d1c80d6abcc848da0a005327f554e"} Jan 30 17:21:53 crc kubenswrapper[4712]: I0130 17:21:52.999734 4712 scope.go:117] "RemoveContainer" containerID="30a5a8b4625b954fe579308f1c8795608735bb280550a03b26c206d48d828790" Jan 30 17:21:53 crc kubenswrapper[4712]: I0130 17:21:52.999956 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7d84b4d45c-vlrn2" Jan 30 17:21:53 crc kubenswrapper[4712]: I0130 17:21:53.029486 4712 scope.go:117] "RemoveContainer" containerID="ba2fcc528d4f6477fd2f65af3d2139bf1929db5f315fd8f4a1d5dd26f9b269cc" Jan 30 17:21:53 crc kubenswrapper[4712]: I0130 17:21:53.045984 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-vlrn2"] Jan 30 17:21:53 crc kubenswrapper[4712]: I0130 17:21:53.053300 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7d84b4d45c-vlrn2"] Jan 30 17:21:53 crc kubenswrapper[4712]: I0130 17:21:53.810775 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59b0372a-c2ab-4955-96db-f7918c018f59" path="/var/lib/kubelet/pods/59b0372a-c2ab-4955-96db-f7918c018f59/volumes" Jan 30 17:22:02 crc kubenswrapper[4712]: I0130 17:22:02.800100 4712 scope.go:117] "RemoveContainer" containerID="261650ec2eba774e22792202e452b7b775a9451e27eb25c95a23e9f7a7f63330" Jan 30 17:22:02 crc kubenswrapper[4712]: E0130 17:22:02.801145 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:22:05 crc kubenswrapper[4712]: I0130 17:22:05.127355 4712 generic.go:334] "Generic (PLEG): container finished" podID="3dfaa353-4f23-4dab-a7c5-6156924b9350" containerID="677d9527eb09131fcb6d9f2f7faaa8904e71915a2b50e45403289c920b3bec8b" exitCode=0 Jan 30 17:22:05 crc kubenswrapper[4712]: I0130 17:22:05.127464 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3dfaa353-4f23-4dab-a7c5-6156924b9350","Type":"ContainerDied","Data":"677d9527eb09131fcb6d9f2f7faaa8904e71915a2b50e45403289c920b3bec8b"} Jan 30 17:22:05 crc kubenswrapper[4712]: I0130 
17:22:05.131318 4712 generic.go:334] "Generic (PLEG): container finished" podID="f7ee8a13-933e-462b-956a-0dae66b09f01" containerID="e0157aebbc7ca7beecf673c336fc0f3661cd75d149b6f9bf52df201e0c50617e" exitCode=0 Jan 30 17:22:05 crc kubenswrapper[4712]: I0130 17:22:05.131349 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f7ee8a13-933e-462b-956a-0dae66b09f01","Type":"ContainerDied","Data":"e0157aebbc7ca7beecf673c336fc0f3661cd75d149b6f9bf52df201e0c50617e"} Jan 30 17:22:06 crc kubenswrapper[4712]: I0130 17:22:06.141588 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"f7ee8a13-933e-462b-956a-0dae66b09f01","Type":"ContainerStarted","Data":"792f548485f34f260bcf9534630adefa33fe145c9ab40f7e7d0359ec257edefc"} Jan 30 17:22:06 crc kubenswrapper[4712]: I0130 17:22:06.142973 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:22:06 crc kubenswrapper[4712]: I0130 17:22:06.145157 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3dfaa353-4f23-4dab-a7c5-6156924b9350","Type":"ContainerStarted","Data":"f7c78bbdea5fc37439051a7e9bff86a4f1deb86da055850d2dadffb2024165cc"} Jan 30 17:22:06 crc kubenswrapper[4712]: I0130 17:22:06.145360 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 30 17:22:06 crc kubenswrapper[4712]: I0130 17:22:06.184776 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.184759687 podStartE2EDuration="36.184759687s" podCreationTimestamp="2026-01-30 17:21:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:22:06.183856945 +0000 UTC m=+1663.090866424" watchObservedRunningTime="2026-01-30 17:22:06.184759687 +0000 UTC m=+1663.091769146" Jan 30 17:22:06 crc kubenswrapper[4712]: I0130 17:22:06.215326 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.215308926 podStartE2EDuration="38.215308926s" podCreationTimestamp="2026-01-30 17:21:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:22:06.208184513 +0000 UTC m=+1663.115193982" watchObservedRunningTime="2026-01-30 17:22:06.215308926 +0000 UTC m=+1663.122318395" Jan 30 17:22:08 crc kubenswrapper[4712]: I0130 17:22:08.548066 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xtwdf"] Jan 30 17:22:08 crc kubenswrapper[4712]: E0130 17:22:08.550310 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fec70295-88d5-49c5-9d39-e9bee0a17010" containerName="init" Jan 30 17:22:08 crc kubenswrapper[4712]: I0130 17:22:08.550427 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="fec70295-88d5-49c5-9d39-e9bee0a17010" containerName="init" Jan 30 17:22:08 crc kubenswrapper[4712]: E0130 17:22:08.550537 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59b0372a-c2ab-4955-96db-f7918c018f59" containerName="dnsmasq-dns" Jan 30 17:22:08 crc kubenswrapper[4712]: I0130 17:22:08.550615 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="59b0372a-c2ab-4955-96db-f7918c018f59" containerName="dnsmasq-dns" Jan 30 
17:22:08 crc kubenswrapper[4712]: E0130 17:22:08.550712 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59b0372a-c2ab-4955-96db-f7918c018f59" containerName="init" Jan 30 17:22:08 crc kubenswrapper[4712]: I0130 17:22:08.550825 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="59b0372a-c2ab-4955-96db-f7918c018f59" containerName="init" Jan 30 17:22:08 crc kubenswrapper[4712]: E0130 17:22:08.550921 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fec70295-88d5-49c5-9d39-e9bee0a17010" containerName="dnsmasq-dns" Jan 30 17:22:08 crc kubenswrapper[4712]: I0130 17:22:08.550999 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="fec70295-88d5-49c5-9d39-e9bee0a17010" containerName="dnsmasq-dns" Jan 30 17:22:08 crc kubenswrapper[4712]: I0130 17:22:08.551666 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="59b0372a-c2ab-4955-96db-f7918c018f59" containerName="dnsmasq-dns" Jan 30 17:22:08 crc kubenswrapper[4712]: I0130 17:22:08.551770 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="fec70295-88d5-49c5-9d39-e9bee0a17010" containerName="dnsmasq-dns" Jan 30 17:22:08 crc kubenswrapper[4712]: I0130 17:22:08.553015 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xtwdf" Jan 30 17:22:08 crc kubenswrapper[4712]: I0130 17:22:08.557264 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 17:22:08 crc kubenswrapper[4712]: I0130 17:22:08.557929 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t6jfh" Jan 30 17:22:08 crc kubenswrapper[4712]: I0130 17:22:08.558765 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 17:22:08 crc kubenswrapper[4712]: I0130 17:22:08.559040 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 17:22:08 crc kubenswrapper[4712]: I0130 17:22:08.614689 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xtwdf"] Jan 30 17:22:08 crc kubenswrapper[4712]: I0130 17:22:08.684760 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/651b3d64-8c79-4079-ad2c-6a55ce87cd36-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xtwdf\" (UID: \"651b3d64-8c79-4079-ad2c-6a55ce87cd36\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xtwdf" Jan 30 17:22:08 crc kubenswrapper[4712]: I0130 17:22:08.684902 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/651b3d64-8c79-4079-ad2c-6a55ce87cd36-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xtwdf\" (UID: \"651b3d64-8c79-4079-ad2c-6a55ce87cd36\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xtwdf" Jan 30 17:22:08 crc kubenswrapper[4712]: I0130 17:22:08.685022 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/651b3d64-8c79-4079-ad2c-6a55ce87cd36-ssh-key-openstack-edpm-ipam\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-xtwdf\" (UID: \"651b3d64-8c79-4079-ad2c-6a55ce87cd36\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xtwdf" Jan 30 17:22:08 crc kubenswrapper[4712]: I0130 17:22:08.685045 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tlrr\" (UniqueName: \"kubernetes.io/projected/651b3d64-8c79-4079-ad2c-6a55ce87cd36-kube-api-access-5tlrr\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xtwdf\" (UID: \"651b3d64-8c79-4079-ad2c-6a55ce87cd36\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xtwdf" Jan 30 17:22:08 crc kubenswrapper[4712]: I0130 17:22:08.786490 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/651b3d64-8c79-4079-ad2c-6a55ce87cd36-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xtwdf\" (UID: \"651b3d64-8c79-4079-ad2c-6a55ce87cd36\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xtwdf" Jan 30 17:22:08 crc kubenswrapper[4712]: I0130 17:22:08.786539 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tlrr\" (UniqueName: \"kubernetes.io/projected/651b3d64-8c79-4079-ad2c-6a55ce87cd36-kube-api-access-5tlrr\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xtwdf\" (UID: \"651b3d64-8c79-4079-ad2c-6a55ce87cd36\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xtwdf" Jan 30 17:22:08 crc kubenswrapper[4712]: I0130 17:22:08.786619 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/651b3d64-8c79-4079-ad2c-6a55ce87cd36-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xtwdf\" (UID: \"651b3d64-8c79-4079-ad2c-6a55ce87cd36\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xtwdf" Jan 30 17:22:08 crc kubenswrapper[4712]: I0130 17:22:08.786673 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/651b3d64-8c79-4079-ad2c-6a55ce87cd36-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xtwdf\" (UID: \"651b3d64-8c79-4079-ad2c-6a55ce87cd36\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xtwdf" Jan 30 17:22:08 crc kubenswrapper[4712]: I0130 17:22:08.803144 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/651b3d64-8c79-4079-ad2c-6a55ce87cd36-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xtwdf\" (UID: \"651b3d64-8c79-4079-ad2c-6a55ce87cd36\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xtwdf" Jan 30 17:22:08 crc kubenswrapper[4712]: I0130 17:22:08.807909 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/651b3d64-8c79-4079-ad2c-6a55ce87cd36-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xtwdf\" (UID: \"651b3d64-8c79-4079-ad2c-6a55ce87cd36\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xtwdf" Jan 30 17:22:08 crc kubenswrapper[4712]: I0130 17:22:08.808008 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/651b3d64-8c79-4079-ad2c-6a55ce87cd36-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xtwdf\" (UID: \"651b3d64-8c79-4079-ad2c-6a55ce87cd36\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xtwdf" Jan 30 17:22:08 crc kubenswrapper[4712]: I0130 17:22:08.808863 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tlrr\" (UniqueName: \"kubernetes.io/projected/651b3d64-8c79-4079-ad2c-6a55ce87cd36-kube-api-access-5tlrr\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-xtwdf\" (UID: \"651b3d64-8c79-4079-ad2c-6a55ce87cd36\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xtwdf" Jan 30 17:22:08 crc kubenswrapper[4712]: I0130 17:22:08.891508 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xtwdf" Jan 30 17:22:09 crc kubenswrapper[4712]: I0130 17:22:09.514809 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xtwdf"] Jan 30 17:22:10 crc kubenswrapper[4712]: I0130 17:22:10.186836 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xtwdf" event={"ID":"651b3d64-8c79-4079-ad2c-6a55ce87cd36","Type":"ContainerStarted","Data":"e99fb7582e07618c2c67f32e167bcd888bed81dc1a371e232aea64812c4935b4"} Jan 30 17:22:16 crc kubenswrapper[4712]: I0130 17:22:16.800452 4712 scope.go:117] "RemoveContainer" containerID="261650ec2eba774e22792202e452b7b775a9451e27eb25c95a23e9f7a7f63330" Jan 30 17:22:16 crc kubenswrapper[4712]: E0130 17:22:16.801528 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:22:19 crc kubenswrapper[4712]: I0130 17:22:19.369909 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="3dfaa353-4f23-4dab-a7c5-6156924b9350" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.229:5671: connect: connection refused" Jan 30 17:22:20 crc kubenswrapper[4712]: I0130 17:22:20.697471 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 30 17:22:22 crc kubenswrapper[4712]: E0130 17:22:22.683781 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest" Jan 30 17:22:22 crc kubenswrapper[4712]: E0130 17:22:22.687320 4712 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 30 17:22:22 crc kubenswrapper[4712]: container &Container{Name:repo-setup-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p playbook.yaml -i repo-setup-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value: Jan 30 17:22:22 crc kubenswrapper[4712]: - hosts: all Jan 30 17:22:22 crc kubenswrapper[4712]: strategy: linear Jan 30 17:22:22 crc 
Jan 30 17:22:22 crc kubenswrapper[4712]: tasks:
Jan 30 17:22:22 crc kubenswrapper[4712]: - name: Enable podified-repos
Jan 30 17:22:22 crc kubenswrapper[4712]: become: true
Jan 30 17:22:22 crc kubenswrapper[4712]: ansible.builtin.shell: |
Jan 30 17:22:22 crc kubenswrapper[4712]: set -euxo pipefail
Jan 30 17:22:22 crc kubenswrapper[4712]: pushd /var/tmp
Jan 30 17:22:22 crc kubenswrapper[4712]: curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
Jan 30 17:22:22 crc kubenswrapper[4712]: pushd repo-setup-main
Jan 30 17:22:22 crc kubenswrapper[4712]: python3 -m venv ./venv
Jan 30 17:22:22 crc kubenswrapper[4712]: PBR_VERSION=0.0.0 ./venv/bin/pip install ./
Jan 30 17:22:22 crc kubenswrapper[4712]: ./venv/bin/repo-setup current-podified -b antelope
Jan 30 17:22:22 crc kubenswrapper[4712]: popd
Jan 30 17:22:22 crc kubenswrapper[4712]: rm -rf repo-setup-main
Jan 30 17:22:22 crc kubenswrapper[4712]:
Jan 30 17:22:22 crc kubenswrapper[4712]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value:
Jan 30 17:22:22 crc kubenswrapper[4712]: edpm_override_hosts: openstack-edpm-ipam
Jan 30 17:22:22 crc kubenswrapper[4712]: edpm_service_type: repo-setup
Jan 30 17:22:22 crc kubenswrapper[4712]:
Jan 30 17:22:22 crc kubenswrapper[4712]:
Jan 30 17:22:22 crc kubenswrapper[4712]: ,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:repo-setup-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/repo-setup,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key-openstack-edpm-ipam,ReadOnly:false,MountPath:/runner/env/ssh_key/ssh_key_openstack-edpm-ipam,SubPath:ssh_key_openstack-edpm-ipam,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5tlrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod repo-setup-edpm-deployment-openstack-edpm-ipam-xtwdf_openstack(651b3d64-8c79-4079-ad2c-6a55ce87cd36): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled
Jan 30 17:22:22 crc kubenswrapper[4712]: > logger="UnhandledError"
Jan 30 17:22:22 crc kubenswrapper[4712]: E0130 17:22:22.688983 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xtwdf" podUID="651b3d64-8c79-4079-ad2c-6a55ce87cd36"
Jan 30 17:22:22 crc kubenswrapper[4712]: E0130 17:22:22.785444 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xtwdf" podUID="651b3d64-8c79-4079-ad2c-6a55ce87cd36"
Jan 30 17:22:27 crc kubenswrapper[4712]: I0130 17:22:27.800079 4712 scope.go:117] "RemoveContainer" containerID="261650ec2eba774e22792202e452b7b775a9451e27eb25c95a23e9f7a7f63330"
Jan 30 17:22:27 crc kubenswrapper[4712]: E0130 17:22:27.800848 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 17:22:29 crc kubenswrapper[4712]: I0130 17:22:29.369939 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0"
Jan 30 17:22:36 crc kubenswrapper[4712]: I0130 17:22:36.375586 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 30 17:22:36 crc kubenswrapper[4712]: I0130 17:22:36.933916 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xtwdf" event={"ID":"651b3d64-8c79-4079-ad2c-6a55ce87cd36","Type":"ContainerStarted","Data":"4ec11e54521b379d8ec197704596167f09a67055e2560790fd476f6913808fa6"}
Jan 30 17:22:36 crc kubenswrapper[4712]: I0130 17:22:36.966084 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xtwdf" podStartSLOduration=2.121197531 podStartE2EDuration="28.966064666s" podCreationTimestamp="2026-01-30 17:22:08 +0000 UTC" firstStartedPulling="2026-01-30 17:22:09.52817253 +0000 UTC m=+1666.435181999" lastFinishedPulling="2026-01-30 17:22:36.373039665 +0000 UTC m=+1693.280049134" observedRunningTime="2026-01-30 17:22:36.962012938 +0000 UTC m=+1693.869022417" watchObservedRunningTime="2026-01-30 17:22:36.966064666 +0000 UTC m=+1693.873074135"
Jan 30 17:22:38 crc kubenswrapper[4712]: I0130 17:22:38.800006 4712 scope.go:117] "RemoveContainer" containerID="261650ec2eba774e22792202e452b7b775a9451e27eb25c95a23e9f7a7f63330"
Jan 30 17:22:38 crc kubenswrapper[4712]: E0130 17:22:38.800492 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 17:22:47 crc kubenswrapper[4712]: I0130 17:22:47.590354 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bs7pg"]
Jan 30 17:22:47 crc kubenswrapper[4712]: I0130 17:22:47.592695 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bs7pg"
Jan 30 17:22:47 crc kubenswrapper[4712]: I0130 17:22:47.611593 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bs7pg"]
Jan 30 17:22:47 crc kubenswrapper[4712]: I0130 17:22:47.759811 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaba725b-6442-4a5b-adc9-16047823dc86-catalog-content\") pod \"certified-operators-bs7pg\" (UID: \"eaba725b-6442-4a5b-adc9-16047823dc86\") " pod="openshift-marketplace/certified-operators-bs7pg"
Jan 30 17:22:47 crc kubenswrapper[4712]: I0130 17:22:47.759865 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaba725b-6442-4a5b-adc9-16047823dc86-utilities\") pod \"certified-operators-bs7pg\" (UID: \"eaba725b-6442-4a5b-adc9-16047823dc86\") " pod="openshift-marketplace/certified-operators-bs7pg"
Jan 30 17:22:47 crc kubenswrapper[4712]: I0130 17:22:47.759901 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vdhk\" (UniqueName: \"kubernetes.io/projected/eaba725b-6442-4a5b-adc9-16047823dc86-kube-api-access-6vdhk\") pod \"certified-operators-bs7pg\" (UID: \"eaba725b-6442-4a5b-adc9-16047823dc86\") " pod="openshift-marketplace/certified-operators-bs7pg"
Jan 30 17:22:47 crc kubenswrapper[4712]: I0130 17:22:47.861869 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaba725b-6442-4a5b-adc9-16047823dc86-utilities\") pod \"certified-operators-bs7pg\" (UID: \"eaba725b-6442-4a5b-adc9-16047823dc86\") " pod="openshift-marketplace/certified-operators-bs7pg"
Jan 30 17:22:47 crc kubenswrapper[4712]: I0130 17:22:47.862728 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eaba725b-6442-4a5b-adc9-16047823dc86-utilities\") pod \"certified-operators-bs7pg\" (UID: \"eaba725b-6442-4a5b-adc9-16047823dc86\") " pod="openshift-marketplace/certified-operators-bs7pg"
Jan 30 17:22:47 crc kubenswrapper[4712]: I0130 17:22:47.863419 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vdhk\" (UniqueName: \"kubernetes.io/projected/eaba725b-6442-4a5b-adc9-16047823dc86-kube-api-access-6vdhk\") pod \"certified-operators-bs7pg\" (UID: \"eaba725b-6442-4a5b-adc9-16047823dc86\") " pod="openshift-marketplace/certified-operators-bs7pg"
Jan 30 17:22:47 crc kubenswrapper[4712]: I0130 17:22:47.863730 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaba725b-6442-4a5b-adc9-16047823dc86-catalog-content\") pod \"certified-operators-bs7pg\" (UID: \"eaba725b-6442-4a5b-adc9-16047823dc86\") " pod="openshift-marketplace/certified-operators-bs7pg"
Jan 30 17:22:47 crc kubenswrapper[4712]: I0130 17:22:47.865029 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eaba725b-6442-4a5b-adc9-16047823dc86-catalog-content\") pod \"certified-operators-bs7pg\" (UID: \"eaba725b-6442-4a5b-adc9-16047823dc86\") " pod="openshift-marketplace/certified-operators-bs7pg"
Jan 30 17:22:47 crc kubenswrapper[4712]: I0130 17:22:47.889258 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vdhk\" (UniqueName: \"kubernetes.io/projected/eaba725b-6442-4a5b-adc9-16047823dc86-kube-api-access-6vdhk\") pod \"certified-operators-bs7pg\" (UID: \"eaba725b-6442-4a5b-adc9-16047823dc86\") " pod="openshift-marketplace/certified-operators-bs7pg"
Jan 30 17:22:47 crc kubenswrapper[4712]: I0130 17:22:47.917396 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bs7pg"
Jan 30 17:22:48 crc kubenswrapper[4712]: W0130 17:22:48.469501 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeaba725b_6442_4a5b_adc9_16047823dc86.slice/crio-bf767e5528d3931bcc76a4a173f1e4816457a74bb09b2fed1caeeee73be5c3a2 WatchSource:0}: Error finding container bf767e5528d3931bcc76a4a173f1e4816457a74bb09b2fed1caeeee73be5c3a2: Status 404 returned error can't find the container with id bf767e5528d3931bcc76a4a173f1e4816457a74bb09b2fed1caeeee73be5c3a2
Jan 30 17:22:48 crc kubenswrapper[4712]: I0130 17:22:48.476273 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bs7pg"]
Jan 30 17:22:49 crc kubenswrapper[4712]: I0130 17:22:49.037933 4712 generic.go:334] "Generic (PLEG): container finished" podID="eaba725b-6442-4a5b-adc9-16047823dc86" containerID="e209326d964954e7c5cda8361c5ca9ce825238eae3e2a56d5f243a7181d0ac30" exitCode=0
Jan 30 17:22:49 crc kubenswrapper[4712]: I0130 17:22:49.038019 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bs7pg" event={"ID":"eaba725b-6442-4a5b-adc9-16047823dc86","Type":"ContainerDied","Data":"e209326d964954e7c5cda8361c5ca9ce825238eae3e2a56d5f243a7181d0ac30"}
Jan 30 17:22:49 crc kubenswrapper[4712]: I0130 17:22:49.038207 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bs7pg" event={"ID":"eaba725b-6442-4a5b-adc9-16047823dc86","Type":"ContainerStarted","Data":"bf767e5528d3931bcc76a4a173f1e4816457a74bb09b2fed1caeeee73be5c3a2"}
Jan 30 17:22:50 crc kubenswrapper[4712]: I0130 17:22:50.047550 4712 generic.go:334] "Generic (PLEG): container finished" podID="651b3d64-8c79-4079-ad2c-6a55ce87cd36" containerID="4ec11e54521b379d8ec197704596167f09a67055e2560790fd476f6913808fa6" exitCode=0
Jan 30 17:22:50 crc kubenswrapper[4712]: I0130 17:22:50.047596 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xtwdf" event={"ID":"651b3d64-8c79-4079-ad2c-6a55ce87cd36","Type":"ContainerDied","Data":"4ec11e54521b379d8ec197704596167f09a67055e2560790fd476f6913808fa6"}
Jan 30 17:22:51 crc kubenswrapper[4712]: I0130 17:22:51.569495 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xtwdf"
Jan 30 17:22:51 crc kubenswrapper[4712]: I0130 17:22:51.651901 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tlrr\" (UniqueName: \"kubernetes.io/projected/651b3d64-8c79-4079-ad2c-6a55ce87cd36-kube-api-access-5tlrr\") pod \"651b3d64-8c79-4079-ad2c-6a55ce87cd36\" (UID: \"651b3d64-8c79-4079-ad2c-6a55ce87cd36\") "
Jan 30 17:22:51 crc kubenswrapper[4712]: I0130 17:22:51.651991 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/651b3d64-8c79-4079-ad2c-6a55ce87cd36-ssh-key-openstack-edpm-ipam\") pod \"651b3d64-8c79-4079-ad2c-6a55ce87cd36\" (UID: \"651b3d64-8c79-4079-ad2c-6a55ce87cd36\") "
Jan 30 17:22:51 crc kubenswrapper[4712]: I0130 17:22:51.662955 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/651b3d64-8c79-4079-ad2c-6a55ce87cd36-kube-api-access-5tlrr" (OuterVolumeSpecName: "kube-api-access-5tlrr") pod "651b3d64-8c79-4079-ad2c-6a55ce87cd36" (UID: "651b3d64-8c79-4079-ad2c-6a55ce87cd36"). InnerVolumeSpecName "kube-api-access-5tlrr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:22:51 crc kubenswrapper[4712]: I0130 17:22:51.687005 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/651b3d64-8c79-4079-ad2c-6a55ce87cd36-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "651b3d64-8c79-4079-ad2c-6a55ce87cd36" (UID: "651b3d64-8c79-4079-ad2c-6a55ce87cd36"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:22:51 crc kubenswrapper[4712]: I0130 17:22:51.754612 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/651b3d64-8c79-4079-ad2c-6a55ce87cd36-inventory\") pod \"651b3d64-8c79-4079-ad2c-6a55ce87cd36\" (UID: \"651b3d64-8c79-4079-ad2c-6a55ce87cd36\") "
Jan 30 17:22:51 crc kubenswrapper[4712]: I0130 17:22:51.754671 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/651b3d64-8c79-4079-ad2c-6a55ce87cd36-repo-setup-combined-ca-bundle\") pod \"651b3d64-8c79-4079-ad2c-6a55ce87cd36\" (UID: \"651b3d64-8c79-4079-ad2c-6a55ce87cd36\") "
Jan 30 17:22:51 crc kubenswrapper[4712]: I0130 17:22:51.758576 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tlrr\" (UniqueName: \"kubernetes.io/projected/651b3d64-8c79-4079-ad2c-6a55ce87cd36-kube-api-access-5tlrr\") on node \"crc\" DevicePath \"\""
Jan 30 17:22:51 crc kubenswrapper[4712]: I0130 17:22:51.758606 4712 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/651b3d64-8c79-4079-ad2c-6a55ce87cd36-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 30 17:22:51 crc kubenswrapper[4712]: I0130 17:22:51.765051 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/651b3d64-8c79-4079-ad2c-6a55ce87cd36-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "651b3d64-8c79-4079-ad2c-6a55ce87cd36" (UID: "651b3d64-8c79-4079-ad2c-6a55ce87cd36"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:22:51 crc kubenswrapper[4712]: I0130 17:22:51.794047 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/651b3d64-8c79-4079-ad2c-6a55ce87cd36-inventory" (OuterVolumeSpecName: "inventory") pod "651b3d64-8c79-4079-ad2c-6a55ce87cd36" (UID: "651b3d64-8c79-4079-ad2c-6a55ce87cd36"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:22:51 crc kubenswrapper[4712]: I0130 17:22:51.804321 4712 scope.go:117] "RemoveContainer" containerID="261650ec2eba774e22792202e452b7b775a9451e27eb25c95a23e9f7a7f63330"
Jan 30 17:22:51 crc kubenswrapper[4712]: E0130 17:22:51.804564 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 17:22:51 crc kubenswrapper[4712]: I0130 17:22:51.861427 4712 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/651b3d64-8c79-4079-ad2c-6a55ce87cd36-inventory\") on node \"crc\" DevicePath \"\""
Jan 30 17:22:51 crc kubenswrapper[4712]: I0130 17:22:51.861453 4712 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/651b3d64-8c79-4079-ad2c-6a55ce87cd36-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 17:22:52 crc kubenswrapper[4712]: I0130 17:22:52.067229 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xtwdf" event={"ID":"651b3d64-8c79-4079-ad2c-6a55ce87cd36","Type":"ContainerDied","Data":"e99fb7582e07618c2c67f32e167bcd888bed81dc1a371e232aea64812c4935b4"}
Jan 30 17:22:52 crc kubenswrapper[4712]: I0130 17:22:52.067550 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e99fb7582e07618c2c67f32e167bcd888bed81dc1a371e232aea64812c4935b4"
Jan 30 17:22:52 crc kubenswrapper[4712]: I0130 17:22:52.067646 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-xtwdf"
Jan 30 17:22:52 crc kubenswrapper[4712]: I0130 17:22:52.179588 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-x2znn"]
Jan 30 17:22:52 crc kubenswrapper[4712]: E0130 17:22:52.180166 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="651b3d64-8c79-4079-ad2c-6a55ce87cd36" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 30 17:22:52 crc kubenswrapper[4712]: I0130 17:22:52.180196 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="651b3d64-8c79-4079-ad2c-6a55ce87cd36" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 30 17:22:52 crc kubenswrapper[4712]: I0130 17:22:52.180431 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="651b3d64-8c79-4079-ad2c-6a55ce87cd36" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam"
Jan 30 17:22:52 crc kubenswrapper[4712]: I0130 17:22:52.181294 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x2znn"
Jan 30 17:22:52 crc kubenswrapper[4712]: I0130 17:22:52.183644 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 30 17:22:52 crc kubenswrapper[4712]: I0130 17:22:52.183906 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t6jfh"
Jan 30 17:22:52 crc kubenswrapper[4712]: I0130 17:22:52.184072 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 30 17:22:52 crc kubenswrapper[4712]: I0130 17:22:52.184516 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 30 17:22:52 crc kubenswrapper[4712]: I0130 17:22:52.192635 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-x2znn"]
Jan 30 17:22:52 crc kubenswrapper[4712]: I0130 17:22:52.269273 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcptw\" (UniqueName: \"kubernetes.io/projected/fd818085-3429-43ff-bb05-2aaf3d48dd7b-kube-api-access-wcptw\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-x2znn\" (UID: \"fd818085-3429-43ff-bb05-2aaf3d48dd7b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x2znn"
Jan 30 17:22:52 crc kubenswrapper[4712]: I0130 17:22:52.269424 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd818085-3429-43ff-bb05-2aaf3d48dd7b-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-x2znn\" (UID: \"fd818085-3429-43ff-bb05-2aaf3d48dd7b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x2znn"
Jan 30 17:22:52 crc kubenswrapper[4712]: I0130 17:22:52.269509 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd818085-3429-43ff-bb05-2aaf3d48dd7b-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-x2znn\" (UID: \"fd818085-3429-43ff-bb05-2aaf3d48dd7b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x2znn"
Jan 30 17:22:52 crc kubenswrapper[4712]: I0130 17:22:52.371343 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcptw\" (UniqueName: \"kubernetes.io/projected/fd818085-3429-43ff-bb05-2aaf3d48dd7b-kube-api-access-wcptw\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-x2znn\" (UID: \"fd818085-3429-43ff-bb05-2aaf3d48dd7b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x2znn"
Jan 30 17:22:52 crc kubenswrapper[4712]: I0130 17:22:52.371538 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd818085-3429-43ff-bb05-2aaf3d48dd7b-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-x2znn\" (UID: \"fd818085-3429-43ff-bb05-2aaf3d48dd7b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x2znn"
Jan 30 17:22:52 crc kubenswrapper[4712]: I0130 17:22:52.371608 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd818085-3429-43ff-bb05-2aaf3d48dd7b-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-x2znn\" (UID: \"fd818085-3429-43ff-bb05-2aaf3d48dd7b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x2znn"
Jan 30 17:22:52 crc kubenswrapper[4712]: I0130 17:22:52.375985 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd818085-3429-43ff-bb05-2aaf3d48dd7b-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-x2znn\" (UID: \"fd818085-3429-43ff-bb05-2aaf3d48dd7b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x2znn"
Jan 30 17:22:52 crc kubenswrapper[4712]: I0130 17:22:52.376432 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd818085-3429-43ff-bb05-2aaf3d48dd7b-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-x2znn\" (UID: \"fd818085-3429-43ff-bb05-2aaf3d48dd7b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x2znn"
Jan 30 17:22:52 crc kubenswrapper[4712]: I0130 17:22:52.392183 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcptw\" (UniqueName: \"kubernetes.io/projected/fd818085-3429-43ff-bb05-2aaf3d48dd7b-kube-api-access-wcptw\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-x2znn\" (UID: \"fd818085-3429-43ff-bb05-2aaf3d48dd7b\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x2znn"
Jan 30 17:22:52 crc kubenswrapper[4712]: I0130 17:22:52.502277 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x2znn"
Jan 30 17:22:53 crc kubenswrapper[4712]: I0130 17:22:53.103169 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-x2znn"]
Jan 30 17:22:54 crc kubenswrapper[4712]: I0130 17:22:54.090968 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x2znn" event={"ID":"fd818085-3429-43ff-bb05-2aaf3d48dd7b","Type":"ContainerStarted","Data":"c13d2eb4b98fb927e3943fceaaaec33c139ed3ba01d5acf00721634dff19113f"}
Jan 30 17:22:59 crc kubenswrapper[4712]: I0130 17:22:59.137904 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x2znn" event={"ID":"fd818085-3429-43ff-bb05-2aaf3d48dd7b","Type":"ContainerStarted","Data":"21d52ef021d95cb0ab795a31e95c055508d2dae358dd21513e887ddd94958623"}
Jan 30 17:22:59 crc kubenswrapper[4712]: I0130 17:22:59.140244 4712 generic.go:334] "Generic (PLEG): container finished" podID="eaba725b-6442-4a5b-adc9-16047823dc86" containerID="f12dc06029c09376304597a2fb6f146190f9d2326e9387ea80b4c7a70783fcc0" exitCode=0
Jan 30 17:22:59 crc kubenswrapper[4712]: I0130 17:22:59.140287 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bs7pg" event={"ID":"eaba725b-6442-4a5b-adc9-16047823dc86","Type":"ContainerDied","Data":"f12dc06029c09376304597a2fb6f146190f9d2326e9387ea80b4c7a70783fcc0"}
Jan 30 17:22:59 crc kubenswrapper[4712]: I0130 17:22:59.179098 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x2znn" podStartSLOduration=2.073944994 podStartE2EDuration="7.179080541s" podCreationTimestamp="2026-01-30 17:22:52 +0000 UTC" firstStartedPulling="2026-01-30 17:22:53.112421062 +0000 UTC m=+1710.019430521" lastFinishedPulling="2026-01-30 17:22:58.217556589 +0000 UTC m=+1715.124566068" observedRunningTime="2026-01-30 17:22:59.177963284 +0000 UTC m=+1716.084972763" watchObservedRunningTime="2026-01-30 17:22:59.179080541 +0000 UTC m=+1716.086090010"
m=+1715.124566068" observedRunningTime="2026-01-30 17:22:59.177963284 +0000 UTC m=+1716.084972763" watchObservedRunningTime="2026-01-30 17:22:59.179080541 +0000 UTC m=+1716.086090010" Jan 30 17:23:00 crc kubenswrapper[4712]: I0130 17:23:00.150508 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bs7pg" event={"ID":"eaba725b-6442-4a5b-adc9-16047823dc86","Type":"ContainerStarted","Data":"5c37958958798a3e0f430ae79e1784a4d3cdccccda33a59ee1186b5b1ba38880"} Jan 30 17:23:00 crc kubenswrapper[4712]: I0130 17:23:00.176708 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bs7pg" podStartSLOduration=2.555499025 podStartE2EDuration="13.176685266s" podCreationTimestamp="2026-01-30 17:22:47 +0000 UTC" firstStartedPulling="2026-01-30 17:22:49.040232414 +0000 UTC m=+1705.947241893" lastFinishedPulling="2026-01-30 17:22:59.661418675 +0000 UTC m=+1716.568428134" observedRunningTime="2026-01-30 17:23:00.1657137 +0000 UTC m=+1717.072723169" watchObservedRunningTime="2026-01-30 17:23:00.176685266 +0000 UTC m=+1717.083694735" Jan 30 17:23:05 crc kubenswrapper[4712]: I0130 17:23:05.199917 4712 generic.go:334] "Generic (PLEG): container finished" podID="fd818085-3429-43ff-bb05-2aaf3d48dd7b" containerID="21d52ef021d95cb0ab795a31e95c055508d2dae358dd21513e887ddd94958623" exitCode=0 Jan 30 17:23:05 crc kubenswrapper[4712]: I0130 17:23:05.200231 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x2znn" event={"ID":"fd818085-3429-43ff-bb05-2aaf3d48dd7b","Type":"ContainerDied","Data":"21d52ef021d95cb0ab795a31e95c055508d2dae358dd21513e887ddd94958623"} Jan 30 17:23:05 crc kubenswrapper[4712]: I0130 17:23:05.800772 4712 scope.go:117] "RemoveContainer" containerID="261650ec2eba774e22792202e452b7b775a9451e27eb25c95a23e9f7a7f63330" Jan 30 17:23:05 crc kubenswrapper[4712]: E0130 17:23:05.801028 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:23:06 crc kubenswrapper[4712]: I0130 17:23:06.952349 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x2znn" Jan 30 17:23:06 crc kubenswrapper[4712]: I0130 17:23:06.966876 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcptw\" (UniqueName: \"kubernetes.io/projected/fd818085-3429-43ff-bb05-2aaf3d48dd7b-kube-api-access-wcptw\") pod \"fd818085-3429-43ff-bb05-2aaf3d48dd7b\" (UID: \"fd818085-3429-43ff-bb05-2aaf3d48dd7b\") " Jan 30 17:23:06 crc kubenswrapper[4712]: I0130 17:23:06.967080 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd818085-3429-43ff-bb05-2aaf3d48dd7b-ssh-key-openstack-edpm-ipam\") pod \"fd818085-3429-43ff-bb05-2aaf3d48dd7b\" (UID: \"fd818085-3429-43ff-bb05-2aaf3d48dd7b\") " Jan 30 17:23:06 crc kubenswrapper[4712]: I0130 17:23:06.967864 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd818085-3429-43ff-bb05-2aaf3d48dd7b-inventory\") pod \"fd818085-3429-43ff-bb05-2aaf3d48dd7b\" (UID: \"fd818085-3429-43ff-bb05-2aaf3d48dd7b\") " Jan 30 17:23:06 crc kubenswrapper[4712]: I0130 17:23:06.986701 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd818085-3429-43ff-bb05-2aaf3d48dd7b-kube-api-access-wcptw" (OuterVolumeSpecName: "kube-api-access-wcptw") pod "fd818085-3429-43ff-bb05-2aaf3d48dd7b" (UID: "fd818085-3429-43ff-bb05-2aaf3d48dd7b"). InnerVolumeSpecName "kube-api-access-wcptw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:23:07 crc kubenswrapper[4712]: I0130 17:23:07.004010 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd818085-3429-43ff-bb05-2aaf3d48dd7b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fd818085-3429-43ff-bb05-2aaf3d48dd7b" (UID: "fd818085-3429-43ff-bb05-2aaf3d48dd7b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:23:07 crc kubenswrapper[4712]: I0130 17:23:07.018507 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd818085-3429-43ff-bb05-2aaf3d48dd7b-inventory" (OuterVolumeSpecName: "inventory") pod "fd818085-3429-43ff-bb05-2aaf3d48dd7b" (UID: "fd818085-3429-43ff-bb05-2aaf3d48dd7b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:23:07 crc kubenswrapper[4712]: I0130 17:23:07.069260 4712 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fd818085-3429-43ff-bb05-2aaf3d48dd7b-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 17:23:07 crc kubenswrapper[4712]: I0130 17:23:07.069298 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcptw\" (UniqueName: \"kubernetes.io/projected/fd818085-3429-43ff-bb05-2aaf3d48dd7b-kube-api-access-wcptw\") on node \"crc\" DevicePath \"\"" Jan 30 17:23:07 crc kubenswrapper[4712]: I0130 17:23:07.069309 4712 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fd818085-3429-43ff-bb05-2aaf3d48dd7b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 17:23:07 crc kubenswrapper[4712]: I0130 17:23:07.219357 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x2znn" event={"ID":"fd818085-3429-43ff-bb05-2aaf3d48dd7b","Type":"ContainerDied","Data":"c13d2eb4b98fb927e3943fceaaaec33c139ed3ba01d5acf00721634dff19113f"} Jan 30 17:23:07 crc kubenswrapper[4712]: I0130 17:23:07.219394 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c13d2eb4b98fb927e3943fceaaaec33c139ed3ba01d5acf00721634dff19113f" Jan 30 17:23:07 crc kubenswrapper[4712]: I0130 17:23:07.219414 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-x2znn" Jan 30 17:23:07 crc kubenswrapper[4712]: I0130 17:23:07.304830 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cwhqs"] Jan 30 17:23:07 crc kubenswrapper[4712]: E0130 17:23:07.305341 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd818085-3429-43ff-bb05-2aaf3d48dd7b" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 30 17:23:07 crc kubenswrapper[4712]: I0130 17:23:07.305368 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd818085-3429-43ff-bb05-2aaf3d48dd7b" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 30 17:23:07 crc kubenswrapper[4712]: I0130 17:23:07.305583 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd818085-3429-43ff-bb05-2aaf3d48dd7b" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 30 17:23:07 crc kubenswrapper[4712]: I0130 17:23:07.306754 4712 util.go:30] "No sandbox for pod can be found. 
Jan 30 17:23:07 crc kubenswrapper[4712]: I0130 17:23:07.312282 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t6jfh"
Jan 30 17:23:07 crc kubenswrapper[4712]: I0130 17:23:07.312637 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 30 17:23:07 crc kubenswrapper[4712]: I0130 17:23:07.312835 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 30 17:23:07 crc kubenswrapper[4712]: I0130 17:23:07.323754 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 30 17:23:07 crc kubenswrapper[4712]: I0130 17:23:07.326423 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cwhqs"]
Jan 30 17:23:07 crc kubenswrapper[4712]: I0130 17:23:07.374075 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d7vw\" (UniqueName: \"kubernetes.io/projected/03922579-00da-4ea3-ba7e-efeb5062632f-kube-api-access-4d7vw\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cwhqs\" (UID: \"03922579-00da-4ea3-ba7e-efeb5062632f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cwhqs"
Jan 30 17:23:07 crc kubenswrapper[4712]: I0130 17:23:07.374166 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03922579-00da-4ea3-ba7e-efeb5062632f-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cwhqs\" (UID: \"03922579-00da-4ea3-ba7e-efeb5062632f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cwhqs"
Jan 30 17:23:07 crc kubenswrapper[4712]: I0130 17:23:07.374237 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03922579-00da-4ea3-ba7e-efeb5062632f-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cwhqs\" (UID: \"03922579-00da-4ea3-ba7e-efeb5062632f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cwhqs"
Jan 30 17:23:07 crc kubenswrapper[4712]: I0130 17:23:07.374314 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03922579-00da-4ea3-ba7e-efeb5062632f-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cwhqs\" (UID: \"03922579-00da-4ea3-ba7e-efeb5062632f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cwhqs"
Jan 30 17:23:07 crc kubenswrapper[4712]: I0130 17:23:07.476044 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d7vw\" (UniqueName: \"kubernetes.io/projected/03922579-00da-4ea3-ba7e-efeb5062632f-kube-api-access-4d7vw\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cwhqs\" (UID: \"03922579-00da-4ea3-ba7e-efeb5062632f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cwhqs"
Jan 30 17:23:07 crc kubenswrapper[4712]: I0130 17:23:07.476149 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03922579-00da-4ea3-ba7e-efeb5062632f-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cwhqs\" (UID: \"03922579-00da-4ea3-ba7e-efeb5062632f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cwhqs"
Jan 30 17:23:07 crc kubenswrapper[4712]: I0130 17:23:07.476224 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03922579-00da-4ea3-ba7e-efeb5062632f-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cwhqs\" (UID: \"03922579-00da-4ea3-ba7e-efeb5062632f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cwhqs"
Jan 30 17:23:07 crc kubenswrapper[4712]: I0130 17:23:07.476288 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03922579-00da-4ea3-ba7e-efeb5062632f-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cwhqs\" (UID: \"03922579-00da-4ea3-ba7e-efeb5062632f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cwhqs"
Jan 30 17:23:07 crc kubenswrapper[4712]: I0130 17:23:07.480607 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03922579-00da-4ea3-ba7e-efeb5062632f-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cwhqs\" (UID: \"03922579-00da-4ea3-ba7e-efeb5062632f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cwhqs"
Jan 30 17:23:07 crc kubenswrapper[4712]: I0130 17:23:07.480754 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03922579-00da-4ea3-ba7e-efeb5062632f-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cwhqs\" (UID: \"03922579-00da-4ea3-ba7e-efeb5062632f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cwhqs"
Jan 30 17:23:07 crc kubenswrapper[4712]: I0130 17:23:07.480845 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03922579-00da-4ea3-ba7e-efeb5062632f-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cwhqs\" (UID: \"03922579-00da-4ea3-ba7e-efeb5062632f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cwhqs"
Jan 30 17:23:07 crc kubenswrapper[4712]: I0130 17:23:07.498049 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d7vw\" (UniqueName: \"kubernetes.io/projected/03922579-00da-4ea3-ba7e-efeb5062632f-kube-api-access-4d7vw\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-cwhqs\" (UID: \"03922579-00da-4ea3-ba7e-efeb5062632f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cwhqs"
Jan 30 17:23:07 crc kubenswrapper[4712]: I0130 17:23:07.628412 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cwhqs"
Jan 30 17:23:07 crc kubenswrapper[4712]: I0130 17:23:07.919954 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bs7pg"
Jan 30 17:23:07 crc kubenswrapper[4712]: I0130 17:23:07.920482 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bs7pg"
Jan 30 17:23:07 crc kubenswrapper[4712]: I0130 17:23:07.971935 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bs7pg"
Jan 30 17:23:08 crc kubenswrapper[4712]: I0130 17:23:08.170679 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cwhqs"]
Jan 30 17:23:08 crc kubenswrapper[4712]: I0130 17:23:08.182190 4712 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 17:23:08 crc kubenswrapper[4712]: I0130 17:23:08.227660 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cwhqs" event={"ID":"03922579-00da-4ea3-ba7e-efeb5062632f","Type":"ContainerStarted","Data":"1acfa54724d083d264c79748804a2a86bd4ad11aa918fbf25d923635cd05ab9a"}
Jan 30 17:23:08 crc kubenswrapper[4712]: I0130 17:23:08.278223 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bs7pg"
Jan 30 17:23:08 crc kubenswrapper[4712]: I0130 17:23:08.378688 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bs7pg"]
Jan 30 17:23:08 crc kubenswrapper[4712]: I0130 17:23:08.441769 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pdgsh"]
Jan 30 17:23:08 crc kubenswrapper[4712]: I0130 17:23:08.442083 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pdgsh" podUID="150a284f-86ca-495d-ad65-096b9213b93a" containerName="registry-server" containerID="cri-o://9465ea723a4d3ef5ebe8acfd87c07170d0e8c0d9f08fb67735f7cafdf8d529b1" gracePeriod=2
Jan 30 17:23:09 crc kubenswrapper[4712]: E0130 17:23:09.158958 4712 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9465ea723a4d3ef5ebe8acfd87c07170d0e8c0d9f08fb67735f7cafdf8d529b1" cmd=["grpc_health_probe","-addr=:50051"]
Jan 30 17:23:09 crc kubenswrapper[4712]: E0130 17:23:09.160476 4712 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9465ea723a4d3ef5ebe8acfd87c07170d0e8c0d9f08fb67735f7cafdf8d529b1" cmd=["grpc_health_probe","-addr=:50051"]
Jan 30 17:23:09 crc kubenswrapper[4712]: E0130 17:23:09.164997 4712 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="9465ea723a4d3ef5ebe8acfd87c07170d0e8c0d9f08fb67735f7cafdf8d529b1" cmd=["grpc_health_probe","-addr=:50051"]
Jan 30 17:23:09 crc kubenswrapper[4712]: E0130 17:23:09.165073 4712 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-marketplace/certified-operators-pdgsh" podUID="150a284f-86ca-495d-ad65-096b9213b93a" containerName="registry-server"
Jan 30 17:23:09 crc kubenswrapper[4712]: I0130 17:23:09.247157 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cwhqs" event={"ID":"03922579-00da-4ea3-ba7e-efeb5062632f","Type":"ContainerStarted","Data":"319d43080bb59c63e16525e009dd4e8b6cdfb0c78345b8c3b1eee64fe492795a"}
Jan 30 17:23:09 crc kubenswrapper[4712]: I0130 17:23:09.278613 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cwhqs" podStartSLOduration=1.791285592 podStartE2EDuration="2.278579166s" podCreationTimestamp="2026-01-30 17:23:07 +0000 UTC" firstStartedPulling="2026-01-30 17:23:08.180752938 +0000 UTC m=+1725.087762407" lastFinishedPulling="2026-01-30 17:23:08.668046512 +0000 UTC m=+1725.575055981" observedRunningTime="2026-01-30 17:23:09.272728175 +0000 UTC m=+1726.179737644" watchObservedRunningTime="2026-01-30 17:23:09.278579166 +0000 UTC m=+1726.185588635"
Jan 30 17:23:11 crc kubenswrapper[4712]: I0130 17:23:11.059078 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pdgsh_150a284f-86ca-495d-ad65-096b9213b93a/registry-server/0.log"
Jan 30 17:23:11 crc kubenswrapper[4712]: I0130 17:23:11.060834 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pdgsh"
Jan 30 17:23:11 crc kubenswrapper[4712]: I0130 17:23:11.162092 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snftf\" (UniqueName: \"kubernetes.io/projected/150a284f-86ca-495d-ad65-096b9213b93a-kube-api-access-snftf\") pod \"150a284f-86ca-495d-ad65-096b9213b93a\" (UID: \"150a284f-86ca-495d-ad65-096b9213b93a\") "
Jan 30 17:23:11 crc kubenswrapper[4712]: I0130 17:23:11.162249 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/150a284f-86ca-495d-ad65-096b9213b93a-catalog-content\") pod \"150a284f-86ca-495d-ad65-096b9213b93a\" (UID: \"150a284f-86ca-495d-ad65-096b9213b93a\") "
Jan 30 17:23:11 crc kubenswrapper[4712]: I0130 17:23:11.162294 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/150a284f-86ca-495d-ad65-096b9213b93a-utilities\") pod \"150a284f-86ca-495d-ad65-096b9213b93a\" (UID: \"150a284f-86ca-495d-ad65-096b9213b93a\") "
Jan 30 17:23:11 crc kubenswrapper[4712]: I0130 17:23:11.163199 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/150a284f-86ca-495d-ad65-096b9213b93a-utilities" (OuterVolumeSpecName: "utilities") pod "150a284f-86ca-495d-ad65-096b9213b93a" (UID: "150a284f-86ca-495d-ad65-096b9213b93a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:23:11 crc kubenswrapper[4712]: I0130 17:23:11.172328 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/150a284f-86ca-495d-ad65-096b9213b93a-kube-api-access-snftf" (OuterVolumeSpecName: "kube-api-access-snftf") pod "150a284f-86ca-495d-ad65-096b9213b93a" (UID: "150a284f-86ca-495d-ad65-096b9213b93a"). InnerVolumeSpecName "kube-api-access-snftf". PluginName "kubernetes.io/projected", VolumeGidValue ""
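The failing readiness probe above execs grpc_health_probe -addr=:50051 inside the registry container; it errors only because CRI-O refuses to register a new exec PID while the container is already stopping with gracePeriod=2, so the kubelet surfaces a probe error during an orderly shutdown. The probe itself is just a gRPC health Check call; a minimal Go equivalent (the address and the plaintext transport are assumptions matching the probe's flags):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	healthpb "google.golang.org/grpc/health/grpc_health_v1"
    )

    func main() {
    	conn, err := grpc.Dial("127.0.0.1:50051",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
    	defer cancel()

    	// Same RPC grpc_health_probe issues: grpc.health.v1.Health/Check.
    	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
    	if err != nil {
    		panic(err) // a stopping or absent server yields an error, as in the log
    	}
    	fmt.Println("serving status:", resp.GetStatus())
    }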
InnerVolumeSpecName "kube-api-access-snftf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:23:11 crc kubenswrapper[4712]: I0130 17:23:11.236607 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/150a284f-86ca-495d-ad65-096b9213b93a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "150a284f-86ca-495d-ad65-096b9213b93a" (UID: "150a284f-86ca-495d-ad65-096b9213b93a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:23:11 crc kubenswrapper[4712]: I0130 17:23:11.263232 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pdgsh_150a284f-86ca-495d-ad65-096b9213b93a/registry-server/0.log" Jan 30 17:23:11 crc kubenswrapper[4712]: I0130 17:23:11.264024 4712 generic.go:334] "Generic (PLEG): container finished" podID="150a284f-86ca-495d-ad65-096b9213b93a" containerID="9465ea723a4d3ef5ebe8acfd87c07170d0e8c0d9f08fb67735f7cafdf8d529b1" exitCode=137 Jan 30 17:23:11 crc kubenswrapper[4712]: I0130 17:23:11.264063 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdgsh" event={"ID":"150a284f-86ca-495d-ad65-096b9213b93a","Type":"ContainerDied","Data":"9465ea723a4d3ef5ebe8acfd87c07170d0e8c0d9f08fb67735f7cafdf8d529b1"} Jan 30 17:23:11 crc kubenswrapper[4712]: I0130 17:23:11.264092 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pdgsh" Jan 30 17:23:11 crc kubenswrapper[4712]: I0130 17:23:11.264111 4712 scope.go:117] "RemoveContainer" containerID="9465ea723a4d3ef5ebe8acfd87c07170d0e8c0d9f08fb67735f7cafdf8d529b1" Jan 30 17:23:11 crc kubenswrapper[4712]: I0130 17:23:11.264096 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pdgsh" event={"ID":"150a284f-86ca-495d-ad65-096b9213b93a","Type":"ContainerDied","Data":"7a59f362253d04578a82c4395546aa95c458ec796b769ca9a58bae0b099cfdb9"} Jan 30 17:23:11 crc kubenswrapper[4712]: I0130 17:23:11.265616 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snftf\" (UniqueName: \"kubernetes.io/projected/150a284f-86ca-495d-ad65-096b9213b93a-kube-api-access-snftf\") on node \"crc\" DevicePath \"\"" Jan 30 17:23:11 crc kubenswrapper[4712]: I0130 17:23:11.274923 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/150a284f-86ca-495d-ad65-096b9213b93a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:23:11 crc kubenswrapper[4712]: I0130 17:23:11.274969 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/150a284f-86ca-495d-ad65-096b9213b93a-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:23:11 crc kubenswrapper[4712]: I0130 17:23:11.309954 4712 scope.go:117] "RemoveContainer" containerID="8247f4938601d2f2e93ee5f451671e8b9f0441c5b131ca07311d9d4f0611b851" Jan 30 17:23:11 crc kubenswrapper[4712]: I0130 17:23:11.344380 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pdgsh"] Jan 30 17:23:11 crc kubenswrapper[4712]: I0130 17:23:11.352089 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pdgsh"] Jan 30 17:23:11 crc kubenswrapper[4712]: I0130 17:23:11.397763 4712 scope.go:117] "RemoveContainer" 
containerID="cbe036c22f52068e17c400a44fe85d529cdb492c3e22e6f4463eb87d56007363" Jan 30 17:23:11 crc kubenswrapper[4712]: I0130 17:23:11.437943 4712 scope.go:117] "RemoveContainer" containerID="9465ea723a4d3ef5ebe8acfd87c07170d0e8c0d9f08fb67735f7cafdf8d529b1" Jan 30 17:23:11 crc kubenswrapper[4712]: E0130 17:23:11.441916 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9465ea723a4d3ef5ebe8acfd87c07170d0e8c0d9f08fb67735f7cafdf8d529b1\": container with ID starting with 9465ea723a4d3ef5ebe8acfd87c07170d0e8c0d9f08fb67735f7cafdf8d529b1 not found: ID does not exist" containerID="9465ea723a4d3ef5ebe8acfd87c07170d0e8c0d9f08fb67735f7cafdf8d529b1" Jan 30 17:23:11 crc kubenswrapper[4712]: I0130 17:23:11.441958 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9465ea723a4d3ef5ebe8acfd87c07170d0e8c0d9f08fb67735f7cafdf8d529b1"} err="failed to get container status \"9465ea723a4d3ef5ebe8acfd87c07170d0e8c0d9f08fb67735f7cafdf8d529b1\": rpc error: code = NotFound desc = could not find container \"9465ea723a4d3ef5ebe8acfd87c07170d0e8c0d9f08fb67735f7cafdf8d529b1\": container with ID starting with 9465ea723a4d3ef5ebe8acfd87c07170d0e8c0d9f08fb67735f7cafdf8d529b1 not found: ID does not exist" Jan 30 17:23:11 crc kubenswrapper[4712]: I0130 17:23:11.441988 4712 scope.go:117] "RemoveContainer" containerID="8247f4938601d2f2e93ee5f451671e8b9f0441c5b131ca07311d9d4f0611b851" Jan 30 17:23:11 crc kubenswrapper[4712]: E0130 17:23:11.449917 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8247f4938601d2f2e93ee5f451671e8b9f0441c5b131ca07311d9d4f0611b851\": container with ID starting with 8247f4938601d2f2e93ee5f451671e8b9f0441c5b131ca07311d9d4f0611b851 not found: ID does not exist" containerID="8247f4938601d2f2e93ee5f451671e8b9f0441c5b131ca07311d9d4f0611b851" Jan 30 17:23:11 crc kubenswrapper[4712]: I0130 17:23:11.449961 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8247f4938601d2f2e93ee5f451671e8b9f0441c5b131ca07311d9d4f0611b851"} err="failed to get container status \"8247f4938601d2f2e93ee5f451671e8b9f0441c5b131ca07311d9d4f0611b851\": rpc error: code = NotFound desc = could not find container \"8247f4938601d2f2e93ee5f451671e8b9f0441c5b131ca07311d9d4f0611b851\": container with ID starting with 8247f4938601d2f2e93ee5f451671e8b9f0441c5b131ca07311d9d4f0611b851 not found: ID does not exist" Jan 30 17:23:11 crc kubenswrapper[4712]: I0130 17:23:11.449987 4712 scope.go:117] "RemoveContainer" containerID="cbe036c22f52068e17c400a44fe85d529cdb492c3e22e6f4463eb87d56007363" Jan 30 17:23:11 crc kubenswrapper[4712]: E0130 17:23:11.450780 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cbe036c22f52068e17c400a44fe85d529cdb492c3e22e6f4463eb87d56007363\": container with ID starting with cbe036c22f52068e17c400a44fe85d529cdb492c3e22e6f4463eb87d56007363 not found: ID does not exist" containerID="cbe036c22f52068e17c400a44fe85d529cdb492c3e22e6f4463eb87d56007363" Jan 30 17:23:11 crc kubenswrapper[4712]: I0130 17:23:11.450818 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cbe036c22f52068e17c400a44fe85d529cdb492c3e22e6f4463eb87d56007363"} err="failed to get container status \"cbe036c22f52068e17c400a44fe85d529cdb492c3e22e6f4463eb87d56007363\": rpc error: code = 
NotFound desc = could not find container \"cbe036c22f52068e17c400a44fe85d529cdb492c3e22e6f4463eb87d56007363\": container with ID starting with cbe036c22f52068e17c400a44fe85d529cdb492c3e22e6f4463eb87d56007363 not found: ID does not exist" Jan 30 17:23:11 crc kubenswrapper[4712]: I0130 17:23:11.812280 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="150a284f-86ca-495d-ad65-096b9213b93a" path="/var/lib/kubelet/pods/150a284f-86ca-495d-ad65-096b9213b93a/volumes" Jan 30 17:23:18 crc kubenswrapper[4712]: I0130 17:23:18.799964 4712 scope.go:117] "RemoveContainer" containerID="261650ec2eba774e22792202e452b7b775a9451e27eb25c95a23e9f7a7f63330" Jan 30 17:23:18 crc kubenswrapper[4712]: E0130 17:23:18.800726 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:23:31 crc kubenswrapper[4712]: I0130 17:23:31.800275 4712 scope.go:117] "RemoveContainer" containerID="261650ec2eba774e22792202e452b7b775a9451e27eb25c95a23e9f7a7f63330" Jan 30 17:23:31 crc kubenswrapper[4712]: E0130 17:23:31.801242 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:23:36 crc kubenswrapper[4712]: I0130 17:23:36.478531 4712 scope.go:117] "RemoveContainer" containerID="4e80187a3b6c9283da731ffe5a293d4662eca7d098dad2dcd88a859869314be1" Jan 30 17:23:36 crc kubenswrapper[4712]: I0130 17:23:36.511000 4712 scope.go:117] "RemoveContainer" containerID="2abff2a39f69c92d6b6f1a7bd3de162fe1a94708d72b57a74c331880b4618230" Jan 30 17:23:36 crc kubenswrapper[4712]: I0130 17:23:36.535331 4712 scope.go:117] "RemoveContainer" containerID="41bb890082e2894c9e3d503a74b8fafda69c11b38b44180f090ea29485338140" Jan 30 17:23:36 crc kubenswrapper[4712]: I0130 17:23:36.564762 4712 scope.go:117] "RemoveContainer" containerID="7747f5be190ec75eb1e9bd4b2e5287e50b0b7f3283a8928f3616bcdef7e41c73" Jan 30 17:23:46 crc kubenswrapper[4712]: I0130 17:23:46.801008 4712 scope.go:117] "RemoveContainer" containerID="261650ec2eba774e22792202e452b7b775a9451e27eb25c95a23e9f7a7f63330" Jan 30 17:23:46 crc kubenswrapper[4712]: E0130 17:23:46.801736 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:24:00 crc kubenswrapper[4712]: I0130 17:24:00.798978 4712 scope.go:117] "RemoveContainer" containerID="261650ec2eba774e22792202e452b7b775a9451e27eb25c95a23e9f7a7f63330" Jan 30 17:24:00 crc kubenswrapper[4712]: E0130 17:24:00.799681 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
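The RemoveContainer / "ContainerStatus from runtime service failed ... code = NotFound" pairs above are the benign side of garbage collection: by the time the kubelet re-checks a container it has already deleted, CRI-O answers NotFound, and the kubelet logs the error and moves on. The usual pattern for that kind of idempotent cleanup against a gRPC API is to treat NotFound as success; a small sketch (removeFromRuntime is a hypothetical stand-in for the CRI call, wired here to always return NotFound so the example is self-contained):

    package main

    import (
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // removeFromRuntime is a hypothetical stand-in for a CRI RemoveContainer call.
    func removeFromRuntime(id string) error {
    	return status.Errorf(codes.NotFound, "could not find container %q", id)
    }

    // cleanup treats NotFound as success so repeated deletes stay idempotent.
    func cleanup(id string) error {
    	if err := removeFromRuntime(id); err != nil {
    		if status.Code(err) == codes.NotFound {
    			return nil // already gone: the outcome we wanted
    		}
    		return err
    	}
    	return nil
    }

    func main() {
    	if err := cleanup("9465ea723a4d3ef5ebe8acfd87c07170d0e8c0d9f08fb67735f7cafdf8d529b1"); err != nil {
    		panic(err)
    	}
    	fmt.Println("container removed or already absent")
    }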
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:24:14 crc kubenswrapper[4712]: I0130 17:24:14.799898 4712 scope.go:117] "RemoveContainer" containerID="261650ec2eba774e22792202e452b7b775a9451e27eb25c95a23e9f7a7f63330" Jan 30 17:24:14 crc kubenswrapper[4712]: E0130 17:24:14.800658 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:24:23 crc kubenswrapper[4712]: I0130 17:24:23.054125 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-wv76z"] Jan 30 17:24:23 crc kubenswrapper[4712]: I0130 17:24:23.062982 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-55c7-account-create-update-kz29l"] Jan 30 17:24:23 crc kubenswrapper[4712]: I0130 17:24:23.072612 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-kvjrp"] Jan 30 17:24:23 crc kubenswrapper[4712]: I0130 17:24:23.083871 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-c85vb"] Jan 30 17:24:23 crc kubenswrapper[4712]: I0130 17:24:23.094661 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-73dc-account-create-update-675c8"] Jan 30 17:24:23 crc kubenswrapper[4712]: I0130 17:24:23.103647 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-be6c-account-create-update-x29l7"] Jan 30 17:24:23 crc kubenswrapper[4712]: I0130 17:24:23.111463 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-c85vb"] Jan 30 17:24:23 crc kubenswrapper[4712]: I0130 17:24:23.119993 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-kvjrp"] Jan 30 17:24:23 crc kubenswrapper[4712]: I0130 17:24:23.129905 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-55c7-account-create-update-kz29l"] Jan 30 17:24:23 crc kubenswrapper[4712]: I0130 17:24:23.138323 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-be6c-account-create-update-x29l7"] Jan 30 17:24:23 crc kubenswrapper[4712]: I0130 17:24:23.146912 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-wv76z"] Jan 30 17:24:23 crc kubenswrapper[4712]: I0130 17:24:23.154059 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-73dc-account-create-update-675c8"] Jan 30 17:24:23 crc kubenswrapper[4712]: I0130 17:24:23.815808 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ae6aebe-2ff5-42e3-bfd1-48b0b2b579c8" path="/var/lib/kubelet/pods/0ae6aebe-2ff5-42e3-bfd1-48b0b2b579c8/volumes" Jan 30 17:24:23 crc kubenswrapper[4712]: I0130 17:24:23.816511 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a6d2018-2c94-4c5f-8a8a-03c69bfac444" path="/var/lib/kubelet/pods/3a6d2018-2c94-4c5f-8a8a-03c69bfac444/volumes" Jan 30 17:24:23 crc 
kubenswrapper[4712]: I0130 17:24:23.817704 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e13e69b-0a9c-4100-a869-67d199b76f55" path="/var/lib/kubelet/pods/3e13e69b-0a9c-4100-a869-67d199b76f55/volumes" Jan 30 17:24:23 crc kubenswrapper[4712]: I0130 17:24:23.818402 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40f78f2d-d7fe-4199-853a-b45c352c93a5" path="/var/lib/kubelet/pods/40f78f2d-d7fe-4199-853a-b45c352c93a5/volumes" Jan 30 17:24:23 crc kubenswrapper[4712]: I0130 17:24:23.820069 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96165653-9d73-4013-afb2-f922fc4d1eed" path="/var/lib/kubelet/pods/96165653-9d73-4013-afb2-f922fc4d1eed/volumes" Jan 30 17:24:23 crc kubenswrapper[4712]: I0130 17:24:23.821363 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afdb21ea-b35a-4413-b25a-f8e0fcf10c13" path="/var/lib/kubelet/pods/afdb21ea-b35a-4413-b25a-f8e0fcf10c13/volumes" Jan 30 17:24:29 crc kubenswrapper[4712]: I0130 17:24:29.799607 4712 scope.go:117] "RemoveContainer" containerID="261650ec2eba774e22792202e452b7b775a9451e27eb25c95a23e9f7a7f63330" Jan 30 17:24:29 crc kubenswrapper[4712]: E0130 17:24:29.800563 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:24:36 crc kubenswrapper[4712]: I0130 17:24:36.650083 4712 scope.go:117] "RemoveContainer" containerID="df0a7af201ffa9e1d7e8047915d9fdc09b3789563ee68287e2c4ef43a9ec650a" Jan 30 17:24:36 crc kubenswrapper[4712]: I0130 17:24:36.685686 4712 scope.go:117] "RemoveContainer" containerID="71faa3f91d5802f8b121e02f21a587237bcc60ee3b6089b689c551ae42bb7afe" Jan 30 17:24:36 crc kubenswrapper[4712]: I0130 17:24:36.724756 4712 scope.go:117] "RemoveContainer" containerID="517e6c5cc9aab763664b393dad4c13bef938cd575257263639432c3808c2c01f" Jan 30 17:24:36 crc kubenswrapper[4712]: I0130 17:24:36.764869 4712 scope.go:117] "RemoveContainer" containerID="5c423648df8ac34e33c902d64166e29dfe51de0b8347638f425a8fce7cdc5e66" Jan 30 17:24:36 crc kubenswrapper[4712]: I0130 17:24:36.808569 4712 scope.go:117] "RemoveContainer" containerID="d48b19835ff38127cc7b74972b89601c860cb22bb181c5259c48cde4d7f18bc6" Jan 30 17:24:36 crc kubenswrapper[4712]: I0130 17:24:36.848715 4712 scope.go:117] "RemoveContainer" containerID="a41ba1bfe995ca4f61a819c738c715dc7a7510b78fec850c9885e97c256a6365" Jan 30 17:24:36 crc kubenswrapper[4712]: I0130 17:24:36.898817 4712 scope.go:117] "RemoveContainer" containerID="84e344f6c464576030ecbde14be96325e966b006ad94b1a3323a65a08650dfdb" Jan 30 17:24:43 crc kubenswrapper[4712]: I0130 17:24:43.806048 4712 scope.go:117] "RemoveContainer" containerID="261650ec2eba774e22792202e452b7b775a9451e27eb25c95a23e9f7a7f63330" Jan 30 17:24:43 crc kubenswrapper[4712]: E0130 17:24:43.807111 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" 
podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:24:58 crc kubenswrapper[4712]: I0130 17:24:58.799439 4712 scope.go:117] "RemoveContainer" containerID="261650ec2eba774e22792202e452b7b775a9451e27eb25c95a23e9f7a7f63330" Jan 30 17:24:58 crc kubenswrapper[4712]: E0130 17:24:58.800498 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:25:07 crc kubenswrapper[4712]: I0130 17:25:07.048664 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-q59jh"] Jan 30 17:25:07 crc kubenswrapper[4712]: I0130 17:25:07.057117 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-q59jh"] Jan 30 17:25:07 crc kubenswrapper[4712]: I0130 17:25:07.831904 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7426546b-0d60-4c6e-b888-c2293defc468" path="/var/lib/kubelet/pods/7426546b-0d60-4c6e-b888-c2293defc468/volumes" Jan 30 17:25:11 crc kubenswrapper[4712]: I0130 17:25:11.799553 4712 scope.go:117] "RemoveContainer" containerID="261650ec2eba774e22792202e452b7b775a9451e27eb25c95a23e9f7a7f63330" Jan 30 17:25:11 crc kubenswrapper[4712]: E0130 17:25:11.800339 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:25:12 crc kubenswrapper[4712]: I0130 17:25:12.069093 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-7v96g"] Jan 30 17:25:12 crc kubenswrapper[4712]: I0130 17:25:12.083200 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-sdcjd"] Jan 30 17:25:12 crc kubenswrapper[4712]: I0130 17:25:12.102917 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-8597-account-create-update-dktjd"] Jan 30 17:25:12 crc kubenswrapper[4712]: I0130 17:25:12.124672 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-w277s"] Jan 30 17:25:12 crc kubenswrapper[4712]: I0130 17:25:12.134636 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-2rr2s"] Jan 30 17:25:12 crc kubenswrapper[4712]: I0130 17:25:12.144604 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-7v96g"] Jan 30 17:25:12 crc kubenswrapper[4712]: I0130 17:25:12.154495 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-sdcjd"] Jan 30 17:25:12 crc kubenswrapper[4712]: I0130 17:25:12.165181 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-w277s"] Jan 30 17:25:12 crc kubenswrapper[4712]: I0130 17:25:12.175491 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-2rr2s"] Jan 30 17:25:12 crc kubenswrapper[4712]: I0130 17:25:12.185468 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/barbican-8597-account-create-update-dktjd"] Jan 30 17:25:12 crc kubenswrapper[4712]: I0130 17:25:12.193665 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-zd4d8"] Jan 30 17:25:12 crc kubenswrapper[4712]: I0130 17:25:12.203356 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-zd4d8"] Jan 30 17:25:13 crc kubenswrapper[4712]: I0130 17:25:13.815322 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bdbe95d-75db-4a6b-8204-0c9bdfc8f6bd" path="/var/lib/kubelet/pods/2bdbe95d-75db-4a6b-8204-0c9bdfc8f6bd/volumes" Jan 30 17:25:13 crc kubenswrapper[4712]: I0130 17:25:13.817542 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9adf62bc-41cc-4682-8943-b72859412ebc" path="/var/lib/kubelet/pods/9adf62bc-41cc-4682-8943-b72859412ebc/volumes" Jan 30 17:25:13 crc kubenswrapper[4712]: I0130 17:25:13.820748 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6d0d92d-69bc-4285-98df-0f3bda502989" path="/var/lib/kubelet/pods/a6d0d92d-69bc-4285-98df-0f3bda502989/volumes" Jan 30 17:25:13 crc kubenswrapper[4712]: I0130 17:25:13.822828 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a71905e7-0e29-40df-8d89-4a9a15cf0079" path="/var/lib/kubelet/pods/a71905e7-0e29-40df-8d89-4a9a15cf0079/volumes" Jan 30 17:25:13 crc kubenswrapper[4712]: I0130 17:25:13.823584 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b36df6b0-4d60-47bd-a5e3-c8570fa81424" path="/var/lib/kubelet/pods/b36df6b0-4d60-47bd-a5e3-c8570fa81424/volumes" Jan 30 17:25:13 crc kubenswrapper[4712]: I0130 17:25:13.825754 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfc6cfd7-a3e2-4520-ac86-ff011cd96593" path="/var/lib/kubelet/pods/dfc6cfd7-a3e2-4520-ac86-ff011cd96593/volumes" Jan 30 17:25:18 crc kubenswrapper[4712]: I0130 17:25:18.038227 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-5890-account-create-update-n55qw"] Jan 30 17:25:18 crc kubenswrapper[4712]: I0130 17:25:18.051819 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7b37-account-create-update-mdbpm"] Jan 30 17:25:18 crc kubenswrapper[4712]: I0130 17:25:18.063242 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-6ee6-account-create-update-dl58q"] Jan 30 17:25:18 crc kubenswrapper[4712]: I0130 17:25:18.073442 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-6ee6-account-create-update-dl58q"] Jan 30 17:25:18 crc kubenswrapper[4712]: I0130 17:25:18.083492 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7b37-account-create-update-mdbpm"] Jan 30 17:25:18 crc kubenswrapper[4712]: I0130 17:25:18.093921 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-5890-account-create-update-n55qw"] Jan 30 17:25:19 crc kubenswrapper[4712]: I0130 17:25:19.811012 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01856653-57a6-4e16-810c-95e7cf57014f" path="/var/lib/kubelet/pods/01856653-57a6-4e16-810c-95e7cf57014f/volumes" Jan 30 17:25:19 crc kubenswrapper[4712]: I0130 17:25:19.813596 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="325fa6b1-02e6-4ef7-aa98-99a417a5178b" path="/var/lib/kubelet/pods/325fa6b1-02e6-4ef7-aa98-99a417a5178b/volumes" Jan 30 17:25:19 crc kubenswrapper[4712]: I0130 17:25:19.814896 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="5df96043-da07-44d6-bd5e-f90001f55f1f" path="/var/lib/kubelet/pods/5df96043-da07-44d6-bd5e-f90001f55f1f/volumes" Jan 30 17:25:24 crc kubenswrapper[4712]: I0130 17:25:24.039173 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-97hwk"] Jan 30 17:25:24 crc kubenswrapper[4712]: I0130 17:25:24.045882 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-97hwk"] Jan 30 17:25:25 crc kubenswrapper[4712]: I0130 17:25:25.811740 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7607c458-cbb6-43d4-8a85-e631507e9d66" path="/var/lib/kubelet/pods/7607c458-cbb6-43d4-8a85-e631507e9d66/volumes" Jan 30 17:25:26 crc kubenswrapper[4712]: I0130 17:25:26.801379 4712 scope.go:117] "RemoveContainer" containerID="261650ec2eba774e22792202e452b7b775a9451e27eb25c95a23e9f7a7f63330" Jan 30 17:25:26 crc kubenswrapper[4712]: E0130 17:25:26.802060 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:25:37 crc kubenswrapper[4712]: I0130 17:25:37.084550 4712 scope.go:117] "RemoveContainer" containerID="148a8ab19e12de20aeb4b7145ddbceda6f38491b4135f92fe2bd5f3a0553dd27" Jan 30 17:25:37 crc kubenswrapper[4712]: I0130 17:25:37.131817 4712 scope.go:117] "RemoveContainer" containerID="5358ca1f343981fb6b618413531a9590f90c9743b083d8eeac6cb4e9d1c4ccda" Jan 30 17:25:37 crc kubenswrapper[4712]: I0130 17:25:37.157398 4712 scope.go:117] "RemoveContainer" containerID="0141c288682731d5610bf03db69cbd77b3ddfb2e6249b05475bb4061caf2b297" Jan 30 17:25:37 crc kubenswrapper[4712]: I0130 17:25:37.204980 4712 scope.go:117] "RemoveContainer" containerID="a5d8fd67f1f0de8064669c7048566a7a4366f1e0b2e483d9191c923041b8fe19" Jan 30 17:25:37 crc kubenswrapper[4712]: I0130 17:25:37.250030 4712 scope.go:117] "RemoveContainer" containerID="847e223ef2a3a759dc94f6e7d2c41e9a894c98a0bd770cef7059211c1dc282f6" Jan 30 17:25:37 crc kubenswrapper[4712]: I0130 17:25:37.293278 4712 scope.go:117] "RemoveContainer" containerID="60c9308c0ed62adc024fd48aef67555cce594bb2843f360247f55e39397db1b0" Jan 30 17:25:37 crc kubenswrapper[4712]: I0130 17:25:37.336690 4712 scope.go:117] "RemoveContainer" containerID="96e758a07ebdd3575f4de3816c435903d2e17cf8c6ef90501540a3e5a431583f" Jan 30 17:25:37 crc kubenswrapper[4712]: I0130 17:25:37.356541 4712 scope.go:117] "RemoveContainer" containerID="7d9ebe2758317c93c69b5bc90b0d2af9f645bd05c2d96f112b2a027f42a6debc" Jan 30 17:25:37 crc kubenswrapper[4712]: I0130 17:25:37.375060 4712 scope.go:117] "RemoveContainer" containerID="28e96750c07fbc8c01b200ea3e91c04442cd43e0ff95f8c5447ff55cb81419be" Jan 30 17:25:37 crc kubenswrapper[4712]: I0130 17:25:37.393427 4712 scope.go:117] "RemoveContainer" containerID="ae1866255ee9d0c0b636a2048b70966260ae56843080505eabdd58b9aadc3b4d" Jan 30 17:25:37 crc kubenswrapper[4712]: I0130 17:25:37.414921 4712 scope.go:117] "RemoveContainer" containerID="6827db413ce501836609062f68853422a888dfeffcce0c2fca3c7ec9cc0b9452" Jan 30 17:25:39 crc kubenswrapper[4712]: I0130 17:25:39.801284 4712 scope.go:117] "RemoveContainer" containerID="261650ec2eba774e22792202e452b7b775a9451e27eb25c95a23e9f7a7f63330" Jan 30 17:25:39 crc kubenswrapper[4712]: 
E0130 17:25:39.801751 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:25:53 crc kubenswrapper[4712]: I0130 17:25:53.800900 4712 scope.go:117] "RemoveContainer" containerID="261650ec2eba774e22792202e452b7b775a9451e27eb25c95a23e9f7a7f63330" Jan 30 17:25:53 crc kubenswrapper[4712]: E0130 17:25:53.802038 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:26:06 crc kubenswrapper[4712]: I0130 17:26:06.800031 4712 scope.go:117] "RemoveContainer" containerID="261650ec2eba774e22792202e452b7b775a9451e27eb25c95a23e9f7a7f63330" Jan 30 17:26:07 crc kubenswrapper[4712]: I0130 17:26:07.896704 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerStarted","Data":"6bff8f420280843f1dcba83eeb7d6607277904ab6cbb2965c10673f888b9f646"} Jan 30 17:26:14 crc kubenswrapper[4712]: I0130 17:26:14.049919 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-ldhgd"] Jan 30 17:26:14 crc kubenswrapper[4712]: I0130 17:26:14.083079 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-ldhgd"] Jan 30 17:26:15 crc kubenswrapper[4712]: I0130 17:26:15.810089 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67221ffc-37c6-458b-b4b4-26ef6e628c0b" path="/var/lib/kubelet/pods/67221ffc-37c6-458b-b4b4-26ef6e628c0b/volumes" Jan 30 17:26:27 crc kubenswrapper[4712]: I0130 17:26:27.048088 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-kmcjp"] Jan 30 17:26:27 crc kubenswrapper[4712]: I0130 17:26:27.060240 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-kmcjp"] Jan 30 17:26:27 crc kubenswrapper[4712]: I0130 17:26:27.837929 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="540ab89b-e7b1-4c3f-ad6d-535ecaa5870c" path="/var/lib/kubelet/pods/540ab89b-e7b1-4c3f-ad6d-535ecaa5870c/volumes" Jan 30 17:26:30 crc kubenswrapper[4712]: I0130 17:26:30.048939 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-p8pht"] Jan 30 17:26:30 crc kubenswrapper[4712]: I0130 17:26:30.064652 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-p8pht"] Jan 30 17:26:31 crc kubenswrapper[4712]: I0130 17:26:31.814575 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef70cf25-e984-4397-b60e-78199d8f41bf" path="/var/lib/kubelet/pods/ef70cf25-e984-4397-b60e-78199d8f41bf/volumes" Jan 30 17:26:34 crc kubenswrapper[4712]: I0130 17:26:34.109836 4712 generic.go:334] "Generic (PLEG): container finished" podID="03922579-00da-4ea3-ba7e-efeb5062632f" 
containerID="319d43080bb59c63e16525e009dd4e8b6cdfb0c78345b8c3b1eee64fe492795a" exitCode=0 Jan 30 17:26:34 crc kubenswrapper[4712]: I0130 17:26:34.110166 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cwhqs" event={"ID":"03922579-00da-4ea3-ba7e-efeb5062632f","Type":"ContainerDied","Data":"319d43080bb59c63e16525e009dd4e8b6cdfb0c78345b8c3b1eee64fe492795a"} Jan 30 17:26:35 crc kubenswrapper[4712]: I0130 17:26:35.625954 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cwhqs" Jan 30 17:26:35 crc kubenswrapper[4712]: I0130 17:26:35.725909 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03922579-00da-4ea3-ba7e-efeb5062632f-inventory\") pod \"03922579-00da-4ea3-ba7e-efeb5062632f\" (UID: \"03922579-00da-4ea3-ba7e-efeb5062632f\") " Jan 30 17:26:35 crc kubenswrapper[4712]: I0130 17:26:35.725958 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d7vw\" (UniqueName: \"kubernetes.io/projected/03922579-00da-4ea3-ba7e-efeb5062632f-kube-api-access-4d7vw\") pod \"03922579-00da-4ea3-ba7e-efeb5062632f\" (UID: \"03922579-00da-4ea3-ba7e-efeb5062632f\") " Jan 30 17:26:35 crc kubenswrapper[4712]: I0130 17:26:35.726042 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03922579-00da-4ea3-ba7e-efeb5062632f-bootstrap-combined-ca-bundle\") pod \"03922579-00da-4ea3-ba7e-efeb5062632f\" (UID: \"03922579-00da-4ea3-ba7e-efeb5062632f\") " Jan 30 17:26:35 crc kubenswrapper[4712]: I0130 17:26:35.726214 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03922579-00da-4ea3-ba7e-efeb5062632f-ssh-key-openstack-edpm-ipam\") pod \"03922579-00da-4ea3-ba7e-efeb5062632f\" (UID: \"03922579-00da-4ea3-ba7e-efeb5062632f\") " Jan 30 17:26:35 crc kubenswrapper[4712]: I0130 17:26:35.731202 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03922579-00da-4ea3-ba7e-efeb5062632f-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "03922579-00da-4ea3-ba7e-efeb5062632f" (UID: "03922579-00da-4ea3-ba7e-efeb5062632f"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:26:35 crc kubenswrapper[4712]: I0130 17:26:35.749730 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03922579-00da-4ea3-ba7e-efeb5062632f-kube-api-access-4d7vw" (OuterVolumeSpecName: "kube-api-access-4d7vw") pod "03922579-00da-4ea3-ba7e-efeb5062632f" (UID: "03922579-00da-4ea3-ba7e-efeb5062632f"). InnerVolumeSpecName "kube-api-access-4d7vw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:26:35 crc kubenswrapper[4712]: I0130 17:26:35.758424 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03922579-00da-4ea3-ba7e-efeb5062632f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "03922579-00da-4ea3-ba7e-efeb5062632f" (UID: "03922579-00da-4ea3-ba7e-efeb5062632f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:26:35 crc kubenswrapper[4712]: I0130 17:26:35.769593 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03922579-00da-4ea3-ba7e-efeb5062632f-inventory" (OuterVolumeSpecName: "inventory") pod "03922579-00da-4ea3-ba7e-efeb5062632f" (UID: "03922579-00da-4ea3-ba7e-efeb5062632f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:26:35 crc kubenswrapper[4712]: I0130 17:26:35.829266 4712 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03922579-00da-4ea3-ba7e-efeb5062632f-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:26:35 crc kubenswrapper[4712]: I0130 17:26:35.829294 4712 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/03922579-00da-4ea3-ba7e-efeb5062632f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 17:26:35 crc kubenswrapper[4712]: I0130 17:26:35.829304 4712 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/03922579-00da-4ea3-ba7e-efeb5062632f-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 17:26:35 crc kubenswrapper[4712]: I0130 17:26:35.829313 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d7vw\" (UniqueName: \"kubernetes.io/projected/03922579-00da-4ea3-ba7e-efeb5062632f-kube-api-access-4d7vw\") on node \"crc\" DevicePath \"\"" Jan 30 17:26:36 crc kubenswrapper[4712]: I0130 17:26:36.131722 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cwhqs" event={"ID":"03922579-00da-4ea3-ba7e-efeb5062632f","Type":"ContainerDied","Data":"1acfa54724d083d264c79748804a2a86bd4ad11aa918fbf25d923635cd05ab9a"} Jan 30 17:26:36 crc kubenswrapper[4712]: I0130 17:26:36.131772 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1acfa54724d083d264c79748804a2a86bd4ad11aa918fbf25d923635cd05ab9a" Jan 30 17:26:36 crc kubenswrapper[4712]: I0130 17:26:36.131854 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-cwhqs" Jan 30 17:26:36 crc kubenswrapper[4712]: I0130 17:26:36.215380 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kvxpz"] Jan 30 17:26:36 crc kubenswrapper[4712]: E0130 17:26:36.215880 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="150a284f-86ca-495d-ad65-096b9213b93a" containerName="extract-content" Jan 30 17:26:36 crc kubenswrapper[4712]: I0130 17:26:36.215899 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="150a284f-86ca-495d-ad65-096b9213b93a" containerName="extract-content" Jan 30 17:26:36 crc kubenswrapper[4712]: E0130 17:26:36.215916 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03922579-00da-4ea3-ba7e-efeb5062632f" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 30 17:26:36 crc kubenswrapper[4712]: I0130 17:26:36.215924 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="03922579-00da-4ea3-ba7e-efeb5062632f" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 30 17:26:36 crc kubenswrapper[4712]: E0130 17:26:36.215938 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="150a284f-86ca-495d-ad65-096b9213b93a" containerName="extract-utilities" Jan 30 17:26:36 crc kubenswrapper[4712]: I0130 17:26:36.215945 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="150a284f-86ca-495d-ad65-096b9213b93a" containerName="extract-utilities" Jan 30 17:26:36 crc kubenswrapper[4712]: E0130 17:26:36.215962 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="150a284f-86ca-495d-ad65-096b9213b93a" containerName="registry-server" Jan 30 17:26:36 crc kubenswrapper[4712]: I0130 17:26:36.215970 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="150a284f-86ca-495d-ad65-096b9213b93a" containerName="registry-server" Jan 30 17:26:36 crc kubenswrapper[4712]: I0130 17:26:36.216200 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="03922579-00da-4ea3-ba7e-efeb5062632f" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 30 17:26:36 crc kubenswrapper[4712]: I0130 17:26:36.216228 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="150a284f-86ca-495d-ad65-096b9213b93a" containerName="registry-server" Jan 30 17:26:36 crc kubenswrapper[4712]: I0130 17:26:36.217027 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kvxpz" Jan 30 17:26:36 crc kubenswrapper[4712]: I0130 17:26:36.221349 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 17:26:36 crc kubenswrapper[4712]: I0130 17:26:36.221776 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t6jfh" Jan 30 17:26:36 crc kubenswrapper[4712]: I0130 17:26:36.222067 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 17:26:36 crc kubenswrapper[4712]: I0130 17:26:36.222319 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 17:26:36 crc kubenswrapper[4712]: I0130 17:26:36.227379 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kvxpz"] Jan 30 17:26:36 crc kubenswrapper[4712]: I0130 17:26:36.337423 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v72f\" (UniqueName: \"kubernetes.io/projected/6628bf15-f827-4b97-a95e-7ad66750f5db-kube-api-access-9v72f\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kvxpz\" (UID: \"6628bf15-f827-4b97-a95e-7ad66750f5db\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kvxpz" Jan 30 17:26:36 crc kubenswrapper[4712]: I0130 17:26:36.337504 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6628bf15-f827-4b97-a95e-7ad66750f5db-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kvxpz\" (UID: \"6628bf15-f827-4b97-a95e-7ad66750f5db\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kvxpz" Jan 30 17:26:36 crc kubenswrapper[4712]: I0130 17:26:36.337535 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6628bf15-f827-4b97-a95e-7ad66750f5db-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kvxpz\" (UID: \"6628bf15-f827-4b97-a95e-7ad66750f5db\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kvxpz" Jan 30 17:26:36 crc kubenswrapper[4712]: I0130 17:26:36.439649 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v72f\" (UniqueName: \"kubernetes.io/projected/6628bf15-f827-4b97-a95e-7ad66750f5db-kube-api-access-9v72f\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kvxpz\" (UID: \"6628bf15-f827-4b97-a95e-7ad66750f5db\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kvxpz" Jan 30 17:26:36 crc kubenswrapper[4712]: I0130 17:26:36.439994 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6628bf15-f827-4b97-a95e-7ad66750f5db-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kvxpz\" (UID: \"6628bf15-f827-4b97-a95e-7ad66750f5db\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kvxpz" Jan 30 17:26:36 crc kubenswrapper[4712]: I0130 17:26:36.440137 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/6628bf15-f827-4b97-a95e-7ad66750f5db-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kvxpz\" (UID: \"6628bf15-f827-4b97-a95e-7ad66750f5db\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kvxpz" Jan 30 17:26:36 crc kubenswrapper[4712]: I0130 17:26:36.447768 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6628bf15-f827-4b97-a95e-7ad66750f5db-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kvxpz\" (UID: \"6628bf15-f827-4b97-a95e-7ad66750f5db\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kvxpz" Jan 30 17:26:36 crc kubenswrapper[4712]: I0130 17:26:36.452309 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6628bf15-f827-4b97-a95e-7ad66750f5db-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kvxpz\" (UID: \"6628bf15-f827-4b97-a95e-7ad66750f5db\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kvxpz" Jan 30 17:26:36 crc kubenswrapper[4712]: I0130 17:26:36.457203 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v72f\" (UniqueName: \"kubernetes.io/projected/6628bf15-f827-4b97-a95e-7ad66750f5db-kube-api-access-9v72f\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-kvxpz\" (UID: \"6628bf15-f827-4b97-a95e-7ad66750f5db\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kvxpz" Jan 30 17:26:36 crc kubenswrapper[4712]: I0130 17:26:36.535894 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kvxpz" Jan 30 17:26:37 crc kubenswrapper[4712]: I0130 17:26:37.063824 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kvxpz"] Jan 30 17:26:37 crc kubenswrapper[4712]: W0130 17:26:37.066975 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6628bf15_f827_4b97_a95e_7ad66750f5db.slice/crio-afa0858c410f62c605ee58993e8d70d3f4cb65415a791b841f6f88840df3abc8 WatchSource:0}: Error finding container afa0858c410f62c605ee58993e8d70d3f4cb65415a791b841f6f88840df3abc8: Status 404 returned error can't find the container with id afa0858c410f62c605ee58993e8d70d3f4cb65415a791b841f6f88840df3abc8 Jan 30 17:26:37 crc kubenswrapper[4712]: I0130 17:26:37.140949 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kvxpz" event={"ID":"6628bf15-f827-4b97-a95e-7ad66750f5db","Type":"ContainerStarted","Data":"afa0858c410f62c605ee58993e8d70d3f4cb65415a791b841f6f88840df3abc8"} Jan 30 17:26:37 crc kubenswrapper[4712]: I0130 17:26:37.640303 4712 scope.go:117] "RemoveContainer" containerID="0283d97ca21689294cdb94dc91cf892fb5e87038cfc4771591e165d9e33aaff3" Jan 30 17:26:37 crc kubenswrapper[4712]: I0130 17:26:37.678183 4712 scope.go:117] "RemoveContainer" containerID="f526d490a66a83ed7181076e7eb98322fd53568262094785b44fe65d4da82b1c" Jan 30 17:26:37 crc kubenswrapper[4712]: I0130 17:26:37.739061 4712 scope.go:117] "RemoveContainer" containerID="ca02bb819317c75624ea19803cd6304052cb736df006dc13789eab4dbce0eeed" Jan 30 17:26:37 crc kubenswrapper[4712]: I0130 17:26:37.800255 4712 scope.go:117] "RemoveContainer" 
containerID="04defd45460f80104ff8b937c03637087d21d2c8420a9aead154b75962cc56d8" Jan 30 17:26:37 crc kubenswrapper[4712]: I0130 17:26:37.828558 4712 scope.go:117] "RemoveContainer" containerID="aff21b13d905c3dcc1d105927345076671d1cf6986b7a1c1afe3b22e3961b9e2" Jan 30 17:26:38 crc kubenswrapper[4712]: I0130 17:26:38.153292 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kvxpz" event={"ID":"6628bf15-f827-4b97-a95e-7ad66750f5db","Type":"ContainerStarted","Data":"f807b3fd08d706ce48ff6e1f55473421ca3ec900df4757bc7a514d72149deb0a"} Jan 30 17:26:44 crc kubenswrapper[4712]: I0130 17:26:44.033341 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kvxpz" podStartSLOduration=7.576510659 podStartE2EDuration="8.033315177s" podCreationTimestamp="2026-01-30 17:26:36 +0000 UTC" firstStartedPulling="2026-01-30 17:26:37.071494769 +0000 UTC m=+1933.978504238" lastFinishedPulling="2026-01-30 17:26:37.528299287 +0000 UTC m=+1934.435308756" observedRunningTime="2026-01-30 17:26:38.174991076 +0000 UTC m=+1935.082000555" watchObservedRunningTime="2026-01-30 17:26:44.033315177 +0000 UTC m=+1940.940324676" Jan 30 17:26:44 crc kubenswrapper[4712]: I0130 17:26:44.036050 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-7krdw"] Jan 30 17:26:44 crc kubenswrapper[4712]: I0130 17:26:44.045679 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-7krdw"] Jan 30 17:26:45 crc kubenswrapper[4712]: I0130 17:26:45.814702 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c4a03a4-e80d-4605-990f-a242222558bb" path="/var/lib/kubelet/pods/6c4a03a4-e80d-4605-990f-a242222558bb/volumes" Jan 30 17:26:50 crc kubenswrapper[4712]: I0130 17:26:50.030379 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-9gcv2"] Jan 30 17:26:50 crc kubenswrapper[4712]: I0130 17:26:50.037474 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-9gcv2"] Jan 30 17:26:51 crc kubenswrapper[4712]: I0130 17:26:51.812129 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c24ed25-f06f-494d-9fd5-2077c052db31" path="/var/lib/kubelet/pods/3c24ed25-f06f-494d-9fd5-2077c052db31/volumes" Jan 30 17:26:53 crc kubenswrapper[4712]: I0130 17:26:53.028533 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-78jqx"] Jan 30 17:26:53 crc kubenswrapper[4712]: I0130 17:26:53.043069 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-78jqx"] Jan 30 17:26:53 crc kubenswrapper[4712]: I0130 17:26:53.820636 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ef9729d-cbbc-4354-98e4-a9e07651518e" path="/var/lib/kubelet/pods/2ef9729d-cbbc-4354-98e4-a9e07651518e/volumes" Jan 30 17:27:37 crc kubenswrapper[4712]: I0130 17:27:37.966382 4712 scope.go:117] "RemoveContainer" containerID="e264b53f3868c5a390c29891442008f13f5c8c52760ff372f2898b802d090802" Jan 30 17:27:38 crc kubenswrapper[4712]: I0130 17:27:38.005966 4712 scope.go:117] "RemoveContainer" containerID="781f6a5a40a5b3ee9028c8dbd3c9194eaa45a80c3b80beec710f1fe06b502320" Jan 30 17:27:38 crc kubenswrapper[4712]: I0130 17:27:38.055757 4712 scope.go:117] "RemoveContainer" containerID="7044a23a75fa9d1cbe45ab912580d60ec45c452b219704a72a61230af590edd6" Jan 30 17:28:09 crc kubenswrapper[4712]: I0130 17:28:09.080715 4712 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-hcvcv"] Jan 30 17:28:09 crc kubenswrapper[4712]: I0130 17:28:09.099459 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-715d-account-create-update-jq2sd"] Jan 30 17:28:09 crc kubenswrapper[4712]: I0130 17:28:09.119001 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-78xzk"] Jan 30 17:28:09 crc kubenswrapper[4712]: I0130 17:28:09.140164 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-hcvcv"] Jan 30 17:28:09 crc kubenswrapper[4712]: I0130 17:28:09.154620 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-715d-account-create-update-jq2sd"] Jan 30 17:28:09 crc kubenswrapper[4712]: I0130 17:28:09.165193 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-78xzk"] Jan 30 17:28:09 crc kubenswrapper[4712]: I0130 17:28:09.810568 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11d41c7b-df2e-492f-8126-1baa68733039" path="/var/lib/kubelet/pods/11d41c7b-df2e-492f-8126-1baa68733039/volumes" Jan 30 17:28:09 crc kubenswrapper[4712]: I0130 17:28:09.811458 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d5a16c6-950f-48e7-b74e-60e6b6292839" path="/var/lib/kubelet/pods/5d5a16c6-950f-48e7-b74e-60e6b6292839/volumes" Jan 30 17:28:09 crc kubenswrapper[4712]: I0130 17:28:09.812845 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9802a8ce-ca97-435d-b65a-1618358e986f" path="/var/lib/kubelet/pods/9802a8ce-ca97-435d-b65a-1618358e986f/volumes" Jan 30 17:28:10 crc kubenswrapper[4712]: I0130 17:28:10.051926 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-89d2-account-create-update-6kqnb"] Jan 30 17:28:10 crc kubenswrapper[4712]: I0130 17:28:10.071057 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-xd6p5"] Jan 30 17:28:10 crc kubenswrapper[4712]: I0130 17:28:10.086269 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-89d2-account-create-update-6kqnb"] Jan 30 17:28:10 crc kubenswrapper[4712]: I0130 17:28:10.094930 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-xd6p5"] Jan 30 17:28:11 crc kubenswrapper[4712]: I0130 17:28:11.025560 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-60a1-account-create-update-46xff"] Jan 30 17:28:11 crc kubenswrapper[4712]: I0130 17:28:11.033539 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-60a1-account-create-update-46xff"] Jan 30 17:28:11 crc kubenswrapper[4712]: I0130 17:28:11.810087 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a83c5d38-374a-4fd6-9f42-d4e39645b82a" path="/var/lib/kubelet/pods/a83c5d38-374a-4fd6-9f42-d4e39645b82a/volumes" Jan 30 17:28:11 crc kubenswrapper[4712]: I0130 17:28:11.810764 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c96912dc-64a4-4735-91b2-ff0d019b8aa3" path="/var/lib/kubelet/pods/c96912dc-64a4-4735-91b2-ff0d019b8aa3/volumes" Jan 30 17:28:11 crc kubenswrapper[4712]: I0130 17:28:11.811456 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e500689a-cff5-4b5b-a031-a03709fb811d" path="/var/lib/kubelet/pods/e500689a-cff5-4b5b-a031-a03709fb811d/volumes" Jan 30 17:28:31 crc kubenswrapper[4712]: I0130 17:28:31.223984 4712 generic.go:334] 
"Generic (PLEG): container finished" podID="6628bf15-f827-4b97-a95e-7ad66750f5db" containerID="f807b3fd08d706ce48ff6e1f55473421ca3ec900df4757bc7a514d72149deb0a" exitCode=0 Jan 30 17:28:31 crc kubenswrapper[4712]: I0130 17:28:31.224074 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kvxpz" event={"ID":"6628bf15-f827-4b97-a95e-7ad66750f5db","Type":"ContainerDied","Data":"f807b3fd08d706ce48ff6e1f55473421ca3ec900df4757bc7a514d72149deb0a"} Jan 30 17:28:32 crc kubenswrapper[4712]: I0130 17:28:32.659170 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kvxpz" Jan 30 17:28:32 crc kubenswrapper[4712]: I0130 17:28:32.765389 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6628bf15-f827-4b97-a95e-7ad66750f5db-inventory\") pod \"6628bf15-f827-4b97-a95e-7ad66750f5db\" (UID: \"6628bf15-f827-4b97-a95e-7ad66750f5db\") " Jan 30 17:28:32 crc kubenswrapper[4712]: I0130 17:28:32.765483 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6628bf15-f827-4b97-a95e-7ad66750f5db-ssh-key-openstack-edpm-ipam\") pod \"6628bf15-f827-4b97-a95e-7ad66750f5db\" (UID: \"6628bf15-f827-4b97-a95e-7ad66750f5db\") " Jan 30 17:28:32 crc kubenswrapper[4712]: I0130 17:28:32.765570 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9v72f\" (UniqueName: \"kubernetes.io/projected/6628bf15-f827-4b97-a95e-7ad66750f5db-kube-api-access-9v72f\") pod \"6628bf15-f827-4b97-a95e-7ad66750f5db\" (UID: \"6628bf15-f827-4b97-a95e-7ad66750f5db\") " Jan 30 17:28:32 crc kubenswrapper[4712]: I0130 17:28:32.770625 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6628bf15-f827-4b97-a95e-7ad66750f5db-kube-api-access-9v72f" (OuterVolumeSpecName: "kube-api-access-9v72f") pod "6628bf15-f827-4b97-a95e-7ad66750f5db" (UID: "6628bf15-f827-4b97-a95e-7ad66750f5db"). InnerVolumeSpecName "kube-api-access-9v72f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:28:32 crc kubenswrapper[4712]: I0130 17:28:32.801397 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6628bf15-f827-4b97-a95e-7ad66750f5db-inventory" (OuterVolumeSpecName: "inventory") pod "6628bf15-f827-4b97-a95e-7ad66750f5db" (UID: "6628bf15-f827-4b97-a95e-7ad66750f5db"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:28:32 crc kubenswrapper[4712]: I0130 17:28:32.810831 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6628bf15-f827-4b97-a95e-7ad66750f5db-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "6628bf15-f827-4b97-a95e-7ad66750f5db" (UID: "6628bf15-f827-4b97-a95e-7ad66750f5db"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:28:32 crc kubenswrapper[4712]: I0130 17:28:32.868026 4712 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/6628bf15-f827-4b97-a95e-7ad66750f5db-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 17:28:32 crc kubenswrapper[4712]: I0130 17:28:32.868071 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9v72f\" (UniqueName: \"kubernetes.io/projected/6628bf15-f827-4b97-a95e-7ad66750f5db-kube-api-access-9v72f\") on node \"crc\" DevicePath \"\"" Jan 30 17:28:32 crc kubenswrapper[4712]: I0130 17:28:32.868083 4712 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/6628bf15-f827-4b97-a95e-7ad66750f5db-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 17:28:33 crc kubenswrapper[4712]: I0130 17:28:33.245487 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kvxpz" event={"ID":"6628bf15-f827-4b97-a95e-7ad66750f5db","Type":"ContainerDied","Data":"afa0858c410f62c605ee58993e8d70d3f4cb65415a791b841f6f88840df3abc8"} Jan 30 17:28:33 crc kubenswrapper[4712]: I0130 17:28:33.245525 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afa0858c410f62c605ee58993e8d70d3f4cb65415a791b841f6f88840df3abc8" Jan 30 17:28:33 crc kubenswrapper[4712]: I0130 17:28:33.245575 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-kvxpz" Jan 30 17:28:33 crc kubenswrapper[4712]: I0130 17:28:33.343619 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ldwj4"] Jan 30 17:28:33 crc kubenswrapper[4712]: E0130 17:28:33.343997 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6628bf15-f827-4b97-a95e-7ad66750f5db" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 30 17:28:33 crc kubenswrapper[4712]: I0130 17:28:33.344013 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="6628bf15-f827-4b97-a95e-7ad66750f5db" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 30 17:28:33 crc kubenswrapper[4712]: I0130 17:28:33.344209 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="6628bf15-f827-4b97-a95e-7ad66750f5db" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 30 17:28:33 crc kubenswrapper[4712]: I0130 17:28:33.344818 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ldwj4" Jan 30 17:28:33 crc kubenswrapper[4712]: I0130 17:28:33.348218 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 17:28:33 crc kubenswrapper[4712]: I0130 17:28:33.348364 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 17:28:33 crc kubenswrapper[4712]: I0130 17:28:33.348558 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 17:28:33 crc kubenswrapper[4712]: I0130 17:28:33.352258 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t6jfh" Jan 30 17:28:33 crc kubenswrapper[4712]: I0130 17:28:33.366102 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ldwj4"] Jan 30 17:28:33 crc kubenswrapper[4712]: I0130 17:28:33.482363 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f19f0b0d-9323-44d3-9098-0b0e462f4015-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ldwj4\" (UID: \"f19f0b0d-9323-44d3-9098-0b0e462f4015\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ldwj4" Jan 30 17:28:33 crc kubenswrapper[4712]: I0130 17:28:33.482511 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzgs9\" (UniqueName: \"kubernetes.io/projected/f19f0b0d-9323-44d3-9098-0b0e462f4015-kube-api-access-rzgs9\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ldwj4\" (UID: \"f19f0b0d-9323-44d3-9098-0b0e462f4015\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ldwj4" Jan 30 17:28:33 crc kubenswrapper[4712]: I0130 17:28:33.482544 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f19f0b0d-9323-44d3-9098-0b0e462f4015-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ldwj4\" (UID: \"f19f0b0d-9323-44d3-9098-0b0e462f4015\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ldwj4" Jan 30 17:28:33 crc kubenswrapper[4712]: I0130 17:28:33.584041 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f19f0b0d-9323-44d3-9098-0b0e462f4015-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ldwj4\" (UID: \"f19f0b0d-9323-44d3-9098-0b0e462f4015\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ldwj4" Jan 30 17:28:33 crc kubenswrapper[4712]: I0130 17:28:33.584128 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzgs9\" (UniqueName: \"kubernetes.io/projected/f19f0b0d-9323-44d3-9098-0b0e462f4015-kube-api-access-rzgs9\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ldwj4\" (UID: \"f19f0b0d-9323-44d3-9098-0b0e462f4015\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ldwj4" Jan 30 17:28:33 crc kubenswrapper[4712]: I0130 17:28:33.584165 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/f19f0b0d-9323-44d3-9098-0b0e462f4015-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ldwj4\" (UID: \"f19f0b0d-9323-44d3-9098-0b0e462f4015\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ldwj4" Jan 30 17:28:33 crc kubenswrapper[4712]: I0130 17:28:33.592156 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f19f0b0d-9323-44d3-9098-0b0e462f4015-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ldwj4\" (UID: \"f19f0b0d-9323-44d3-9098-0b0e462f4015\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ldwj4" Jan 30 17:28:33 crc kubenswrapper[4712]: I0130 17:28:33.617663 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f19f0b0d-9323-44d3-9098-0b0e462f4015-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ldwj4\" (UID: \"f19f0b0d-9323-44d3-9098-0b0e462f4015\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ldwj4" Jan 30 17:28:33 crc kubenswrapper[4712]: I0130 17:28:33.619481 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzgs9\" (UniqueName: \"kubernetes.io/projected/f19f0b0d-9323-44d3-9098-0b0e462f4015-kube-api-access-rzgs9\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-ldwj4\" (UID: \"f19f0b0d-9323-44d3-9098-0b0e462f4015\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ldwj4" Jan 30 17:28:33 crc kubenswrapper[4712]: I0130 17:28:33.670514 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ldwj4" Jan 30 17:28:34 crc kubenswrapper[4712]: I0130 17:28:34.370260 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ldwj4"] Jan 30 17:28:34 crc kubenswrapper[4712]: I0130 17:28:34.389054 4712 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 17:28:35 crc kubenswrapper[4712]: I0130 17:28:35.264926 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ldwj4" event={"ID":"f19f0b0d-9323-44d3-9098-0b0e462f4015","Type":"ContainerStarted","Data":"54e0039d31092e810e8282f3d154eff2b7f4011f0d8286912c8d9e186d9c0363"} Jan 30 17:28:35 crc kubenswrapper[4712]: I0130 17:28:35.265281 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ldwj4" event={"ID":"f19f0b0d-9323-44d3-9098-0b0e462f4015","Type":"ContainerStarted","Data":"ccfee89eefb444f12261ee6ffa7249a985bf799d9e3f2e2b0faa2907398db4d8"} Jan 30 17:28:35 crc kubenswrapper[4712]: I0130 17:28:35.289812 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ldwj4" podStartSLOduration=1.810695805 podStartE2EDuration="2.28977347s" podCreationTimestamp="2026-01-30 17:28:33 +0000 UTC" firstStartedPulling="2026-01-30 17:28:34.38873166 +0000 UTC m=+2051.295741129" lastFinishedPulling="2026-01-30 17:28:34.867809325 +0000 UTC m=+2051.774818794" observedRunningTime="2026-01-30 17:28:35.283729313 +0000 UTC m=+2052.190738782" watchObservedRunningTime="2026-01-30 17:28:35.28977347 +0000 UTC m=+2052.196782939" 
Jan 30 17:28:36 crc kubenswrapper[4712]: I0130 17:28:36.271869 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:28:36 crc kubenswrapper[4712]: I0130 17:28:36.271934 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:28:38 crc kubenswrapper[4712]: I0130 17:28:38.153789 4712 scope.go:117] "RemoveContainer" containerID="e8a403e45589c10bd99804c2ffa645dab7bec60cdf9f427c91f2a72241356e18" Jan 30 17:28:38 crc kubenswrapper[4712]: I0130 17:28:38.188490 4712 scope.go:117] "RemoveContainer" containerID="4d4d19ab8ff53bb55033e0bc2a56db9628d47ba3610f77f75406fee9f1fadc76" Jan 30 17:28:38 crc kubenswrapper[4712]: I0130 17:28:38.240648 4712 scope.go:117] "RemoveContainer" containerID="f9597b3327b94333ca0b4e138b843698c5cb1c4a92db1a813842f407a5d1d8d7" Jan 30 17:28:38 crc kubenswrapper[4712]: I0130 17:28:38.298332 4712 scope.go:117] "RemoveContainer" containerID="635ba5d0f7f08932037ddc74a516445665ad1b86aad2e0c42bc70a0071655376" Jan 30 17:28:38 crc kubenswrapper[4712]: I0130 17:28:38.362335 4712 scope.go:117] "RemoveContainer" containerID="e39d4ac63eb0ecfd0af243192550a0794f079b76e188700544f8d29ed946c213" Jan 30 17:28:38 crc kubenswrapper[4712]: I0130 17:28:38.413408 4712 scope.go:117] "RemoveContainer" containerID="bb4b0e5720d1e9ed1dcc175c58957f32afd891c9f742c697864262c4a51c7e63" Jan 30 17:29:06 crc kubenswrapper[4712]: I0130 17:29:06.271368 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:29:06 crc kubenswrapper[4712]: I0130 17:29:06.273140 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:29:25 crc kubenswrapper[4712]: I0130 17:29:25.044542 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7mlh5"] Jan 30 17:29:25 crc kubenswrapper[4712]: I0130 17:29:25.057085 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-7mlh5"] Jan 30 17:29:25 crc kubenswrapper[4712]: I0130 17:29:25.822543 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93b068f3-6243-416f-b7d5-4d0eaff334cf" path="/var/lib/kubelet/pods/93b068f3-6243-416f-b7d5-4d0eaff334cf/volumes" Jan 30 17:29:36 crc kubenswrapper[4712]: I0130 17:29:36.271638 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:29:36 crc kubenswrapper[4712]: 
Jan 30 17:29:36 crc kubenswrapper[4712]: I0130 17:29:36.272236 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7"
Jan 30 17:29:36 crc kubenswrapper[4712]: I0130 17:29:36.273022 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6bff8f420280843f1dcba83eeb7d6607277904ab6cbb2965c10673f888b9f646"} pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 17:29:36 crc kubenswrapper[4712]: I0130 17:29:36.273087 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" containerID="cri-o://6bff8f420280843f1dcba83eeb7d6607277904ab6cbb2965c10673f888b9f646" gracePeriod=600
Jan 30 17:29:36 crc kubenswrapper[4712]: I0130 17:29:36.847215 4712 generic.go:334] "Generic (PLEG): container finished" podID="75ff6334-72a0-4748-bba6-0efb493c8033" containerID="6bff8f420280843f1dcba83eeb7d6607277904ab6cbb2965c10673f888b9f646" exitCode=0
Jan 30 17:29:36 crc kubenswrapper[4712]: I0130 17:29:36.847298 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerDied","Data":"6bff8f420280843f1dcba83eeb7d6607277904ab6cbb2965c10673f888b9f646"}
Jan 30 17:29:36 crc kubenswrapper[4712]: I0130 17:29:36.847581 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerStarted","Data":"27f55a4fcc827d7e5846a8612943379e2490a479d31d597438128691cc43010d"}
Jan 30 17:29:36 crc kubenswrapper[4712]: I0130 17:29:36.847709 4712 scope.go:117] "RemoveContainer" containerID="261650ec2eba774e22792202e452b7b775a9451e27eb25c95a23e9f7a7f63330"
Jan 30 17:29:38 crc kubenswrapper[4712]: I0130 17:29:38.627405 4712 scope.go:117] "RemoveContainer" containerID="a654632715e5ae6e76f3e63bb9ef2c566815550772d17f51224f70eb5b9e515b"
Jan 30 17:29:46 crc kubenswrapper[4712]: I0130 17:29:46.608849 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t6mf6"]
Jan 30 17:29:46 crc kubenswrapper[4712]: I0130 17:29:46.611624 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t6mf6"
Jan 30 17:29:46 crc kubenswrapper[4712]: I0130 17:29:46.635144 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t6mf6"]
Jan 30 17:29:46 crc kubenswrapper[4712]: I0130 17:29:46.744637 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff320889-d8d3-4c5d-90cc-f8e655996a5c-catalog-content\") pod \"redhat-operators-t6mf6\" (UID: \"ff320889-d8d3-4c5d-90cc-f8e655996a5c\") " pod="openshift-marketplace/redhat-operators-t6mf6"
Jan 30 17:29:46 crc kubenswrapper[4712]: I0130 17:29:46.744780 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff320889-d8d3-4c5d-90cc-f8e655996a5c-utilities\") pod \"redhat-operators-t6mf6\" (UID: \"ff320889-d8d3-4c5d-90cc-f8e655996a5c\") " pod="openshift-marketplace/redhat-operators-t6mf6"
Jan 30 17:29:46 crc kubenswrapper[4712]: I0130 17:29:46.744930 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsf2z\" (UniqueName: \"kubernetes.io/projected/ff320889-d8d3-4c5d-90cc-f8e655996a5c-kube-api-access-fsf2z\") pod \"redhat-operators-t6mf6\" (UID: \"ff320889-d8d3-4c5d-90cc-f8e655996a5c\") " pod="openshift-marketplace/redhat-operators-t6mf6"
Jan 30 17:29:46 crc kubenswrapper[4712]: I0130 17:29:46.846767 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff320889-d8d3-4c5d-90cc-f8e655996a5c-utilities\") pod \"redhat-operators-t6mf6\" (UID: \"ff320889-d8d3-4c5d-90cc-f8e655996a5c\") " pod="openshift-marketplace/redhat-operators-t6mf6"
Jan 30 17:29:46 crc kubenswrapper[4712]: I0130 17:29:46.846943 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsf2z\" (UniqueName: \"kubernetes.io/projected/ff320889-d8d3-4c5d-90cc-f8e655996a5c-kube-api-access-fsf2z\") pod \"redhat-operators-t6mf6\" (UID: \"ff320889-d8d3-4c5d-90cc-f8e655996a5c\") " pod="openshift-marketplace/redhat-operators-t6mf6"
Jan 30 17:29:46 crc kubenswrapper[4712]: I0130 17:29:46.847052 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff320889-d8d3-4c5d-90cc-f8e655996a5c-catalog-content\") pod \"redhat-operators-t6mf6\" (UID: \"ff320889-d8d3-4c5d-90cc-f8e655996a5c\") " pod="openshift-marketplace/redhat-operators-t6mf6"
Jan 30 17:29:46 crc kubenswrapper[4712]: I0130 17:29:46.847331 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff320889-d8d3-4c5d-90cc-f8e655996a5c-utilities\") pod \"redhat-operators-t6mf6\" (UID: \"ff320889-d8d3-4c5d-90cc-f8e655996a5c\") " pod="openshift-marketplace/redhat-operators-t6mf6"
Jan 30 17:29:46 crc kubenswrapper[4712]: I0130 17:29:46.847493 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff320889-d8d3-4c5d-90cc-f8e655996a5c-catalog-content\") pod \"redhat-operators-t6mf6\" (UID: \"ff320889-d8d3-4c5d-90cc-f8e655996a5c\") " pod="openshift-marketplace/redhat-operators-t6mf6"
Jan 30 17:29:46 crc kubenswrapper[4712]: I0130 17:29:46.882928 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsf2z\" (UniqueName: \"kubernetes.io/projected/ff320889-d8d3-4c5d-90cc-f8e655996a5c-kube-api-access-fsf2z\") pod \"redhat-operators-t6mf6\" (UID: \"ff320889-d8d3-4c5d-90cc-f8e655996a5c\") " pod="openshift-marketplace/redhat-operators-t6mf6"
Jan 30 17:29:46 crc kubenswrapper[4712]: I0130 17:29:46.930846 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t6mf6"
Jan 30 17:29:47 crc kubenswrapper[4712]: I0130 17:29:47.449941 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t6mf6"]
Jan 30 17:29:47 crc kubenswrapper[4712]: I0130 17:29:47.949674 4712 generic.go:334] "Generic (PLEG): container finished" podID="ff320889-d8d3-4c5d-90cc-f8e655996a5c" containerID="2ff891d775ad91909a013f32d02c110d19ee6de5dea676c9ee36a839e0e9095a" exitCode=0
Jan 30 17:29:47 crc kubenswrapper[4712]: I0130 17:29:47.949724 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6mf6" event={"ID":"ff320889-d8d3-4c5d-90cc-f8e655996a5c","Type":"ContainerDied","Data":"2ff891d775ad91909a013f32d02c110d19ee6de5dea676c9ee36a839e0e9095a"}
Jan 30 17:29:47 crc kubenswrapper[4712]: I0130 17:29:47.949754 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6mf6" event={"ID":"ff320889-d8d3-4c5d-90cc-f8e655996a5c","Type":"ContainerStarted","Data":"4c5d821d62b7a6daa52b8f6084a5a4b7cc0a0bebf21efefe4009948e748816a2"}
Jan 30 17:29:48 crc kubenswrapper[4712]: I0130 17:29:48.977929 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6mf6" event={"ID":"ff320889-d8d3-4c5d-90cc-f8e655996a5c","Type":"ContainerStarted","Data":"ba34f76c8fc93f5d005ab213eb9635a2cebc82870d785c37d2a94c761f42c0f1"}
Jan 30 17:29:54 crc kubenswrapper[4712]: I0130 17:29:54.032132 4712 generic.go:334] "Generic (PLEG): container finished" podID="f19f0b0d-9323-44d3-9098-0b0e462f4015" containerID="54e0039d31092e810e8282f3d154eff2b7f4011f0d8286912c8d9e186d9c0363" exitCode=0
Jan 30 17:29:54 crc kubenswrapper[4712]: I0130 17:29:54.032309 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ldwj4" event={"ID":"f19f0b0d-9323-44d3-9098-0b0e462f4015","Type":"ContainerDied","Data":"54e0039d31092e810e8282f3d154eff2b7f4011f0d8286912c8d9e186d9c0363"}
Jan 30 17:29:55 crc kubenswrapper[4712]: I0130 17:29:55.475655 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ldwj4"
Jan 30 17:29:55 crc kubenswrapper[4712]: I0130 17:29:55.515956 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzgs9\" (UniqueName: \"kubernetes.io/projected/f19f0b0d-9323-44d3-9098-0b0e462f4015-kube-api-access-rzgs9\") pod \"f19f0b0d-9323-44d3-9098-0b0e462f4015\" (UID: \"f19f0b0d-9323-44d3-9098-0b0e462f4015\") "
Jan 30 17:29:55 crc kubenswrapper[4712]: I0130 17:29:55.516079 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f19f0b0d-9323-44d3-9098-0b0e462f4015-inventory\") pod \"f19f0b0d-9323-44d3-9098-0b0e462f4015\" (UID: \"f19f0b0d-9323-44d3-9098-0b0e462f4015\") "
Jan 30 17:29:55 crc kubenswrapper[4712]: I0130 17:29:55.516261 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f19f0b0d-9323-44d3-9098-0b0e462f4015-ssh-key-openstack-edpm-ipam\") pod \"f19f0b0d-9323-44d3-9098-0b0e462f4015\" (UID: \"f19f0b0d-9323-44d3-9098-0b0e462f4015\") "
Jan 30 17:29:55 crc kubenswrapper[4712]: I0130 17:29:55.524255 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f19f0b0d-9323-44d3-9098-0b0e462f4015-kube-api-access-rzgs9" (OuterVolumeSpecName: "kube-api-access-rzgs9") pod "f19f0b0d-9323-44d3-9098-0b0e462f4015" (UID: "f19f0b0d-9323-44d3-9098-0b0e462f4015"). InnerVolumeSpecName "kube-api-access-rzgs9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:29:55 crc kubenswrapper[4712]: I0130 17:29:55.559116 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f19f0b0d-9323-44d3-9098-0b0e462f4015-inventory" (OuterVolumeSpecName: "inventory") pod "f19f0b0d-9323-44d3-9098-0b0e462f4015" (UID: "f19f0b0d-9323-44d3-9098-0b0e462f4015"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:29:55 crc kubenswrapper[4712]: I0130 17:29:55.560923 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f19f0b0d-9323-44d3-9098-0b0e462f4015-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f19f0b0d-9323-44d3-9098-0b0e462f4015" (UID: "f19f0b0d-9323-44d3-9098-0b0e462f4015"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:29:55 crc kubenswrapper[4712]: I0130 17:29:55.619251 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzgs9\" (UniqueName: \"kubernetes.io/projected/f19f0b0d-9323-44d3-9098-0b0e462f4015-kube-api-access-rzgs9\") on node \"crc\" DevicePath \"\""
Jan 30 17:29:55 crc kubenswrapper[4712]: I0130 17:29:55.619311 4712 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f19f0b0d-9323-44d3-9098-0b0e462f4015-inventory\") on node \"crc\" DevicePath \"\""
Jan 30 17:29:55 crc kubenswrapper[4712]: I0130 17:29:55.619322 4712 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f19f0b0d-9323-44d3-9098-0b0e462f4015-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 30 17:29:56 crc kubenswrapper[4712]: I0130 17:29:56.055692 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ldwj4" event={"ID":"f19f0b0d-9323-44d3-9098-0b0e462f4015","Type":"ContainerDied","Data":"ccfee89eefb444f12261ee6ffa7249a985bf799d9e3f2e2b0faa2907398db4d8"}
Jan 30 17:29:56 crc kubenswrapper[4712]: I0130 17:29:56.055744 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ccfee89eefb444f12261ee6ffa7249a985bf799d9e3f2e2b0faa2907398db4d8"
Jan 30 17:29:56 crc kubenswrapper[4712]: I0130 17:29:56.055779 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-ldwj4"
Jan 30 17:29:56 crc kubenswrapper[4712]: I0130 17:29:56.061190 4712 generic.go:334] "Generic (PLEG): container finished" podID="ff320889-d8d3-4c5d-90cc-f8e655996a5c" containerID="ba34f76c8fc93f5d005ab213eb9635a2cebc82870d785c37d2a94c761f42c0f1" exitCode=0
Jan 30 17:29:56 crc kubenswrapper[4712]: I0130 17:29:56.061240 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6mf6" event={"ID":"ff320889-d8d3-4c5d-90cc-f8e655996a5c","Type":"ContainerDied","Data":"ba34f76c8fc93f5d005ab213eb9635a2cebc82870d785c37d2a94c761f42c0f1"}
Jan 30 17:29:56 crc kubenswrapper[4712]: I0130 17:29:56.073549 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-x27hm"]
Jan 30 17:29:56 crc kubenswrapper[4712]: I0130 17:29:56.102718 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-x27hm"]
Jan 30 17:29:56 crc kubenswrapper[4712]: I0130 17:29:56.148748 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jpzf8"]
Jan 30 17:29:56 crc kubenswrapper[4712]: E0130 17:29:56.149165 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f19f0b0d-9323-44d3-9098-0b0e462f4015" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 30 17:29:56 crc kubenswrapper[4712]: I0130 17:29:56.149183 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="f19f0b0d-9323-44d3-9098-0b0e462f4015" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 30 17:29:56 crc kubenswrapper[4712]: I0130 17:29:56.149393 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="f19f0b0d-9323-44d3-9098-0b0e462f4015" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 30 17:29:56 crc kubenswrapper[4712]: I0130 17:29:56.150100 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jpzf8"
Jan 30 17:29:56 crc kubenswrapper[4712]: I0130 17:29:56.153093 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 30 17:29:56 crc kubenswrapper[4712]: I0130 17:29:56.153661 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 30 17:29:56 crc kubenswrapper[4712]: I0130 17:29:56.153862 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t6jfh"
Jan 30 17:29:56 crc kubenswrapper[4712]: I0130 17:29:56.153964 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 30 17:29:56 crc kubenswrapper[4712]: I0130 17:29:56.178170 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jpzf8"]
Jan 30 17:29:56 crc kubenswrapper[4712]: I0130 17:29:56.331749 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d454\" (UniqueName: \"kubernetes.io/projected/96e8f776-9933-4f80-91dd-fefa02de47ec-kube-api-access-8d454\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jpzf8\" (UID: \"96e8f776-9933-4f80-91dd-fefa02de47ec\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jpzf8"
Jan 30 17:29:56 crc kubenswrapper[4712]: I0130 17:29:56.331841 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96e8f776-9933-4f80-91dd-fefa02de47ec-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jpzf8\" (UID: \"96e8f776-9933-4f80-91dd-fefa02de47ec\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jpzf8"
Jan 30 17:29:56 crc kubenswrapper[4712]: I0130 17:29:56.331927 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96e8f776-9933-4f80-91dd-fefa02de47ec-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jpzf8\" (UID: \"96e8f776-9933-4f80-91dd-fefa02de47ec\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jpzf8"
Jan 30 17:29:56 crc kubenswrapper[4712]: I0130 17:29:56.436601 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d454\" (UniqueName: \"kubernetes.io/projected/96e8f776-9933-4f80-91dd-fefa02de47ec-kube-api-access-8d454\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jpzf8\" (UID: \"96e8f776-9933-4f80-91dd-fefa02de47ec\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jpzf8"
Jan 30 17:29:56 crc kubenswrapper[4712]: I0130 17:29:56.436665 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96e8f776-9933-4f80-91dd-fefa02de47ec-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jpzf8\" (UID: \"96e8f776-9933-4f80-91dd-fefa02de47ec\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jpzf8"
Jan 30 17:29:56 crc kubenswrapper[4712]: I0130 17:29:56.436733 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96e8f776-9933-4f80-91dd-fefa02de47ec-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jpzf8\" (UID: \"96e8f776-9933-4f80-91dd-fefa02de47ec\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jpzf8"
\"inventory\" (UniqueName: \"kubernetes.io/secret/96e8f776-9933-4f80-91dd-fefa02de47ec-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jpzf8\" (UID: \"96e8f776-9933-4f80-91dd-fefa02de47ec\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jpzf8" Jan 30 17:29:56 crc kubenswrapper[4712]: I0130 17:29:56.442696 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96e8f776-9933-4f80-91dd-fefa02de47ec-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jpzf8\" (UID: \"96e8f776-9933-4f80-91dd-fefa02de47ec\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jpzf8" Jan 30 17:29:56 crc kubenswrapper[4712]: I0130 17:29:56.443275 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96e8f776-9933-4f80-91dd-fefa02de47ec-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jpzf8\" (UID: \"96e8f776-9933-4f80-91dd-fefa02de47ec\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jpzf8" Jan 30 17:29:56 crc kubenswrapper[4712]: I0130 17:29:56.451465 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d454\" (UniqueName: \"kubernetes.io/projected/96e8f776-9933-4f80-91dd-fefa02de47ec-kube-api-access-8d454\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-jpzf8\" (UID: \"96e8f776-9933-4f80-91dd-fefa02de47ec\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jpzf8" Jan 30 17:29:56 crc kubenswrapper[4712]: I0130 17:29:56.464835 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jpzf8" Jan 30 17:29:57 crc kubenswrapper[4712]: I0130 17:29:57.016908 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jpzf8"] Jan 30 17:29:57 crc kubenswrapper[4712]: W0130 17:29:57.032566 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96e8f776_9933_4f80_91dd_fefa02de47ec.slice/crio-61449e6d210e8a75f91b0d3a96191ef8267d95d1d0933bb1da1592ef08951663 WatchSource:0}: Error finding container 61449e6d210e8a75f91b0d3a96191ef8267d95d1d0933bb1da1592ef08951663: Status 404 returned error can't find the container with id 61449e6d210e8a75f91b0d3a96191ef8267d95d1d0933bb1da1592ef08951663 Jan 30 17:29:57 crc kubenswrapper[4712]: I0130 17:29:57.071903 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jpzf8" event={"ID":"96e8f776-9933-4f80-91dd-fefa02de47ec","Type":"ContainerStarted","Data":"61449e6d210e8a75f91b0d3a96191ef8267d95d1d0933bb1da1592ef08951663"} Jan 30 17:29:57 crc kubenswrapper[4712]: I0130 17:29:57.074311 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6mf6" event={"ID":"ff320889-d8d3-4c5d-90cc-f8e655996a5c","Type":"ContainerStarted","Data":"e1cef446ab47db424f8156691f50f4251fbdd6685da0d25c89cc640d0218a3aa"} Jan 30 17:29:57 crc kubenswrapper[4712]: I0130 17:29:57.101270 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t6mf6" podStartSLOduration=2.534563725 podStartE2EDuration="11.101249974s" podCreationTimestamp="2026-01-30 17:29:46 
+0000 UTC" firstStartedPulling="2026-01-30 17:29:47.957957087 +0000 UTC m=+2124.864966556" lastFinishedPulling="2026-01-30 17:29:56.524643346 +0000 UTC m=+2133.431652805" observedRunningTime="2026-01-30 17:29:57.097675033 +0000 UTC m=+2134.004684502" watchObservedRunningTime="2026-01-30 17:29:57.101249974 +0000 UTC m=+2134.008259443" Jan 30 17:29:57 crc kubenswrapper[4712]: I0130 17:29:57.810194 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f564ed01-d852-40b5-853f-f79a37a114dc" path="/var/lib/kubelet/pods/f564ed01-d852-40b5-853f-f79a37a114dc/volumes" Jan 30 17:29:58 crc kubenswrapper[4712]: I0130 17:29:58.042659 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5cwz8"] Jan 30 17:29:58 crc kubenswrapper[4712]: I0130 17:29:58.055216 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-5cwz8"] Jan 30 17:29:58 crc kubenswrapper[4712]: I0130 17:29:58.084905 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jpzf8" event={"ID":"96e8f776-9933-4f80-91dd-fefa02de47ec","Type":"ContainerStarted","Data":"71b32b2a5ab47f22756a3c317f60dd98b766e8c47882a382b080eedcf9de9340"} Jan 30 17:29:58 crc kubenswrapper[4712]: I0130 17:29:58.105481 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jpzf8" podStartSLOduration=1.6638486989999999 podStartE2EDuration="2.105457995s" podCreationTimestamp="2026-01-30 17:29:56 +0000 UTC" firstStartedPulling="2026-01-30 17:29:57.034535206 +0000 UTC m=+2133.941544675" lastFinishedPulling="2026-01-30 17:29:57.476144502 +0000 UTC m=+2134.383153971" observedRunningTime="2026-01-30 17:29:58.101783893 +0000 UTC m=+2135.008793362" watchObservedRunningTime="2026-01-30 17:29:58.105457995 +0000 UTC m=+2135.012467474" Jan 30 17:29:59 crc kubenswrapper[4712]: I0130 17:29:59.813111 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3709de8-50e0-480b-a152-ee1875e8ff4f" path="/var/lib/kubelet/pods/a3709de8-50e0-480b-a152-ee1875e8ff4f/volumes" Jan 30 17:30:00 crc kubenswrapper[4712]: I0130 17:30:00.152490 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496570-qjd59"] Jan 30 17:30:00 crc kubenswrapper[4712]: I0130 17:30:00.153711 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-qjd59" Jan 30 17:30:00 crc kubenswrapper[4712]: I0130 17:30:00.156705 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 17:30:00 crc kubenswrapper[4712]: I0130 17:30:00.156774 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 17:30:00 crc kubenswrapper[4712]: I0130 17:30:00.163724 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496570-qjd59"] Jan 30 17:30:00 crc kubenswrapper[4712]: I0130 17:30:00.218809 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fdadb9b1-d191-4f65-980f-c8681e9981d4-config-volume\") pod \"collect-profiles-29496570-qjd59\" (UID: \"fdadb9b1-d191-4f65-980f-c8681e9981d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-qjd59" Jan 30 17:30:00 crc kubenswrapper[4712]: I0130 17:30:00.218847 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss5bn\" (UniqueName: \"kubernetes.io/projected/fdadb9b1-d191-4f65-980f-c8681e9981d4-kube-api-access-ss5bn\") pod \"collect-profiles-29496570-qjd59\" (UID: \"fdadb9b1-d191-4f65-980f-c8681e9981d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-qjd59" Jan 30 17:30:00 crc kubenswrapper[4712]: I0130 17:30:00.218875 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fdadb9b1-d191-4f65-980f-c8681e9981d4-secret-volume\") pod \"collect-profiles-29496570-qjd59\" (UID: \"fdadb9b1-d191-4f65-980f-c8681e9981d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-qjd59" Jan 30 17:30:00 crc kubenswrapper[4712]: I0130 17:30:00.321132 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fdadb9b1-d191-4f65-980f-c8681e9981d4-config-volume\") pod \"collect-profiles-29496570-qjd59\" (UID: \"fdadb9b1-d191-4f65-980f-c8681e9981d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-qjd59" Jan 30 17:30:00 crc kubenswrapper[4712]: I0130 17:30:00.321495 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss5bn\" (UniqueName: \"kubernetes.io/projected/fdadb9b1-d191-4f65-980f-c8681e9981d4-kube-api-access-ss5bn\") pod \"collect-profiles-29496570-qjd59\" (UID: \"fdadb9b1-d191-4f65-980f-c8681e9981d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-qjd59" Jan 30 17:30:00 crc kubenswrapper[4712]: I0130 17:30:00.321541 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fdadb9b1-d191-4f65-980f-c8681e9981d4-secret-volume\") pod \"collect-profiles-29496570-qjd59\" (UID: \"fdadb9b1-d191-4f65-980f-c8681e9981d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-qjd59" Jan 30 17:30:00 crc kubenswrapper[4712]: I0130 17:30:00.321968 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fdadb9b1-d191-4f65-980f-c8681e9981d4-config-volume\") pod 
\"collect-profiles-29496570-qjd59\" (UID: \"fdadb9b1-d191-4f65-980f-c8681e9981d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-qjd59" Jan 30 17:30:00 crc kubenswrapper[4712]: I0130 17:30:00.329497 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fdadb9b1-d191-4f65-980f-c8681e9981d4-secret-volume\") pod \"collect-profiles-29496570-qjd59\" (UID: \"fdadb9b1-d191-4f65-980f-c8681e9981d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-qjd59" Jan 30 17:30:00 crc kubenswrapper[4712]: I0130 17:30:00.354299 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss5bn\" (UniqueName: \"kubernetes.io/projected/fdadb9b1-d191-4f65-980f-c8681e9981d4-kube-api-access-ss5bn\") pod \"collect-profiles-29496570-qjd59\" (UID: \"fdadb9b1-d191-4f65-980f-c8681e9981d4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-qjd59" Jan 30 17:30:00 crc kubenswrapper[4712]: I0130 17:30:00.475998 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-qjd59" Jan 30 17:30:01 crc kubenswrapper[4712]: I0130 17:30:01.033163 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496570-qjd59"] Jan 30 17:30:01 crc kubenswrapper[4712]: I0130 17:30:01.135510 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-qjd59" event={"ID":"fdadb9b1-d191-4f65-980f-c8681e9981d4","Type":"ContainerStarted","Data":"e28b2a84fc337bf0f3dd7d55e8c8f946318367586e2dc30b2f4b165062dc1e4a"} Jan 30 17:30:02 crc kubenswrapper[4712]: I0130 17:30:02.148686 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-qjd59" event={"ID":"fdadb9b1-d191-4f65-980f-c8681e9981d4","Type":"ContainerStarted","Data":"2b1534ef9dbd422de81819a030f5230f577f951672338f34c06d021d9afd453d"} Jan 30 17:30:02 crc kubenswrapper[4712]: I0130 17:30:02.180710 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-qjd59" podStartSLOduration=2.1806885830000002 podStartE2EDuration="2.180688583s" podCreationTimestamp="2026-01-30 17:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:30:02.178138947 +0000 UTC m=+2139.085148416" watchObservedRunningTime="2026-01-30 17:30:02.180688583 +0000 UTC m=+2139.087698052" Jan 30 17:30:03 crc kubenswrapper[4712]: I0130 17:30:03.157919 4712 generic.go:334] "Generic (PLEG): container finished" podID="fdadb9b1-d191-4f65-980f-c8681e9981d4" containerID="2b1534ef9dbd422de81819a030f5230f577f951672338f34c06d021d9afd453d" exitCode=0 Jan 30 17:30:03 crc kubenswrapper[4712]: I0130 17:30:03.158002 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-qjd59" event={"ID":"fdadb9b1-d191-4f65-980f-c8681e9981d4","Type":"ContainerDied","Data":"2b1534ef9dbd422de81819a030f5230f577f951672338f34c06d021d9afd453d"} Jan 30 17:30:04 crc kubenswrapper[4712]: I0130 17:30:04.600888 4712 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 17:30:04 crc kubenswrapper[4712]: I0130 17:30:04.708451 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fdadb9b1-d191-4f65-980f-c8681e9981d4-secret-volume\") pod \"fdadb9b1-d191-4f65-980f-c8681e9981d4\" (UID: \"fdadb9b1-d191-4f65-980f-c8681e9981d4\") "
Jan 30 17:30:04 crc kubenswrapper[4712]: I0130 17:30:04.708599 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ss5bn\" (UniqueName: \"kubernetes.io/projected/fdadb9b1-d191-4f65-980f-c8681e9981d4-kube-api-access-ss5bn\") pod \"fdadb9b1-d191-4f65-980f-c8681e9981d4\" (UID: \"fdadb9b1-d191-4f65-980f-c8681e9981d4\") "
Jan 30 17:30:04 crc kubenswrapper[4712]: I0130 17:30:04.708718 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fdadb9b1-d191-4f65-980f-c8681e9981d4-config-volume\") pod \"fdadb9b1-d191-4f65-980f-c8681e9981d4\" (UID: \"fdadb9b1-d191-4f65-980f-c8681e9981d4\") "
Jan 30 17:30:04 crc kubenswrapper[4712]: I0130 17:30:04.709709 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fdadb9b1-d191-4f65-980f-c8681e9981d4-config-volume" (OuterVolumeSpecName: "config-volume") pod "fdadb9b1-d191-4f65-980f-c8681e9981d4" (UID: "fdadb9b1-d191-4f65-980f-c8681e9981d4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:30:04 crc kubenswrapper[4712]: I0130 17:30:04.723095 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdadb9b1-d191-4f65-980f-c8681e9981d4-kube-api-access-ss5bn" (OuterVolumeSpecName: "kube-api-access-ss5bn") pod "fdadb9b1-d191-4f65-980f-c8681e9981d4" (UID: "fdadb9b1-d191-4f65-980f-c8681e9981d4"). InnerVolumeSpecName "kube-api-access-ss5bn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:30:04 crc kubenswrapper[4712]: I0130 17:30:04.725772 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fdadb9b1-d191-4f65-980f-c8681e9981d4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "fdadb9b1-d191-4f65-980f-c8681e9981d4" (UID: "fdadb9b1-d191-4f65-980f-c8681e9981d4"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:30:04 crc kubenswrapper[4712]: I0130 17:30:04.810689 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ss5bn\" (UniqueName: \"kubernetes.io/projected/fdadb9b1-d191-4f65-980f-c8681e9981d4-kube-api-access-ss5bn\") on node \"crc\" DevicePath \"\""
Jan 30 17:30:04 crc kubenswrapper[4712]: I0130 17:30:04.810745 4712 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fdadb9b1-d191-4f65-980f-c8681e9981d4-config-volume\") on node \"crc\" DevicePath \"\""
Jan 30 17:30:04 crc kubenswrapper[4712]: I0130 17:30:04.810754 4712 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/fdadb9b1-d191-4f65-980f-c8681e9981d4-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 30 17:30:05 crc kubenswrapper[4712]: I0130 17:30:05.177276 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-qjd59" event={"ID":"fdadb9b1-d191-4f65-980f-c8681e9981d4","Type":"ContainerDied","Data":"e28b2a84fc337bf0f3dd7d55e8c8f946318367586e2dc30b2f4b165062dc1e4a"}
Jan 30 17:30:05 crc kubenswrapper[4712]: I0130 17:30:05.177583 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e28b2a84fc337bf0f3dd7d55e8c8f946318367586e2dc30b2f4b165062dc1e4a"
Jan 30 17:30:05 crc kubenswrapper[4712]: I0130 17:30:05.177652 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496570-qjd59"
Jan 30 17:30:05 crc kubenswrapper[4712]: I0130 17:30:05.196371 4712 generic.go:334] "Generic (PLEG): container finished" podID="96e8f776-9933-4f80-91dd-fefa02de47ec" containerID="71b32b2a5ab47f22756a3c317f60dd98b766e8c47882a382b080eedcf9de9340" exitCode=0
Jan 30 17:30:05 crc kubenswrapper[4712]: I0130 17:30:05.196407 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jpzf8" event={"ID":"96e8f776-9933-4f80-91dd-fefa02de47ec","Type":"ContainerDied","Data":"71b32b2a5ab47f22756a3c317f60dd98b766e8c47882a382b080eedcf9de9340"}
Jan 30 17:30:05 crc kubenswrapper[4712]: I0130 17:30:05.254563 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496525-j85bm"]
Jan 30 17:30:05 crc kubenswrapper[4712]: I0130 17:30:05.262962 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496525-j85bm"]
Jan 30 17:30:05 crc kubenswrapper[4712]: I0130 17:30:05.812195 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34" path="/var/lib/kubelet/pods/4d6e15bd-47ce-4d0b-804b-a5d3df9d9e34/volumes"
Jan 30 17:30:06 crc kubenswrapper[4712]: I0130 17:30:06.652780 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jpzf8"
Jan 30 17:30:06 crc kubenswrapper[4712]: I0130 17:30:06.846276 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96e8f776-9933-4f80-91dd-fefa02de47ec-inventory\") pod \"96e8f776-9933-4f80-91dd-fefa02de47ec\" (UID: \"96e8f776-9933-4f80-91dd-fefa02de47ec\") "
Jan 30 17:30:06 crc kubenswrapper[4712]: I0130 17:30:06.846379 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8d454\" (UniqueName: \"kubernetes.io/projected/96e8f776-9933-4f80-91dd-fefa02de47ec-kube-api-access-8d454\") pod \"96e8f776-9933-4f80-91dd-fefa02de47ec\" (UID: \"96e8f776-9933-4f80-91dd-fefa02de47ec\") "
Jan 30 17:30:06 crc kubenswrapper[4712]: I0130 17:30:06.846418 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96e8f776-9933-4f80-91dd-fefa02de47ec-ssh-key-openstack-edpm-ipam\") pod \"96e8f776-9933-4f80-91dd-fefa02de47ec\" (UID: \"96e8f776-9933-4f80-91dd-fefa02de47ec\") "
Jan 30 17:30:06 crc kubenswrapper[4712]: I0130 17:30:06.852063 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96e8f776-9933-4f80-91dd-fefa02de47ec-kube-api-access-8d454" (OuterVolumeSpecName: "kube-api-access-8d454") pod "96e8f776-9933-4f80-91dd-fefa02de47ec" (UID: "96e8f776-9933-4f80-91dd-fefa02de47ec"). InnerVolumeSpecName "kube-api-access-8d454". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:30:06 crc kubenswrapper[4712]: I0130 17:30:06.878112 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e8f776-9933-4f80-91dd-fefa02de47ec-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "96e8f776-9933-4f80-91dd-fefa02de47ec" (UID: "96e8f776-9933-4f80-91dd-fefa02de47ec"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:30:06 crc kubenswrapper[4712]: I0130 17:30:06.887984 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e8f776-9933-4f80-91dd-fefa02de47ec-inventory" (OuterVolumeSpecName: "inventory") pod "96e8f776-9933-4f80-91dd-fefa02de47ec" (UID: "96e8f776-9933-4f80-91dd-fefa02de47ec"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:30:06 crc kubenswrapper[4712]: I0130 17:30:06.930981 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t6mf6"
Jan 30 17:30:06 crc kubenswrapper[4712]: I0130 17:30:06.935069 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t6mf6"
Jan 30 17:30:06 crc kubenswrapper[4712]: I0130 17:30:06.950154 4712 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96e8f776-9933-4f80-91dd-fefa02de47ec-inventory\") on node \"crc\" DevicePath \"\""
Jan 30 17:30:06 crc kubenswrapper[4712]: I0130 17:30:06.950187 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8d454\" (UniqueName: \"kubernetes.io/projected/96e8f776-9933-4f80-91dd-fefa02de47ec-kube-api-access-8d454\") on node \"crc\" DevicePath \"\""
Jan 30 17:30:06 crc kubenswrapper[4712]: I0130 17:30:06.950198 4712 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96e8f776-9933-4f80-91dd-fefa02de47ec-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 30 17:30:07 crc kubenswrapper[4712]: I0130 17:30:07.212770 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jpzf8" event={"ID":"96e8f776-9933-4f80-91dd-fefa02de47ec","Type":"ContainerDied","Data":"61449e6d210e8a75f91b0d3a96191ef8267d95d1d0933bb1da1592ef08951663"}
Jan 30 17:30:07 crc kubenswrapper[4712]: I0130 17:30:07.212854 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61449e6d210e8a75f91b0d3a96191ef8267d95d1d0933bb1da1592ef08951663"
Jan 30 17:30:07 crc kubenswrapper[4712]: I0130 17:30:07.212882 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-jpzf8"
Jan 30 17:30:07 crc kubenswrapper[4712]: I0130 17:30:07.421244 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-k7x2h"]
Jan 30 17:30:07 crc kubenswrapper[4712]: E0130 17:30:07.421865 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdadb9b1-d191-4f65-980f-c8681e9981d4" containerName="collect-profiles"
Jan 30 17:30:07 crc kubenswrapper[4712]: I0130 17:30:07.421944 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdadb9b1-d191-4f65-980f-c8681e9981d4" containerName="collect-profiles"
Jan 30 17:30:07 crc kubenswrapper[4712]: E0130 17:30:07.422019 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96e8f776-9933-4f80-91dd-fefa02de47ec" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 30 17:30:07 crc kubenswrapper[4712]: I0130 17:30:07.422072 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="96e8f776-9933-4f80-91dd-fefa02de47ec" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 30 17:30:07 crc kubenswrapper[4712]: I0130 17:30:07.422305 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdadb9b1-d191-4f65-980f-c8681e9981d4" containerName="collect-profiles"
Jan 30 17:30:07 crc kubenswrapper[4712]: I0130 17:30:07.422368 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="96e8f776-9933-4f80-91dd-fefa02de47ec" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 30 17:30:07 crc kubenswrapper[4712]: I0130 17:30:07.423033 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-k7x2h"
Jan 30 17:30:07 crc kubenswrapper[4712]: I0130 17:30:07.425555 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 30 17:30:07 crc kubenswrapper[4712]: I0130 17:30:07.425920 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t6jfh"
Jan 30 17:30:07 crc kubenswrapper[4712]: I0130 17:30:07.425936 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 30 17:30:07 crc kubenswrapper[4712]: I0130 17:30:07.428246 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 30 17:30:07 crc kubenswrapper[4712]: I0130 17:30:07.483242 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-k7x2h"]
Jan 30 17:30:07 crc kubenswrapper[4712]: I0130 17:30:07.560779 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/818160cb-c862-4860-8549-af66d60827c1-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-k7x2h\" (UID: \"818160cb-c862-4860-8549-af66d60827c1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-k7x2h"
Jan 30 17:30:07 crc kubenswrapper[4712]: I0130 17:30:07.560943 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/818160cb-c862-4860-8549-af66d60827c1-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-k7x2h\" (UID: \"818160cb-c862-4860-8549-af66d60827c1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-k7x2h"
Jan 30 17:30:07 crc kubenswrapper[4712]: I0130 17:30:07.561010 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5vbr\" (UniqueName: \"kubernetes.io/projected/818160cb-c862-4860-8549-af66d60827c1-kube-api-access-m5vbr\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-k7x2h\" (UID: \"818160cb-c862-4860-8549-af66d60827c1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-k7x2h"
Jan 30 17:30:07 crc kubenswrapper[4712]: I0130 17:30:07.662687 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5vbr\" (UniqueName: \"kubernetes.io/projected/818160cb-c862-4860-8549-af66d60827c1-kube-api-access-m5vbr\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-k7x2h\" (UID: \"818160cb-c862-4860-8549-af66d60827c1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-k7x2h"
Jan 30 17:30:07 crc kubenswrapper[4712]: I0130 17:30:07.663086 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/818160cb-c862-4860-8549-af66d60827c1-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-k7x2h\" (UID: \"818160cb-c862-4860-8549-af66d60827c1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-k7x2h"
Jan 30 17:30:07 crc kubenswrapper[4712]: I0130 17:30:07.663216 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/818160cb-c862-4860-8549-af66d60827c1-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-k7x2h\" (UID: \"818160cb-c862-4860-8549-af66d60827c1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-k7x2h"
Jan 30 17:30:07 crc kubenswrapper[4712]: I0130 17:30:07.666828 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/818160cb-c862-4860-8549-af66d60827c1-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-k7x2h\" (UID: \"818160cb-c862-4860-8549-af66d60827c1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-k7x2h"
Jan 30 17:30:07 crc kubenswrapper[4712]: I0130 17:30:07.666912 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/818160cb-c862-4860-8549-af66d60827c1-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-k7x2h\" (UID: \"818160cb-c862-4860-8549-af66d60827c1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-k7x2h"
Jan 30 17:30:07 crc kubenswrapper[4712]: I0130 17:30:07.684788 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5vbr\" (UniqueName: \"kubernetes.io/projected/818160cb-c862-4860-8549-af66d60827c1-kube-api-access-m5vbr\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-k7x2h\" (UID: \"818160cb-c862-4860-8549-af66d60827c1\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-k7x2h"
Jan 30 17:30:07 crc kubenswrapper[4712]: I0130 17:30:07.738974 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-k7x2h"
Jan 30 17:30:08 crc kubenswrapper[4712]: I0130 17:30:08.011827 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t6mf6" podUID="ff320889-d8d3-4c5d-90cc-f8e655996a5c" containerName="registry-server" probeResult="failure" output=<
Jan 30 17:30:08 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 17:30:08 crc kubenswrapper[4712]: >
Jan 30 17:30:08 crc kubenswrapper[4712]: I0130 17:30:08.275054 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-k7x2h"]
Jan 30 17:30:09 crc kubenswrapper[4712]: I0130 17:30:09.231125 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-k7x2h" event={"ID":"818160cb-c862-4860-8549-af66d60827c1","Type":"ContainerStarted","Data":"0e3325913d47fbdce5747143ea775e119d454174aed3d20593f0f050992ad060"}
Jan 30 17:30:09 crc kubenswrapper[4712]: I0130 17:30:09.231558 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-k7x2h" event={"ID":"818160cb-c862-4860-8549-af66d60827c1","Type":"ContainerStarted","Data":"98f40b74234fbe6941fe7521ba7b29c3b4ed1e906cee779ae2f005087dae1b77"}
Jan 30 17:30:09 crc kubenswrapper[4712]: I0130 17:30:09.263486 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-k7x2h" podStartSLOduration=1.8538387520000001 podStartE2EDuration="2.263459515s" podCreationTimestamp="2026-01-30 17:30:07 +0000 UTC" firstStartedPulling="2026-01-30 17:30:08.281574411 +0000 UTC m=+2145.188583870" lastFinishedPulling="2026-01-30 17:30:08.691195164 +0000 UTC m=+2145.598204633" observedRunningTime="2026-01-30 17:30:09.251474967 +0000 UTC m=+2146.158484436" watchObservedRunningTime="2026-01-30 17:30:09.263459515 +0000 UTC m=+2146.170468984"
Jan 30 17:30:17 crc kubenswrapper[4712]: I0130 17:30:17.988112 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t6mf6" podUID="ff320889-d8d3-4c5d-90cc-f8e655996a5c" containerName="registry-server" probeResult="failure" output=<
Jan 30 17:30:17 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 17:30:17 crc kubenswrapper[4712]: >
Jan 30 17:30:27 crc kubenswrapper[4712]: I0130 17:30:27.157390 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="776ccbe0-fd71-4c0d-877e-f0178e4c1262" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out"
Jan 30 17:30:27 crc kubenswrapper[4712]: I0130 17:30:27.849335 4712 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.886788511s: [/var/lib/containers/storage/overlay/03c4f2ce78b91ae55779f7bf796f07631e061459aa4e2a0ed869d1a7cba5c825/diff /var/log/pods/openstack_horizon-64655dbc44-pvj2c_6a28b495-ecf0-409e-9558-ee794a46dbd1/horizon/4.log]; will not log again for this container unless duration exceeds 2s
Jan 30 17:30:28 crc kubenswrapper[4712]: I0130 17:30:28.160695 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovsdbserver-sb-0" podUID="220f56ca-28d1-4856-98cc-e420bd3cce95" containerName="ovsdbserver-sb" probeResult="failure" output="command timed out"
Jan 30 17:30:28 crc kubenswrapper[4712]: I0130 17:30:28.163444 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="a12f0a95-1db0-4dd9-993c-1413c0fa10b0" containerName="galera" probeResult="failure" output="command timed out"
probeType="Liveness" pod="openstack/openstack-galera-0" podUID="a12f0a95-1db0-4dd9-993c-1413c0fa10b0" containerName="galera" probeResult="failure" output="command timed out" Jan 30 17:30:28 crc kubenswrapper[4712]: I0130 17:30:28.167290 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="a12f0a95-1db0-4dd9-993c-1413c0fa10b0" containerName="galera" probeResult="failure" output="command timed out" Jan 30 17:30:28 crc kubenswrapper[4712]: I0130 17:30:28.873491 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t6mf6" podUID="ff320889-d8d3-4c5d-90cc-f8e655996a5c" containerName="registry-server" probeResult="failure" output=< Jan 30 17:30:28 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 17:30:28 crc kubenswrapper[4712]: > Jan 30 17:30:37 crc kubenswrapper[4712]: I0130 17:30:37.979621 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t6mf6" podUID="ff320889-d8d3-4c5d-90cc-f8e655996a5c" containerName="registry-server" probeResult="failure" output=< Jan 30 17:30:37 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 17:30:37 crc kubenswrapper[4712]: > Jan 30 17:30:38 crc kubenswrapper[4712]: I0130 17:30:38.680718 4712 scope.go:117] "RemoveContainer" containerID="e08e7e3b6c7048825af07cabe437defaa33ec9144b1d809c042d234f5077e3d3" Jan 30 17:30:38 crc kubenswrapper[4712]: I0130 17:30:38.715117 4712 scope.go:117] "RemoveContainer" containerID="78c4784ad9e9faa1515784fcb4af25f8615ff854c01fa1f7e97e5a378b0ed106" Jan 30 17:30:38 crc kubenswrapper[4712]: I0130 17:30:38.782392 4712 scope.go:117] "RemoveContainer" containerID="00765a0091717570fdc4d176c373c4eca61c407007da373392d6a2c9630ac64b" Jan 30 17:30:39 crc kubenswrapper[4712]: I0130 17:30:39.047902 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-tm2l4"] Jan 30 17:30:39 crc kubenswrapper[4712]: I0130 17:30:39.057323 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-tm2l4"] Jan 30 17:30:39 crc kubenswrapper[4712]: I0130 17:30:39.810111 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07e1f6ad-a075-4777-a81a-d021d3b25b37" path="/var/lib/kubelet/pods/07e1f6ad-a075-4777-a81a-d021d3b25b37/volumes" Jan 30 17:30:47 crc kubenswrapper[4712]: I0130 17:30:47.975983 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t6mf6" podUID="ff320889-d8d3-4c5d-90cc-f8e655996a5c" containerName="registry-server" probeResult="failure" output=< Jan 30 17:30:47 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 17:30:47 crc kubenswrapper[4712]: > Jan 30 17:30:56 crc kubenswrapper[4712]: I0130 17:30:56.646943 4712 generic.go:334] "Generic (PLEG): container finished" podID="818160cb-c862-4860-8549-af66d60827c1" containerID="0e3325913d47fbdce5747143ea775e119d454174aed3d20593f0f050992ad060" exitCode=0 Jan 30 17:30:56 crc kubenswrapper[4712]: I0130 17:30:56.646974 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-k7x2h" event={"ID":"818160cb-c862-4860-8549-af66d60827c1","Type":"ContainerDied","Data":"0e3325913d47fbdce5747143ea775e119d454174aed3d20593f0f050992ad060"} Jan 30 17:30:56 crc kubenswrapper[4712]: I0130 17:30:56.994213 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-operators-t6mf6" Jan 30 17:30:57 crc kubenswrapper[4712]: I0130 17:30:57.042980 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t6mf6" Jan 30 17:30:57 crc kubenswrapper[4712]: I0130 17:30:57.694992 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t6mf6"] Jan 30 17:30:58 crc kubenswrapper[4712]: I0130 17:30:58.221486 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-k7x2h" Jan 30 17:30:58 crc kubenswrapper[4712]: I0130 17:30:58.423840 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5vbr\" (UniqueName: \"kubernetes.io/projected/818160cb-c862-4860-8549-af66d60827c1-kube-api-access-m5vbr\") pod \"818160cb-c862-4860-8549-af66d60827c1\" (UID: \"818160cb-c862-4860-8549-af66d60827c1\") " Jan 30 17:30:58 crc kubenswrapper[4712]: I0130 17:30:58.424416 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/818160cb-c862-4860-8549-af66d60827c1-ssh-key-openstack-edpm-ipam\") pod \"818160cb-c862-4860-8549-af66d60827c1\" (UID: \"818160cb-c862-4860-8549-af66d60827c1\") " Jan 30 17:30:58 crc kubenswrapper[4712]: I0130 17:30:58.425309 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/818160cb-c862-4860-8549-af66d60827c1-inventory\") pod \"818160cb-c862-4860-8549-af66d60827c1\" (UID: \"818160cb-c862-4860-8549-af66d60827c1\") " Jan 30 17:30:58 crc kubenswrapper[4712]: I0130 17:30:58.439165 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/818160cb-c862-4860-8549-af66d60827c1-kube-api-access-m5vbr" (OuterVolumeSpecName: "kube-api-access-m5vbr") pod "818160cb-c862-4860-8549-af66d60827c1" (UID: "818160cb-c862-4860-8549-af66d60827c1"). InnerVolumeSpecName "kube-api-access-m5vbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:30:58 crc kubenswrapper[4712]: I0130 17:30:58.453847 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/818160cb-c862-4860-8549-af66d60827c1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "818160cb-c862-4860-8549-af66d60827c1" (UID: "818160cb-c862-4860-8549-af66d60827c1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:30:58 crc kubenswrapper[4712]: I0130 17:30:58.457845 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/818160cb-c862-4860-8549-af66d60827c1-inventory" (OuterVolumeSpecName: "inventory") pod "818160cb-c862-4860-8549-af66d60827c1" (UID: "818160cb-c862-4860-8549-af66d60827c1"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:30:58 crc kubenswrapper[4712]: I0130 17:30:58.528197 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m5vbr\" (UniqueName: \"kubernetes.io/projected/818160cb-c862-4860-8549-af66d60827c1-kube-api-access-m5vbr\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:58 crc kubenswrapper[4712]: I0130 17:30:58.528239 4712 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/818160cb-c862-4860-8549-af66d60827c1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:58 crc kubenswrapper[4712]: I0130 17:30:58.528253 4712 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/818160cb-c862-4860-8549-af66d60827c1-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:58 crc kubenswrapper[4712]: I0130 17:30:58.667380 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-k7x2h" Jan 30 17:30:58 crc kubenswrapper[4712]: I0130 17:30:58.667378 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-k7x2h" event={"ID":"818160cb-c862-4860-8549-af66d60827c1","Type":"ContainerDied","Data":"98f40b74234fbe6941fe7521ba7b29c3b4ed1e906cee779ae2f005087dae1b77"} Jan 30 17:30:58 crc kubenswrapper[4712]: I0130 17:30:58.667421 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98f40b74234fbe6941fe7521ba7b29c3b4ed1e906cee779ae2f005087dae1b77" Jan 30 17:30:58 crc kubenswrapper[4712]: I0130 17:30:58.667581 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t6mf6" podUID="ff320889-d8d3-4c5d-90cc-f8e655996a5c" containerName="registry-server" containerID="cri-o://e1cef446ab47db424f8156691f50f4251fbdd6685da0d25c89cc640d0218a3aa" gracePeriod=2 Jan 30 17:30:58 crc kubenswrapper[4712]: I0130 17:30:58.773845 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f8mdz"] Jan 30 17:30:58 crc kubenswrapper[4712]: E0130 17:30:58.776216 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="818160cb-c862-4860-8549-af66d60827c1" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 30 17:30:58 crc kubenswrapper[4712]: I0130 17:30:58.776256 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="818160cb-c862-4860-8549-af66d60827c1" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 30 17:30:58 crc kubenswrapper[4712]: I0130 17:30:58.776704 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="818160cb-c862-4860-8549-af66d60827c1" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Jan 30 17:30:58 crc kubenswrapper[4712]: I0130 17:30:58.777581 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f8mdz" Jan 30 17:30:58 crc kubenswrapper[4712]: I0130 17:30:58.781309 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 17:30:58 crc kubenswrapper[4712]: I0130 17:30:58.781472 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t6jfh" Jan 30 17:30:58 crc kubenswrapper[4712]: I0130 17:30:58.781537 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 17:30:58 crc kubenswrapper[4712]: I0130 17:30:58.783678 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 17:30:58 crc kubenswrapper[4712]: I0130 17:30:58.797487 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f8mdz"] Jan 30 17:30:58 crc kubenswrapper[4712]: I0130 17:30:58.933807 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/900e21ae-3c90-4e70-90e5-fbe81a902929-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f8mdz\" (UID: \"900e21ae-3c90-4e70-90e5-fbe81a902929\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f8mdz" Jan 30 17:30:58 crc kubenswrapper[4712]: I0130 17:30:58.934238 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-72shx\" (UniqueName: \"kubernetes.io/projected/900e21ae-3c90-4e70-90e5-fbe81a902929-kube-api-access-72shx\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f8mdz\" (UID: \"900e21ae-3c90-4e70-90e5-fbe81a902929\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f8mdz" Jan 30 17:30:58 crc kubenswrapper[4712]: I0130 17:30:58.934316 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/900e21ae-3c90-4e70-90e5-fbe81a902929-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f8mdz\" (UID: \"900e21ae-3c90-4e70-90e5-fbe81a902929\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f8mdz" Jan 30 17:30:59 crc kubenswrapper[4712]: I0130 17:30:59.037863 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-72shx\" (UniqueName: \"kubernetes.io/projected/900e21ae-3c90-4e70-90e5-fbe81a902929-kube-api-access-72shx\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f8mdz\" (UID: \"900e21ae-3c90-4e70-90e5-fbe81a902929\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f8mdz" Jan 30 17:30:59 crc kubenswrapper[4712]: I0130 17:30:59.037971 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/900e21ae-3c90-4e70-90e5-fbe81a902929-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f8mdz\" (UID: \"900e21ae-3c90-4e70-90e5-fbe81a902929\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f8mdz" Jan 30 17:30:59 crc kubenswrapper[4712]: I0130 17:30:59.038081 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/900e21ae-3c90-4e70-90e5-fbe81a902929-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f8mdz\" (UID: \"900e21ae-3c90-4e70-90e5-fbe81a902929\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f8mdz" Jan 30 17:30:59 crc kubenswrapper[4712]: I0130 17:30:59.046409 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/900e21ae-3c90-4e70-90e5-fbe81a902929-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f8mdz\" (UID: \"900e21ae-3c90-4e70-90e5-fbe81a902929\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f8mdz" Jan 30 17:30:59 crc kubenswrapper[4712]: I0130 17:30:59.046895 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/900e21ae-3c90-4e70-90e5-fbe81a902929-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f8mdz\" (UID: \"900e21ae-3c90-4e70-90e5-fbe81a902929\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f8mdz" Jan 30 17:30:59 crc kubenswrapper[4712]: I0130 17:30:59.057088 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-72shx\" (UniqueName: \"kubernetes.io/projected/900e21ae-3c90-4e70-90e5-fbe81a902929-kube-api-access-72shx\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-f8mdz\" (UID: \"900e21ae-3c90-4e70-90e5-fbe81a902929\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f8mdz" Jan 30 17:30:59 crc kubenswrapper[4712]: I0130 17:30:59.067736 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t6mf6" Jan 30 17:30:59 crc kubenswrapper[4712]: I0130 17:30:59.137121 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f8mdz" Jan 30 17:30:59 crc kubenswrapper[4712]: I0130 17:30:59.251837 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff320889-d8d3-4c5d-90cc-f8e655996a5c-utilities\") pod \"ff320889-d8d3-4c5d-90cc-f8e655996a5c\" (UID: \"ff320889-d8d3-4c5d-90cc-f8e655996a5c\") " Jan 30 17:30:59 crc kubenswrapper[4712]: I0130 17:30:59.252011 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff320889-d8d3-4c5d-90cc-f8e655996a5c-catalog-content\") pod \"ff320889-d8d3-4c5d-90cc-f8e655996a5c\" (UID: \"ff320889-d8d3-4c5d-90cc-f8e655996a5c\") " Jan 30 17:30:59 crc kubenswrapper[4712]: I0130 17:30:59.252201 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsf2z\" (UniqueName: \"kubernetes.io/projected/ff320889-d8d3-4c5d-90cc-f8e655996a5c-kube-api-access-fsf2z\") pod \"ff320889-d8d3-4c5d-90cc-f8e655996a5c\" (UID: \"ff320889-d8d3-4c5d-90cc-f8e655996a5c\") " Jan 30 17:30:59 crc kubenswrapper[4712]: I0130 17:30:59.252851 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff320889-d8d3-4c5d-90cc-f8e655996a5c-utilities" (OuterVolumeSpecName: "utilities") pod "ff320889-d8d3-4c5d-90cc-f8e655996a5c" (UID: "ff320889-d8d3-4c5d-90cc-f8e655996a5c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:30:59 crc kubenswrapper[4712]: I0130 17:30:59.263990 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff320889-d8d3-4c5d-90cc-f8e655996a5c-kube-api-access-fsf2z" (OuterVolumeSpecName: "kube-api-access-fsf2z") pod "ff320889-d8d3-4c5d-90cc-f8e655996a5c" (UID: "ff320889-d8d3-4c5d-90cc-f8e655996a5c"). InnerVolumeSpecName "kube-api-access-fsf2z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:30:59 crc kubenswrapper[4712]: I0130 17:30:59.354838 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff320889-d8d3-4c5d-90cc-f8e655996a5c-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:59 crc kubenswrapper[4712]: I0130 17:30:59.354872 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsf2z\" (UniqueName: \"kubernetes.io/projected/ff320889-d8d3-4c5d-90cc-f8e655996a5c-kube-api-access-fsf2z\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:59 crc kubenswrapper[4712]: I0130 17:30:59.449159 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff320889-d8d3-4c5d-90cc-f8e655996a5c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff320889-d8d3-4c5d-90cc-f8e655996a5c" (UID: "ff320889-d8d3-4c5d-90cc-f8e655996a5c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:30:59 crc kubenswrapper[4712]: I0130 17:30:59.457042 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff320889-d8d3-4c5d-90cc-f8e655996a5c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:30:59 crc kubenswrapper[4712]: I0130 17:30:59.677558 4712 generic.go:334] "Generic (PLEG): container finished" podID="ff320889-d8d3-4c5d-90cc-f8e655996a5c" containerID="e1cef446ab47db424f8156691f50f4251fbdd6685da0d25c89cc640d0218a3aa" exitCode=0 Jan 30 17:30:59 crc kubenswrapper[4712]: I0130 17:30:59.677602 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6mf6" event={"ID":"ff320889-d8d3-4c5d-90cc-f8e655996a5c","Type":"ContainerDied","Data":"e1cef446ab47db424f8156691f50f4251fbdd6685da0d25c89cc640d0218a3aa"} Jan 30 17:30:59 crc kubenswrapper[4712]: I0130 17:30:59.677628 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t6mf6" event={"ID":"ff320889-d8d3-4c5d-90cc-f8e655996a5c","Type":"ContainerDied","Data":"4c5d821d62b7a6daa52b8f6084a5a4b7cc0a0bebf21efefe4009948e748816a2"} Jan 30 17:30:59 crc kubenswrapper[4712]: I0130 17:30:59.677643 4712 scope.go:117] "RemoveContainer" containerID="e1cef446ab47db424f8156691f50f4251fbdd6685da0d25c89cc640d0218a3aa" Jan 30 17:30:59 crc kubenswrapper[4712]: I0130 17:30:59.677659 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t6mf6" Jan 30 17:30:59 crc kubenswrapper[4712]: I0130 17:30:59.705292 4712 scope.go:117] "RemoveContainer" containerID="ba34f76c8fc93f5d005ab213eb9635a2cebc82870d785c37d2a94c761f42c0f1" Jan 30 17:30:59 crc kubenswrapper[4712]: I0130 17:30:59.721010 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t6mf6"] Jan 30 17:30:59 crc kubenswrapper[4712]: I0130 17:30:59.730422 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t6mf6"] Jan 30 17:30:59 crc kubenswrapper[4712]: I0130 17:30:59.777736 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f8mdz"] Jan 30 17:30:59 crc kubenswrapper[4712]: I0130 17:30:59.781910 4712 scope.go:117] "RemoveContainer" containerID="2ff891d775ad91909a013f32d02c110d19ee6de5dea676c9ee36a839e0e9095a" Jan 30 17:30:59 crc kubenswrapper[4712]: I0130 17:30:59.817766 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff320889-d8d3-4c5d-90cc-f8e655996a5c" path="/var/lib/kubelet/pods/ff320889-d8d3-4c5d-90cc-f8e655996a5c/volumes" Jan 30 17:30:59 crc kubenswrapper[4712]: I0130 17:30:59.820574 4712 scope.go:117] "RemoveContainer" containerID="e1cef446ab47db424f8156691f50f4251fbdd6685da0d25c89cc640d0218a3aa" Jan 30 17:30:59 crc kubenswrapper[4712]: E0130 17:30:59.821855 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1cef446ab47db424f8156691f50f4251fbdd6685da0d25c89cc640d0218a3aa\": container with ID starting with e1cef446ab47db424f8156691f50f4251fbdd6685da0d25c89cc640d0218a3aa not found: ID does not exist" containerID="e1cef446ab47db424f8156691f50f4251fbdd6685da0d25c89cc640d0218a3aa" Jan 30 17:30:59 crc kubenswrapper[4712]: I0130 17:30:59.821991 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1cef446ab47db424f8156691f50f4251fbdd6685da0d25c89cc640d0218a3aa"} err="failed to get container status \"e1cef446ab47db424f8156691f50f4251fbdd6685da0d25c89cc640d0218a3aa\": rpc error: code = NotFound desc = could not find container \"e1cef446ab47db424f8156691f50f4251fbdd6685da0d25c89cc640d0218a3aa\": container with ID starting with e1cef446ab47db424f8156691f50f4251fbdd6685da0d25c89cc640d0218a3aa not found: ID does not exist" Jan 30 17:30:59 crc kubenswrapper[4712]: I0130 17:30:59.822083 4712 scope.go:117] "RemoveContainer" containerID="ba34f76c8fc93f5d005ab213eb9635a2cebc82870d785c37d2a94c761f42c0f1" Jan 30 17:30:59 crc kubenswrapper[4712]: E0130 17:30:59.827006 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba34f76c8fc93f5d005ab213eb9635a2cebc82870d785c37d2a94c761f42c0f1\": container with ID starting with ba34f76c8fc93f5d005ab213eb9635a2cebc82870d785c37d2a94c761f42c0f1 not found: ID does not exist" containerID="ba34f76c8fc93f5d005ab213eb9635a2cebc82870d785c37d2a94c761f42c0f1" Jan 30 17:30:59 crc kubenswrapper[4712]: I0130 17:30:59.827064 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba34f76c8fc93f5d005ab213eb9635a2cebc82870d785c37d2a94c761f42c0f1"} err="failed to get container status \"ba34f76c8fc93f5d005ab213eb9635a2cebc82870d785c37d2a94c761f42c0f1\": rpc error: code = NotFound desc = could not find container 
\"ba34f76c8fc93f5d005ab213eb9635a2cebc82870d785c37d2a94c761f42c0f1\": container with ID starting with ba34f76c8fc93f5d005ab213eb9635a2cebc82870d785c37d2a94c761f42c0f1 not found: ID does not exist" Jan 30 17:30:59 crc kubenswrapper[4712]: I0130 17:30:59.827100 4712 scope.go:117] "RemoveContainer" containerID="2ff891d775ad91909a013f32d02c110d19ee6de5dea676c9ee36a839e0e9095a" Jan 30 17:30:59 crc kubenswrapper[4712]: E0130 17:30:59.830139 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ff891d775ad91909a013f32d02c110d19ee6de5dea676c9ee36a839e0e9095a\": container with ID starting with 2ff891d775ad91909a013f32d02c110d19ee6de5dea676c9ee36a839e0e9095a not found: ID does not exist" containerID="2ff891d775ad91909a013f32d02c110d19ee6de5dea676c9ee36a839e0e9095a" Jan 30 17:30:59 crc kubenswrapper[4712]: I0130 17:30:59.830170 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ff891d775ad91909a013f32d02c110d19ee6de5dea676c9ee36a839e0e9095a"} err="failed to get container status \"2ff891d775ad91909a013f32d02c110d19ee6de5dea676c9ee36a839e0e9095a\": rpc error: code = NotFound desc = could not find container \"2ff891d775ad91909a013f32d02c110d19ee6de5dea676c9ee36a839e0e9095a\": container with ID starting with 2ff891d775ad91909a013f32d02c110d19ee6de5dea676c9ee36a839e0e9095a not found: ID does not exist" Jan 30 17:31:00 crc kubenswrapper[4712]: I0130 17:31:00.689871 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f8mdz" event={"ID":"900e21ae-3c90-4e70-90e5-fbe81a902929","Type":"ContainerStarted","Data":"a6c434cfce4448dfd0dc0846f887dfc446c798a1c77b9f040d503b502d9e9395"} Jan 30 17:31:00 crc kubenswrapper[4712]: I0130 17:31:00.690204 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f8mdz" event={"ID":"900e21ae-3c90-4e70-90e5-fbe81a902929","Type":"ContainerStarted","Data":"696e5294d366d818f3bf90537c1ba0e644237770f7f0e73378dbdbbf2a8a5074"} Jan 30 17:31:16 crc kubenswrapper[4712]: I0130 17:31:16.653433 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f8mdz" podStartSLOduration=18.099609272 podStartE2EDuration="18.65341445s" podCreationTimestamp="2026-01-30 17:30:58 +0000 UTC" firstStartedPulling="2026-01-30 17:30:59.820870679 +0000 UTC m=+2196.727880148" lastFinishedPulling="2026-01-30 17:31:00.374675847 +0000 UTC m=+2197.281685326" observedRunningTime="2026-01-30 17:31:00.709911522 +0000 UTC m=+2197.616920991" watchObservedRunningTime="2026-01-30 17:31:16.65341445 +0000 UTC m=+2213.560423919" Jan 30 17:31:16 crc kubenswrapper[4712]: I0130 17:31:16.657544 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-crlk6"] Jan 30 17:31:16 crc kubenswrapper[4712]: E0130 17:31:16.658019 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff320889-d8d3-4c5d-90cc-f8e655996a5c" containerName="registry-server" Jan 30 17:31:16 crc kubenswrapper[4712]: I0130 17:31:16.658043 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff320889-d8d3-4c5d-90cc-f8e655996a5c" containerName="registry-server" Jan 30 17:31:16 crc kubenswrapper[4712]: E0130 17:31:16.658077 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff320889-d8d3-4c5d-90cc-f8e655996a5c" containerName="extract-utilities" Jan 30 
17:31:16 crc kubenswrapper[4712]: I0130 17:31:16.658089 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff320889-d8d3-4c5d-90cc-f8e655996a5c" containerName="extract-utilities" Jan 30 17:31:16 crc kubenswrapper[4712]: E0130 17:31:16.658113 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff320889-d8d3-4c5d-90cc-f8e655996a5c" containerName="extract-content" Jan 30 17:31:16 crc kubenswrapper[4712]: I0130 17:31:16.658120 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff320889-d8d3-4c5d-90cc-f8e655996a5c" containerName="extract-content" Jan 30 17:31:16 crc kubenswrapper[4712]: I0130 17:31:16.658334 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff320889-d8d3-4c5d-90cc-f8e655996a5c" containerName="registry-server" Jan 30 17:31:16 crc kubenswrapper[4712]: I0130 17:31:16.660035 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-crlk6" Jan 30 17:31:16 crc kubenswrapper[4712]: I0130 17:31:16.688358 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-crlk6"] Jan 30 17:31:16 crc kubenswrapper[4712]: I0130 17:31:16.786724 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwtll\" (UniqueName: \"kubernetes.io/projected/0d82a705-906b-4d99-9db6-c91f774e0b77-kube-api-access-hwtll\") pod \"redhat-marketplace-crlk6\" (UID: \"0d82a705-906b-4d99-9db6-c91f774e0b77\") " pod="openshift-marketplace/redhat-marketplace-crlk6" Jan 30 17:31:16 crc kubenswrapper[4712]: I0130 17:31:16.786776 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d82a705-906b-4d99-9db6-c91f774e0b77-utilities\") pod \"redhat-marketplace-crlk6\" (UID: \"0d82a705-906b-4d99-9db6-c91f774e0b77\") " pod="openshift-marketplace/redhat-marketplace-crlk6" Jan 30 17:31:16 crc kubenswrapper[4712]: I0130 17:31:16.786999 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d82a705-906b-4d99-9db6-c91f774e0b77-catalog-content\") pod \"redhat-marketplace-crlk6\" (UID: \"0d82a705-906b-4d99-9db6-c91f774e0b77\") " pod="openshift-marketplace/redhat-marketplace-crlk6" Jan 30 17:31:16 crc kubenswrapper[4712]: I0130 17:31:16.889420 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwtll\" (UniqueName: \"kubernetes.io/projected/0d82a705-906b-4d99-9db6-c91f774e0b77-kube-api-access-hwtll\") pod \"redhat-marketplace-crlk6\" (UID: \"0d82a705-906b-4d99-9db6-c91f774e0b77\") " pod="openshift-marketplace/redhat-marketplace-crlk6" Jan 30 17:31:16 crc kubenswrapper[4712]: I0130 17:31:16.889769 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d82a705-906b-4d99-9db6-c91f774e0b77-utilities\") pod \"redhat-marketplace-crlk6\" (UID: \"0d82a705-906b-4d99-9db6-c91f774e0b77\") " pod="openshift-marketplace/redhat-marketplace-crlk6" Jan 30 17:31:16 crc kubenswrapper[4712]: I0130 17:31:16.889839 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d82a705-906b-4d99-9db6-c91f774e0b77-catalog-content\") pod \"redhat-marketplace-crlk6\" (UID: \"0d82a705-906b-4d99-9db6-c91f774e0b77\") " 
pod="openshift-marketplace/redhat-marketplace-crlk6" Jan 30 17:31:16 crc kubenswrapper[4712]: I0130 17:31:16.890562 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d82a705-906b-4d99-9db6-c91f774e0b77-catalog-content\") pod \"redhat-marketplace-crlk6\" (UID: \"0d82a705-906b-4d99-9db6-c91f774e0b77\") " pod="openshift-marketplace/redhat-marketplace-crlk6" Jan 30 17:31:16 crc kubenswrapper[4712]: I0130 17:31:16.890821 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d82a705-906b-4d99-9db6-c91f774e0b77-utilities\") pod \"redhat-marketplace-crlk6\" (UID: \"0d82a705-906b-4d99-9db6-c91f774e0b77\") " pod="openshift-marketplace/redhat-marketplace-crlk6" Jan 30 17:31:16 crc kubenswrapper[4712]: I0130 17:31:16.913539 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwtll\" (UniqueName: \"kubernetes.io/projected/0d82a705-906b-4d99-9db6-c91f774e0b77-kube-api-access-hwtll\") pod \"redhat-marketplace-crlk6\" (UID: \"0d82a705-906b-4d99-9db6-c91f774e0b77\") " pod="openshift-marketplace/redhat-marketplace-crlk6" Jan 30 17:31:16 crc kubenswrapper[4712]: I0130 17:31:16.986998 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-crlk6" Jan 30 17:31:17 crc kubenswrapper[4712]: I0130 17:31:17.563925 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-crlk6"] Jan 30 17:31:17 crc kubenswrapper[4712]: I0130 17:31:17.850594 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crlk6" event={"ID":"0d82a705-906b-4d99-9db6-c91f774e0b77","Type":"ContainerStarted","Data":"faf3ef7c2c16538b9172795ce4fc3eccbb509cca2e753590dcb653d043dc2087"} Jan 30 17:31:18 crc kubenswrapper[4712]: I0130 17:31:18.860419 4712 generic.go:334] "Generic (PLEG): container finished" podID="0d82a705-906b-4d99-9db6-c91f774e0b77" containerID="9728dcbdbaffd173667c34b879c1c387a0885ef58518c1259c127bf4e31a7fa6" exitCode=0 Jan 30 17:31:18 crc kubenswrapper[4712]: I0130 17:31:18.860468 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crlk6" event={"ID":"0d82a705-906b-4d99-9db6-c91f774e0b77","Type":"ContainerDied","Data":"9728dcbdbaffd173667c34b879c1c387a0885ef58518c1259c127bf4e31a7fa6"} Jan 30 17:31:19 crc kubenswrapper[4712]: I0130 17:31:19.871108 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crlk6" event={"ID":"0d82a705-906b-4d99-9db6-c91f774e0b77","Type":"ContainerStarted","Data":"a9bb8a3c415c8c03845b30f91ee5a12b7c9018154ddf577381597e1e7e7c6724"} Jan 30 17:31:20 crc kubenswrapper[4712]: I0130 17:31:20.881923 4712 generic.go:334] "Generic (PLEG): container finished" podID="0d82a705-906b-4d99-9db6-c91f774e0b77" containerID="a9bb8a3c415c8c03845b30f91ee5a12b7c9018154ddf577381597e1e7e7c6724" exitCode=0 Jan 30 17:31:20 crc kubenswrapper[4712]: I0130 17:31:20.881987 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crlk6" event={"ID":"0d82a705-906b-4d99-9db6-c91f774e0b77","Type":"ContainerDied","Data":"a9bb8a3c415c8c03845b30f91ee5a12b7c9018154ddf577381597e1e7e7c6724"} Jan 30 17:31:21 crc kubenswrapper[4712]: I0130 17:31:21.897655 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crlk6" 
event={"ID":"0d82a705-906b-4d99-9db6-c91f774e0b77","Type":"ContainerStarted","Data":"37a6e67b25d67defe6109ae3a403158c49761c1b2fa42e7a981fb53d7435c24c"} Jan 30 17:31:21 crc kubenswrapper[4712]: I0130 17:31:21.927820 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-crlk6" podStartSLOduration=3.33441339 podStartE2EDuration="5.927782387s" podCreationTimestamp="2026-01-30 17:31:16 +0000 UTC" firstStartedPulling="2026-01-30 17:31:18.86217263 +0000 UTC m=+2215.769182099" lastFinishedPulling="2026-01-30 17:31:21.455541627 +0000 UTC m=+2218.362551096" observedRunningTime="2026-01-30 17:31:21.923622464 +0000 UTC m=+2218.830631933" watchObservedRunningTime="2026-01-30 17:31:21.927782387 +0000 UTC m=+2218.834791856" Jan 30 17:31:26 crc kubenswrapper[4712]: I0130 17:31:26.988061 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-crlk6" Jan 30 17:31:26 crc kubenswrapper[4712]: I0130 17:31:26.988979 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-crlk6" Jan 30 17:31:27 crc kubenswrapper[4712]: I0130 17:31:27.038349 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-crlk6" Jan 30 17:31:28 crc kubenswrapper[4712]: I0130 17:31:28.001708 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-crlk6" Jan 30 17:31:28 crc kubenswrapper[4712]: I0130 17:31:28.075530 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-crlk6"] Jan 30 17:31:29 crc kubenswrapper[4712]: I0130 17:31:29.988577 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-crlk6" podUID="0d82a705-906b-4d99-9db6-c91f774e0b77" containerName="registry-server" containerID="cri-o://37a6e67b25d67defe6109ae3a403158c49761c1b2fa42e7a981fb53d7435c24c" gracePeriod=2 Jan 30 17:31:30 crc kubenswrapper[4712]: I0130 17:31:30.406355 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-crlk6" Jan 30 17:31:30 crc kubenswrapper[4712]: I0130 17:31:30.516928 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d82a705-906b-4d99-9db6-c91f774e0b77-catalog-content\") pod \"0d82a705-906b-4d99-9db6-c91f774e0b77\" (UID: \"0d82a705-906b-4d99-9db6-c91f774e0b77\") " Jan 30 17:31:30 crc kubenswrapper[4712]: I0130 17:31:30.517163 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d82a705-906b-4d99-9db6-c91f774e0b77-utilities\") pod \"0d82a705-906b-4d99-9db6-c91f774e0b77\" (UID: \"0d82a705-906b-4d99-9db6-c91f774e0b77\") " Jan 30 17:31:30 crc kubenswrapper[4712]: I0130 17:31:30.517219 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwtll\" (UniqueName: \"kubernetes.io/projected/0d82a705-906b-4d99-9db6-c91f774e0b77-kube-api-access-hwtll\") pod \"0d82a705-906b-4d99-9db6-c91f774e0b77\" (UID: \"0d82a705-906b-4d99-9db6-c91f774e0b77\") " Jan 30 17:31:30 crc kubenswrapper[4712]: I0130 17:31:30.518617 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d82a705-906b-4d99-9db6-c91f774e0b77-utilities" (OuterVolumeSpecName: "utilities") pod "0d82a705-906b-4d99-9db6-c91f774e0b77" (UID: "0d82a705-906b-4d99-9db6-c91f774e0b77"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:31:30 crc kubenswrapper[4712]: I0130 17:31:30.523647 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d82a705-906b-4d99-9db6-c91f774e0b77-kube-api-access-hwtll" (OuterVolumeSpecName: "kube-api-access-hwtll") pod "0d82a705-906b-4d99-9db6-c91f774e0b77" (UID: "0d82a705-906b-4d99-9db6-c91f774e0b77"). InnerVolumeSpecName "kube-api-access-hwtll". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:31:30 crc kubenswrapper[4712]: I0130 17:31:30.570108 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d82a705-906b-4d99-9db6-c91f774e0b77-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d82a705-906b-4d99-9db6-c91f774e0b77" (UID: "0d82a705-906b-4d99-9db6-c91f774e0b77"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:31:30 crc kubenswrapper[4712]: I0130 17:31:30.619493 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d82a705-906b-4d99-9db6-c91f774e0b77-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:31:30 crc kubenswrapper[4712]: I0130 17:31:30.619532 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwtll\" (UniqueName: \"kubernetes.io/projected/0d82a705-906b-4d99-9db6-c91f774e0b77-kube-api-access-hwtll\") on node \"crc\" DevicePath \"\"" Jan 30 17:31:30 crc kubenswrapper[4712]: I0130 17:31:30.619546 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d82a705-906b-4d99-9db6-c91f774e0b77-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:31:31 crc kubenswrapper[4712]: I0130 17:31:31.000270 4712 generic.go:334] "Generic (PLEG): container finished" podID="0d82a705-906b-4d99-9db6-c91f774e0b77" containerID="37a6e67b25d67defe6109ae3a403158c49761c1b2fa42e7a981fb53d7435c24c" exitCode=0 Jan 30 17:31:31 crc kubenswrapper[4712]: I0130 17:31:31.000316 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crlk6" event={"ID":"0d82a705-906b-4d99-9db6-c91f774e0b77","Type":"ContainerDied","Data":"37a6e67b25d67defe6109ae3a403158c49761c1b2fa42e7a981fb53d7435c24c"} Jan 30 17:31:31 crc kubenswrapper[4712]: I0130 17:31:31.000644 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-crlk6" event={"ID":"0d82a705-906b-4d99-9db6-c91f774e0b77","Type":"ContainerDied","Data":"faf3ef7c2c16538b9172795ce4fc3eccbb509cca2e753590dcb653d043dc2087"} Jan 30 17:31:31 crc kubenswrapper[4712]: I0130 17:31:31.000670 4712 scope.go:117] "RemoveContainer" containerID="37a6e67b25d67defe6109ae3a403158c49761c1b2fa42e7a981fb53d7435c24c" Jan 30 17:31:31 crc kubenswrapper[4712]: I0130 17:31:31.000378 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-crlk6" Jan 30 17:31:31 crc kubenswrapper[4712]: I0130 17:31:31.048987 4712 scope.go:117] "RemoveContainer" containerID="a9bb8a3c415c8c03845b30f91ee5a12b7c9018154ddf577381597e1e7e7c6724" Jan 30 17:31:31 crc kubenswrapper[4712]: I0130 17:31:31.049618 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-crlk6"] Jan 30 17:31:31 crc kubenswrapper[4712]: I0130 17:31:31.060880 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-crlk6"] Jan 30 17:31:31 crc kubenswrapper[4712]: I0130 17:31:31.082572 4712 scope.go:117] "RemoveContainer" containerID="9728dcbdbaffd173667c34b879c1c387a0885ef58518c1259c127bf4e31a7fa6" Jan 30 17:31:31 crc kubenswrapper[4712]: I0130 17:31:31.115965 4712 scope.go:117] "RemoveContainer" containerID="37a6e67b25d67defe6109ae3a403158c49761c1b2fa42e7a981fb53d7435c24c" Jan 30 17:31:31 crc kubenswrapper[4712]: E0130 17:31:31.116550 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37a6e67b25d67defe6109ae3a403158c49761c1b2fa42e7a981fb53d7435c24c\": container with ID starting with 37a6e67b25d67defe6109ae3a403158c49761c1b2fa42e7a981fb53d7435c24c not found: ID does not exist" containerID="37a6e67b25d67defe6109ae3a403158c49761c1b2fa42e7a981fb53d7435c24c" Jan 30 17:31:31 crc kubenswrapper[4712]: I0130 17:31:31.116609 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37a6e67b25d67defe6109ae3a403158c49761c1b2fa42e7a981fb53d7435c24c"} err="failed to get container status \"37a6e67b25d67defe6109ae3a403158c49761c1b2fa42e7a981fb53d7435c24c\": rpc error: code = NotFound desc = could not find container \"37a6e67b25d67defe6109ae3a403158c49761c1b2fa42e7a981fb53d7435c24c\": container with ID starting with 37a6e67b25d67defe6109ae3a403158c49761c1b2fa42e7a981fb53d7435c24c not found: ID does not exist" Jan 30 17:31:31 crc kubenswrapper[4712]: I0130 17:31:31.116641 4712 scope.go:117] "RemoveContainer" containerID="a9bb8a3c415c8c03845b30f91ee5a12b7c9018154ddf577381597e1e7e7c6724" Jan 30 17:31:31 crc kubenswrapper[4712]: E0130 17:31:31.117028 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9bb8a3c415c8c03845b30f91ee5a12b7c9018154ddf577381597e1e7e7c6724\": container with ID starting with a9bb8a3c415c8c03845b30f91ee5a12b7c9018154ddf577381597e1e7e7c6724 not found: ID does not exist" containerID="a9bb8a3c415c8c03845b30f91ee5a12b7c9018154ddf577381597e1e7e7c6724" Jan 30 17:31:31 crc kubenswrapper[4712]: I0130 17:31:31.117059 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9bb8a3c415c8c03845b30f91ee5a12b7c9018154ddf577381597e1e7e7c6724"} err="failed to get container status \"a9bb8a3c415c8c03845b30f91ee5a12b7c9018154ddf577381597e1e7e7c6724\": rpc error: code = NotFound desc = could not find container \"a9bb8a3c415c8c03845b30f91ee5a12b7c9018154ddf577381597e1e7e7c6724\": container with ID starting with a9bb8a3c415c8c03845b30f91ee5a12b7c9018154ddf577381597e1e7e7c6724 not found: ID does not exist" Jan 30 17:31:31 crc kubenswrapper[4712]: I0130 17:31:31.117075 4712 scope.go:117] "RemoveContainer" containerID="9728dcbdbaffd173667c34b879c1c387a0885ef58518c1259c127bf4e31a7fa6" Jan 30 17:31:31 crc kubenswrapper[4712]: E0130 17:31:31.117321 4712 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9728dcbdbaffd173667c34b879c1c387a0885ef58518c1259c127bf4e31a7fa6\": container with ID starting with 9728dcbdbaffd173667c34b879c1c387a0885ef58518c1259c127bf4e31a7fa6 not found: ID does not exist" containerID="9728dcbdbaffd173667c34b879c1c387a0885ef58518c1259c127bf4e31a7fa6" Jan 30 17:31:31 crc kubenswrapper[4712]: I0130 17:31:31.117356 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9728dcbdbaffd173667c34b879c1c387a0885ef58518c1259c127bf4e31a7fa6"} err="failed to get container status \"9728dcbdbaffd173667c34b879c1c387a0885ef58518c1259c127bf4e31a7fa6\": rpc error: code = NotFound desc = could not find container \"9728dcbdbaffd173667c34b879c1c387a0885ef58518c1259c127bf4e31a7fa6\": container with ID starting with 9728dcbdbaffd173667c34b879c1c387a0885ef58518c1259c127bf4e31a7fa6 not found: ID does not exist" Jan 30 17:31:31 crc kubenswrapper[4712]: I0130 17:31:31.813043 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d82a705-906b-4d99-9db6-c91f774e0b77" path="/var/lib/kubelet/pods/0d82a705-906b-4d99-9db6-c91f774e0b77/volumes" Jan 30 17:31:36 crc kubenswrapper[4712]: I0130 17:31:36.270873 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:31:36 crc kubenswrapper[4712]: I0130 17:31:36.271218 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:31:38 crc kubenswrapper[4712]: I0130 17:31:38.935972 4712 scope.go:117] "RemoveContainer" containerID="86f837b53bb45f244a56c9f0b76e59de96863d52e2babb4ea69db0df5bbb6e1c" Jan 30 17:31:48 crc kubenswrapper[4712]: I0130 17:31:48.173757 4712 generic.go:334] "Generic (PLEG): container finished" podID="900e21ae-3c90-4e70-90e5-fbe81a902929" containerID="a6c434cfce4448dfd0dc0846f887dfc446c798a1c77b9f040d503b502d9e9395" exitCode=0 Jan 30 17:31:48 crc kubenswrapper[4712]: I0130 17:31:48.173871 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f8mdz" event={"ID":"900e21ae-3c90-4e70-90e5-fbe81a902929","Type":"ContainerDied","Data":"a6c434cfce4448dfd0dc0846f887dfc446c798a1c77b9f040d503b502d9e9395"} Jan 30 17:31:49 crc kubenswrapper[4712]: I0130 17:31:49.569498 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f8mdz" Jan 30 17:31:49 crc kubenswrapper[4712]: I0130 17:31:49.743570 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-72shx\" (UniqueName: \"kubernetes.io/projected/900e21ae-3c90-4e70-90e5-fbe81a902929-kube-api-access-72shx\") pod \"900e21ae-3c90-4e70-90e5-fbe81a902929\" (UID: \"900e21ae-3c90-4e70-90e5-fbe81a902929\") " Jan 30 17:31:49 crc kubenswrapper[4712]: I0130 17:31:49.743709 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/900e21ae-3c90-4e70-90e5-fbe81a902929-inventory\") pod \"900e21ae-3c90-4e70-90e5-fbe81a902929\" (UID: \"900e21ae-3c90-4e70-90e5-fbe81a902929\") " Jan 30 17:31:49 crc kubenswrapper[4712]: I0130 17:31:49.743767 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/900e21ae-3c90-4e70-90e5-fbe81a902929-ssh-key-openstack-edpm-ipam\") pod \"900e21ae-3c90-4e70-90e5-fbe81a902929\" (UID: \"900e21ae-3c90-4e70-90e5-fbe81a902929\") " Jan 30 17:31:49 crc kubenswrapper[4712]: I0130 17:31:49.756044 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/900e21ae-3c90-4e70-90e5-fbe81a902929-kube-api-access-72shx" (OuterVolumeSpecName: "kube-api-access-72shx") pod "900e21ae-3c90-4e70-90e5-fbe81a902929" (UID: "900e21ae-3c90-4e70-90e5-fbe81a902929"). InnerVolumeSpecName "kube-api-access-72shx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:31:49 crc kubenswrapper[4712]: I0130 17:31:49.776699 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/900e21ae-3c90-4e70-90e5-fbe81a902929-inventory" (OuterVolumeSpecName: "inventory") pod "900e21ae-3c90-4e70-90e5-fbe81a902929" (UID: "900e21ae-3c90-4e70-90e5-fbe81a902929"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:31:49 crc kubenswrapper[4712]: I0130 17:31:49.779681 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/900e21ae-3c90-4e70-90e5-fbe81a902929-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "900e21ae-3c90-4e70-90e5-fbe81a902929" (UID: "900e21ae-3c90-4e70-90e5-fbe81a902929"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:31:49 crc kubenswrapper[4712]: I0130 17:31:49.847610 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-72shx\" (UniqueName: \"kubernetes.io/projected/900e21ae-3c90-4e70-90e5-fbe81a902929-kube-api-access-72shx\") on node \"crc\" DevicePath \"\"" Jan 30 17:31:49 crc kubenswrapper[4712]: I0130 17:31:49.847936 4712 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/900e21ae-3c90-4e70-90e5-fbe81a902929-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 17:31:49 crc kubenswrapper[4712]: I0130 17:31:49.848065 4712 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/900e21ae-3c90-4e70-90e5-fbe81a902929-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 17:31:50 crc kubenswrapper[4712]: I0130 17:31:50.192015 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f8mdz" event={"ID":"900e21ae-3c90-4e70-90e5-fbe81a902929","Type":"ContainerDied","Data":"696e5294d366d818f3bf90537c1ba0e644237770f7f0e73378dbdbbf2a8a5074"} Jan 30 17:31:50 crc kubenswrapper[4712]: I0130 17:31:50.192070 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="696e5294d366d818f3bf90537c1ba0e644237770f7f0e73378dbdbbf2a8a5074" Jan 30 17:31:50 crc kubenswrapper[4712]: I0130 17:31:50.192125 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-f8mdz" Jan 30 17:31:50 crc kubenswrapper[4712]: I0130 17:31:50.291992 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-8dq8h"] Jan 30 17:31:50 crc kubenswrapper[4712]: E0130 17:31:50.292468 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d82a705-906b-4d99-9db6-c91f774e0b77" containerName="extract-utilities" Jan 30 17:31:50 crc kubenswrapper[4712]: I0130 17:31:50.292491 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d82a705-906b-4d99-9db6-c91f774e0b77" containerName="extract-utilities" Jan 30 17:31:50 crc kubenswrapper[4712]: E0130 17:31:50.292508 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="900e21ae-3c90-4e70-90e5-fbe81a902929" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 30 17:31:50 crc kubenswrapper[4712]: I0130 17:31:50.292518 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="900e21ae-3c90-4e70-90e5-fbe81a902929" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 30 17:31:50 crc kubenswrapper[4712]: E0130 17:31:50.292567 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d82a705-906b-4d99-9db6-c91f774e0b77" containerName="registry-server" Jan 30 17:31:50 crc kubenswrapper[4712]: I0130 17:31:50.292576 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d82a705-906b-4d99-9db6-c91f774e0b77" containerName="registry-server" Jan 30 17:31:50 crc kubenswrapper[4712]: E0130 17:31:50.292591 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d82a705-906b-4d99-9db6-c91f774e0b77" containerName="extract-content" Jan 30 17:31:50 crc kubenswrapper[4712]: I0130 17:31:50.292598 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d82a705-906b-4d99-9db6-c91f774e0b77" containerName="extract-content" Jan 30 17:31:50 crc kubenswrapper[4712]: I0130 17:31:50.292810 4712 
memory_manager.go:354] "RemoveStaleState removing state" podUID="900e21ae-3c90-4e70-90e5-fbe81a902929" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 30 17:31:50 crc kubenswrapper[4712]: I0130 17:31:50.292833 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d82a705-906b-4d99-9db6-c91f774e0b77" containerName="registry-server" Jan 30 17:31:50 crc kubenswrapper[4712]: I0130 17:31:50.293631 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-8dq8h" Jan 30 17:31:50 crc kubenswrapper[4712]: I0130 17:31:50.295742 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t6jfh" Jan 30 17:31:50 crc kubenswrapper[4712]: I0130 17:31:50.295956 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 17:31:50 crc kubenswrapper[4712]: I0130 17:31:50.296643 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 17:31:50 crc kubenswrapper[4712]: I0130 17:31:50.296866 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 17:31:50 crc kubenswrapper[4712]: I0130 17:31:50.318769 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-8dq8h"] Jan 30 17:31:50 crc kubenswrapper[4712]: I0130 17:31:50.355277 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/91e0c680-dd16-41a4-9a12-59cf6d36151c-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-8dq8h\" (UID: \"91e0c680-dd16-41a4-9a12-59cf6d36151c\") " pod="openstack/ssh-known-hosts-edpm-deployment-8dq8h" Jan 30 17:31:50 crc kubenswrapper[4712]: I0130 17:31:50.355376 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5pvd\" (UniqueName: \"kubernetes.io/projected/91e0c680-dd16-41a4-9a12-59cf6d36151c-kube-api-access-d5pvd\") pod \"ssh-known-hosts-edpm-deployment-8dq8h\" (UID: \"91e0c680-dd16-41a4-9a12-59cf6d36151c\") " pod="openstack/ssh-known-hosts-edpm-deployment-8dq8h" Jan 30 17:31:50 crc kubenswrapper[4712]: I0130 17:31:50.355481 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/91e0c680-dd16-41a4-9a12-59cf6d36151c-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-8dq8h\" (UID: \"91e0c680-dd16-41a4-9a12-59cf6d36151c\") " pod="openstack/ssh-known-hosts-edpm-deployment-8dq8h" Jan 30 17:31:50 crc kubenswrapper[4712]: I0130 17:31:50.456745 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/91e0c680-dd16-41a4-9a12-59cf6d36151c-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-8dq8h\" (UID: \"91e0c680-dd16-41a4-9a12-59cf6d36151c\") " pod="openstack/ssh-known-hosts-edpm-deployment-8dq8h" Jan 30 17:31:50 crc kubenswrapper[4712]: I0130 17:31:50.456866 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/91e0c680-dd16-41a4-9a12-59cf6d36151c-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-8dq8h\" (UID: \"91e0c680-dd16-41a4-9a12-59cf6d36151c\") " 
pod="openstack/ssh-known-hosts-edpm-deployment-8dq8h" Jan 30 17:31:50 crc kubenswrapper[4712]: I0130 17:31:50.456993 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5pvd\" (UniqueName: \"kubernetes.io/projected/91e0c680-dd16-41a4-9a12-59cf6d36151c-kube-api-access-d5pvd\") pod \"ssh-known-hosts-edpm-deployment-8dq8h\" (UID: \"91e0c680-dd16-41a4-9a12-59cf6d36151c\") " pod="openstack/ssh-known-hosts-edpm-deployment-8dq8h" Jan 30 17:31:50 crc kubenswrapper[4712]: I0130 17:31:50.478984 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/91e0c680-dd16-41a4-9a12-59cf6d36151c-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-8dq8h\" (UID: \"91e0c680-dd16-41a4-9a12-59cf6d36151c\") " pod="openstack/ssh-known-hosts-edpm-deployment-8dq8h" Jan 30 17:31:50 crc kubenswrapper[4712]: I0130 17:31:50.482346 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/91e0c680-dd16-41a4-9a12-59cf6d36151c-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-8dq8h\" (UID: \"91e0c680-dd16-41a4-9a12-59cf6d36151c\") " pod="openstack/ssh-known-hosts-edpm-deployment-8dq8h" Jan 30 17:31:50 crc kubenswrapper[4712]: I0130 17:31:50.521019 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5pvd\" (UniqueName: \"kubernetes.io/projected/91e0c680-dd16-41a4-9a12-59cf6d36151c-kube-api-access-d5pvd\") pod \"ssh-known-hosts-edpm-deployment-8dq8h\" (UID: \"91e0c680-dd16-41a4-9a12-59cf6d36151c\") " pod="openstack/ssh-known-hosts-edpm-deployment-8dq8h" Jan 30 17:31:50 crc kubenswrapper[4712]: I0130 17:31:50.612856 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-8dq8h" Jan 30 17:31:51 crc kubenswrapper[4712]: I0130 17:31:51.111668 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-8dq8h"] Jan 30 17:31:51 crc kubenswrapper[4712]: I0130 17:31:51.200677 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-8dq8h" event={"ID":"91e0c680-dd16-41a4-9a12-59cf6d36151c","Type":"ContainerStarted","Data":"c97d2a36cc852865bc4ec1495785553023c64328d6feeeeaa0355a18718bff54"} Jan 30 17:31:52 crc kubenswrapper[4712]: I0130 17:31:52.210252 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-8dq8h" event={"ID":"91e0c680-dd16-41a4-9a12-59cf6d36151c","Type":"ContainerStarted","Data":"0973bd46ac4baef0f6b3edacd7696c0125f8836e04896ae6320d8fa2f9ee3bf8"} Jan 30 17:31:52 crc kubenswrapper[4712]: I0130 17:31:52.228494 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-8dq8h" podStartSLOduration=1.769129656 podStartE2EDuration="2.228474928s" podCreationTimestamp="2026-01-30 17:31:50 +0000 UTC" firstStartedPulling="2026-01-30 17:31:51.119269586 +0000 UTC m=+2248.026279055" lastFinishedPulling="2026-01-30 17:31:51.578614858 +0000 UTC m=+2248.485624327" observedRunningTime="2026-01-30 17:31:52.222736491 +0000 UTC m=+2249.129745980" watchObservedRunningTime="2026-01-30 17:31:52.228474928 +0000 UTC m=+2249.135484397" Jan 30 17:31:59 crc kubenswrapper[4712]: I0130 17:31:59.285725 4712 generic.go:334] "Generic (PLEG): container finished" podID="91e0c680-dd16-41a4-9a12-59cf6d36151c" containerID="0973bd46ac4baef0f6b3edacd7696c0125f8836e04896ae6320d8fa2f9ee3bf8" exitCode=0 Jan 30 17:31:59 crc kubenswrapper[4712]: I0130 17:31:59.285842 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-8dq8h" event={"ID":"91e0c680-dd16-41a4-9a12-59cf6d36151c","Type":"ContainerDied","Data":"0973bd46ac4baef0f6b3edacd7696c0125f8836e04896ae6320d8fa2f9ee3bf8"} Jan 30 17:32:00 crc kubenswrapper[4712]: I0130 17:32:00.714218 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-8dq8h" Jan 30 17:32:00 crc kubenswrapper[4712]: I0130 17:32:00.841904 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d5pvd\" (UniqueName: \"kubernetes.io/projected/91e0c680-dd16-41a4-9a12-59cf6d36151c-kube-api-access-d5pvd\") pod \"91e0c680-dd16-41a4-9a12-59cf6d36151c\" (UID: \"91e0c680-dd16-41a4-9a12-59cf6d36151c\") " Jan 30 17:32:00 crc kubenswrapper[4712]: I0130 17:32:00.842000 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/91e0c680-dd16-41a4-9a12-59cf6d36151c-inventory-0\") pod \"91e0c680-dd16-41a4-9a12-59cf6d36151c\" (UID: \"91e0c680-dd16-41a4-9a12-59cf6d36151c\") " Jan 30 17:32:00 crc kubenswrapper[4712]: I0130 17:32:00.843024 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/91e0c680-dd16-41a4-9a12-59cf6d36151c-ssh-key-openstack-edpm-ipam\") pod \"91e0c680-dd16-41a4-9a12-59cf6d36151c\" (UID: \"91e0c680-dd16-41a4-9a12-59cf6d36151c\") " Jan 30 17:32:00 crc kubenswrapper[4712]: I0130 17:32:00.847629 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91e0c680-dd16-41a4-9a12-59cf6d36151c-kube-api-access-d5pvd" (OuterVolumeSpecName: "kube-api-access-d5pvd") pod "91e0c680-dd16-41a4-9a12-59cf6d36151c" (UID: "91e0c680-dd16-41a4-9a12-59cf6d36151c"). InnerVolumeSpecName "kube-api-access-d5pvd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:32:00 crc kubenswrapper[4712]: I0130 17:32:00.872131 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91e0c680-dd16-41a4-9a12-59cf6d36151c-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "91e0c680-dd16-41a4-9a12-59cf6d36151c" (UID: "91e0c680-dd16-41a4-9a12-59cf6d36151c"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:32:00 crc kubenswrapper[4712]: I0130 17:32:00.875035 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91e0c680-dd16-41a4-9a12-59cf6d36151c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "91e0c680-dd16-41a4-9a12-59cf6d36151c" (UID: "91e0c680-dd16-41a4-9a12-59cf6d36151c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:32:00 crc kubenswrapper[4712]: I0130 17:32:00.946229 4712 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/91e0c680-dd16-41a4-9a12-59cf6d36151c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 17:32:00 crc kubenswrapper[4712]: I0130 17:32:00.946527 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d5pvd\" (UniqueName: \"kubernetes.io/projected/91e0c680-dd16-41a4-9a12-59cf6d36151c-kube-api-access-d5pvd\") on node \"crc\" DevicePath \"\"" Jan 30 17:32:00 crc kubenswrapper[4712]: I0130 17:32:00.946632 4712 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/91e0c680-dd16-41a4-9a12-59cf6d36151c-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 30 17:32:01 crc kubenswrapper[4712]: I0130 17:32:01.303144 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-8dq8h" event={"ID":"91e0c680-dd16-41a4-9a12-59cf6d36151c","Type":"ContainerDied","Data":"c97d2a36cc852865bc4ec1495785553023c64328d6feeeeaa0355a18718bff54"} Jan 30 17:32:01 crc kubenswrapper[4712]: I0130 17:32:01.303189 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-8dq8h" Jan 30 17:32:01 crc kubenswrapper[4712]: I0130 17:32:01.303197 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c97d2a36cc852865bc4ec1495785553023c64328d6feeeeaa0355a18718bff54" Jan 30 17:32:01 crc kubenswrapper[4712]: I0130 17:32:01.398333 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-7q5g5"] Jan 30 17:32:01 crc kubenswrapper[4712]: E0130 17:32:01.399017 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91e0c680-dd16-41a4-9a12-59cf6d36151c" containerName="ssh-known-hosts-edpm-deployment" Jan 30 17:32:01 crc kubenswrapper[4712]: I0130 17:32:01.399142 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="91e0c680-dd16-41a4-9a12-59cf6d36151c" containerName="ssh-known-hosts-edpm-deployment" Jan 30 17:32:01 crc kubenswrapper[4712]: I0130 17:32:01.399457 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="91e0c680-dd16-41a4-9a12-59cf6d36151c" containerName="ssh-known-hosts-edpm-deployment" Jan 30 17:32:01 crc kubenswrapper[4712]: I0130 17:32:01.400382 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7q5g5" Jan 30 17:32:01 crc kubenswrapper[4712]: I0130 17:32:01.403010 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 17:32:01 crc kubenswrapper[4712]: I0130 17:32:01.404853 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 17:32:01 crc kubenswrapper[4712]: I0130 17:32:01.405192 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t6jfh" Jan 30 17:32:01 crc kubenswrapper[4712]: I0130 17:32:01.407146 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 17:32:01 crc kubenswrapper[4712]: I0130 17:32:01.409679 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-7q5g5"] Jan 30 17:32:01 crc kubenswrapper[4712]: I0130 17:32:01.560006 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c49e3a4-cabe-47df-aa07-12276d5aa590-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7q5g5\" (UID: \"0c49e3a4-cabe-47df-aa07-12276d5aa590\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7q5g5" Jan 30 17:32:01 crc kubenswrapper[4712]: I0130 17:32:01.560536 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9twz\" (UniqueName: \"kubernetes.io/projected/0c49e3a4-cabe-47df-aa07-12276d5aa590-kube-api-access-m9twz\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7q5g5\" (UID: \"0c49e3a4-cabe-47df-aa07-12276d5aa590\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7q5g5" Jan 30 17:32:01 crc kubenswrapper[4712]: I0130 17:32:01.560710 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c49e3a4-cabe-47df-aa07-12276d5aa590-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7q5g5\" (UID: \"0c49e3a4-cabe-47df-aa07-12276d5aa590\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7q5g5" Jan 30 17:32:01 crc kubenswrapper[4712]: I0130 17:32:01.662649 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c49e3a4-cabe-47df-aa07-12276d5aa590-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7q5g5\" (UID: \"0c49e3a4-cabe-47df-aa07-12276d5aa590\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7q5g5" Jan 30 17:32:01 crc kubenswrapper[4712]: I0130 17:32:01.662741 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9twz\" (UniqueName: \"kubernetes.io/projected/0c49e3a4-cabe-47df-aa07-12276d5aa590-kube-api-access-m9twz\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7q5g5\" (UID: \"0c49e3a4-cabe-47df-aa07-12276d5aa590\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7q5g5" Jan 30 17:32:01 crc kubenswrapper[4712]: I0130 17:32:01.662782 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c49e3a4-cabe-47df-aa07-12276d5aa590-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-7q5g5\" (UID: \"0c49e3a4-cabe-47df-aa07-12276d5aa590\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7q5g5" Jan 30 17:32:01 crc kubenswrapper[4712]: I0130 17:32:01.670805 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c49e3a4-cabe-47df-aa07-12276d5aa590-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7q5g5\" (UID: \"0c49e3a4-cabe-47df-aa07-12276d5aa590\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7q5g5" Jan 30 17:32:01 crc kubenswrapper[4712]: I0130 17:32:01.671071 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c49e3a4-cabe-47df-aa07-12276d5aa590-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7q5g5\" (UID: \"0c49e3a4-cabe-47df-aa07-12276d5aa590\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7q5g5" Jan 30 17:32:01 crc kubenswrapper[4712]: I0130 17:32:01.681489 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9twz\" (UniqueName: \"kubernetes.io/projected/0c49e3a4-cabe-47df-aa07-12276d5aa590-kube-api-access-m9twz\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-7q5g5\" (UID: \"0c49e3a4-cabe-47df-aa07-12276d5aa590\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7q5g5" Jan 30 17:32:01 crc kubenswrapper[4712]: I0130 17:32:01.727176 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7q5g5" Jan 30 17:32:02 crc kubenswrapper[4712]: I0130 17:32:02.278134 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-7q5g5"] Jan 30 17:32:02 crc kubenswrapper[4712]: I0130 17:32:02.313412 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7q5g5" event={"ID":"0c49e3a4-cabe-47df-aa07-12276d5aa590","Type":"ContainerStarted","Data":"d04ce88e005f62f4554065247c885ae8ece492fce58a20c56bbdf7d37fca7b47"} Jan 30 17:32:03 crc kubenswrapper[4712]: I0130 17:32:03.323972 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7q5g5" event={"ID":"0c49e3a4-cabe-47df-aa07-12276d5aa590","Type":"ContainerStarted","Data":"a9fcfe21af07b8a7c535dec7c6a80da3ec388fb1912a929159274ca12dd3d9cc"} Jan 30 17:32:03 crc kubenswrapper[4712]: I0130 17:32:03.344017 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7q5g5" podStartSLOduration=1.947215674 podStartE2EDuration="2.343997031s" podCreationTimestamp="2026-01-30 17:32:01 +0000 UTC" firstStartedPulling="2026-01-30 17:32:02.287951864 +0000 UTC m=+2259.194961333" lastFinishedPulling="2026-01-30 17:32:02.684733221 +0000 UTC m=+2259.591742690" observedRunningTime="2026-01-30 17:32:03.340285798 +0000 UTC m=+2260.247295277" watchObservedRunningTime="2026-01-30 17:32:03.343997031 +0000 UTC m=+2260.251006520" Jan 30 17:32:06 crc kubenswrapper[4712]: I0130 17:32:06.271859 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:32:06 crc kubenswrapper[4712]: 
I0130 17:32:06.272241 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:32:10 crc kubenswrapper[4712]: I0130 17:32:10.395431 4712 generic.go:334] "Generic (PLEG): container finished" podID="0c49e3a4-cabe-47df-aa07-12276d5aa590" containerID="a9fcfe21af07b8a7c535dec7c6a80da3ec388fb1912a929159274ca12dd3d9cc" exitCode=0 Jan 30 17:32:10 crc kubenswrapper[4712]: I0130 17:32:10.395575 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7q5g5" event={"ID":"0c49e3a4-cabe-47df-aa07-12276d5aa590","Type":"ContainerDied","Data":"a9fcfe21af07b8a7c535dec7c6a80da3ec388fb1912a929159274ca12dd3d9cc"} Jan 30 17:32:11 crc kubenswrapper[4712]: I0130 17:32:11.827277 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7q5g5" Jan 30 17:32:11 crc kubenswrapper[4712]: I0130 17:32:11.954484 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c49e3a4-cabe-47df-aa07-12276d5aa590-inventory\") pod \"0c49e3a4-cabe-47df-aa07-12276d5aa590\" (UID: \"0c49e3a4-cabe-47df-aa07-12276d5aa590\") " Jan 30 17:32:11 crc kubenswrapper[4712]: I0130 17:32:11.954600 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9twz\" (UniqueName: \"kubernetes.io/projected/0c49e3a4-cabe-47df-aa07-12276d5aa590-kube-api-access-m9twz\") pod \"0c49e3a4-cabe-47df-aa07-12276d5aa590\" (UID: \"0c49e3a4-cabe-47df-aa07-12276d5aa590\") " Jan 30 17:32:11 crc kubenswrapper[4712]: I0130 17:32:11.954695 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c49e3a4-cabe-47df-aa07-12276d5aa590-ssh-key-openstack-edpm-ipam\") pod \"0c49e3a4-cabe-47df-aa07-12276d5aa590\" (UID: \"0c49e3a4-cabe-47df-aa07-12276d5aa590\") " Jan 30 17:32:11 crc kubenswrapper[4712]: I0130 17:32:11.960715 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c49e3a4-cabe-47df-aa07-12276d5aa590-kube-api-access-m9twz" (OuterVolumeSpecName: "kube-api-access-m9twz") pod "0c49e3a4-cabe-47df-aa07-12276d5aa590" (UID: "0c49e3a4-cabe-47df-aa07-12276d5aa590"). InnerVolumeSpecName "kube-api-access-m9twz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:32:11 crc kubenswrapper[4712]: I0130 17:32:11.987480 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c49e3a4-cabe-47df-aa07-12276d5aa590-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "0c49e3a4-cabe-47df-aa07-12276d5aa590" (UID: "0c49e3a4-cabe-47df-aa07-12276d5aa590"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:32:11 crc kubenswrapper[4712]: I0130 17:32:11.988349 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c49e3a4-cabe-47df-aa07-12276d5aa590-inventory" (OuterVolumeSpecName: "inventory") pod "0c49e3a4-cabe-47df-aa07-12276d5aa590" (UID: "0c49e3a4-cabe-47df-aa07-12276d5aa590"). InnerVolumeSpecName "inventory". 
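
The machine-config-daemon liveness failures logged above come down to a plain HTTP GET against 127.0.0.1:8798/health being refused. A minimal sketch of that kind of probe check, assuming Go's standard net/http and the kubelet's usual 2xx/3xx-is-healthy convention (the endpoint is taken from the log; the 1s timeout is an assumed value, not the pod's configured probe timeout):

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{Timeout: 1 * time.Second}
    	resp, err := client.Get("http://127.0.0.1:8798/health")
    	if err != nil {
    		// A refused connection surfaces here, matching the logged
    		// "dial tcp 127.0.0.1:8798: connect: connection refused".
    		fmt.Println("probe failure:", err)
    		return
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
    		fmt.Println("probe success:", resp.Status)
    	} else {
    		fmt.Println("probe failure:", resp.Status)
    	}
    }

Run against a process that has stopped listening, this prints the same "connect: connection refused" error the prober records.
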
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:32:12 crc kubenswrapper[4712]: I0130 17:32:12.058261 4712 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0c49e3a4-cabe-47df-aa07-12276d5aa590-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 17:32:12 crc kubenswrapper[4712]: I0130 17:32:12.058303 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9twz\" (UniqueName: \"kubernetes.io/projected/0c49e3a4-cabe-47df-aa07-12276d5aa590-kube-api-access-m9twz\") on node \"crc\" DevicePath \"\"" Jan 30 17:32:12 crc kubenswrapper[4712]: I0130 17:32:12.058315 4712 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/0c49e3a4-cabe-47df-aa07-12276d5aa590-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 17:32:12 crc kubenswrapper[4712]: I0130 17:32:12.413911 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7q5g5" event={"ID":"0c49e3a4-cabe-47df-aa07-12276d5aa590","Type":"ContainerDied","Data":"d04ce88e005f62f4554065247c885ae8ece492fce58a20c56bbdf7d37fca7b47"} Jan 30 17:32:12 crc kubenswrapper[4712]: I0130 17:32:12.414275 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d04ce88e005f62f4554065247c885ae8ece492fce58a20c56bbdf7d37fca7b47" Jan 30 17:32:12 crc kubenswrapper[4712]: I0130 17:32:12.413958 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-7q5g5" Jan 30 17:32:12 crc kubenswrapper[4712]: I0130 17:32:12.491586 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pwptv"] Jan 30 17:32:12 crc kubenswrapper[4712]: E0130 17:32:12.492211 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c49e3a4-cabe-47df-aa07-12276d5aa590" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 30 17:32:12 crc kubenswrapper[4712]: I0130 17:32:12.492238 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c49e3a4-cabe-47df-aa07-12276d5aa590" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 30 17:32:12 crc kubenswrapper[4712]: I0130 17:32:12.492480 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c49e3a4-cabe-47df-aa07-12276d5aa590" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 30 17:32:12 crc kubenswrapper[4712]: I0130 17:32:12.493154 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pwptv" Jan 30 17:32:12 crc kubenswrapper[4712]: I0130 17:32:12.495282 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t6jfh" Jan 30 17:32:12 crc kubenswrapper[4712]: I0130 17:32:12.495347 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 17:32:12 crc kubenswrapper[4712]: I0130 17:32:12.495943 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 17:32:12 crc kubenswrapper[4712]: I0130 17:32:12.496222 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 17:32:12 crc kubenswrapper[4712]: I0130 17:32:12.508338 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pwptv"] Jan 30 17:32:12 crc kubenswrapper[4712]: I0130 17:32:12.566050 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbvhb\" (UniqueName: \"kubernetes.io/projected/943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6-kube-api-access-rbvhb\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pwptv\" (UID: \"943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pwptv" Jan 30 17:32:12 crc kubenswrapper[4712]: I0130 17:32:12.566178 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pwptv\" (UID: \"943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pwptv" Jan 30 17:32:12 crc kubenswrapper[4712]: I0130 17:32:12.566240 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pwptv\" (UID: \"943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pwptv" Jan 30 17:32:12 crc kubenswrapper[4712]: I0130 17:32:12.667240 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbvhb\" (UniqueName: \"kubernetes.io/projected/943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6-kube-api-access-rbvhb\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pwptv\" (UID: \"943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pwptv" Jan 30 17:32:12 crc kubenswrapper[4712]: I0130 17:32:12.667302 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pwptv\" (UID: \"943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pwptv" Jan 30 17:32:12 crc kubenswrapper[4712]: I0130 17:32:12.667359 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6-ssh-key-openstack-edpm-ipam\") pod 
\"reboot-os-edpm-deployment-openstack-edpm-ipam-pwptv\" (UID: \"943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pwptv" Jan 30 17:32:12 crc kubenswrapper[4712]: I0130 17:32:12.676537 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pwptv\" (UID: \"943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pwptv" Jan 30 17:32:12 crc kubenswrapper[4712]: I0130 17:32:12.677287 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pwptv\" (UID: \"943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pwptv" Jan 30 17:32:12 crc kubenswrapper[4712]: I0130 17:32:12.699419 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbvhb\" (UniqueName: \"kubernetes.io/projected/943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6-kube-api-access-rbvhb\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pwptv\" (UID: \"943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pwptv" Jan 30 17:32:12 crc kubenswrapper[4712]: I0130 17:32:12.813230 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pwptv" Jan 30 17:32:13 crc kubenswrapper[4712]: I0130 17:32:13.332699 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pwptv"] Jan 30 17:32:13 crc kubenswrapper[4712]: I0130 17:32:13.424051 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pwptv" event={"ID":"943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6","Type":"ContainerStarted","Data":"6f9040f625ab76548d6097d5d1a204b184b406507758d3b3a6c3068e3b45f375"} Jan 30 17:32:14 crc kubenswrapper[4712]: I0130 17:32:14.437380 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pwptv" event={"ID":"943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6","Type":"ContainerStarted","Data":"c6c0acb96048ff75fc28dd5538b36d0450dff94965cd4ea78d040f0dc48c768b"} Jan 30 17:32:14 crc kubenswrapper[4712]: I0130 17:32:14.452532 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pwptv" podStartSLOduration=1.898477384 podStartE2EDuration="2.452510978s" podCreationTimestamp="2026-01-30 17:32:12 +0000 UTC" firstStartedPulling="2026-01-30 17:32:13.335683875 +0000 UTC m=+2270.242693344" lastFinishedPulling="2026-01-30 17:32:13.889717469 +0000 UTC m=+2270.796726938" observedRunningTime="2026-01-30 17:32:14.449611034 +0000 UTC m=+2271.356620523" watchObservedRunningTime="2026-01-30 17:32:14.452510978 +0000 UTC m=+2271.359520447" Jan 30 17:32:23 crc kubenswrapper[4712]: I0130 17:32:23.510568 4712 generic.go:334] "Generic (PLEG): container finished" podID="943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6" containerID="c6c0acb96048ff75fc28dd5538b36d0450dff94965cd4ea78d040f0dc48c768b" exitCode=0 Jan 30 17:32:23 crc kubenswrapper[4712]: I0130 17:32:23.510695 4712 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pwptv" event={"ID":"943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6","Type":"ContainerDied","Data":"c6c0acb96048ff75fc28dd5538b36d0450dff94965cd4ea78d040f0dc48c768b"} Jan 30 17:32:24 crc kubenswrapper[4712]: I0130 17:32:24.922766 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pwptv" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.067216 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6-inventory\") pod \"943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6\" (UID: \"943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6\") " Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.067368 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbvhb\" (UniqueName: \"kubernetes.io/projected/943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6-kube-api-access-rbvhb\") pod \"943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6\" (UID: \"943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6\") " Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.067495 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6-ssh-key-openstack-edpm-ipam\") pod \"943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6\" (UID: \"943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6\") " Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.072673 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6-kube-api-access-rbvhb" (OuterVolumeSpecName: "kube-api-access-rbvhb") pod "943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6" (UID: "943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6"). InnerVolumeSpecName "kube-api-access-rbvhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.097663 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6" (UID: "943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.100931 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6-inventory" (OuterVolumeSpecName: "inventory") pod "943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6" (UID: "943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.169129 4712 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.169165 4712 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.169175 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbvhb\" (UniqueName: \"kubernetes.io/projected/943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6-kube-api-access-rbvhb\") on node \"crc\" DevicePath \"\"" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.539016 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pwptv" event={"ID":"943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6","Type":"ContainerDied","Data":"6f9040f625ab76548d6097d5d1a204b184b406507758d3b3a6c3068e3b45f375"} Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.539081 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f9040f625ab76548d6097d5d1a204b184b406507758d3b3a6c3068e3b45f375" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.539217 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pwptv" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.614218 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth"] Jan 30 17:32:25 crc kubenswrapper[4712]: E0130 17:32:25.614610 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.614627 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.614858 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.615604 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.621831 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.622577 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.622787 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.623039 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.623175 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t6jfh" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.623259 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.623295 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.623266 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.650460 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth"] Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.786420 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnth\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.786488 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnth\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.786530 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnth\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.786637 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7hgv\" (UniqueName: \"kubernetes.io/projected/1273af18-d0dd-4c8e-a454-097a3f00110d-kube-api-access-n7hgv\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-qsnth\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.786869 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnth\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.786928 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1273af18-d0dd-4c8e-a454-097a3f00110d-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnth\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.786952 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnth\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.787024 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnth\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.787073 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnth\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.787113 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1273af18-d0dd-4c8e-a454-097a3f00110d-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnth\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.787266 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnth\" (UID: 
\"1273af18-d0dd-4c8e-a454-097a3f00110d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.787326 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1273af18-d0dd-4c8e-a454-097a3f00110d-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnth\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.787389 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1273af18-d0dd-4c8e-a454-097a3f00110d-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnth\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.787459 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnth\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.888923 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnth\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.889992 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1273af18-d0dd-4c8e-a454-097a3f00110d-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnth\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.890416 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnth\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.890477 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnth\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 
17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.890511 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1273af18-d0dd-4c8e-a454-097a3f00110d-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnth\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.890540 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnth\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.890598 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1273af18-d0dd-4c8e-a454-097a3f00110d-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnth\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.890624 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnth\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.890669 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1273af18-d0dd-4c8e-a454-097a3f00110d-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnth\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.890715 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnth\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.890781 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnth\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.890835 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnth\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.890860 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnth\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.890889 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7hgv\" (UniqueName: \"kubernetes.io/projected/1273af18-d0dd-4c8e-a454-097a3f00110d-kube-api-access-n7hgv\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnth\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.895477 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnth\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.896311 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnth\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.899441 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1273af18-d0dd-4c8e-a454-097a3f00110d-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnth\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.901272 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnth\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.902729 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1273af18-d0dd-4c8e-a454-097a3f00110d-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnth\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 17:32:25 crc 
kubenswrapper[4712]: I0130 17:32:25.903664 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1273af18-d0dd-4c8e-a454-097a3f00110d-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnth\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.904896 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnth\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.905494 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnth\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.906791 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnth\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.907132 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnth\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.907277 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnth\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.907617 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnth\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.910159 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1273af18-d0dd-4c8e-a454-097a3f00110d-openstack-edpm-ipam-telemetry-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-qsnth\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.913458 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7hgv\" (UniqueName: \"kubernetes.io/projected/1273af18-d0dd-4c8e-a454-097a3f00110d-kube-api-access-n7hgv\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-qsnth\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 17:32:25 crc kubenswrapper[4712]: I0130 17:32:25.944047 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" Jan 30 17:32:26 crc kubenswrapper[4712]: I0130 17:32:26.509792 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth"] Jan 30 17:32:26 crc kubenswrapper[4712]: W0130 17:32:26.515761 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1273af18_d0dd_4c8e_a454_097a3f00110d.slice/crio-d2217bfe4b02b69ce53bfdab179f9c10b01dcb7778e27c8d8f3f54f8241f63f7 WatchSource:0}: Error finding container d2217bfe4b02b69ce53bfdab179f9c10b01dcb7778e27c8d8f3f54f8241f63f7: Status 404 returned error can't find the container with id d2217bfe4b02b69ce53bfdab179f9c10b01dcb7778e27c8d8f3f54f8241f63f7 Jan 30 17:32:26 crc kubenswrapper[4712]: I0130 17:32:26.547393 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" event={"ID":"1273af18-d0dd-4c8e-a454-097a3f00110d","Type":"ContainerStarted","Data":"d2217bfe4b02b69ce53bfdab179f9c10b01dcb7778e27c8d8f3f54f8241f63f7"} Jan 30 17:32:28 crc kubenswrapper[4712]: I0130 17:32:28.564963 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" event={"ID":"1273af18-d0dd-4c8e-a454-097a3f00110d","Type":"ContainerStarted","Data":"00fbb83f09ac4fad663de13dbec7103db6d32f7143127e74020eebb2b5b64ae4"} Jan 30 17:32:28 crc kubenswrapper[4712]: I0130 17:32:28.590554 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" podStartSLOduration=2.7658783380000003 podStartE2EDuration="3.590536287s" podCreationTimestamp="2026-01-30 17:32:25 +0000 UTC" firstStartedPulling="2026-01-30 17:32:26.519521446 +0000 UTC m=+2283.426530935" lastFinishedPulling="2026-01-30 17:32:27.344179375 +0000 UTC m=+2284.251188884" observedRunningTime="2026-01-30 17:32:28.58759909 +0000 UTC m=+2285.494608569" watchObservedRunningTime="2026-01-30 17:32:28.590536287 +0000 UTC m=+2285.497545756" Jan 30 17:32:30 crc kubenswrapper[4712]: I0130 17:32:30.943590 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-24gg5"] Jan 30 17:32:30 crc kubenswrapper[4712]: I0130 17:32:30.945743 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-24gg5"
Jan 30 17:32:30 crc kubenswrapper[4712]: I0130 17:32:30.967396 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-24gg5"]
Jan 30 17:32:31 crc kubenswrapper[4712]: I0130 17:32:31.099497 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00139aba-fce1-4e1a-9f70-8e753c18ba7c-utilities\") pod \"community-operators-24gg5\" (UID: \"00139aba-fce1-4e1a-9f70-8e753c18ba7c\") " pod="openshift-marketplace/community-operators-24gg5"
Jan 30 17:32:31 crc kubenswrapper[4712]: I0130 17:32:31.099813 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlpm4\" (UniqueName: \"kubernetes.io/projected/00139aba-fce1-4e1a-9f70-8e753c18ba7c-kube-api-access-hlpm4\") pod \"community-operators-24gg5\" (UID: \"00139aba-fce1-4e1a-9f70-8e753c18ba7c\") " pod="openshift-marketplace/community-operators-24gg5"
Jan 30 17:32:31 crc kubenswrapper[4712]: I0130 17:32:31.099929 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00139aba-fce1-4e1a-9f70-8e753c18ba7c-catalog-content\") pod \"community-operators-24gg5\" (UID: \"00139aba-fce1-4e1a-9f70-8e753c18ba7c\") " pod="openshift-marketplace/community-operators-24gg5"
Jan 30 17:32:31 crc kubenswrapper[4712]: I0130 17:32:31.201545 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlpm4\" (UniqueName: \"kubernetes.io/projected/00139aba-fce1-4e1a-9f70-8e753c18ba7c-kube-api-access-hlpm4\") pod \"community-operators-24gg5\" (UID: \"00139aba-fce1-4e1a-9f70-8e753c18ba7c\") " pod="openshift-marketplace/community-operators-24gg5"
Jan 30 17:32:31 crc kubenswrapper[4712]: I0130 17:32:31.201890 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00139aba-fce1-4e1a-9f70-8e753c18ba7c-catalog-content\") pod \"community-operators-24gg5\" (UID: \"00139aba-fce1-4e1a-9f70-8e753c18ba7c\") " pod="openshift-marketplace/community-operators-24gg5"
Jan 30 17:32:31 crc kubenswrapper[4712]: I0130 17:32:31.202074 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00139aba-fce1-4e1a-9f70-8e753c18ba7c-utilities\") pod \"community-operators-24gg5\" (UID: \"00139aba-fce1-4e1a-9f70-8e753c18ba7c\") " pod="openshift-marketplace/community-operators-24gg5"
Jan 30 17:32:31 crc kubenswrapper[4712]: I0130 17:32:31.202302 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00139aba-fce1-4e1a-9f70-8e753c18ba7c-catalog-content\") pod \"community-operators-24gg5\" (UID: \"00139aba-fce1-4e1a-9f70-8e753c18ba7c\") " pod="openshift-marketplace/community-operators-24gg5"
Jan 30 17:32:31 crc kubenswrapper[4712]: I0130 17:32:31.202367 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00139aba-fce1-4e1a-9f70-8e753c18ba7c-utilities\") pod \"community-operators-24gg5\" (UID: \"00139aba-fce1-4e1a-9f70-8e753c18ba7c\") " pod="openshift-marketplace/community-operators-24gg5"
Jan 30 17:32:31 crc kubenswrapper[4712]: I0130 17:32:31.230534 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlpm4\" (UniqueName: \"kubernetes.io/projected/00139aba-fce1-4e1a-9f70-8e753c18ba7c-kube-api-access-hlpm4\") pod \"community-operators-24gg5\" (UID: \"00139aba-fce1-4e1a-9f70-8e753c18ba7c\") " pod="openshift-marketplace/community-operators-24gg5"
Jan 30 17:32:31 crc kubenswrapper[4712]: I0130 17:32:31.272936 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-24gg5"
Jan 30 17:32:31 crc kubenswrapper[4712]: I0130 17:32:31.909369 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-24gg5"]
Jan 30 17:32:31 crc kubenswrapper[4712]: W0130 17:32:31.914265 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00139aba_fce1_4e1a_9f70_8e753c18ba7c.slice/crio-dda7b6512f1d847db5b0103f481d75c3b20c2455e4648e55c58e924368624734 WatchSource:0}: Error finding container dda7b6512f1d847db5b0103f481d75c3b20c2455e4648e55c58e924368624734: Status 404 returned error can't find the container with id dda7b6512f1d847db5b0103f481d75c3b20c2455e4648e55c58e924368624734
Jan 30 17:32:32 crc kubenswrapper[4712]: I0130 17:32:32.624120 4712 generic.go:334] "Generic (PLEG): container finished" podID="00139aba-fce1-4e1a-9f70-8e753c18ba7c" containerID="53ccf9d9a65a7fd6f2260734ad0b9d78f7682aed134d4d2c3054064a26f400b3" exitCode=0
Jan 30 17:32:32 crc kubenswrapper[4712]: I0130 17:32:32.624179 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-24gg5" event={"ID":"00139aba-fce1-4e1a-9f70-8e753c18ba7c","Type":"ContainerDied","Data":"53ccf9d9a65a7fd6f2260734ad0b9d78f7682aed134d4d2c3054064a26f400b3"}
Jan 30 17:32:32 crc kubenswrapper[4712]: I0130 17:32:32.624411 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-24gg5" event={"ID":"00139aba-fce1-4e1a-9f70-8e753c18ba7c","Type":"ContainerStarted","Data":"dda7b6512f1d847db5b0103f481d75c3b20c2455e4648e55c58e924368624734"}
Jan 30 17:32:34 crc kubenswrapper[4712]: I0130 17:32:34.644343 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-24gg5" event={"ID":"00139aba-fce1-4e1a-9f70-8e753c18ba7c","Type":"ContainerStarted","Data":"81e570e35a06655a28f001840e544530c444c29c927ac4dd9b453e81b5a444fe"}
Jan 30 17:32:36 crc kubenswrapper[4712]: I0130 17:32:36.270867 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 17:32:36 crc kubenswrapper[4712]: I0130 17:32:36.271169 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 17:32:36 crc kubenswrapper[4712]: I0130 17:32:36.271215 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7"
Jan 30 17:32:36 crc kubenswrapper[4712]: I0130 17:32:36.271999 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"27f55a4fcc827d7e5846a8612943379e2490a479d31d597438128691cc43010d"} pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 17:32:36 crc kubenswrapper[4712]: I0130 17:32:36.272055 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" containerID="cri-o://27f55a4fcc827d7e5846a8612943379e2490a479d31d597438128691cc43010d" gracePeriod=600
Jan 30 17:32:36 crc kubenswrapper[4712]: I0130 17:32:36.665981 4712 generic.go:334] "Generic (PLEG): container finished" podID="75ff6334-72a0-4748-bba6-0efb493c8033" containerID="27f55a4fcc827d7e5846a8612943379e2490a479d31d597438128691cc43010d" exitCode=0
Jan 30 17:32:36 crc kubenswrapper[4712]: I0130 17:32:36.666043 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerDied","Data":"27f55a4fcc827d7e5846a8612943379e2490a479d31d597438128691cc43010d"}
Jan 30 17:32:36 crc kubenswrapper[4712]: I0130 17:32:36.666086 4712 scope.go:117] "RemoveContainer" containerID="6bff8f420280843f1dcba83eeb7d6607277904ab6cbb2965c10673f888b9f646"
Jan 30 17:32:36 crc kubenswrapper[4712]: E0130 17:32:36.809455 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 17:32:37 crc kubenswrapper[4712]: I0130 17:32:37.677068 4712 scope.go:117] "RemoveContainer" containerID="27f55a4fcc827d7e5846a8612943379e2490a479d31d597438128691cc43010d"
Jan 30 17:32:37 crc kubenswrapper[4712]: E0130 17:32:37.677546 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 17:32:42 crc kubenswrapper[4712]: I0130 17:32:42.732551 4712 generic.go:334] "Generic (PLEG): container finished" podID="00139aba-fce1-4e1a-9f70-8e753c18ba7c" containerID="81e570e35a06655a28f001840e544530c444c29c927ac4dd9b453e81b5a444fe" exitCode=0
Jan 30 17:32:42 crc kubenswrapper[4712]: I0130 17:32:42.732745 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-24gg5" event={"ID":"00139aba-fce1-4e1a-9f70-8e753c18ba7c","Type":"ContainerDied","Data":"81e570e35a06655a28f001840e544530c444c29c927ac4dd9b453e81b5a444fe"}
Jan 30 17:32:44 crc kubenswrapper[4712]: I0130 17:32:44.754660 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-24gg5" event={"ID":"00139aba-fce1-4e1a-9f70-8e753c18ba7c","Type":"ContainerStarted","Data":"4a75355ce40de88c1156686faa7a95677e45765c8c9ae8dbd0c405fd21e35eaa"}
Jan 30 17:32:44 crc kubenswrapper[4712]: I0130 17:32:44.776457 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-24gg5" podStartSLOduration=3.9027717490000002 podStartE2EDuration="14.776434018s" podCreationTimestamp="2026-01-30 17:32:30 +0000 UTC" firstStartedPulling="2026-01-30 17:32:32.626135662 +0000 UTC m=+2289.533145131" lastFinishedPulling="2026-01-30 17:32:43.499797931 +0000 UTC m=+2300.406807400" observedRunningTime="2026-01-30 17:32:44.771935098 +0000 UTC m=+2301.678944587" watchObservedRunningTime="2026-01-30 17:32:44.776434018 +0000 UTC m=+2301.683443487"
Jan 30 17:32:47 crc kubenswrapper[4712]: I0130 17:32:47.802038 4712 scope.go:117] "RemoveContainer" containerID="27f55a4fcc827d7e5846a8612943379e2490a479d31d597438128691cc43010d"
Jan 30 17:32:47 crc kubenswrapper[4712]: E0130 17:32:47.802581 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 17:32:51 crc kubenswrapper[4712]: I0130 17:32:51.273722 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-24gg5"
Jan 30 17:32:51 crc kubenswrapper[4712]: I0130 17:32:51.274875 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-24gg5"
Jan 30 17:32:51 crc kubenswrapper[4712]: I0130 17:32:51.336698 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-24gg5"
Jan 30 17:32:51 crc kubenswrapper[4712]: I0130 17:32:51.858051 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-24gg5"
Jan 30 17:32:51 crc kubenswrapper[4712]: I0130 17:32:51.913531 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-24gg5"]
Jan 30 17:32:53 crc kubenswrapper[4712]: I0130 17:32:53.845575 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-24gg5" podUID="00139aba-fce1-4e1a-9f70-8e753c18ba7c" containerName="registry-server" containerID="cri-o://4a75355ce40de88c1156686faa7a95677e45765c8c9ae8dbd0c405fd21e35eaa" gracePeriod=2
Jan 30 17:32:54 crc kubenswrapper[4712]: I0130 17:32:54.364747 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-24gg5"
Jan 30 17:32:54 crc kubenswrapper[4712]: I0130 17:32:54.471359 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00139aba-fce1-4e1a-9f70-8e753c18ba7c-utilities\") pod \"00139aba-fce1-4e1a-9f70-8e753c18ba7c\" (UID: \"00139aba-fce1-4e1a-9f70-8e753c18ba7c\") "
Jan 30 17:32:54 crc kubenswrapper[4712]: I0130 17:32:54.471616 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00139aba-fce1-4e1a-9f70-8e753c18ba7c-catalog-content\") pod \"00139aba-fce1-4e1a-9f70-8e753c18ba7c\" (UID: \"00139aba-fce1-4e1a-9f70-8e753c18ba7c\") "
Jan 30 17:32:54 crc kubenswrapper[4712]: I0130 17:32:54.471664 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlpm4\" (UniqueName: \"kubernetes.io/projected/00139aba-fce1-4e1a-9f70-8e753c18ba7c-kube-api-access-hlpm4\") pod \"00139aba-fce1-4e1a-9f70-8e753c18ba7c\" (UID: \"00139aba-fce1-4e1a-9f70-8e753c18ba7c\") "
Jan 30 17:32:54 crc kubenswrapper[4712]: I0130 17:32:54.472317 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00139aba-fce1-4e1a-9f70-8e753c18ba7c-utilities" (OuterVolumeSpecName: "utilities") pod "00139aba-fce1-4e1a-9f70-8e753c18ba7c" (UID: "00139aba-fce1-4e1a-9f70-8e753c18ba7c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:32:54 crc kubenswrapper[4712]: I0130 17:32:54.477088 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00139aba-fce1-4e1a-9f70-8e753c18ba7c-kube-api-access-hlpm4" (OuterVolumeSpecName: "kube-api-access-hlpm4") pod "00139aba-fce1-4e1a-9f70-8e753c18ba7c" (UID: "00139aba-fce1-4e1a-9f70-8e753c18ba7c"). InnerVolumeSpecName "kube-api-access-hlpm4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:32:54 crc kubenswrapper[4712]: I0130 17:32:54.534814 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00139aba-fce1-4e1a-9f70-8e753c18ba7c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "00139aba-fce1-4e1a-9f70-8e753c18ba7c" (UID: "00139aba-fce1-4e1a-9f70-8e753c18ba7c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:32:54 crc kubenswrapper[4712]: I0130 17:32:54.573975 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00139aba-fce1-4e1a-9f70-8e753c18ba7c-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 17:32:54 crc kubenswrapper[4712]: I0130 17:32:54.574017 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlpm4\" (UniqueName: \"kubernetes.io/projected/00139aba-fce1-4e1a-9f70-8e753c18ba7c-kube-api-access-hlpm4\") on node \"crc\" DevicePath \"\""
Jan 30 17:32:54 crc kubenswrapper[4712]: I0130 17:32:54.574033 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00139aba-fce1-4e1a-9f70-8e753c18ba7c-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 17:32:54 crc kubenswrapper[4712]: I0130 17:32:54.854135 4712 generic.go:334] "Generic (PLEG): container finished" podID="00139aba-fce1-4e1a-9f70-8e753c18ba7c" containerID="4a75355ce40de88c1156686faa7a95677e45765c8c9ae8dbd0c405fd21e35eaa" exitCode=0
Jan 30 17:32:54 crc kubenswrapper[4712]: I0130 17:32:54.854181 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-24gg5" event={"ID":"00139aba-fce1-4e1a-9f70-8e753c18ba7c","Type":"ContainerDied","Data":"4a75355ce40de88c1156686faa7a95677e45765c8c9ae8dbd0c405fd21e35eaa"}
Jan 30 17:32:54 crc kubenswrapper[4712]: I0130 17:32:54.854211 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-24gg5" event={"ID":"00139aba-fce1-4e1a-9f70-8e753c18ba7c","Type":"ContainerDied","Data":"dda7b6512f1d847db5b0103f481d75c3b20c2455e4648e55c58e924368624734"}
Jan 30 17:32:54 crc kubenswrapper[4712]: I0130 17:32:54.854232 4712 scope.go:117] "RemoveContainer" containerID="4a75355ce40de88c1156686faa7a95677e45765c8c9ae8dbd0c405fd21e35eaa"
Jan 30 17:32:54 crc kubenswrapper[4712]: I0130 17:32:54.854279 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-24gg5"
Jan 30 17:32:54 crc kubenswrapper[4712]: I0130 17:32:54.875183 4712 scope.go:117] "RemoveContainer" containerID="81e570e35a06655a28f001840e544530c444c29c927ac4dd9b453e81b5a444fe"
Jan 30 17:32:54 crc kubenswrapper[4712]: I0130 17:32:54.905821 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-24gg5"]
Jan 30 17:32:54 crc kubenswrapper[4712]: I0130 17:32:54.919667 4712 scope.go:117] "RemoveContainer" containerID="53ccf9d9a65a7fd6f2260734ad0b9d78f7682aed134d4d2c3054064a26f400b3"
Jan 30 17:32:54 crc kubenswrapper[4712]: I0130 17:32:54.923520 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-24gg5"]
Jan 30 17:32:54 crc kubenswrapper[4712]: I0130 17:32:54.967725 4712 scope.go:117] "RemoveContainer" containerID="4a75355ce40de88c1156686faa7a95677e45765c8c9ae8dbd0c405fd21e35eaa"
Jan 30 17:32:54 crc kubenswrapper[4712]: E0130 17:32:54.968234 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a75355ce40de88c1156686faa7a95677e45765c8c9ae8dbd0c405fd21e35eaa\": container with ID starting with 4a75355ce40de88c1156686faa7a95677e45765c8c9ae8dbd0c405fd21e35eaa not found: ID does not exist" containerID="4a75355ce40de88c1156686faa7a95677e45765c8c9ae8dbd0c405fd21e35eaa"
Jan 30 17:32:54 crc kubenswrapper[4712]: I0130 17:32:54.968272 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a75355ce40de88c1156686faa7a95677e45765c8c9ae8dbd0c405fd21e35eaa"} err="failed to get container status \"4a75355ce40de88c1156686faa7a95677e45765c8c9ae8dbd0c405fd21e35eaa\": rpc error: code = NotFound desc = could not find container \"4a75355ce40de88c1156686faa7a95677e45765c8c9ae8dbd0c405fd21e35eaa\": container with ID starting with 4a75355ce40de88c1156686faa7a95677e45765c8c9ae8dbd0c405fd21e35eaa not found: ID does not exist"
Jan 30 17:32:54 crc kubenswrapper[4712]: I0130 17:32:54.968297 4712 scope.go:117] "RemoveContainer" containerID="81e570e35a06655a28f001840e544530c444c29c927ac4dd9b453e81b5a444fe"
Jan 30 17:32:54 crc kubenswrapper[4712]: E0130 17:32:54.968680 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81e570e35a06655a28f001840e544530c444c29c927ac4dd9b453e81b5a444fe\": container with ID starting with 81e570e35a06655a28f001840e544530c444c29c927ac4dd9b453e81b5a444fe not found: ID does not exist" containerID="81e570e35a06655a28f001840e544530c444c29c927ac4dd9b453e81b5a444fe"
Jan 30 17:32:54 crc kubenswrapper[4712]: I0130 17:32:54.969050 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81e570e35a06655a28f001840e544530c444c29c927ac4dd9b453e81b5a444fe"} err="failed to get container status \"81e570e35a06655a28f001840e544530c444c29c927ac4dd9b453e81b5a444fe\": rpc error: code = NotFound desc = could not find container \"81e570e35a06655a28f001840e544530c444c29c927ac4dd9b453e81b5a444fe\": container with ID starting with 81e570e35a06655a28f001840e544530c444c29c927ac4dd9b453e81b5a444fe not found: ID does not exist"
Jan 30 17:32:54 crc kubenswrapper[4712]: I0130 17:32:54.969270 4712 scope.go:117] "RemoveContainer" containerID="53ccf9d9a65a7fd6f2260734ad0b9d78f7682aed134d4d2c3054064a26f400b3"
Jan 30 17:32:54 crc kubenswrapper[4712]: E0130 17:32:54.969943 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53ccf9d9a65a7fd6f2260734ad0b9d78f7682aed134d4d2c3054064a26f400b3\": container with ID starting with 53ccf9d9a65a7fd6f2260734ad0b9d78f7682aed134d4d2c3054064a26f400b3 not found: ID does not exist" containerID="53ccf9d9a65a7fd6f2260734ad0b9d78f7682aed134d4d2c3054064a26f400b3"
Jan 30 17:32:54 crc kubenswrapper[4712]: I0130 17:32:54.970060 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53ccf9d9a65a7fd6f2260734ad0b9d78f7682aed134d4d2c3054064a26f400b3"} err="failed to get container status \"53ccf9d9a65a7fd6f2260734ad0b9d78f7682aed134d4d2c3054064a26f400b3\": rpc error: code = NotFound desc = could not find container \"53ccf9d9a65a7fd6f2260734ad0b9d78f7682aed134d4d2c3054064a26f400b3\": container with ID starting with 53ccf9d9a65a7fd6f2260734ad0b9d78f7682aed134d4d2c3054064a26f400b3 not found: ID does not exist"
Jan 30 17:32:55 crc kubenswrapper[4712]: I0130 17:32:55.812678 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00139aba-fce1-4e1a-9f70-8e753c18ba7c" path="/var/lib/kubelet/pods/00139aba-fce1-4e1a-9f70-8e753c18ba7c/volumes"
Jan 30 17:33:00 crc kubenswrapper[4712]: I0130 17:33:00.800469 4712 scope.go:117] "RemoveContainer" containerID="27f55a4fcc827d7e5846a8612943379e2490a479d31d597438128691cc43010d"
Jan 30 17:33:00 crc kubenswrapper[4712]: E0130 17:33:00.802192 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 17:33:02 crc kubenswrapper[4712]: I0130 17:33:02.942673 4712 generic.go:334] "Generic (PLEG): container finished" podID="1273af18-d0dd-4c8e-a454-097a3f00110d" containerID="00fbb83f09ac4fad663de13dbec7103db6d32f7143127e74020eebb2b5b64ae4" exitCode=0
Jan 30 17:33:02 crc kubenswrapper[4712]: I0130 17:33:02.942720 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" event={"ID":"1273af18-d0dd-4c8e-a454-097a3f00110d","Type":"ContainerDied","Data":"00fbb83f09ac4fad663de13dbec7103db6d32f7143127e74020eebb2b5b64ae4"}
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.379056 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth"
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.466580 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-ssh-key-openstack-edpm-ipam\") pod \"1273af18-d0dd-4c8e-a454-097a3f00110d\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") "
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.466657 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1273af18-d0dd-4c8e-a454-097a3f00110d-openstack-edpm-ipam-ovn-default-certs-0\") pod \"1273af18-d0dd-4c8e-a454-097a3f00110d\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") "
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.466773 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-telemetry-combined-ca-bundle\") pod \"1273af18-d0dd-4c8e-a454-097a3f00110d\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") "
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.466804 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1273af18-d0dd-4c8e-a454-097a3f00110d-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"1273af18-d0dd-4c8e-a454-097a3f00110d\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") "
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.466829 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-libvirt-combined-ca-bundle\") pod \"1273af18-d0dd-4c8e-a454-097a3f00110d\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") "
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.466850 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-bootstrap-combined-ca-bundle\") pod \"1273af18-d0dd-4c8e-a454-097a3f00110d\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") "
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.466871 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-nova-combined-ca-bundle\") pod \"1273af18-d0dd-4c8e-a454-097a3f00110d\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") "
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.466893 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1273af18-d0dd-4c8e-a454-097a3f00110d-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"1273af18-d0dd-4c8e-a454-097a3f00110d\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") "
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.466909 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-neutron-metadata-combined-ca-bundle\") pod \"1273af18-d0dd-4c8e-a454-097a3f00110d\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") "
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.466942 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-inventory\") pod \"1273af18-d0dd-4c8e-a454-097a3f00110d\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") "
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.466959 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-repo-setup-combined-ca-bundle\") pod \"1273af18-d0dd-4c8e-a454-097a3f00110d\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") "
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.467027 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7hgv\" (UniqueName: \"kubernetes.io/projected/1273af18-d0dd-4c8e-a454-097a3f00110d-kube-api-access-n7hgv\") pod \"1273af18-d0dd-4c8e-a454-097a3f00110d\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") "
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.467089 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1273af18-d0dd-4c8e-a454-097a3f00110d-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"1273af18-d0dd-4c8e-a454-097a3f00110d\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") "
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.467107 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-ovn-combined-ca-bundle\") pod \"1273af18-d0dd-4c8e-a454-097a3f00110d\" (UID: \"1273af18-d0dd-4c8e-a454-097a3f00110d\") "
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.474342 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "1273af18-d0dd-4c8e-a454-097a3f00110d" (UID: "1273af18-d0dd-4c8e-a454-097a3f00110d"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.475694 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "1273af18-d0dd-4c8e-a454-097a3f00110d" (UID: "1273af18-d0dd-4c8e-a454-097a3f00110d"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.477746 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "1273af18-d0dd-4c8e-a454-097a3f00110d" (UID: "1273af18-d0dd-4c8e-a454-097a3f00110d"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.478443 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1273af18-d0dd-4c8e-a454-097a3f00110d-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "1273af18-d0dd-4c8e-a454-097a3f00110d" (UID: "1273af18-d0dd-4c8e-a454-097a3f00110d"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.480538 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1273af18-d0dd-4c8e-a454-097a3f00110d-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "1273af18-d0dd-4c8e-a454-097a3f00110d" (UID: "1273af18-d0dd-4c8e-a454-097a3f00110d"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.481015 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1273af18-d0dd-4c8e-a454-097a3f00110d-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "1273af18-d0dd-4c8e-a454-097a3f00110d" (UID: "1273af18-d0dd-4c8e-a454-097a3f00110d"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.481678 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1273af18-d0dd-4c8e-a454-097a3f00110d-kube-api-access-n7hgv" (OuterVolumeSpecName: "kube-api-access-n7hgv") pod "1273af18-d0dd-4c8e-a454-097a3f00110d" (UID: "1273af18-d0dd-4c8e-a454-097a3f00110d"). InnerVolumeSpecName "kube-api-access-n7hgv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.484701 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "1273af18-d0dd-4c8e-a454-097a3f00110d" (UID: "1273af18-d0dd-4c8e-a454-097a3f00110d"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.484728 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "1273af18-d0dd-4c8e-a454-097a3f00110d" (UID: "1273af18-d0dd-4c8e-a454-097a3f00110d"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.490462 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "1273af18-d0dd-4c8e-a454-097a3f00110d" (UID: "1273af18-d0dd-4c8e-a454-097a3f00110d"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.491367 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "1273af18-d0dd-4c8e-a454-097a3f00110d" (UID: "1273af18-d0dd-4c8e-a454-097a3f00110d"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.491451 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1273af18-d0dd-4c8e-a454-097a3f00110d-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "1273af18-d0dd-4c8e-a454-097a3f00110d" (UID: "1273af18-d0dd-4c8e-a454-097a3f00110d"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.525941 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-inventory" (OuterVolumeSpecName: "inventory") pod "1273af18-d0dd-4c8e-a454-097a3f00110d" (UID: "1273af18-d0dd-4c8e-a454-097a3f00110d"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.530092 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "1273af18-d0dd-4c8e-a454-097a3f00110d" (UID: "1273af18-d0dd-4c8e-a454-097a3f00110d"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.569179 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7hgv\" (UniqueName: \"kubernetes.io/projected/1273af18-d0dd-4c8e-a454-097a3f00110d-kube-api-access-n7hgv\") on node \"crc\" DevicePath \"\""
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.569405 4712 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1273af18-d0dd-4c8e-a454-097a3f00110d-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\""
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.569511 4712 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.569599 4712 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.569745 4712 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1273af18-d0dd-4c8e-a454-097a3f00110d-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\""
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.569972 4712 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.570201 4712 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1273af18-d0dd-4c8e-a454-097a3f00110d-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\""
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.570305 4712 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.570395 4712 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.570480 4712 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.570556 4712 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/1273af18-d0dd-4c8e-a454-097a3f00110d-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\""
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.570644 4712 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.570770 4712 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-inventory\") on node \"crc\" DevicePath \"\""
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.570787 4712 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1273af18-d0dd-4c8e-a454-097a3f00110d-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.967428 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth" event={"ID":"1273af18-d0dd-4c8e-a454-097a3f00110d","Type":"ContainerDied","Data":"d2217bfe4b02b69ce53bfdab179f9c10b01dcb7778e27c8d8f3f54f8241f63f7"}
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.967474 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2217bfe4b02b69ce53bfdab179f9c10b01dcb7778e27c8d8f3f54f8241f63f7"
Jan 30 17:33:04 crc kubenswrapper[4712]: I0130 17:33:04.967533 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-qsnth"
Jan 30 17:33:05 crc kubenswrapper[4712]: I0130 17:33:05.250528 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-dv9r8"]
Jan 30 17:33:05 crc kubenswrapper[4712]: E0130 17:33:05.251162 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00139aba-fce1-4e1a-9f70-8e753c18ba7c" containerName="extract-content"
Jan 30 17:33:05 crc kubenswrapper[4712]: I0130 17:33:05.251266 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="00139aba-fce1-4e1a-9f70-8e753c18ba7c" containerName="extract-content"
Jan 30 17:33:05 crc kubenswrapper[4712]: E0130 17:33:05.251346 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00139aba-fce1-4e1a-9f70-8e753c18ba7c" containerName="registry-server"
Jan 30 17:33:05 crc kubenswrapper[4712]: I0130 17:33:05.251397 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="00139aba-fce1-4e1a-9f70-8e753c18ba7c" containerName="registry-server"
Jan 30 17:33:05 crc kubenswrapper[4712]: E0130 17:33:05.251459 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00139aba-fce1-4e1a-9f70-8e753c18ba7c" containerName="extract-utilities"
Jan 30 17:33:05 crc kubenswrapper[4712]: I0130 17:33:05.251513 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="00139aba-fce1-4e1a-9f70-8e753c18ba7c" containerName="extract-utilities"
Jan 30 17:33:05 crc kubenswrapper[4712]: E0130 17:33:05.251583 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1273af18-d0dd-4c8e-a454-097a3f00110d" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Jan 30 17:33:05 crc kubenswrapper[4712]: I0130 17:33:05.251641 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="1273af18-d0dd-4c8e-a454-097a3f00110d" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Jan 30 17:33:05 crc kubenswrapper[4712]: I0130 17:33:05.251886 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="1273af18-d0dd-4c8e-a454-097a3f00110d" containerName="install-certs-edpm-deployment-openstack-edpm-ipam"
Jan 30 17:33:05 crc kubenswrapper[4712]: I0130 17:33:05.251989 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="00139aba-fce1-4e1a-9f70-8e753c18ba7c" containerName="registry-server"
Jan 30 17:33:05 crc kubenswrapper[4712]: I0130 17:33:05.252749 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dv9r8"
Jan 30 17:33:05 crc kubenswrapper[4712]: I0130 17:33:05.260734 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 30 17:33:05 crc kubenswrapper[4712]: I0130 17:33:05.261012 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config"
Jan 30 17:33:05 crc kubenswrapper[4712]: I0130 17:33:05.261054 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 30 17:33:05 crc kubenswrapper[4712]: I0130 17:33:05.263338 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t6jfh"
Jan 30 17:33:05 crc kubenswrapper[4712]: I0130 17:33:05.263548 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 30 17:33:05 crc kubenswrapper[4712]: I0130 17:33:05.267961 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-dv9r8"]
Jan 30 17:33:05 crc kubenswrapper[4712]: I0130 17:33:05.393400 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aecfda8c-69d9-4b35-8c62-ff6112a3631e-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dv9r8\" (UID: \"aecfda8c-69d9-4b35-8c62-ff6112a3631e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dv9r8"
Jan 30 17:33:05 crc kubenswrapper[4712]: I0130 17:33:05.393500 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/aecfda8c-69d9-4b35-8c62-ff6112a3631e-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dv9r8\" (UID: \"aecfda8c-69d9-4b35-8c62-ff6112a3631e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dv9r8"
Jan 30 17:33:05 crc kubenswrapper[4712]: I0130 17:33:05.393554 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aecfda8c-69d9-4b35-8c62-ff6112a3631e-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dv9r8\" (UID: \"aecfda8c-69d9-4b35-8c62-ff6112a3631e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dv9r8"
Jan 30 17:33:05 crc kubenswrapper[4712]: I0130 17:33:05.393672 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hc6jc\" (UniqueName: \"kubernetes.io/projected/aecfda8c-69d9-4b35-8c62-ff6112a3631e-kube-api-access-hc6jc\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dv9r8\" (UID: \"aecfda8c-69d9-4b35-8c62-ff6112a3631e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dv9r8"
Jan 30 17:33:05 crc kubenswrapper[4712]: I0130 17:33:05.393860 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aecfda8c-69d9-4b35-8c62-ff6112a3631e-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dv9r8\" (UID: \"aecfda8c-69d9-4b35-8c62-ff6112a3631e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dv9r8"
Jan 30 17:33:05 crc kubenswrapper[4712]: I0130 17:33:05.495462 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aecfda8c-69d9-4b35-8c62-ff6112a3631e-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dv9r8\" (UID: \"aecfda8c-69d9-4b35-8c62-ff6112a3631e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dv9r8"
Jan 30 17:33:05 crc kubenswrapper[4712]: I0130 17:33:05.495953 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hc6jc\" (UniqueName: \"kubernetes.io/projected/aecfda8c-69d9-4b35-8c62-ff6112a3631e-kube-api-access-hc6jc\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dv9r8\" (UID: \"aecfda8c-69d9-4b35-8c62-ff6112a3631e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dv9r8"
Jan 30 17:33:05 crc kubenswrapper[4712]: I0130 17:33:05.496025 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aecfda8c-69d9-4b35-8c62-ff6112a3631e-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dv9r8\" (UID: \"aecfda8c-69d9-4b35-8c62-ff6112a3631e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dv9r8"
Jan 30 17:33:05 crc kubenswrapper[4712]: I0130 17:33:05.496088 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aecfda8c-69d9-4b35-8c62-ff6112a3631e-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dv9r8\" (UID: \"aecfda8c-69d9-4b35-8c62-ff6112a3631e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dv9r8"
Jan 30 17:33:05 crc kubenswrapper[4712]: I0130 17:33:05.496159 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/aecfda8c-69d9-4b35-8c62-ff6112a3631e-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dv9r8\" (UID: \"aecfda8c-69d9-4b35-8c62-ff6112a3631e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dv9r8"
Jan 30 17:33:05 crc kubenswrapper[4712]: I0130 17:33:05.497600 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/aecfda8c-69d9-4b35-8c62-ff6112a3631e-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dv9r8\" (UID: \"aecfda8c-69d9-4b35-8c62-ff6112a3631e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dv9r8"
Jan 30 17:33:05 crc kubenswrapper[4712]: I0130 17:33:05.499822 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aecfda8c-69d9-4b35-8c62-ff6112a3631e-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dv9r8\" (UID: \"aecfda8c-69d9-4b35-8c62-ff6112a3631e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dv9r8"
Jan 30 17:33:05 crc kubenswrapper[4712]: I0130 17:33:05.500164 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aecfda8c-69d9-4b35-8c62-ff6112a3631e-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dv9r8\" (UID: \"aecfda8c-69d9-4b35-8c62-ff6112a3631e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dv9r8"
Jan 30 17:33:05 crc kubenswrapper[4712]: I0130 17:33:05.500552 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aecfda8c-69d9-4b35-8c62-ff6112a3631e-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dv9r8\" (UID: \"aecfda8c-69d9-4b35-8c62-ff6112a3631e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dv9r8"
Jan 30 17:33:05 crc kubenswrapper[4712]: I0130 17:33:05.512104 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hc6jc\" (UniqueName: \"kubernetes.io/projected/aecfda8c-69d9-4b35-8c62-ff6112a3631e-kube-api-access-hc6jc\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-dv9r8\" (UID: \"aecfda8c-69d9-4b35-8c62-ff6112a3631e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dv9r8"
Jan 30 17:33:05 crc kubenswrapper[4712]: I0130 17:33:05.575473 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dv9r8"
Jan 30 17:33:06 crc kubenswrapper[4712]: I0130 17:33:06.136941 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-dv9r8"]
Jan 30 17:33:06 crc kubenswrapper[4712]: I0130 17:33:06.987257 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dv9r8" event={"ID":"aecfda8c-69d9-4b35-8c62-ff6112a3631e","Type":"ContainerStarted","Data":"607ef61dcb5450418008c6ba33375e0a9bc0781d1433406766fe000bb4f0d612"}
Jan 30 17:33:08 crc kubenswrapper[4712]: I0130 17:33:08.000109 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dv9r8" event={"ID":"aecfda8c-69d9-4b35-8c62-ff6112a3631e","Type":"ContainerStarted","Data":"8fa4c2e8f4ea62ce0fb4f3f670cbba018466658832141e88bb31b0674af01946"}
Jan 30 17:33:08 crc kubenswrapper[4712]: I0130 17:33:08.023885 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dv9r8" podStartSLOduration=2.249841397 podStartE2EDuration="3.023864126s" podCreationTimestamp="2026-01-30 17:33:05 +0000 UTC" firstStartedPulling="2026-01-30 17:33:06.141253089 +0000 UTC m=+2323.048262558" lastFinishedPulling="2026-01-30 17:33:06.915275778 +0000 UTC m=+2323.822285287" observedRunningTime="2026-01-30 17:33:08.015577632 +0000 UTC m=+2324.922587101" watchObservedRunningTime="2026-01-30 17:33:08.023864126 +0000 UTC m=+2324.930873595"
Jan 30 17:33:13 crc kubenswrapper[4712]: I0130 17:33:13.808090 4712 scope.go:117] "RemoveContainer" containerID="27f55a4fcc827d7e5846a8612943379e2490a479d31d597438128691cc43010d"
Jan 30 17:33:13 crc kubenswrapper[4712]: E0130 17:33:13.808824 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 17:33:25 crc kubenswrapper[4712]: I0130 17:33:25.801108 4712 scope.go:117] "RemoveContainer" containerID="27f55a4fcc827d7e5846a8612943379e2490a479d31d597438128691cc43010d"
Jan 30 17:33:25 crc kubenswrapper[4712]: E0130 17:33:25.802079 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 17:33:36 crc kubenswrapper[4712]: I0130 17:33:36.800411 4712 scope.go:117] "RemoveContainer" containerID="27f55a4fcc827d7e5846a8612943379e2490a479d31d597438128691cc43010d"
Jan 30 17:33:36 crc kubenswrapper[4712]: E0130 17:33:36.801187 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 17:33:47 crc kubenswrapper[4712]: I0130 17:33:47.801279 4712 scope.go:117] "RemoveContainer" containerID="27f55a4fcc827d7e5846a8612943379e2490a479d31d597438128691cc43010d"
Jan 30 17:33:47 crc kubenswrapper[4712]: E0130 17:33:47.803367 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 17:33:58 crc kubenswrapper[4712]: I0130 17:33:58.800226 4712 scope.go:117] "RemoveContainer" containerID="27f55a4fcc827d7e5846a8612943379e2490a479d31d597438128691cc43010d"
Jan 30 17:33:58 crc kubenswrapper[4712]: E0130 17:33:58.801102 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 17:34:11 crc kubenswrapper[4712]: I0130 17:34:11.800123 4712 scope.go:117] "RemoveContainer" containerID="27f55a4fcc827d7e5846a8612943379e2490a479d31d597438128691cc43010d"
Jan 30 17:34:11 crc kubenswrapper[4712]: E0130 17:34:11.800815 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 17:34:12 crc kubenswrapper[4712]: I0130 17:34:12.619195 4712 generic.go:334] "Generic (PLEG): container finished" podID="aecfda8c-69d9-4b35-8c62-ff6112a3631e" containerID="8fa4c2e8f4ea62ce0fb4f3f670cbba018466658832141e88bb31b0674af01946" exitCode=0
Jan 30 17:34:12 crc kubenswrapper[4712]: I0130 17:34:12.619401 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dv9r8" event={"ID":"aecfda8c-69d9-4b35-8c62-ff6112a3631e","Type":"ContainerDied","Data":"8fa4c2e8f4ea62ce0fb4f3f670cbba018466658832141e88bb31b0674af01946"}
Jan 30 17:34:12 crc kubenswrapper[4712]: I0130 17:34:12.730418 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-llvq7"]
Jan 30 17:34:12 crc kubenswrapper[4712]: I0130 17:34:12.733179 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-llvq7"
Jan 30 17:34:12 crc kubenswrapper[4712]: I0130 17:34:12.748783 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-llvq7"]
Jan 30 17:34:12 crc kubenswrapper[4712]: I0130 17:34:12.923878 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88e02421-ff21-42bf-8e16-0618d9b0fde8-utilities\") pod \"certified-operators-llvq7\" (UID: \"88e02421-ff21-42bf-8e16-0618d9b0fde8\") " pod="openshift-marketplace/certified-operators-llvq7"
Jan 30 17:34:12 crc kubenswrapper[4712]: I0130 17:34:12.923982 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dpgk\" (UniqueName: \"kubernetes.io/projected/88e02421-ff21-42bf-8e16-0618d9b0fde8-kube-api-access-4dpgk\") pod \"certified-operators-llvq7\" (UID: \"88e02421-ff21-42bf-8e16-0618d9b0fde8\") " pod="openshift-marketplace/certified-operators-llvq7"
Jan 30 17:34:12 crc kubenswrapper[4712]: I0130 17:34:12.924224 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88e02421-ff21-42bf-8e16-0618d9b0fde8-catalog-content\") pod \"certified-operators-llvq7\" (UID: \"88e02421-ff21-42bf-8e16-0618d9b0fde8\") " pod="openshift-marketplace/certified-operators-llvq7"
Jan 30 17:34:13 crc kubenswrapper[4712]: I0130 17:34:13.027307 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88e02421-ff21-42bf-8e16-0618d9b0fde8-catalog-content\") pod \"certified-operators-llvq7\" (UID: \"88e02421-ff21-42bf-8e16-0618d9b0fde8\") " pod="openshift-marketplace/certified-operators-llvq7"
Jan 30 17:34:13 crc kubenswrapper[4712]: I0130 17:34:13.026503 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88e02421-ff21-42bf-8e16-0618d9b0fde8-catalog-content\") pod \"certified-operators-llvq7\" (UID: \"88e02421-ff21-42bf-8e16-0618d9b0fde8\") " pod="openshift-marketplace/certified-operators-llvq7"
Jan 30 17:34:13 crc kubenswrapper[4712]: I0130 17:34:13.027615 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88e02421-ff21-42bf-8e16-0618d9b0fde8-utilities\") pod \"certified-operators-llvq7\" (UID: \"88e02421-ff21-42bf-8e16-0618d9b0fde8\") " pod="openshift-marketplace/certified-operators-llvq7"
Jan 30 17:34:13 crc kubenswrapper[4712]: I0130 17:34:13.028162 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88e02421-ff21-42bf-8e16-0618d9b0fde8-utilities\") pod \"certified-operators-llvq7\" (UID: \"88e02421-ff21-42bf-8e16-0618d9b0fde8\") " pod="openshift-marketplace/certified-operators-llvq7"
Jan 30 17:34:13 crc kubenswrapper[4712]: I0130 17:34:13.028379 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dpgk\" (UniqueName: \"kubernetes.io/projected/88e02421-ff21-42bf-8e16-0618d9b0fde8-kube-api-access-4dpgk\") pod \"certified-operators-llvq7\" (UID: \"88e02421-ff21-42bf-8e16-0618d9b0fde8\") " pod="openshift-marketplace/certified-operators-llvq7"
Jan 30 17:34:13 crc kubenswrapper[4712]: I0130 17:34:13.054938 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dpgk\" (UniqueName: \"kubernetes.io/projected/88e02421-ff21-42bf-8e16-0618d9b0fde8-kube-api-access-4dpgk\") pod \"certified-operators-llvq7\" (UID: \"88e02421-ff21-42bf-8e16-0618d9b0fde8\") " pod="openshift-marketplace/certified-operators-llvq7"
Jan 30 17:34:13 crc kubenswrapper[4712]: I0130 17:34:13.074481 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-llvq7"
Jan 30 17:34:13 crc kubenswrapper[4712]: I0130 17:34:13.641077 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-llvq7"]
Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.013588 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dv9r8"
Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.148464 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aecfda8c-69d9-4b35-8c62-ff6112a3631e-inventory\") pod \"aecfda8c-69d9-4b35-8c62-ff6112a3631e\" (UID: \"aecfda8c-69d9-4b35-8c62-ff6112a3631e\") "
Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.148503 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aecfda8c-69d9-4b35-8c62-ff6112a3631e-ssh-key-openstack-edpm-ipam\") pod \"aecfda8c-69d9-4b35-8c62-ff6112a3631e\" (UID: \"aecfda8c-69d9-4b35-8c62-ff6112a3631e\") "
Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.148570 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aecfda8c-69d9-4b35-8c62-ff6112a3631e-ovn-combined-ca-bundle\") pod \"aecfda8c-69d9-4b35-8c62-ff6112a3631e\" (UID: \"aecfda8c-69d9-4b35-8c62-ff6112a3631e\") "
Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.148599 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hc6jc\" (UniqueName: \"kubernetes.io/projected/aecfda8c-69d9-4b35-8c62-ff6112a3631e-kube-api-access-hc6jc\") pod \"aecfda8c-69d9-4b35-8c62-ff6112a3631e\" (UID: \"aecfda8c-69d9-4b35-8c62-ff6112a3631e\") "
Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.148629 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/aecfda8c-69d9-4b35-8c62-ff6112a3631e-ovncontroller-config-0\") pod \"aecfda8c-69d9-4b35-8c62-ff6112a3631e\" (UID: \"aecfda8c-69d9-4b35-8c62-ff6112a3631e\") "
Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.159120 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aecfda8c-69d9-4b35-8c62-ff6112a3631e-kube-api-access-hc6jc" (OuterVolumeSpecName: "kube-api-access-hc6jc") pod "aecfda8c-69d9-4b35-8c62-ff6112a3631e" (UID: "aecfda8c-69d9-4b35-8c62-ff6112a3631e"). InnerVolumeSpecName "kube-api-access-hc6jc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.159117 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aecfda8c-69d9-4b35-8c62-ff6112a3631e-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "aecfda8c-69d9-4b35-8c62-ff6112a3631e" (UID: "aecfda8c-69d9-4b35-8c62-ff6112a3631e"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.180119 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aecfda8c-69d9-4b35-8c62-ff6112a3631e-inventory" (OuterVolumeSpecName: "inventory") pod "aecfda8c-69d9-4b35-8c62-ff6112a3631e" (UID: "aecfda8c-69d9-4b35-8c62-ff6112a3631e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.186685 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aecfda8c-69d9-4b35-8c62-ff6112a3631e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "aecfda8c-69d9-4b35-8c62-ff6112a3631e" (UID: "aecfda8c-69d9-4b35-8c62-ff6112a3631e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.194909 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aecfda8c-69d9-4b35-8c62-ff6112a3631e-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "aecfda8c-69d9-4b35-8c62-ff6112a3631e" (UID: "aecfda8c-69d9-4b35-8c62-ff6112a3631e"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.250935 4712 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aecfda8c-69d9-4b35-8c62-ff6112a3631e-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.250977 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hc6jc\" (UniqueName: \"kubernetes.io/projected/aecfda8c-69d9-4b35-8c62-ff6112a3631e-kube-api-access-hc6jc\") on node \"crc\" DevicePath \"\""
Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.250992 4712 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/aecfda8c-69d9-4b35-8c62-ff6112a3631e-ovncontroller-config-0\") on node \"crc\" DevicePath \"\""
Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.251005 4712 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/aecfda8c-69d9-4b35-8c62-ff6112a3631e-inventory\") on node \"crc\" DevicePath \"\""
Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.251016 4712 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/aecfda8c-69d9-4b35-8c62-ff6112a3631e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.637271 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dv9r8" event={"ID":"aecfda8c-69d9-4b35-8c62-ff6112a3631e","Type":"ContainerDied","Data":"607ef61dcb5450418008c6ba33375e0a9bc0781d1433406766fe000bb4f0d612"}
Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.637612 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="607ef61dcb5450418008c6ba33375e0a9bc0781d1433406766fe000bb4f0d612"
Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.637305 4712 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-dv9r8" Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.639091 4712 generic.go:334] "Generic (PLEG): container finished" podID="88e02421-ff21-42bf-8e16-0618d9b0fde8" containerID="6f27d3f82e422b8b63204d09c284ff9826c8969c8b5e2636c6696b792f485539" exitCode=0 Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.640911 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llvq7" event={"ID":"88e02421-ff21-42bf-8e16-0618d9b0fde8","Type":"ContainerDied","Data":"6f27d3f82e422b8b63204d09c284ff9826c8969c8b5e2636c6696b792f485539"} Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.641004 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llvq7" event={"ID":"88e02421-ff21-42bf-8e16-0618d9b0fde8","Type":"ContainerStarted","Data":"aff25bcba3d1e0e058b539749020b2e6761ba3316d7a570809a48ff7c6188b02"} Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.641138 4712 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.765544 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6"] Jan 30 17:34:14 crc kubenswrapper[4712]: E0130 17:34:14.765964 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aecfda8c-69d9-4b35-8c62-ff6112a3631e" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.765981 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="aecfda8c-69d9-4b35-8c62-ff6112a3631e" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.766175 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="aecfda8c-69d9-4b35-8c62-ff6112a3631e" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.766738 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6" Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.776739 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.776927 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.777104 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.777218 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.777331 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t6jfh" Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.777430 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.806604 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6"] Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.872549 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/65da0015-8187-4b28-8d22-d5b12a920288-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6\" (UID: \"65da0015-8187-4b28-8d22-d5b12a920288\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6" Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.872621 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/65da0015-8187-4b28-8d22-d5b12a920288-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6\" (UID: \"65da0015-8187-4b28-8d22-d5b12a920288\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6" Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.872719 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65da0015-8187-4b28-8d22-d5b12a920288-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6\" (UID: \"65da0015-8187-4b28-8d22-d5b12a920288\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6" Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.872764 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65da0015-8187-4b28-8d22-d5b12a920288-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6\" (UID: \"65da0015-8187-4b28-8d22-d5b12a920288\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6" Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.872912 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j7p4\" (UniqueName: 
\"kubernetes.io/projected/65da0015-8187-4b28-8d22-d5b12a920288-kube-api-access-7j7p4\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6\" (UID: \"65da0015-8187-4b28-8d22-d5b12a920288\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6" Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.872942 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/65da0015-8187-4b28-8d22-d5b12a920288-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6\" (UID: \"65da0015-8187-4b28-8d22-d5b12a920288\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6" Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.974426 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65da0015-8187-4b28-8d22-d5b12a920288-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6\" (UID: \"65da0015-8187-4b28-8d22-d5b12a920288\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6" Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.974520 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j7p4\" (UniqueName: \"kubernetes.io/projected/65da0015-8187-4b28-8d22-d5b12a920288-kube-api-access-7j7p4\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6\" (UID: \"65da0015-8187-4b28-8d22-d5b12a920288\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6" Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.974567 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/65da0015-8187-4b28-8d22-d5b12a920288-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6\" (UID: \"65da0015-8187-4b28-8d22-d5b12a920288\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6" Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.974692 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/65da0015-8187-4b28-8d22-d5b12a920288-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6\" (UID: \"65da0015-8187-4b28-8d22-d5b12a920288\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6" Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.974744 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/65da0015-8187-4b28-8d22-d5b12a920288-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6\" (UID: \"65da0015-8187-4b28-8d22-d5b12a920288\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6" Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.974786 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65da0015-8187-4b28-8d22-d5b12a920288-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6\" (UID: 
\"65da0015-8187-4b28-8d22-d5b12a920288\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6" Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.979190 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/65da0015-8187-4b28-8d22-d5b12a920288-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6\" (UID: \"65da0015-8187-4b28-8d22-d5b12a920288\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6" Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.980275 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/65da0015-8187-4b28-8d22-d5b12a920288-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6\" (UID: \"65da0015-8187-4b28-8d22-d5b12a920288\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6" Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.980744 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/65da0015-8187-4b28-8d22-d5b12a920288-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6\" (UID: \"65da0015-8187-4b28-8d22-d5b12a920288\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6" Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.986324 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65da0015-8187-4b28-8d22-d5b12a920288-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6\" (UID: \"65da0015-8187-4b28-8d22-d5b12a920288\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6" Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.991011 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65da0015-8187-4b28-8d22-d5b12a920288-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6\" (UID: \"65da0015-8187-4b28-8d22-d5b12a920288\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6" Jan 30 17:34:14 crc kubenswrapper[4712]: I0130 17:34:14.994152 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j7p4\" (UniqueName: \"kubernetes.io/projected/65da0015-8187-4b28-8d22-d5b12a920288-kube-api-access-7j7p4\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6\" (UID: \"65da0015-8187-4b28-8d22-d5b12a920288\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6" Jan 30 17:34:15 crc kubenswrapper[4712]: I0130 17:34:15.098179 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6" Jan 30 17:34:15 crc kubenswrapper[4712]: I0130 17:34:15.824186 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6"] Jan 30 17:34:16 crc kubenswrapper[4712]: I0130 17:34:16.661875 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llvq7" event={"ID":"88e02421-ff21-42bf-8e16-0618d9b0fde8","Type":"ContainerStarted","Data":"987f4261c923ecbae623405830b10427b0400dfccf78a6bc04d46509ae3b286d"} Jan 30 17:34:16 crc kubenswrapper[4712]: I0130 17:34:16.665427 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6" event={"ID":"65da0015-8187-4b28-8d22-d5b12a920288","Type":"ContainerStarted","Data":"aac1c47ac9208ee59e69f52692aff42220f25e5fb7fb290c90193f54b0cbe8b8"} Jan 30 17:34:17 crc kubenswrapper[4712]: I0130 17:34:17.677158 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6" event={"ID":"65da0015-8187-4b28-8d22-d5b12a920288","Type":"ContainerStarted","Data":"c392c752372f0f473d3d7a731a14336d2d4fc65e33c627694942e79220d0a320"} Jan 30 17:34:17 crc kubenswrapper[4712]: I0130 17:34:17.711170 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6" podStartSLOduration=2.898788713 podStartE2EDuration="3.711130202s" podCreationTimestamp="2026-01-30 17:34:14 +0000 UTC" firstStartedPulling="2026-01-30 17:34:15.820634977 +0000 UTC m=+2392.727644446" lastFinishedPulling="2026-01-30 17:34:16.632976456 +0000 UTC m=+2393.539985935" observedRunningTime="2026-01-30 17:34:17.704099556 +0000 UTC m=+2394.611109025" watchObservedRunningTime="2026-01-30 17:34:17.711130202 +0000 UTC m=+2394.618139661" Jan 30 17:34:18 crc kubenswrapper[4712]: I0130 17:34:18.690717 4712 generic.go:334] "Generic (PLEG): container finished" podID="88e02421-ff21-42bf-8e16-0618d9b0fde8" containerID="987f4261c923ecbae623405830b10427b0400dfccf78a6bc04d46509ae3b286d" exitCode=0 Jan 30 17:34:18 crc kubenswrapper[4712]: I0130 17:34:18.690883 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llvq7" event={"ID":"88e02421-ff21-42bf-8e16-0618d9b0fde8","Type":"ContainerDied","Data":"987f4261c923ecbae623405830b10427b0400dfccf78a6bc04d46509ae3b286d"} Jan 30 17:34:19 crc kubenswrapper[4712]: I0130 17:34:19.701757 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llvq7" event={"ID":"88e02421-ff21-42bf-8e16-0618d9b0fde8","Type":"ContainerStarted","Data":"afcba30d3d6df698767e7b1c43a655dea75f7391e215ff7ccb7a83ab5f3324c9"} Jan 30 17:34:19 crc kubenswrapper[4712]: I0130 17:34:19.743712 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-llvq7" podStartSLOduration=3.225144436 podStartE2EDuration="7.743691418s" podCreationTimestamp="2026-01-30 17:34:12 +0000 UTC" firstStartedPulling="2026-01-30 17:34:14.640925729 +0000 UTC m=+2391.547935198" lastFinishedPulling="2026-01-30 17:34:19.159472671 +0000 UTC m=+2396.066482180" observedRunningTime="2026-01-30 17:34:19.727563476 +0000 UTC m=+2396.634572945" watchObservedRunningTime="2026-01-30 17:34:19.743691418 +0000 UTC m=+2396.650700887" Jan 30 17:34:22 crc kubenswrapper[4712]: I0130 
17:34:22.800373 4712 scope.go:117] "RemoveContainer" containerID="27f55a4fcc827d7e5846a8612943379e2490a479d31d597438128691cc43010d" Jan 30 17:34:22 crc kubenswrapper[4712]: E0130 17:34:22.800975 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:34:23 crc kubenswrapper[4712]: I0130 17:34:23.075256 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-llvq7" Jan 30 17:34:23 crc kubenswrapper[4712]: I0130 17:34:23.075349 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-llvq7" Jan 30 17:34:23 crc kubenswrapper[4712]: I0130 17:34:23.140111 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-llvq7" Jan 30 17:34:33 crc kubenswrapper[4712]: I0130 17:34:33.140521 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-llvq7" Jan 30 17:34:33 crc kubenswrapper[4712]: I0130 17:34:33.210440 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-llvq7"] Jan 30 17:34:33 crc kubenswrapper[4712]: I0130 17:34:33.836701 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-llvq7" podUID="88e02421-ff21-42bf-8e16-0618d9b0fde8" containerName="registry-server" containerID="cri-o://afcba30d3d6df698767e7b1c43a655dea75f7391e215ff7ccb7a83ab5f3324c9" gracePeriod=2 Jan 30 17:34:34 crc kubenswrapper[4712]: I0130 17:34:34.359830 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-llvq7" Jan 30 17:34:34 crc kubenswrapper[4712]: I0130 17:34:34.409644 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dpgk\" (UniqueName: \"kubernetes.io/projected/88e02421-ff21-42bf-8e16-0618d9b0fde8-kube-api-access-4dpgk\") pod \"88e02421-ff21-42bf-8e16-0618d9b0fde8\" (UID: \"88e02421-ff21-42bf-8e16-0618d9b0fde8\") " Jan 30 17:34:34 crc kubenswrapper[4712]: I0130 17:34:34.409714 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88e02421-ff21-42bf-8e16-0618d9b0fde8-catalog-content\") pod \"88e02421-ff21-42bf-8e16-0618d9b0fde8\" (UID: \"88e02421-ff21-42bf-8e16-0618d9b0fde8\") " Jan 30 17:34:34 crc kubenswrapper[4712]: I0130 17:34:34.409839 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88e02421-ff21-42bf-8e16-0618d9b0fde8-utilities\") pod \"88e02421-ff21-42bf-8e16-0618d9b0fde8\" (UID: \"88e02421-ff21-42bf-8e16-0618d9b0fde8\") " Jan 30 17:34:34 crc kubenswrapper[4712]: I0130 17:34:34.410529 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88e02421-ff21-42bf-8e16-0618d9b0fde8-utilities" (OuterVolumeSpecName: "utilities") pod "88e02421-ff21-42bf-8e16-0618d9b0fde8" (UID: "88e02421-ff21-42bf-8e16-0618d9b0fde8"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:34:34 crc kubenswrapper[4712]: I0130 17:34:34.415436 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88e02421-ff21-42bf-8e16-0618d9b0fde8-kube-api-access-4dpgk" (OuterVolumeSpecName: "kube-api-access-4dpgk") pod "88e02421-ff21-42bf-8e16-0618d9b0fde8" (UID: "88e02421-ff21-42bf-8e16-0618d9b0fde8"). InnerVolumeSpecName "kube-api-access-4dpgk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:34:34 crc kubenswrapper[4712]: I0130 17:34:34.462871 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88e02421-ff21-42bf-8e16-0618d9b0fde8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "88e02421-ff21-42bf-8e16-0618d9b0fde8" (UID: "88e02421-ff21-42bf-8e16-0618d9b0fde8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:34:34 crc kubenswrapper[4712]: I0130 17:34:34.511351 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88e02421-ff21-42bf-8e16-0618d9b0fde8-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:34:34 crc kubenswrapper[4712]: I0130 17:34:34.511386 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dpgk\" (UniqueName: \"kubernetes.io/projected/88e02421-ff21-42bf-8e16-0618d9b0fde8-kube-api-access-4dpgk\") on node \"crc\" DevicePath \"\"" Jan 30 17:34:34 crc kubenswrapper[4712]: I0130 17:34:34.511396 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88e02421-ff21-42bf-8e16-0618d9b0fde8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:34:34 crc kubenswrapper[4712]: I0130 17:34:34.799187 4712 scope.go:117] "RemoveContainer" containerID="27f55a4fcc827d7e5846a8612943379e2490a479d31d597438128691cc43010d" Jan 30 17:34:34 crc kubenswrapper[4712]: E0130 17:34:34.799943 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:34:34 crc kubenswrapper[4712]: I0130 17:34:34.846605 4712 generic.go:334] "Generic (PLEG): container finished" podID="88e02421-ff21-42bf-8e16-0618d9b0fde8" containerID="afcba30d3d6df698767e7b1c43a655dea75f7391e215ff7ccb7a83ab5f3324c9" exitCode=0 Jan 30 17:34:34 crc kubenswrapper[4712]: I0130 17:34:34.846645 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llvq7" event={"ID":"88e02421-ff21-42bf-8e16-0618d9b0fde8","Type":"ContainerDied","Data":"afcba30d3d6df698767e7b1c43a655dea75f7391e215ff7ccb7a83ab5f3324c9"} Jan 30 17:34:34 crc kubenswrapper[4712]: I0130 17:34:34.846671 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-llvq7" event={"ID":"88e02421-ff21-42bf-8e16-0618d9b0fde8","Type":"ContainerDied","Data":"aff25bcba3d1e0e058b539749020b2e6761ba3316d7a570809a48ff7c6188b02"} Jan 30 17:34:34 crc kubenswrapper[4712]: I0130 17:34:34.846686 4712 scope.go:117] "RemoveContainer" 
containerID="afcba30d3d6df698767e7b1c43a655dea75f7391e215ff7ccb7a83ab5f3324c9" Jan 30 17:34:34 crc kubenswrapper[4712]: I0130 17:34:34.846820 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-llvq7" Jan 30 17:34:34 crc kubenswrapper[4712]: I0130 17:34:34.870682 4712 scope.go:117] "RemoveContainer" containerID="987f4261c923ecbae623405830b10427b0400dfccf78a6bc04d46509ae3b286d" Jan 30 17:34:34 crc kubenswrapper[4712]: I0130 17:34:34.888832 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-llvq7"] Jan 30 17:34:34 crc kubenswrapper[4712]: I0130 17:34:34.896029 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-llvq7"] Jan 30 17:34:34 crc kubenswrapper[4712]: I0130 17:34:34.909953 4712 scope.go:117] "RemoveContainer" containerID="6f27d3f82e422b8b63204d09c284ff9826c8969c8b5e2636c6696b792f485539" Jan 30 17:34:34 crc kubenswrapper[4712]: I0130 17:34:34.942634 4712 scope.go:117] "RemoveContainer" containerID="afcba30d3d6df698767e7b1c43a655dea75f7391e215ff7ccb7a83ab5f3324c9" Jan 30 17:34:34 crc kubenswrapper[4712]: E0130 17:34:34.943030 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afcba30d3d6df698767e7b1c43a655dea75f7391e215ff7ccb7a83ab5f3324c9\": container with ID starting with afcba30d3d6df698767e7b1c43a655dea75f7391e215ff7ccb7a83ab5f3324c9 not found: ID does not exist" containerID="afcba30d3d6df698767e7b1c43a655dea75f7391e215ff7ccb7a83ab5f3324c9" Jan 30 17:34:34 crc kubenswrapper[4712]: I0130 17:34:34.943058 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afcba30d3d6df698767e7b1c43a655dea75f7391e215ff7ccb7a83ab5f3324c9"} err="failed to get container status \"afcba30d3d6df698767e7b1c43a655dea75f7391e215ff7ccb7a83ab5f3324c9\": rpc error: code = NotFound desc = could not find container \"afcba30d3d6df698767e7b1c43a655dea75f7391e215ff7ccb7a83ab5f3324c9\": container with ID starting with afcba30d3d6df698767e7b1c43a655dea75f7391e215ff7ccb7a83ab5f3324c9 not found: ID does not exist" Jan 30 17:34:34 crc kubenswrapper[4712]: I0130 17:34:34.943076 4712 scope.go:117] "RemoveContainer" containerID="987f4261c923ecbae623405830b10427b0400dfccf78a6bc04d46509ae3b286d" Jan 30 17:34:34 crc kubenswrapper[4712]: E0130 17:34:34.943321 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"987f4261c923ecbae623405830b10427b0400dfccf78a6bc04d46509ae3b286d\": container with ID starting with 987f4261c923ecbae623405830b10427b0400dfccf78a6bc04d46509ae3b286d not found: ID does not exist" containerID="987f4261c923ecbae623405830b10427b0400dfccf78a6bc04d46509ae3b286d" Jan 30 17:34:34 crc kubenswrapper[4712]: I0130 17:34:34.943336 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"987f4261c923ecbae623405830b10427b0400dfccf78a6bc04d46509ae3b286d"} err="failed to get container status \"987f4261c923ecbae623405830b10427b0400dfccf78a6bc04d46509ae3b286d\": rpc error: code = NotFound desc = could not find container \"987f4261c923ecbae623405830b10427b0400dfccf78a6bc04d46509ae3b286d\": container with ID starting with 987f4261c923ecbae623405830b10427b0400dfccf78a6bc04d46509ae3b286d not found: ID does not exist" Jan 30 17:34:34 crc kubenswrapper[4712]: I0130 17:34:34.943347 4712 scope.go:117] 
"RemoveContainer" containerID="6f27d3f82e422b8b63204d09c284ff9826c8969c8b5e2636c6696b792f485539" Jan 30 17:34:34 crc kubenswrapper[4712]: E0130 17:34:34.943529 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f27d3f82e422b8b63204d09c284ff9826c8969c8b5e2636c6696b792f485539\": container with ID starting with 6f27d3f82e422b8b63204d09c284ff9826c8969c8b5e2636c6696b792f485539 not found: ID does not exist" containerID="6f27d3f82e422b8b63204d09c284ff9826c8969c8b5e2636c6696b792f485539" Jan 30 17:34:34 crc kubenswrapper[4712]: I0130 17:34:34.943547 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f27d3f82e422b8b63204d09c284ff9826c8969c8b5e2636c6696b792f485539"} err="failed to get container status \"6f27d3f82e422b8b63204d09c284ff9826c8969c8b5e2636c6696b792f485539\": rpc error: code = NotFound desc = could not find container \"6f27d3f82e422b8b63204d09c284ff9826c8969c8b5e2636c6696b792f485539\": container with ID starting with 6f27d3f82e422b8b63204d09c284ff9826c8969c8b5e2636c6696b792f485539 not found: ID does not exist" Jan 30 17:34:35 crc kubenswrapper[4712]: I0130 17:34:35.820162 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88e02421-ff21-42bf-8e16-0618d9b0fde8" path="/var/lib/kubelet/pods/88e02421-ff21-42bf-8e16-0618d9b0fde8/volumes" Jan 30 17:34:48 crc kubenswrapper[4712]: I0130 17:34:48.799652 4712 scope.go:117] "RemoveContainer" containerID="27f55a4fcc827d7e5846a8612943379e2490a479d31d597438128691cc43010d" Jan 30 17:34:48 crc kubenswrapper[4712]: E0130 17:34:48.800364 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:35:00 crc kubenswrapper[4712]: I0130 17:35:00.800546 4712 scope.go:117] "RemoveContainer" containerID="27f55a4fcc827d7e5846a8612943379e2490a479d31d597438128691cc43010d" Jan 30 17:35:00 crc kubenswrapper[4712]: E0130 17:35:00.801959 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:35:06 crc kubenswrapper[4712]: I0130 17:35:06.163292 4712 generic.go:334] "Generic (PLEG): container finished" podID="65da0015-8187-4b28-8d22-d5b12a920288" containerID="c392c752372f0f473d3d7a731a14336d2d4fc65e33c627694942e79220d0a320" exitCode=0 Jan 30 17:35:06 crc kubenswrapper[4712]: I0130 17:35:06.163395 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6" event={"ID":"65da0015-8187-4b28-8d22-d5b12a920288","Type":"ContainerDied","Data":"c392c752372f0f473d3d7a731a14336d2d4fc65e33c627694942e79220d0a320"} Jan 30 17:35:07 crc kubenswrapper[4712]: I0130 17:35:07.609295 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6" Jan 30 17:35:07 crc kubenswrapper[4712]: I0130 17:35:07.703592 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7j7p4\" (UniqueName: \"kubernetes.io/projected/65da0015-8187-4b28-8d22-d5b12a920288-kube-api-access-7j7p4\") pod \"65da0015-8187-4b28-8d22-d5b12a920288\" (UID: \"65da0015-8187-4b28-8d22-d5b12a920288\") " Jan 30 17:35:07 crc kubenswrapper[4712]: I0130 17:35:07.703646 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/65da0015-8187-4b28-8d22-d5b12a920288-nova-metadata-neutron-config-0\") pod \"65da0015-8187-4b28-8d22-d5b12a920288\" (UID: \"65da0015-8187-4b28-8d22-d5b12a920288\") " Jan 30 17:35:07 crc kubenswrapper[4712]: I0130 17:35:07.703684 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65da0015-8187-4b28-8d22-d5b12a920288-inventory\") pod \"65da0015-8187-4b28-8d22-d5b12a920288\" (UID: \"65da0015-8187-4b28-8d22-d5b12a920288\") " Jan 30 17:35:07 crc kubenswrapper[4712]: I0130 17:35:07.703723 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/65da0015-8187-4b28-8d22-d5b12a920288-neutron-ovn-metadata-agent-neutron-config-0\") pod \"65da0015-8187-4b28-8d22-d5b12a920288\" (UID: \"65da0015-8187-4b28-8d22-d5b12a920288\") " Jan 30 17:35:07 crc kubenswrapper[4712]: I0130 17:35:07.703766 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65da0015-8187-4b28-8d22-d5b12a920288-neutron-metadata-combined-ca-bundle\") pod \"65da0015-8187-4b28-8d22-d5b12a920288\" (UID: \"65da0015-8187-4b28-8d22-d5b12a920288\") " Jan 30 17:35:07 crc kubenswrapper[4712]: I0130 17:35:07.703935 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/65da0015-8187-4b28-8d22-d5b12a920288-ssh-key-openstack-edpm-ipam\") pod \"65da0015-8187-4b28-8d22-d5b12a920288\" (UID: \"65da0015-8187-4b28-8d22-d5b12a920288\") " Jan 30 17:35:07 crc kubenswrapper[4712]: I0130 17:35:07.718376 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65da0015-8187-4b28-8d22-d5b12a920288-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "65da0015-8187-4b28-8d22-d5b12a920288" (UID: "65da0015-8187-4b28-8d22-d5b12a920288"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:35:07 crc kubenswrapper[4712]: I0130 17:35:07.753009 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65da0015-8187-4b28-8d22-d5b12a920288-kube-api-access-7j7p4" (OuterVolumeSpecName: "kube-api-access-7j7p4") pod "65da0015-8187-4b28-8d22-d5b12a920288" (UID: "65da0015-8187-4b28-8d22-d5b12a920288"). InnerVolumeSpecName "kube-api-access-7j7p4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:35:07 crc kubenswrapper[4712]: I0130 17:35:07.754717 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65da0015-8187-4b28-8d22-d5b12a920288-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "65da0015-8187-4b28-8d22-d5b12a920288" (UID: "65da0015-8187-4b28-8d22-d5b12a920288"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:35:07 crc kubenswrapper[4712]: I0130 17:35:07.769521 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65da0015-8187-4b28-8d22-d5b12a920288-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "65da0015-8187-4b28-8d22-d5b12a920288" (UID: "65da0015-8187-4b28-8d22-d5b12a920288"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:35:07 crc kubenswrapper[4712]: I0130 17:35:07.770935 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65da0015-8187-4b28-8d22-d5b12a920288-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "65da0015-8187-4b28-8d22-d5b12a920288" (UID: "65da0015-8187-4b28-8d22-d5b12a920288"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:35:07 crc kubenswrapper[4712]: I0130 17:35:07.783443 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65da0015-8187-4b28-8d22-d5b12a920288-inventory" (OuterVolumeSpecName: "inventory") pod "65da0015-8187-4b28-8d22-d5b12a920288" (UID: "65da0015-8187-4b28-8d22-d5b12a920288"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:35:07 crc kubenswrapper[4712]: I0130 17:35:07.806075 4712 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/65da0015-8187-4b28-8d22-d5b12a920288-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 17:35:07 crc kubenswrapper[4712]: I0130 17:35:07.806115 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7j7p4\" (UniqueName: \"kubernetes.io/projected/65da0015-8187-4b28-8d22-d5b12a920288-kube-api-access-7j7p4\") on node \"crc\" DevicePath \"\"" Jan 30 17:35:07 crc kubenswrapper[4712]: I0130 17:35:07.806124 4712 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/65da0015-8187-4b28-8d22-d5b12a920288-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 17:35:07 crc kubenswrapper[4712]: I0130 17:35:07.806134 4712 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/65da0015-8187-4b28-8d22-d5b12a920288-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 17:35:07 crc kubenswrapper[4712]: I0130 17:35:07.806145 4712 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/65da0015-8187-4b28-8d22-d5b12a920288-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 30 17:35:07 crc kubenswrapper[4712]: I0130 17:35:07.806155 4712 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/65da0015-8187-4b28-8d22-d5b12a920288-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:35:08 crc kubenswrapper[4712]: I0130 17:35:08.187725 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6" event={"ID":"65da0015-8187-4b28-8d22-d5b12a920288","Type":"ContainerDied","Data":"aac1c47ac9208ee59e69f52692aff42220f25e5fb7fb290c90193f54b0cbe8b8"} Jan 30 17:35:08 crc kubenswrapper[4712]: I0130 17:35:08.188251 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aac1c47ac9208ee59e69f52692aff42220f25e5fb7fb290c90193f54b0cbe8b8" Jan 30 17:35:08 crc kubenswrapper[4712]: I0130 17:35:08.188365 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6" Jan 30 17:35:08 crc kubenswrapper[4712]: I0130 17:35:08.287923 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5w7jh"] Jan 30 17:35:08 crc kubenswrapper[4712]: E0130 17:35:08.288295 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88e02421-ff21-42bf-8e16-0618d9b0fde8" containerName="extract-content" Jan 30 17:35:08 crc kubenswrapper[4712]: I0130 17:35:08.288312 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="88e02421-ff21-42bf-8e16-0618d9b0fde8" containerName="extract-content" Jan 30 17:35:08 crc kubenswrapper[4712]: E0130 17:35:08.288325 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88e02421-ff21-42bf-8e16-0618d9b0fde8" containerName="registry-server" Jan 30 17:35:08 crc kubenswrapper[4712]: I0130 17:35:08.288333 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="88e02421-ff21-42bf-8e16-0618d9b0fde8" containerName="registry-server" Jan 30 17:35:08 crc kubenswrapper[4712]: E0130 17:35:08.288355 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65da0015-8187-4b28-8d22-d5b12a920288" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 30 17:35:08 crc kubenswrapper[4712]: I0130 17:35:08.288364 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="65da0015-8187-4b28-8d22-d5b12a920288" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 30 17:35:08 crc kubenswrapper[4712]: E0130 17:35:08.288371 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88e02421-ff21-42bf-8e16-0618d9b0fde8" containerName="extract-utilities" Jan 30 17:35:08 crc kubenswrapper[4712]: I0130 17:35:08.288377 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="88e02421-ff21-42bf-8e16-0618d9b0fde8" containerName="extract-utilities" Jan 30 17:35:08 crc kubenswrapper[4712]: I0130 17:35:08.288542 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="88e02421-ff21-42bf-8e16-0618d9b0fde8" containerName="registry-server" Jan 30 17:35:08 crc kubenswrapper[4712]: I0130 17:35:08.288566 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="65da0015-8187-4b28-8d22-d5b12a920288" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 30 17:35:08 crc kubenswrapper[4712]: I0130 17:35:08.289163 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5w7jh" Jan 30 17:35:08 crc kubenswrapper[4712]: I0130 17:35:08.297059 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t6jfh" Jan 30 17:35:08 crc kubenswrapper[4712]: I0130 17:35:08.297209 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5w7jh"] Jan 30 17:35:08 crc kubenswrapper[4712]: I0130 17:35:08.297235 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 30 17:35:08 crc kubenswrapper[4712]: I0130 17:35:08.297310 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 17:35:08 crc kubenswrapper[4712]: I0130 17:35:08.297436 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 17:35:08 crc kubenswrapper[4712]: I0130 17:35:08.297546 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 17:35:08 crc kubenswrapper[4712]: I0130 17:35:08.417364 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2529\" (UniqueName: \"kubernetes.io/projected/3ab30e70-a942-41a5-ba9f-abd8da406691-kube-api-access-f2529\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5w7jh\" (UID: \"3ab30e70-a942-41a5-ba9f-abd8da406691\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5w7jh" Jan 30 17:35:08 crc kubenswrapper[4712]: I0130 17:35:08.417408 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ab30e70-a942-41a5-ba9f-abd8da406691-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5w7jh\" (UID: \"3ab30e70-a942-41a5-ba9f-abd8da406691\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5w7jh" Jan 30 17:35:08 crc kubenswrapper[4712]: I0130 17:35:08.417453 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab30e70-a942-41a5-ba9f-abd8da406691-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5w7jh\" (UID: \"3ab30e70-a942-41a5-ba9f-abd8da406691\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5w7jh" Jan 30 17:35:08 crc kubenswrapper[4712]: I0130 17:35:08.417485 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/3ab30e70-a942-41a5-ba9f-abd8da406691-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5w7jh\" (UID: \"3ab30e70-a942-41a5-ba9f-abd8da406691\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5w7jh" Jan 30 17:35:08 crc kubenswrapper[4712]: I0130 17:35:08.417531 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ab30e70-a942-41a5-ba9f-abd8da406691-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5w7jh\" (UID: \"3ab30e70-a942-41a5-ba9f-abd8da406691\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5w7jh" Jan 30 17:35:08 crc kubenswrapper[4712]: I0130 17:35:08.519241 4712 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ab30e70-a942-41a5-ba9f-abd8da406691-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5w7jh\" (UID: \"3ab30e70-a942-41a5-ba9f-abd8da406691\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5w7jh" Jan 30 17:35:08 crc kubenswrapper[4712]: I0130 17:35:08.519362 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2529\" (UniqueName: \"kubernetes.io/projected/3ab30e70-a942-41a5-ba9f-abd8da406691-kube-api-access-f2529\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5w7jh\" (UID: \"3ab30e70-a942-41a5-ba9f-abd8da406691\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5w7jh" Jan 30 17:35:08 crc kubenswrapper[4712]: I0130 17:35:08.519389 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ab30e70-a942-41a5-ba9f-abd8da406691-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5w7jh\" (UID: \"3ab30e70-a942-41a5-ba9f-abd8da406691\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5w7jh" Jan 30 17:35:08 crc kubenswrapper[4712]: I0130 17:35:08.519445 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab30e70-a942-41a5-ba9f-abd8da406691-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5w7jh\" (UID: \"3ab30e70-a942-41a5-ba9f-abd8da406691\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5w7jh" Jan 30 17:35:08 crc kubenswrapper[4712]: I0130 17:35:08.519487 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/3ab30e70-a942-41a5-ba9f-abd8da406691-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5w7jh\" (UID: \"3ab30e70-a942-41a5-ba9f-abd8da406691\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5w7jh" Jan 30 17:35:08 crc kubenswrapper[4712]: I0130 17:35:08.523839 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ab30e70-a942-41a5-ba9f-abd8da406691-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5w7jh\" (UID: \"3ab30e70-a942-41a5-ba9f-abd8da406691\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5w7jh" Jan 30 17:35:08 crc kubenswrapper[4712]: I0130 17:35:08.524668 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab30e70-a942-41a5-ba9f-abd8da406691-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5w7jh\" (UID: \"3ab30e70-a942-41a5-ba9f-abd8da406691\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5w7jh" Jan 30 17:35:08 crc kubenswrapper[4712]: I0130 17:35:08.530213 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/3ab30e70-a942-41a5-ba9f-abd8da406691-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5w7jh\" (UID: \"3ab30e70-a942-41a5-ba9f-abd8da406691\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5w7jh" Jan 30 17:35:08 crc kubenswrapper[4712]: I0130 17:35:08.544492 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ab30e70-a942-41a5-ba9f-abd8da406691-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5w7jh\" (UID: \"3ab30e70-a942-41a5-ba9f-abd8da406691\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5w7jh" Jan 30 17:35:08 crc kubenswrapper[4712]: I0130 17:35:08.549052 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2529\" (UniqueName: \"kubernetes.io/projected/3ab30e70-a942-41a5-ba9f-abd8da406691-kube-api-access-f2529\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-5w7jh\" (UID: \"3ab30e70-a942-41a5-ba9f-abd8da406691\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5w7jh" Jan 30 17:35:08 crc kubenswrapper[4712]: I0130 17:35:08.608233 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5w7jh" Jan 30 17:35:09 crc kubenswrapper[4712]: I0130 17:35:09.160195 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5w7jh"] Jan 30 17:35:09 crc kubenswrapper[4712]: W0130 17:35:09.176456 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ab30e70_a942_41a5_ba9f_abd8da406691.slice/crio-90fb73a9f6f5203d32dd8639d6fd19606ee042fa9e7f52a7f08ec040363e3642 WatchSource:0}: Error finding container 90fb73a9f6f5203d32dd8639d6fd19606ee042fa9e7f52a7f08ec040363e3642: Status 404 returned error can't find the container with id 90fb73a9f6f5203d32dd8639d6fd19606ee042fa9e7f52a7f08ec040363e3642 Jan 30 17:35:09 crc kubenswrapper[4712]: I0130 17:35:09.197703 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5w7jh" event={"ID":"3ab30e70-a942-41a5-ba9f-abd8da406691","Type":"ContainerStarted","Data":"90fb73a9f6f5203d32dd8639d6fd19606ee042fa9e7f52a7f08ec040363e3642"} Jan 30 17:35:15 crc kubenswrapper[4712]: I0130 17:35:15.799677 4712 scope.go:117] "RemoveContainer" containerID="27f55a4fcc827d7e5846a8612943379e2490a479d31d597438128691cc43010d" Jan 30 17:35:15 crc kubenswrapper[4712]: E0130 17:35:15.801934 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:35:16 crc kubenswrapper[4712]: I0130 17:35:16.661951 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-5884d87984-t6bbn" podUID="16cf8838-73f4-4b47-a0a5-0258974c49db" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.55:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:35:17 crc kubenswrapper[4712]: I0130 17:35:17.163425 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="776ccbe0-fd71-4c0d-877e-f0178e4c1262" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 30 17:35:20 crc kubenswrapper[4712]: I0130 17:35:20.300631 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5w7jh" 
event={"ID":"3ab30e70-a942-41a5-ba9f-abd8da406691","Type":"ContainerStarted","Data":"05e8e331dd1804edf2229de87c2dd8c3e6079bc2dd2721d97fc6c0368136e1ba"} Jan 30 17:35:26 crc kubenswrapper[4712]: I0130 17:35:26.800717 4712 scope.go:117] "RemoveContainer" containerID="27f55a4fcc827d7e5846a8612943379e2490a479d31d597438128691cc43010d" Jan 30 17:35:26 crc kubenswrapper[4712]: E0130 17:35:26.801965 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:35:41 crc kubenswrapper[4712]: I0130 17:35:41.800749 4712 scope.go:117] "RemoveContainer" containerID="27f55a4fcc827d7e5846a8612943379e2490a479d31d597438128691cc43010d" Jan 30 17:35:41 crc kubenswrapper[4712]: E0130 17:35:41.801588 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:35:56 crc kubenswrapper[4712]: I0130 17:35:56.799326 4712 scope.go:117] "RemoveContainer" containerID="27f55a4fcc827d7e5846a8612943379e2490a479d31d597438128691cc43010d" Jan 30 17:35:56 crc kubenswrapper[4712]: E0130 17:35:56.800181 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:36:11 crc kubenswrapper[4712]: I0130 17:36:11.810096 4712 scope.go:117] "RemoveContainer" containerID="27f55a4fcc827d7e5846a8612943379e2490a479d31d597438128691cc43010d" Jan 30 17:36:11 crc kubenswrapper[4712]: E0130 17:36:11.811749 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:36:22 crc kubenswrapper[4712]: I0130 17:36:22.886529 4712 scope.go:117] "RemoveContainer" containerID="27f55a4fcc827d7e5846a8612943379e2490a479d31d597438128691cc43010d" Jan 30 17:36:22 crc kubenswrapper[4712]: E0130 17:36:22.887392 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:36:31 crc kubenswrapper[4712]: 
Jan 30 17:36:31 crc kubenswrapper[4712]: I0130 17:36:31.243062 4712 patch_prober.go:28] interesting pod/oauth-openshift-544b887855-ts8md container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 17:36:31 crc kubenswrapper[4712]: I0130 17:36:31.243700 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-544b887855-ts8md" podUID="385118bd-7569-4940-89a0-ac41cf3395a2" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 30 17:36:35 crc kubenswrapper[4712]: I0130 17:36:35.800945 4712 scope.go:117] "RemoveContainer" containerID="27f55a4fcc827d7e5846a8612943379e2490a479d31d597438128691cc43010d"
Jan 30 17:36:35 crc kubenswrapper[4712]: E0130 17:36:35.801629 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 17:36:48 crc kubenswrapper[4712]: I0130 17:36:48.800525 4712 scope.go:117] "RemoveContainer" containerID="27f55a4fcc827d7e5846a8612943379e2490a479d31d597438128691cc43010d"
Jan 30 17:36:48 crc kubenswrapper[4712]: E0130 17:36:48.801310 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 17:36:59 crc kubenswrapper[4712]: I0130 17:36:59.799817 4712 scope.go:117] "RemoveContainer" containerID="27f55a4fcc827d7e5846a8612943379e2490a479d31d597438128691cc43010d"
Jan 30 17:36:59 crc kubenswrapper[4712]: E0130 17:36:59.802106 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 17:37:13 crc kubenswrapper[4712]: I0130 17:37:13.806158 4712 scope.go:117] "RemoveContainer" containerID="27f55a4fcc827d7e5846a8612943379e2490a479d31d597438128691cc43010d"
Jan 30 17:37:13 crc kubenswrapper[4712]: E0130 17:37:13.808300 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 17:37:27 crc kubenswrapper[4712]: I0130 17:37:27.800396 4712 scope.go:117] "RemoveContainer" containerID="27f55a4fcc827d7e5846a8612943379e2490a479d31d597438128691cc43010d"
Jan 30 17:37:27 crc kubenswrapper[4712]: E0130 17:37:27.801523 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 17:37:41 crc kubenswrapper[4712]: I0130 17:37:41.803417 4712 scope.go:117] "RemoveContainer" containerID="27f55a4fcc827d7e5846a8612943379e2490a479d31d597438128691cc43010d"
Jan 30 17:37:42 crc kubenswrapper[4712]: I0130 17:37:42.715184 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerStarted","Data":"0561ffa9d248b1e6773600da541368083c938ae56c58fc79ffe715ad701d3d50"}
Jan 30 17:37:42 crc kubenswrapper[4712]: I0130 17:37:42.739371 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5w7jh" podStartSLOduration=144.561175341 podStartE2EDuration="2m34.73935248s" podCreationTimestamp="2026-01-30 17:35:08 +0000 UTC" firstStartedPulling="2026-01-30 17:35:09.181711307 +0000 UTC m=+2446.088720786" lastFinishedPulling="2026-01-30 17:35:19.359888456 +0000 UTC m=+2456.266897925" observedRunningTime="2026-01-30 17:35:20.327223142 +0000 UTC m=+2457.234232611" watchObservedRunningTime="2026-01-30 17:37:42.73935248 +0000 UTC m=+2599.646361949"
Jan 30 17:40:03 crc kubenswrapper[4712]: I0130 17:40:03.068584 4712 generic.go:334] "Generic (PLEG): container finished" podID="3ab30e70-a942-41a5-ba9f-abd8da406691" containerID="05e8e331dd1804edf2229de87c2dd8c3e6079bc2dd2721d97fc6c0368136e1ba" exitCode=0
Jan 30 17:40:03 crc kubenswrapper[4712]: I0130 17:40:03.068879 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5w7jh" event={"ID":"3ab30e70-a942-41a5-ba9f-abd8da406691","Type":"ContainerDied","Data":"05e8e331dd1804edf2229de87c2dd8c3e6079bc2dd2721d97fc6c0368136e1ba"}
Jan 30 17:40:04 crc kubenswrapper[4712]: I0130 17:40:04.566076 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5w7jh"
Jan 30 17:40:04 crc kubenswrapper[4712]: I0130 17:40:04.634830 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab30e70-a942-41a5-ba9f-abd8da406691-libvirt-combined-ca-bundle\") pod \"3ab30e70-a942-41a5-ba9f-abd8da406691\" (UID: \"3ab30e70-a942-41a5-ba9f-abd8da406691\") "
Jan 30 17:40:04 crc kubenswrapper[4712]: I0130 17:40:04.634876 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ab30e70-a942-41a5-ba9f-abd8da406691-ssh-key-openstack-edpm-ipam\") pod \"3ab30e70-a942-41a5-ba9f-abd8da406691\" (UID: \"3ab30e70-a942-41a5-ba9f-abd8da406691\") "
Jan 30 17:40:04 crc kubenswrapper[4712]: I0130 17:40:04.634955 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ab30e70-a942-41a5-ba9f-abd8da406691-inventory\") pod \"3ab30e70-a942-41a5-ba9f-abd8da406691\" (UID: \"3ab30e70-a942-41a5-ba9f-abd8da406691\") "
Jan 30 17:40:04 crc kubenswrapper[4712]: I0130 17:40:04.635029 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/3ab30e70-a942-41a5-ba9f-abd8da406691-libvirt-secret-0\") pod \"3ab30e70-a942-41a5-ba9f-abd8da406691\" (UID: \"3ab30e70-a942-41a5-ba9f-abd8da406691\") "
Jan 30 17:40:04 crc kubenswrapper[4712]: I0130 17:40:04.635116 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2529\" (UniqueName: \"kubernetes.io/projected/3ab30e70-a942-41a5-ba9f-abd8da406691-kube-api-access-f2529\") pod \"3ab30e70-a942-41a5-ba9f-abd8da406691\" (UID: \"3ab30e70-a942-41a5-ba9f-abd8da406691\") "
Jan 30 17:40:04 crc kubenswrapper[4712]: I0130 17:40:04.649295 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab30e70-a942-41a5-ba9f-abd8da406691-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "3ab30e70-a942-41a5-ba9f-abd8da406691" (UID: "3ab30e70-a942-41a5-ba9f-abd8da406691"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:40:04 crc kubenswrapper[4712]: I0130 17:40:04.661874 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab30e70-a942-41a5-ba9f-abd8da406691-kube-api-access-f2529" (OuterVolumeSpecName: "kube-api-access-f2529") pod "3ab30e70-a942-41a5-ba9f-abd8da406691" (UID: "3ab30e70-a942-41a5-ba9f-abd8da406691"). InnerVolumeSpecName "kube-api-access-f2529". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:40:04 crc kubenswrapper[4712]: I0130 17:40:04.669035 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab30e70-a942-41a5-ba9f-abd8da406691-inventory" (OuterVolumeSpecName: "inventory") pod "3ab30e70-a942-41a5-ba9f-abd8da406691" (UID: "3ab30e70-a942-41a5-ba9f-abd8da406691"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:40:04 crc kubenswrapper[4712]: I0130 17:40:04.669274 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab30e70-a942-41a5-ba9f-abd8da406691-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "3ab30e70-a942-41a5-ba9f-abd8da406691" (UID: "3ab30e70-a942-41a5-ba9f-abd8da406691"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:40:04 crc kubenswrapper[4712]: I0130 17:40:04.692133 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab30e70-a942-41a5-ba9f-abd8da406691-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3ab30e70-a942-41a5-ba9f-abd8da406691" (UID: "3ab30e70-a942-41a5-ba9f-abd8da406691"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:40:04 crc kubenswrapper[4712]: I0130 17:40:04.742198 4712 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ab30e70-a942-41a5-ba9f-abd8da406691-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 17:40:04 crc kubenswrapper[4712]: I0130 17:40:04.742234 4712 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3ab30e70-a942-41a5-ba9f-abd8da406691-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 30 17:40:04 crc kubenswrapper[4712]: I0130 17:40:04.742245 4712 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3ab30e70-a942-41a5-ba9f-abd8da406691-inventory\") on node \"crc\" DevicePath \"\""
Jan 30 17:40:04 crc kubenswrapper[4712]: I0130 17:40:04.742258 4712 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/3ab30e70-a942-41a5-ba9f-abd8da406691-libvirt-secret-0\") on node \"crc\" DevicePath \"\""
Jan 30 17:40:04 crc kubenswrapper[4712]: I0130 17:40:04.742271 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f2529\" (UniqueName: \"kubernetes.io/projected/3ab30e70-a942-41a5-ba9f-abd8da406691-kube-api-access-f2529\") on node \"crc\" DevicePath \"\""
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.091929 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5w7jh" event={"ID":"3ab30e70-a942-41a5-ba9f-abd8da406691","Type":"ContainerDied","Data":"90fb73a9f6f5203d32dd8639d6fd19606ee042fa9e7f52a7f08ec040363e3642"}
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.091990 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90fb73a9f6f5203d32dd8639d6fd19606ee042fa9e7f52a7f08ec040363e3642"
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.092021 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-5w7jh"
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.273743 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-bx7xf"]
Jan 30 17:40:05 crc kubenswrapper[4712]: E0130 17:40:05.274251 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ab30e70-a942-41a5-ba9f-abd8da406691" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.274277 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ab30e70-a942-41a5-ba9f-abd8da406691" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.274475 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ab30e70-a942-41a5-ba9f-abd8da406691" containerName="libvirt-edpm-deployment-openstack-edpm-ipam"
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.275205 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bx7xf"
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.278197 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config"
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.278489 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t6jfh"
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.278734 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.278750 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.279051 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config"
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.280617 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.281910 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-bx7xf"]
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.286460 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key"
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.371979 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f6ddcc20-4459-4b3a-8539-8fda3da2c415-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bx7xf\" (UID: \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bx7xf"
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.372305 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f6ddcc20-4459-4b3a-8539-8fda3da2c415-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bx7xf\" (UID: \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bx7xf"
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.372414 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6ddcc20-4459-4b3a-8539-8fda3da2c415-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bx7xf\" (UID: \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bx7xf"
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.372518 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f6ddcc20-4459-4b3a-8539-8fda3da2c415-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bx7xf\" (UID: \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bx7xf"
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.372602 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f6ddcc20-4459-4b3a-8539-8fda3da2c415-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bx7xf\" (UID: \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bx7xf"
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.372676 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6ddcc20-4459-4b3a-8539-8fda3da2c415-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bx7xf\" (UID: \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bx7xf"
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.372883 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj58r\" (UniqueName: \"kubernetes.io/projected/f6ddcc20-4459-4b3a-8539-8fda3da2c415-kube-api-access-pj58r\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bx7xf\" (UID: \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bx7xf"
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.373027 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f6ddcc20-4459-4b3a-8539-8fda3da2c415-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bx7xf\" (UID: \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bx7xf"
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.373138 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f6ddcc20-4459-4b3a-8539-8fda3da2c415-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bx7xf\" (UID: \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bx7xf"
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.475077 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f6ddcc20-4459-4b3a-8539-8fda3da2c415-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bx7xf\" (UID: \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bx7xf"
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.475455 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f6ddcc20-4459-4b3a-8539-8fda3da2c415-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bx7xf\" (UID: \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bx7xf"
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.475630 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f6ddcc20-4459-4b3a-8539-8fda3da2c415-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bx7xf\" (UID: \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bx7xf"
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.475659 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f6ddcc20-4459-4b3a-8539-8fda3da2c415-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bx7xf\" (UID: \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bx7xf"
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.475732 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6ddcc20-4459-4b3a-8539-8fda3da2c415-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bx7xf\" (UID: \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bx7xf"
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.475814 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f6ddcc20-4459-4b3a-8539-8fda3da2c415-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bx7xf\" (UID: \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bx7xf"
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.475842 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f6ddcc20-4459-4b3a-8539-8fda3da2c415-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bx7xf\" (UID: \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bx7xf"
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.475867 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6ddcc20-4459-4b3a-8539-8fda3da2c415-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bx7xf\" (UID: \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bx7xf"
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.475950 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj58r\" (UniqueName: \"kubernetes.io/projected/f6ddcc20-4459-4b3a-8539-8fda3da2c415-kube-api-access-pj58r\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bx7xf\" (UID: \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bx7xf"
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.478286 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f6ddcc20-4459-4b3a-8539-8fda3da2c415-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bx7xf\" (UID: \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bx7xf"
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.481487 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f6ddcc20-4459-4b3a-8539-8fda3da2c415-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bx7xf\" (UID: \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bx7xf"
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.481849 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f6ddcc20-4459-4b3a-8539-8fda3da2c415-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bx7xf\" (UID: \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bx7xf"
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.481934 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6ddcc20-4459-4b3a-8539-8fda3da2c415-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bx7xf\" (UID: \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bx7xf"
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.482211 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f6ddcc20-4459-4b3a-8539-8fda3da2c415-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bx7xf\" (UID: \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bx7xf"
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.483979 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f6ddcc20-4459-4b3a-8539-8fda3da2c415-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bx7xf\" (UID: \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bx7xf"
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.486507 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6ddcc20-4459-4b3a-8539-8fda3da2c415-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bx7xf\" (UID: \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bx7xf"
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.494422 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f6ddcc20-4459-4b3a-8539-8fda3da2c415-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bx7xf\" (UID: \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bx7xf"
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.497335 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj58r\" (UniqueName: \"kubernetes.io/projected/f6ddcc20-4459-4b3a-8539-8fda3da2c415-kube-api-access-pj58r\") pod \"nova-edpm-deployment-openstack-edpm-ipam-bx7xf\" (UID: \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bx7xf"
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.632087 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bx7xf"
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.968577 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-bx7xf"]
Jan 30 17:40:05 crc kubenswrapper[4712]: I0130 17:40:05.977002 4712 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 17:40:06 crc kubenswrapper[4712]: I0130 17:40:06.101659 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bx7xf" event={"ID":"f6ddcc20-4459-4b3a-8539-8fda3da2c415","Type":"ContainerStarted","Data":"69bf0bcd0b40a660775efba0616d030349d568f5d5f2e87ba7ce0ddeed88e6a4"}
Jan 30 17:40:06 crc kubenswrapper[4712]: I0130 17:40:06.270727 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 17:40:06 crc kubenswrapper[4712]: I0130 17:40:06.271156 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 17:40:08 crc kubenswrapper[4712]: I0130 17:40:08.948308 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dtfhc"]
Jan 30 17:40:08 crc kubenswrapper[4712]: I0130 17:40:08.950764 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dtfhc"
Jan 30 17:40:08 crc kubenswrapper[4712]: I0130 17:40:08.959048 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dtfhc"]
Jan 30 17:40:09 crc kubenswrapper[4712]: I0130 17:40:09.058549 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a298da0-65c0-4e40-ab73-95f8a2c1e4ac-utilities\") pod \"redhat-operators-dtfhc\" (UID: \"9a298da0-65c0-4e40-ab73-95f8a2c1e4ac\") " pod="openshift-marketplace/redhat-operators-dtfhc"
Jan 30 17:40:09 crc kubenswrapper[4712]: I0130 17:40:09.058618 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8c4v\" (UniqueName: \"kubernetes.io/projected/9a298da0-65c0-4e40-ab73-95f8a2c1e4ac-kube-api-access-c8c4v\") pod \"redhat-operators-dtfhc\" (UID: \"9a298da0-65c0-4e40-ab73-95f8a2c1e4ac\") " pod="openshift-marketplace/redhat-operators-dtfhc"
Jan 30 17:40:09 crc kubenswrapper[4712]: I0130 17:40:09.058650 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a298da0-65c0-4e40-ab73-95f8a2c1e4ac-catalog-content\") pod \"redhat-operators-dtfhc\" (UID: \"9a298da0-65c0-4e40-ab73-95f8a2c1e4ac\") " pod="openshift-marketplace/redhat-operators-dtfhc"
Jan 30 17:40:09 crc kubenswrapper[4712]: I0130 17:40:09.135318 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bx7xf" event={"ID":"f6ddcc20-4459-4b3a-8539-8fda3da2c415","Type":"ContainerStarted","Data":"5670564188c80fefc05d935aa5a2cc71f01dc5974d8f20ee8ddb478e474b2b1b"}
Jan 30 17:40:09 crc kubenswrapper[4712]: I0130 17:40:09.161784 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a298da0-65c0-4e40-ab73-95f8a2c1e4ac-utilities\") pod \"redhat-operators-dtfhc\" (UID: \"9a298da0-65c0-4e40-ab73-95f8a2c1e4ac\") " pod="openshift-marketplace/redhat-operators-dtfhc"
Jan 30 17:40:09 crc kubenswrapper[4712]: I0130 17:40:09.161874 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c8c4v\" (UniqueName: \"kubernetes.io/projected/9a298da0-65c0-4e40-ab73-95f8a2c1e4ac-kube-api-access-c8c4v\") pod \"redhat-operators-dtfhc\" (UID: \"9a298da0-65c0-4e40-ab73-95f8a2c1e4ac\") " pod="openshift-marketplace/redhat-operators-dtfhc"
Jan 30 17:40:09 crc kubenswrapper[4712]: I0130 17:40:09.161908 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a298da0-65c0-4e40-ab73-95f8a2c1e4ac-catalog-content\") pod \"redhat-operators-dtfhc\" (UID: \"9a298da0-65c0-4e40-ab73-95f8a2c1e4ac\") " pod="openshift-marketplace/redhat-operators-dtfhc"
Jan 30 17:40:09 crc kubenswrapper[4712]: I0130 17:40:09.162466 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a298da0-65c0-4e40-ab73-95f8a2c1e4ac-catalog-content\") pod \"redhat-operators-dtfhc\" (UID: \"9a298da0-65c0-4e40-ab73-95f8a2c1e4ac\") " pod="openshift-marketplace/redhat-operators-dtfhc"
Jan 30 17:40:09 crc kubenswrapper[4712]: I0130 17:40:09.162683 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a298da0-65c0-4e40-ab73-95f8a2c1e4ac-utilities\") pod \"redhat-operators-dtfhc\" (UID: \"9a298da0-65c0-4e40-ab73-95f8a2c1e4ac\") " pod="openshift-marketplace/redhat-operators-dtfhc"
Jan 30 17:40:09 crc kubenswrapper[4712]: I0130 17:40:09.164788 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bx7xf" podStartSLOduration=1.933721728 podStartE2EDuration="4.164775578s" podCreationTimestamp="2026-01-30 17:40:05 +0000 UTC" firstStartedPulling="2026-01-30 17:40:05.976632547 +0000 UTC m=+2742.883642016" lastFinishedPulling="2026-01-30 17:40:08.207686397 +0000 UTC m=+2745.114695866" observedRunningTime="2026-01-30 17:40:09.161273524 +0000 UTC m=+2746.068282983" watchObservedRunningTime="2026-01-30 17:40:09.164775578 +0000 UTC m=+2746.071785047"
Jan 30 17:40:09 crc kubenswrapper[4712]: I0130 17:40:09.196986 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c8c4v\" (UniqueName: \"kubernetes.io/projected/9a298da0-65c0-4e40-ab73-95f8a2c1e4ac-kube-api-access-c8c4v\") pod \"redhat-operators-dtfhc\" (UID: \"9a298da0-65c0-4e40-ab73-95f8a2c1e4ac\") " pod="openshift-marketplace/redhat-operators-dtfhc"
Jan 30 17:40:09 crc kubenswrapper[4712]: I0130 17:40:09.287638 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dtfhc"
Jan 30 17:40:09 crc kubenswrapper[4712]: I0130 17:40:09.853956 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dtfhc"]
Jan 30 17:40:10 crc kubenswrapper[4712]: I0130 17:40:10.143773 4712 generic.go:334] "Generic (PLEG): container finished" podID="9a298da0-65c0-4e40-ab73-95f8a2c1e4ac" containerID="f7c48acaf3b0ab2408c9187db92eb553921095bee4d1e2b71ccc76c963233f13" exitCode=0
Jan 30 17:40:10 crc kubenswrapper[4712]: I0130 17:40:10.143829 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dtfhc" event={"ID":"9a298da0-65c0-4e40-ab73-95f8a2c1e4ac","Type":"ContainerDied","Data":"f7c48acaf3b0ab2408c9187db92eb553921095bee4d1e2b71ccc76c963233f13"}
Jan 30 17:40:10 crc kubenswrapper[4712]: I0130 17:40:10.144069 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dtfhc" event={"ID":"9a298da0-65c0-4e40-ab73-95f8a2c1e4ac","Type":"ContainerStarted","Data":"32701069a5da89cf60f3669cc1939b2a749202c3f67d9be879d76a75ee87d898"}
Jan 30 17:40:11 crc kubenswrapper[4712]: I0130 17:40:11.155721 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dtfhc" event={"ID":"9a298da0-65c0-4e40-ab73-95f8a2c1e4ac","Type":"ContainerStarted","Data":"817f77bf82934d15c5e768fb39825e37652c360d9d3a11b094ea38c5f4f972c3"}
Jan 30 17:40:36 crc kubenswrapper[4712]: I0130 17:40:36.271349 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
connection refused" Jan 30 17:40:43 crc kubenswrapper[4712]: I0130 17:40:43.460246 4712 generic.go:334] "Generic (PLEG): container finished" podID="9a298da0-65c0-4e40-ab73-95f8a2c1e4ac" containerID="817f77bf82934d15c5e768fb39825e37652c360d9d3a11b094ea38c5f4f972c3" exitCode=0 Jan 30 17:40:43 crc kubenswrapper[4712]: I0130 17:40:43.460326 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dtfhc" event={"ID":"9a298da0-65c0-4e40-ab73-95f8a2c1e4ac","Type":"ContainerDied","Data":"817f77bf82934d15c5e768fb39825e37652c360d9d3a11b094ea38c5f4f972c3"} Jan 30 17:40:44 crc kubenswrapper[4712]: I0130 17:40:44.472642 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dtfhc" event={"ID":"9a298da0-65c0-4e40-ab73-95f8a2c1e4ac","Type":"ContainerStarted","Data":"bc5a49587554c456b213c772ce9b7c808ab677aa77200313ea18d0ad62cc44fa"} Jan 30 17:40:44 crc kubenswrapper[4712]: I0130 17:40:44.505336 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dtfhc" podStartSLOduration=2.62200357 podStartE2EDuration="36.50531055s" podCreationTimestamp="2026-01-30 17:40:08 +0000 UTC" firstStartedPulling="2026-01-30 17:40:10.145310939 +0000 UTC m=+2747.052320408" lastFinishedPulling="2026-01-30 17:40:44.028617919 +0000 UTC m=+2780.935627388" observedRunningTime="2026-01-30 17:40:44.499814257 +0000 UTC m=+2781.406823736" watchObservedRunningTime="2026-01-30 17:40:44.50531055 +0000 UTC m=+2781.412320029" Jan 30 17:40:49 crc kubenswrapper[4712]: I0130 17:40:49.288607 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dtfhc" Jan 30 17:40:49 crc kubenswrapper[4712]: I0130 17:40:49.289240 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dtfhc" Jan 30 17:40:50 crc kubenswrapper[4712]: I0130 17:40:50.344898 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dtfhc" podUID="9a298da0-65c0-4e40-ab73-95f8a2c1e4ac" containerName="registry-server" probeResult="failure" output=< Jan 30 17:40:50 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 17:40:50 crc kubenswrapper[4712]: > Jan 30 17:41:00 crc kubenswrapper[4712]: I0130 17:41:00.335149 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dtfhc" podUID="9a298da0-65c0-4e40-ab73-95f8a2c1e4ac" containerName="registry-server" probeResult="failure" output=< Jan 30 17:41:00 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 17:41:00 crc kubenswrapper[4712]: > Jan 30 17:41:06 crc kubenswrapper[4712]: I0130 17:41:06.270862 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:41:06 crc kubenswrapper[4712]: I0130 17:41:06.271458 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:41:06 crc kubenswrapper[4712]: I0130 
17:41:06.271516 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" Jan 30 17:41:06 crc kubenswrapper[4712]: I0130 17:41:06.272379 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0561ffa9d248b1e6773600da541368083c938ae56c58fc79ffe715ad701d3d50"} pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 17:41:06 crc kubenswrapper[4712]: I0130 17:41:06.272506 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" containerID="cri-o://0561ffa9d248b1e6773600da541368083c938ae56c58fc79ffe715ad701d3d50" gracePeriod=600 Jan 30 17:41:06 crc kubenswrapper[4712]: I0130 17:41:06.668569 4712 generic.go:334] "Generic (PLEG): container finished" podID="75ff6334-72a0-4748-bba6-0efb493c8033" containerID="0561ffa9d248b1e6773600da541368083c938ae56c58fc79ffe715ad701d3d50" exitCode=0 Jan 30 17:41:06 crc kubenswrapper[4712]: I0130 17:41:06.668634 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerDied","Data":"0561ffa9d248b1e6773600da541368083c938ae56c58fc79ffe715ad701d3d50"} Jan 30 17:41:06 crc kubenswrapper[4712]: I0130 17:41:06.668970 4712 scope.go:117] "RemoveContainer" containerID="27f55a4fcc827d7e5846a8612943379e2490a479d31d597438128691cc43010d" Jan 30 17:41:07 crc kubenswrapper[4712]: I0130 17:41:07.682321 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerStarted","Data":"258988991cc97b72cc046c1bf95884aa854ed690c9651529519cbcfc0e55aa62"} Jan 30 17:41:10 crc kubenswrapper[4712]: I0130 17:41:10.339933 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dtfhc" podUID="9a298da0-65c0-4e40-ab73-95f8a2c1e4ac" containerName="registry-server" probeResult="failure" output=< Jan 30 17:41:10 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 17:41:10 crc kubenswrapper[4712]: > Jan 30 17:41:20 crc kubenswrapper[4712]: I0130 17:41:20.346647 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dtfhc" podUID="9a298da0-65c0-4e40-ab73-95f8a2c1e4ac" containerName="registry-server" probeResult="failure" output=< Jan 30 17:41:20 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 17:41:20 crc kubenswrapper[4712]: > Jan 30 17:41:30 crc kubenswrapper[4712]: I0130 17:41:30.343838 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dtfhc" podUID="9a298da0-65c0-4e40-ab73-95f8a2c1e4ac" containerName="registry-server" probeResult="failure" output=< Jan 30 17:41:30 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 17:41:30 crc kubenswrapper[4712]: > Jan 30 17:41:40 crc kubenswrapper[4712]: I0130 17:41:40.346626 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dtfhc" 
podUID="9a298da0-65c0-4e40-ab73-95f8a2c1e4ac" containerName="registry-server" probeResult="failure" output=< Jan 30 17:41:40 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 17:41:40 crc kubenswrapper[4712]: > Jan 30 17:41:50 crc kubenswrapper[4712]: I0130 17:41:50.345120 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dtfhc" podUID="9a298da0-65c0-4e40-ab73-95f8a2c1e4ac" containerName="registry-server" probeResult="failure" output=< Jan 30 17:41:50 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 17:41:50 crc kubenswrapper[4712]: > Jan 30 17:41:59 crc kubenswrapper[4712]: I0130 17:41:59.349912 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dtfhc" Jan 30 17:41:59 crc kubenswrapper[4712]: I0130 17:41:59.412867 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dtfhc" Jan 30 17:41:59 crc kubenswrapper[4712]: I0130 17:41:59.602922 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dtfhc"] Jan 30 17:42:01 crc kubenswrapper[4712]: I0130 17:42:01.174654 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dtfhc" podUID="9a298da0-65c0-4e40-ab73-95f8a2c1e4ac" containerName="registry-server" containerID="cri-o://bc5a49587554c456b213c772ce9b7c808ab677aa77200313ea18d0ad62cc44fa" gracePeriod=2 Jan 30 17:42:02 crc kubenswrapper[4712]: I0130 17:42:02.188116 4712 generic.go:334] "Generic (PLEG): container finished" podID="9a298da0-65c0-4e40-ab73-95f8a2c1e4ac" containerID="bc5a49587554c456b213c772ce9b7c808ab677aa77200313ea18d0ad62cc44fa" exitCode=0 Jan 30 17:42:02 crc kubenswrapper[4712]: I0130 17:42:02.188465 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dtfhc" event={"ID":"9a298da0-65c0-4e40-ab73-95f8a2c1e4ac","Type":"ContainerDied","Data":"bc5a49587554c456b213c772ce9b7c808ab677aa77200313ea18d0ad62cc44fa"} Jan 30 17:42:02 crc kubenswrapper[4712]: I0130 17:42:02.188494 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dtfhc" event={"ID":"9a298da0-65c0-4e40-ab73-95f8a2c1e4ac","Type":"ContainerDied","Data":"32701069a5da89cf60f3669cc1939b2a749202c3f67d9be879d76a75ee87d898"} Jan 30 17:42:02 crc kubenswrapper[4712]: I0130 17:42:02.188506 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32701069a5da89cf60f3669cc1939b2a749202c3f67d9be879d76a75ee87d898" Jan 30 17:42:02 crc kubenswrapper[4712]: I0130 17:42:02.210196 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dtfhc" Jan 30 17:42:02 crc kubenswrapper[4712]: I0130 17:42:02.319455 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a298da0-65c0-4e40-ab73-95f8a2c1e4ac-utilities\") pod \"9a298da0-65c0-4e40-ab73-95f8a2c1e4ac\" (UID: \"9a298da0-65c0-4e40-ab73-95f8a2c1e4ac\") " Jan 30 17:42:02 crc kubenswrapper[4712]: I0130 17:42:02.319521 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a298da0-65c0-4e40-ab73-95f8a2c1e4ac-catalog-content\") pod \"9a298da0-65c0-4e40-ab73-95f8a2c1e4ac\" (UID: \"9a298da0-65c0-4e40-ab73-95f8a2c1e4ac\") " Jan 30 17:42:02 crc kubenswrapper[4712]: I0130 17:42:02.319909 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c8c4v\" (UniqueName: \"kubernetes.io/projected/9a298da0-65c0-4e40-ab73-95f8a2c1e4ac-kube-api-access-c8c4v\") pod \"9a298da0-65c0-4e40-ab73-95f8a2c1e4ac\" (UID: \"9a298da0-65c0-4e40-ab73-95f8a2c1e4ac\") " Jan 30 17:42:02 crc kubenswrapper[4712]: I0130 17:42:02.320532 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a298da0-65c0-4e40-ab73-95f8a2c1e4ac-utilities" (OuterVolumeSpecName: "utilities") pod "9a298da0-65c0-4e40-ab73-95f8a2c1e4ac" (UID: "9a298da0-65c0-4e40-ab73-95f8a2c1e4ac"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:42:02 crc kubenswrapper[4712]: I0130 17:42:02.326932 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a298da0-65c0-4e40-ab73-95f8a2c1e4ac-kube-api-access-c8c4v" (OuterVolumeSpecName: "kube-api-access-c8c4v") pod "9a298da0-65c0-4e40-ab73-95f8a2c1e4ac" (UID: "9a298da0-65c0-4e40-ab73-95f8a2c1e4ac"). InnerVolumeSpecName "kube-api-access-c8c4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:42:02 crc kubenswrapper[4712]: I0130 17:42:02.422510 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9a298da0-65c0-4e40-ab73-95f8a2c1e4ac-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:42:02 crc kubenswrapper[4712]: I0130 17:42:02.422832 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c8c4v\" (UniqueName: \"kubernetes.io/projected/9a298da0-65c0-4e40-ab73-95f8a2c1e4ac-kube-api-access-c8c4v\") on node \"crc\" DevicePath \"\"" Jan 30 17:42:02 crc kubenswrapper[4712]: I0130 17:42:02.432753 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9a298da0-65c0-4e40-ab73-95f8a2c1e4ac-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9a298da0-65c0-4e40-ab73-95f8a2c1e4ac" (UID: "9a298da0-65c0-4e40-ab73-95f8a2c1e4ac"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:42:02 crc kubenswrapper[4712]: I0130 17:42:02.524376 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9a298da0-65c0-4e40-ab73-95f8a2c1e4ac-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:42:03 crc kubenswrapper[4712]: I0130 17:42:03.194707 4712 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 17:42:03 crc kubenswrapper[4712]: I0130 17:42:03.194707 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dtfhc"
Jan 30 17:42:03 crc kubenswrapper[4712]: I0130 17:42:03.228813 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dtfhc"]
Jan 30 17:42:03 crc kubenswrapper[4712]: I0130 17:42:03.239660 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dtfhc"]
Jan 30 17:42:03 crc kubenswrapper[4712]: I0130 17:42:03.811472 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a298da0-65c0-4e40-ab73-95f8a2c1e4ac" path="/var/lib/kubelet/pods/9a298da0-65c0-4e40-ab73-95f8a2c1e4ac/volumes"
Jan 30 17:42:24 crc kubenswrapper[4712]: I0130 17:42:24.516967 4712 generic.go:334] "Generic (PLEG): container finished" podID="f6ddcc20-4459-4b3a-8539-8fda3da2c415" containerID="5670564188c80fefc05d935aa5a2cc71f01dc5974d8f20ee8ddb478e474b2b1b" exitCode=0
Jan 30 17:42:24 crc kubenswrapper[4712]: I0130 17:42:24.517076 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bx7xf" event={"ID":"f6ddcc20-4459-4b3a-8539-8fda3da2c415","Type":"ContainerDied","Data":"5670564188c80fefc05d935aa5a2cc71f01dc5974d8f20ee8ddb478e474b2b1b"}
Jan 30 17:42:25 crc kubenswrapper[4712]: I0130 17:42:25.977105 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bx7xf"
Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.045538 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6ddcc20-4459-4b3a-8539-8fda3da2c415-nova-combined-ca-bundle\") pod \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\" (UID: \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\") "
Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.045854 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj58r\" (UniqueName: \"kubernetes.io/projected/f6ddcc20-4459-4b3a-8539-8fda3da2c415-kube-api-access-pj58r\") pod \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\" (UID: \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\") "
Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.046011 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f6ddcc20-4459-4b3a-8539-8fda3da2c415-nova-migration-ssh-key-0\") pod \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\" (UID: \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\") "
Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.046107 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f6ddcc20-4459-4b3a-8539-8fda3da2c415-nova-cell1-compute-config-0\") pod \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\" (UID: \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\") "
Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.046304 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f6ddcc20-4459-4b3a-8539-8fda3da2c415-ssh-key-openstack-edpm-ipam\") pod \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\" (UID: \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\") "
Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.046414 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f6ddcc20-4459-4b3a-8539-8fda3da2c415-nova-extra-config-0\") pod \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\" (UID: \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\") "
Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.046535 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f6ddcc20-4459-4b3a-8539-8fda3da2c415-nova-cell1-compute-config-1\") pod \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\" (UID: \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\") "
Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.046605 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6ddcc20-4459-4b3a-8539-8fda3da2c415-inventory\") pod \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\" (UID: \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\") "
Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.046720 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f6ddcc20-4459-4b3a-8539-8fda3da2c415-nova-migration-ssh-key-1\") pod \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\" (UID: \"f6ddcc20-4459-4b3a-8539-8fda3da2c415\") "
Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.070436 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6ddcc20-4459-4b3a-8539-8fda3da2c415-kube-api-access-pj58r" (OuterVolumeSpecName: "kube-api-access-pj58r") pod "f6ddcc20-4459-4b3a-8539-8fda3da2c415" (UID: "f6ddcc20-4459-4b3a-8539-8fda3da2c415"). InnerVolumeSpecName "kube-api-access-pj58r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.070722 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6ddcc20-4459-4b3a-8539-8fda3da2c415-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "f6ddcc20-4459-4b3a-8539-8fda3da2c415" (UID: "f6ddcc20-4459-4b3a-8539-8fda3da2c415"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.070957 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6ddcc20-4459-4b3a-8539-8fda3da2c415-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "f6ddcc20-4459-4b3a-8539-8fda3da2c415" (UID: "f6ddcc20-4459-4b3a-8539-8fda3da2c415"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.087822 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6ddcc20-4459-4b3a-8539-8fda3da2c415-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f6ddcc20-4459-4b3a-8539-8fda3da2c415" (UID: "f6ddcc20-4459-4b3a-8539-8fda3da2c415"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.089676 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6ddcc20-4459-4b3a-8539-8fda3da2c415-inventory" (OuterVolumeSpecName: "inventory") pod "f6ddcc20-4459-4b3a-8539-8fda3da2c415" (UID: "f6ddcc20-4459-4b3a-8539-8fda3da2c415"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.094492 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6ddcc20-4459-4b3a-8539-8fda3da2c415-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "f6ddcc20-4459-4b3a-8539-8fda3da2c415" (UID: "f6ddcc20-4459-4b3a-8539-8fda3da2c415"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.096436 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6ddcc20-4459-4b3a-8539-8fda3da2c415-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "f6ddcc20-4459-4b3a-8539-8fda3da2c415" (UID: "f6ddcc20-4459-4b3a-8539-8fda3da2c415"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.100989 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6ddcc20-4459-4b3a-8539-8fda3da2c415-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "f6ddcc20-4459-4b3a-8539-8fda3da2c415" (UID: "f6ddcc20-4459-4b3a-8539-8fda3da2c415"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.115998 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6ddcc20-4459-4b3a-8539-8fda3da2c415-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "f6ddcc20-4459-4b3a-8539-8fda3da2c415" (UID: "f6ddcc20-4459-4b3a-8539-8fda3da2c415"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.149383 4712 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/f6ddcc20-4459-4b3a-8539-8fda3da2c415-nova-extra-config-0\") on node \"crc\" DevicePath \"\""
Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.149422 4712 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/f6ddcc20-4459-4b3a-8539-8fda3da2c415-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\""
Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.149437 4712 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f6ddcc20-4459-4b3a-8539-8fda3da2c415-inventory\") on node \"crc\" DevicePath \"\""
Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.149448 4712 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/f6ddcc20-4459-4b3a-8539-8fda3da2c415-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\""
Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.149460 4712 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6ddcc20-4459-4b3a-8539-8fda3da2c415-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.149472 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj58r\" (UniqueName: \"kubernetes.io/projected/f6ddcc20-4459-4b3a-8539-8fda3da2c415-kube-api-access-pj58r\") on node \"crc\" DevicePath \"\""
Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.149480 4712 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/f6ddcc20-4459-4b3a-8539-8fda3da2c415-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\""
Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.149490 4712 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/f6ddcc20-4459-4b3a-8539-8fda3da2c415-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\""
Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.149502 4712 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f6ddcc20-4459-4b3a-8539-8fda3da2c415-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.534981 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bx7xf" event={"ID":"f6ddcc20-4459-4b3a-8539-8fda3da2c415","Type":"ContainerDied","Data":"69bf0bcd0b40a660775efba0616d030349d568f5d5f2e87ba7ce0ddeed88e6a4"}
Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.535039 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69bf0bcd0b40a660775efba0616d030349d568f5d5f2e87ba7ce0ddeed88e6a4"
Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.535098 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-bx7xf"
Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.661101 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv"]
Jan 30 17:42:26 crc kubenswrapper[4712]: E0130 17:42:26.661632 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a298da0-65c0-4e40-ab73-95f8a2c1e4ac" containerName="registry-server"
Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.661660 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a298da0-65c0-4e40-ab73-95f8a2c1e4ac" containerName="registry-server"
Jan 30 17:42:26 crc kubenswrapper[4712]: E0130 17:42:26.661685 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6ddcc20-4459-4b3a-8539-8fda3da2c415" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.661693 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6ddcc20-4459-4b3a-8539-8fda3da2c415" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Jan 30 17:42:26 crc kubenswrapper[4712]: E0130 17:42:26.661715 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a298da0-65c0-4e40-ab73-95f8a2c1e4ac" containerName="extract-utilities"
Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.661724 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a298da0-65c0-4e40-ab73-95f8a2c1e4ac" containerName="extract-utilities"
Jan 30 17:42:26 crc kubenswrapper[4712]: E0130 17:42:26.661744 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a298da0-65c0-4e40-ab73-95f8a2c1e4ac" containerName="extract-content"
Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.661752 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a298da0-65c0-4e40-ab73-95f8a2c1e4ac" containerName="extract-content"
Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.662061 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6ddcc20-4459-4b3a-8539-8fda3da2c415" containerName="nova-edpm-deployment-openstack-edpm-ipam"
Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.662093 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a298da0-65c0-4e40-ab73-95f8a2c1e4ac" containerName="registry-server"
Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.663035 4712 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv" Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.667365 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.667419 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-t6jfh" Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.667746 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.667760 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.668576 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.674194 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv"] Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.760180 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/96e36eb4-2d2a-4803-a882-ff770ce96ffc-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv\" (UID: \"96e36eb4-2d2a-4803-a882-ff770ce96ffc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv" Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.760244 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96e36eb4-2d2a-4803-a882-ff770ce96ffc-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv\" (UID: \"96e36eb4-2d2a-4803-a882-ff770ce96ffc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv" Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.760267 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96e36eb4-2d2a-4803-a882-ff770ce96ffc-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv\" (UID: \"96e36eb4-2d2a-4803-a882-ff770ce96ffc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv" Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.760308 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/96e36eb4-2d2a-4803-a882-ff770ce96ffc-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv\" (UID: \"96e36eb4-2d2a-4803-a882-ff770ce96ffc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv" Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.760372 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pdx4\" (UniqueName: \"kubernetes.io/projected/96e36eb4-2d2a-4803-a882-ff770ce96ffc-kube-api-access-8pdx4\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv\" (UID: \"96e36eb4-2d2a-4803-a882-ff770ce96ffc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv" Jan 30 17:42:26 crc 
kubenswrapper[4712]: I0130 17:42:26.760412 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e36eb4-2d2a-4803-a882-ff770ce96ffc-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv\" (UID: \"96e36eb4-2d2a-4803-a882-ff770ce96ffc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv" Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.760469 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/96e36eb4-2d2a-4803-a882-ff770ce96ffc-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv\" (UID: \"96e36eb4-2d2a-4803-a882-ff770ce96ffc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv" Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.862889 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pdx4\" (UniqueName: \"kubernetes.io/projected/96e36eb4-2d2a-4803-a882-ff770ce96ffc-kube-api-access-8pdx4\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv\" (UID: \"96e36eb4-2d2a-4803-a882-ff770ce96ffc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv" Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.862982 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e36eb4-2d2a-4803-a882-ff770ce96ffc-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv\" (UID: \"96e36eb4-2d2a-4803-a882-ff770ce96ffc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv" Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.863016 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/96e36eb4-2d2a-4803-a882-ff770ce96ffc-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv\" (UID: \"96e36eb4-2d2a-4803-a882-ff770ce96ffc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv" Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.863111 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/96e36eb4-2d2a-4803-a882-ff770ce96ffc-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv\" (UID: \"96e36eb4-2d2a-4803-a882-ff770ce96ffc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv" Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.863146 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96e36eb4-2d2a-4803-a882-ff770ce96ffc-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv\" (UID: \"96e36eb4-2d2a-4803-a882-ff770ce96ffc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv" Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.863168 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96e36eb4-2d2a-4803-a882-ff770ce96ffc-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv\" 
(UID: \"96e36eb4-2d2a-4803-a882-ff770ce96ffc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv" Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.863244 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/96e36eb4-2d2a-4803-a882-ff770ce96ffc-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv\" (UID: \"96e36eb4-2d2a-4803-a882-ff770ce96ffc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv" Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.868083 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e36eb4-2d2a-4803-a882-ff770ce96ffc-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv\" (UID: \"96e36eb4-2d2a-4803-a882-ff770ce96ffc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv" Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.868484 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/96e36eb4-2d2a-4803-a882-ff770ce96ffc-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv\" (UID: \"96e36eb4-2d2a-4803-a882-ff770ce96ffc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv" Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.869620 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/96e36eb4-2d2a-4803-a882-ff770ce96ffc-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv\" (UID: \"96e36eb4-2d2a-4803-a882-ff770ce96ffc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv" Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.871445 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96e36eb4-2d2a-4803-a882-ff770ce96ffc-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv\" (UID: \"96e36eb4-2d2a-4803-a882-ff770ce96ffc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv" Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.872218 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96e36eb4-2d2a-4803-a882-ff770ce96ffc-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv\" (UID: \"96e36eb4-2d2a-4803-a882-ff770ce96ffc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv" Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.878370 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/96e36eb4-2d2a-4803-a882-ff770ce96ffc-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv\" (UID: \"96e36eb4-2d2a-4803-a882-ff770ce96ffc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv" Jan 30 17:42:26 crc kubenswrapper[4712]: I0130 17:42:26.880425 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pdx4\" (UniqueName: \"kubernetes.io/projected/96e36eb4-2d2a-4803-a882-ff770ce96ffc-kube-api-access-8pdx4\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv\" (UID: \"96e36eb4-2d2a-4803-a882-ff770ce96ffc\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv" Jan 30 17:42:27 crc kubenswrapper[4712]: I0130 17:42:27.036491 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv" Jan 30 17:42:27 crc kubenswrapper[4712]: I0130 17:42:27.638042 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv"] Jan 30 17:42:28 crc kubenswrapper[4712]: I0130 17:42:28.558170 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv" event={"ID":"96e36eb4-2d2a-4803-a882-ff770ce96ffc","Type":"ContainerStarted","Data":"6023c3fcd6278b1ed289df5a7a1ceedecde9cadf171fde1433140ddcb1b9caaa"} Jan 30 17:42:28 crc kubenswrapper[4712]: I0130 17:42:28.558632 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv" event={"ID":"96e36eb4-2d2a-4803-a882-ff770ce96ffc","Type":"ContainerStarted","Data":"305290ebdd935d344c464900765624c4257b244f19537428ef2a9f0b31cc7bec"} Jan 30 17:42:28 crc kubenswrapper[4712]: I0130 17:42:28.600185 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv" podStartSLOduration=2.138268787 podStartE2EDuration="2.600159717s" podCreationTimestamp="2026-01-30 17:42:26 +0000 UTC" firstStartedPulling="2026-01-30 17:42:27.64865414 +0000 UTC m=+2884.555663619" lastFinishedPulling="2026-01-30 17:42:28.11054506 +0000 UTC m=+2885.017554549" observedRunningTime="2026-01-30 17:42:28.586738515 +0000 UTC m=+2885.493748024" watchObservedRunningTime="2026-01-30 17:42:28.600159717 +0000 UTC m=+2885.507169216" Jan 30 17:42:37 crc kubenswrapper[4712]: I0130 17:42:37.359893 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gwnk6"] Jan 30 17:42:37 crc kubenswrapper[4712]: I0130 17:42:37.362846 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gwnk6" Jan 30 17:42:37 crc kubenswrapper[4712]: I0130 17:42:37.399117 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gwnk6"] Jan 30 17:42:37 crc kubenswrapper[4712]: I0130 17:42:37.504370 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81f042c7-8d37-434d-923f-4c4e64dacec8-catalog-content\") pod \"redhat-marketplace-gwnk6\" (UID: \"81f042c7-8d37-434d-923f-4c4e64dacec8\") " pod="openshift-marketplace/redhat-marketplace-gwnk6" Jan 30 17:42:37 crc kubenswrapper[4712]: I0130 17:42:37.504526 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81f042c7-8d37-434d-923f-4c4e64dacec8-utilities\") pod \"redhat-marketplace-gwnk6\" (UID: \"81f042c7-8d37-434d-923f-4c4e64dacec8\") " pod="openshift-marketplace/redhat-marketplace-gwnk6" Jan 30 17:42:37 crc kubenswrapper[4712]: I0130 17:42:37.504588 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7fks\" (UniqueName: \"kubernetes.io/projected/81f042c7-8d37-434d-923f-4c4e64dacec8-kube-api-access-g7fks\") pod \"redhat-marketplace-gwnk6\" (UID: \"81f042c7-8d37-434d-923f-4c4e64dacec8\") " pod="openshift-marketplace/redhat-marketplace-gwnk6" Jan 30 17:42:37 crc kubenswrapper[4712]: I0130 17:42:37.605872 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7fks\" (UniqueName: \"kubernetes.io/projected/81f042c7-8d37-434d-923f-4c4e64dacec8-kube-api-access-g7fks\") pod \"redhat-marketplace-gwnk6\" (UID: \"81f042c7-8d37-434d-923f-4c4e64dacec8\") " pod="openshift-marketplace/redhat-marketplace-gwnk6" Jan 30 17:42:37 crc kubenswrapper[4712]: I0130 17:42:37.606216 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81f042c7-8d37-434d-923f-4c4e64dacec8-catalog-content\") pod \"redhat-marketplace-gwnk6\" (UID: \"81f042c7-8d37-434d-923f-4c4e64dacec8\") " pod="openshift-marketplace/redhat-marketplace-gwnk6" Jan 30 17:42:37 crc kubenswrapper[4712]: I0130 17:42:37.606451 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81f042c7-8d37-434d-923f-4c4e64dacec8-utilities\") pod \"redhat-marketplace-gwnk6\" (UID: \"81f042c7-8d37-434d-923f-4c4e64dacec8\") " pod="openshift-marketplace/redhat-marketplace-gwnk6" Jan 30 17:42:37 crc kubenswrapper[4712]: I0130 17:42:37.607048 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81f042c7-8d37-434d-923f-4c4e64dacec8-utilities\") pod \"redhat-marketplace-gwnk6\" (UID: \"81f042c7-8d37-434d-923f-4c4e64dacec8\") " pod="openshift-marketplace/redhat-marketplace-gwnk6" Jan 30 17:42:37 crc kubenswrapper[4712]: I0130 17:42:37.607494 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81f042c7-8d37-434d-923f-4c4e64dacec8-catalog-content\") pod \"redhat-marketplace-gwnk6\" (UID: \"81f042c7-8d37-434d-923f-4c4e64dacec8\") " pod="openshift-marketplace/redhat-marketplace-gwnk6" Jan 30 17:42:37 crc kubenswrapper[4712]: I0130 17:42:37.647225 4712 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-g7fks\" (UniqueName: \"kubernetes.io/projected/81f042c7-8d37-434d-923f-4c4e64dacec8-kube-api-access-g7fks\") pod \"redhat-marketplace-gwnk6\" (UID: \"81f042c7-8d37-434d-923f-4c4e64dacec8\") " pod="openshift-marketplace/redhat-marketplace-gwnk6" Jan 30 17:42:37 crc kubenswrapper[4712]: I0130 17:42:37.689660 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gwnk6" Jan 30 17:42:38 crc kubenswrapper[4712]: I0130 17:42:38.253626 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gwnk6"] Jan 30 17:42:38 crc kubenswrapper[4712]: I0130 17:42:38.654023 4712 generic.go:334] "Generic (PLEG): container finished" podID="81f042c7-8d37-434d-923f-4c4e64dacec8" containerID="b9efcd7757e6c7df1d546a202e51fbf92b05422ad410da20fa1474b7c6892da1" exitCode=0 Jan 30 17:42:38 crc kubenswrapper[4712]: I0130 17:42:38.654250 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gwnk6" event={"ID":"81f042c7-8d37-434d-923f-4c4e64dacec8","Type":"ContainerDied","Data":"b9efcd7757e6c7df1d546a202e51fbf92b05422ad410da20fa1474b7c6892da1"} Jan 30 17:42:38 crc kubenswrapper[4712]: I0130 17:42:38.654339 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gwnk6" event={"ID":"81f042c7-8d37-434d-923f-4c4e64dacec8","Type":"ContainerStarted","Data":"4bd211a425522e484cd2dab73eb1f92faa2a8fb3cef6e42b769cd122d7747a71"} Jan 30 17:42:39 crc kubenswrapper[4712]: I0130 17:42:39.564194 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4z84p"] Jan 30 17:42:39 crc kubenswrapper[4712]: I0130 17:42:39.567694 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4z84p" Jan 30 17:42:39 crc kubenswrapper[4712]: I0130 17:42:39.577266 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4z84p"] Jan 30 17:42:39 crc kubenswrapper[4712]: I0130 17:42:39.643740 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3fa6f24-1c6a-431d-a192-20ac6aeeda6a-utilities\") pod \"community-operators-4z84p\" (UID: \"d3fa6f24-1c6a-431d-a192-20ac6aeeda6a\") " pod="openshift-marketplace/community-operators-4z84p" Jan 30 17:42:39 crc kubenswrapper[4712]: I0130 17:42:39.648150 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3fa6f24-1c6a-431d-a192-20ac6aeeda6a-catalog-content\") pod \"community-operators-4z84p\" (UID: \"d3fa6f24-1c6a-431d-a192-20ac6aeeda6a\") " pod="openshift-marketplace/community-operators-4z84p" Jan 30 17:42:39 crc kubenswrapper[4712]: I0130 17:42:39.648452 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jfnm2\" (UniqueName: \"kubernetes.io/projected/d3fa6f24-1c6a-431d-a192-20ac6aeeda6a-kube-api-access-jfnm2\") pod \"community-operators-4z84p\" (UID: \"d3fa6f24-1c6a-431d-a192-20ac6aeeda6a\") " pod="openshift-marketplace/community-operators-4z84p" Jan 30 17:42:39 crc kubenswrapper[4712]: I0130 17:42:39.750080 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jfnm2\" (UniqueName: \"kubernetes.io/projected/d3fa6f24-1c6a-431d-a192-20ac6aeeda6a-kube-api-access-jfnm2\") pod \"community-operators-4z84p\" (UID: \"d3fa6f24-1c6a-431d-a192-20ac6aeeda6a\") " pod="openshift-marketplace/community-operators-4z84p" Jan 30 17:42:39 crc kubenswrapper[4712]: I0130 17:42:39.750534 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3fa6f24-1c6a-431d-a192-20ac6aeeda6a-utilities\") pod \"community-operators-4z84p\" (UID: \"d3fa6f24-1c6a-431d-a192-20ac6aeeda6a\") " pod="openshift-marketplace/community-operators-4z84p" Jan 30 17:42:39 crc kubenswrapper[4712]: I0130 17:42:39.751001 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3fa6f24-1c6a-431d-a192-20ac6aeeda6a-utilities\") pod \"community-operators-4z84p\" (UID: \"d3fa6f24-1c6a-431d-a192-20ac6aeeda6a\") " pod="openshift-marketplace/community-operators-4z84p" Jan 30 17:42:39 crc kubenswrapper[4712]: I0130 17:42:39.751127 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3fa6f24-1c6a-431d-a192-20ac6aeeda6a-catalog-content\") pod \"community-operators-4z84p\" (UID: \"d3fa6f24-1c6a-431d-a192-20ac6aeeda6a\") " pod="openshift-marketplace/community-operators-4z84p" Jan 30 17:42:39 crc kubenswrapper[4712]: I0130 17:42:39.751384 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3fa6f24-1c6a-431d-a192-20ac6aeeda6a-catalog-content\") pod \"community-operators-4z84p\" (UID: \"d3fa6f24-1c6a-431d-a192-20ac6aeeda6a\") " pod="openshift-marketplace/community-operators-4z84p" Jan 30 17:42:39 crc kubenswrapper[4712]: I0130 17:42:39.771062 4712 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-jfnm2\" (UniqueName: \"kubernetes.io/projected/d3fa6f24-1c6a-431d-a192-20ac6aeeda6a-kube-api-access-jfnm2\") pod \"community-operators-4z84p\" (UID: \"d3fa6f24-1c6a-431d-a192-20ac6aeeda6a\") " pod="openshift-marketplace/community-operators-4z84p" Jan 30 17:42:39 crc kubenswrapper[4712]: I0130 17:42:39.889420 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4z84p" Jan 30 17:42:40 crc kubenswrapper[4712]: I0130 17:42:40.544321 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4z84p"] Jan 30 17:42:40 crc kubenswrapper[4712]: W0130 17:42:40.546679 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3fa6f24_1c6a_431d_a192_20ac6aeeda6a.slice/crio-c548b22e9a7cc32eb899d6cb8db258bfb3ffd60b44dceb1158294fb17d14baba WatchSource:0}: Error finding container c548b22e9a7cc32eb899d6cb8db258bfb3ffd60b44dceb1158294fb17d14baba: Status 404 returned error can't find the container with id c548b22e9a7cc32eb899d6cb8db258bfb3ffd60b44dceb1158294fb17d14baba Jan 30 17:42:40 crc kubenswrapper[4712]: I0130 17:42:40.674807 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gwnk6" event={"ID":"81f042c7-8d37-434d-923f-4c4e64dacec8","Type":"ContainerStarted","Data":"c6f41209734c98cf9a3f2f5b70cf1034f616e379a1f13da723acc9f22086d42e"} Jan 30 17:42:40 crc kubenswrapper[4712]: I0130 17:42:40.676456 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4z84p" event={"ID":"d3fa6f24-1c6a-431d-a192-20ac6aeeda6a","Type":"ContainerStarted","Data":"c548b22e9a7cc32eb899d6cb8db258bfb3ffd60b44dceb1158294fb17d14baba"} Jan 30 17:42:41 crc kubenswrapper[4712]: I0130 17:42:41.688861 4712 generic.go:334] "Generic (PLEG): container finished" podID="d3fa6f24-1c6a-431d-a192-20ac6aeeda6a" containerID="ad422303b64ad51e9d4abfc38ca13f3b3d7a313d519f517c2b4eab8456903b29" exitCode=0 Jan 30 17:42:41 crc kubenswrapper[4712]: I0130 17:42:41.688984 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4z84p" event={"ID":"d3fa6f24-1c6a-431d-a192-20ac6aeeda6a","Type":"ContainerDied","Data":"ad422303b64ad51e9d4abfc38ca13f3b3d7a313d519f517c2b4eab8456903b29"} Jan 30 17:42:43 crc kubenswrapper[4712]: I0130 17:42:43.708441 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4z84p" event={"ID":"d3fa6f24-1c6a-431d-a192-20ac6aeeda6a","Type":"ContainerStarted","Data":"8724129d08406bdcac91e18b3b8a5bd1c2cc44973cad5b1a89deb93d5239c3b7"} Jan 30 17:42:43 crc kubenswrapper[4712]: I0130 17:42:43.710770 4712 generic.go:334] "Generic (PLEG): container finished" podID="81f042c7-8d37-434d-923f-4c4e64dacec8" containerID="c6f41209734c98cf9a3f2f5b70cf1034f616e379a1f13da723acc9f22086d42e" exitCode=0 Jan 30 17:42:43 crc kubenswrapper[4712]: I0130 17:42:43.710824 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gwnk6" event={"ID":"81f042c7-8d37-434d-923f-4c4e64dacec8","Type":"ContainerDied","Data":"c6f41209734c98cf9a3f2f5b70cf1034f616e379a1f13da723acc9f22086d42e"} Jan 30 17:42:46 crc kubenswrapper[4712]: I0130 17:42:46.754134 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gwnk6" 
event={"ID":"81f042c7-8d37-434d-923f-4c4e64dacec8","Type":"ContainerStarted","Data":"d65b4bc30d00d053c3f3a76f88997be10ce050c5cab5e68f6fe87910cda766a8"} Jan 30 17:42:46 crc kubenswrapper[4712]: I0130 17:42:46.782238 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gwnk6" podStartSLOduration=2.220763952 podStartE2EDuration="9.782218596s" podCreationTimestamp="2026-01-30 17:42:37 +0000 UTC" firstStartedPulling="2026-01-30 17:42:38.655834944 +0000 UTC m=+2895.562844413" lastFinishedPulling="2026-01-30 17:42:46.217289588 +0000 UTC m=+2903.124299057" observedRunningTime="2026-01-30 17:42:46.771950269 +0000 UTC m=+2903.678959748" watchObservedRunningTime="2026-01-30 17:42:46.782218596 +0000 UTC m=+2903.689228085" Jan 30 17:42:47 crc kubenswrapper[4712]: I0130 17:42:47.691227 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gwnk6" Jan 30 17:42:47 crc kubenswrapper[4712]: I0130 17:42:47.691590 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gwnk6" Jan 30 17:42:47 crc kubenswrapper[4712]: I0130 17:42:47.764327 4712 generic.go:334] "Generic (PLEG): container finished" podID="d3fa6f24-1c6a-431d-a192-20ac6aeeda6a" containerID="8724129d08406bdcac91e18b3b8a5bd1c2cc44973cad5b1a89deb93d5239c3b7" exitCode=0 Jan 30 17:42:47 crc kubenswrapper[4712]: I0130 17:42:47.764420 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4z84p" event={"ID":"d3fa6f24-1c6a-431d-a192-20ac6aeeda6a","Type":"ContainerDied","Data":"8724129d08406bdcac91e18b3b8a5bd1c2cc44973cad5b1a89deb93d5239c3b7"} Jan 30 17:42:48 crc kubenswrapper[4712]: I0130 17:42:48.743087 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-gwnk6" podUID="81f042c7-8d37-434d-923f-4c4e64dacec8" containerName="registry-server" probeResult="failure" output=< Jan 30 17:42:48 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 17:42:48 crc kubenswrapper[4712]: > Jan 30 17:42:48 crc kubenswrapper[4712]: I0130 17:42:48.777102 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4z84p" event={"ID":"d3fa6f24-1c6a-431d-a192-20ac6aeeda6a","Type":"ContainerStarted","Data":"037c145875ae9ead1f0492ea9a1ffc29119bfbfb99a867f48bf010dec48278ca"} Jan 30 17:42:48 crc kubenswrapper[4712]: I0130 17:42:48.812553 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4z84p" podStartSLOduration=3.049952735 podStartE2EDuration="9.812524399s" podCreationTimestamp="2026-01-30 17:42:39 +0000 UTC" firstStartedPulling="2026-01-30 17:42:41.69316558 +0000 UTC m=+2898.600175049" lastFinishedPulling="2026-01-30 17:42:48.455737204 +0000 UTC m=+2905.362746713" observedRunningTime="2026-01-30 17:42:48.79762089 +0000 UTC m=+2905.704630359" watchObservedRunningTime="2026-01-30 17:42:48.812524399 +0000 UTC m=+2905.719533878" Jan 30 17:42:49 crc kubenswrapper[4712]: I0130 17:42:49.890008 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4z84p" Jan 30 17:42:49 crc kubenswrapper[4712]: I0130 17:42:49.890245 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4z84p" Jan 30 17:42:50 crc kubenswrapper[4712]: I0130 
17:42:50.954763 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-4z84p" podUID="d3fa6f24-1c6a-431d-a192-20ac6aeeda6a" containerName="registry-server" probeResult="failure" output=< Jan 30 17:42:50 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 17:42:50 crc kubenswrapper[4712]: > Jan 30 17:42:58 crc kubenswrapper[4712]: I0130 17:42:58.747508 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-gwnk6" podUID="81f042c7-8d37-434d-923f-4c4e64dacec8" containerName="registry-server" probeResult="failure" output=< Jan 30 17:42:58 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 17:42:58 crc kubenswrapper[4712]: > Jan 30 17:43:00 crc kubenswrapper[4712]: I0130 17:43:00.952902 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-4z84p" podUID="d3fa6f24-1c6a-431d-a192-20ac6aeeda6a" containerName="registry-server" probeResult="failure" output=< Jan 30 17:43:00 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 17:43:00 crc kubenswrapper[4712]: > Jan 30 17:43:06 crc kubenswrapper[4712]: I0130 17:43:06.271843 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:43:06 crc kubenswrapper[4712]: I0130 17:43:06.272446 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:43:07 crc kubenswrapper[4712]: I0130 17:43:07.745286 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gwnk6" Jan 30 17:43:07 crc kubenswrapper[4712]: I0130 17:43:07.792713 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gwnk6" Jan 30 17:43:07 crc kubenswrapper[4712]: I0130 17:43:07.981073 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gwnk6"] Jan 30 17:43:09 crc kubenswrapper[4712]: I0130 17:43:09.029016 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-gwnk6" podUID="81f042c7-8d37-434d-923f-4c4e64dacec8" containerName="registry-server" containerID="cri-o://d65b4bc30d00d053c3f3a76f88997be10ce050c5cab5e68f6fe87910cda766a8" gracePeriod=2 Jan 30 17:43:09 crc kubenswrapper[4712]: I0130 17:43:09.911634 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gwnk6" Jan 30 17:43:09 crc kubenswrapper[4712]: I0130 17:43:09.943271 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4z84p" Jan 30 17:43:09 crc kubenswrapper[4712]: I0130 17:43:09.997846 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4z84p" Jan 30 17:43:10 crc kubenswrapper[4712]: I0130 17:43:10.039018 4712 generic.go:334] "Generic (PLEG): container finished" podID="81f042c7-8d37-434d-923f-4c4e64dacec8" containerID="d65b4bc30d00d053c3f3a76f88997be10ce050c5cab5e68f6fe87910cda766a8" exitCode=0 Jan 30 17:43:10 crc kubenswrapper[4712]: I0130 17:43:10.039064 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gwnk6" Jan 30 17:43:10 crc kubenswrapper[4712]: I0130 17:43:10.039085 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gwnk6" event={"ID":"81f042c7-8d37-434d-923f-4c4e64dacec8","Type":"ContainerDied","Data":"d65b4bc30d00d053c3f3a76f88997be10ce050c5cab5e68f6fe87910cda766a8"} Jan 30 17:43:10 crc kubenswrapper[4712]: I0130 17:43:10.040331 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gwnk6" event={"ID":"81f042c7-8d37-434d-923f-4c4e64dacec8","Type":"ContainerDied","Data":"4bd211a425522e484cd2dab73eb1f92faa2a8fb3cef6e42b769cd122d7747a71"} Jan 30 17:43:10 crc kubenswrapper[4712]: I0130 17:43:10.040420 4712 scope.go:117] "RemoveContainer" containerID="d65b4bc30d00d053c3f3a76f88997be10ce050c5cab5e68f6fe87910cda766a8" Jan 30 17:43:10 crc kubenswrapper[4712]: I0130 17:43:10.059733 4712 scope.go:117] "RemoveContainer" containerID="c6f41209734c98cf9a3f2f5b70cf1034f616e379a1f13da723acc9f22086d42e" Jan 30 17:43:10 crc kubenswrapper[4712]: I0130 17:43:10.063408 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81f042c7-8d37-434d-923f-4c4e64dacec8-utilities\") pod \"81f042c7-8d37-434d-923f-4c4e64dacec8\" (UID: \"81f042c7-8d37-434d-923f-4c4e64dacec8\") " Jan 30 17:43:10 crc kubenswrapper[4712]: I0130 17:43:10.063450 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81f042c7-8d37-434d-923f-4c4e64dacec8-catalog-content\") pod \"81f042c7-8d37-434d-923f-4c4e64dacec8\" (UID: \"81f042c7-8d37-434d-923f-4c4e64dacec8\") " Jan 30 17:43:10 crc kubenswrapper[4712]: I0130 17:43:10.063607 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g7fks\" (UniqueName: \"kubernetes.io/projected/81f042c7-8d37-434d-923f-4c4e64dacec8-kube-api-access-g7fks\") pod \"81f042c7-8d37-434d-923f-4c4e64dacec8\" (UID: \"81f042c7-8d37-434d-923f-4c4e64dacec8\") " Jan 30 17:43:10 crc kubenswrapper[4712]: I0130 17:43:10.064132 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81f042c7-8d37-434d-923f-4c4e64dacec8-utilities" (OuterVolumeSpecName: "utilities") pod "81f042c7-8d37-434d-923f-4c4e64dacec8" (UID: "81f042c7-8d37-434d-923f-4c4e64dacec8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:43:10 crc kubenswrapper[4712]: I0130 17:43:10.069937 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81f042c7-8d37-434d-923f-4c4e64dacec8-kube-api-access-g7fks" (OuterVolumeSpecName: "kube-api-access-g7fks") pod "81f042c7-8d37-434d-923f-4c4e64dacec8" (UID: "81f042c7-8d37-434d-923f-4c4e64dacec8"). InnerVolumeSpecName "kube-api-access-g7fks". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:43:10 crc kubenswrapper[4712]: I0130 17:43:10.086625 4712 scope.go:117] "RemoveContainer" containerID="b9efcd7757e6c7df1d546a202e51fbf92b05422ad410da20fa1474b7c6892da1" Jan 30 17:43:10 crc kubenswrapper[4712]: I0130 17:43:10.106430 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81f042c7-8d37-434d-923f-4c4e64dacec8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "81f042c7-8d37-434d-923f-4c4e64dacec8" (UID: "81f042c7-8d37-434d-923f-4c4e64dacec8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:43:10 crc kubenswrapper[4712]: I0130 17:43:10.169289 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/81f042c7-8d37-434d-923f-4c4e64dacec8-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:43:10 crc kubenswrapper[4712]: I0130 17:43:10.169692 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/81f042c7-8d37-434d-923f-4c4e64dacec8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:43:10 crc kubenswrapper[4712]: I0130 17:43:10.169739 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g7fks\" (UniqueName: \"kubernetes.io/projected/81f042c7-8d37-434d-923f-4c4e64dacec8-kube-api-access-g7fks\") on node \"crc\" DevicePath \"\"" Jan 30 17:43:10 crc kubenswrapper[4712]: I0130 17:43:10.171239 4712 scope.go:117] "RemoveContainer" containerID="d65b4bc30d00d053c3f3a76f88997be10ce050c5cab5e68f6fe87910cda766a8" Jan 30 17:43:10 crc kubenswrapper[4712]: E0130 17:43:10.171639 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d65b4bc30d00d053c3f3a76f88997be10ce050c5cab5e68f6fe87910cda766a8\": container with ID starting with d65b4bc30d00d053c3f3a76f88997be10ce050c5cab5e68f6fe87910cda766a8 not found: ID does not exist" containerID="d65b4bc30d00d053c3f3a76f88997be10ce050c5cab5e68f6fe87910cda766a8" Jan 30 17:43:10 crc kubenswrapper[4712]: I0130 17:43:10.171672 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d65b4bc30d00d053c3f3a76f88997be10ce050c5cab5e68f6fe87910cda766a8"} err="failed to get container status \"d65b4bc30d00d053c3f3a76f88997be10ce050c5cab5e68f6fe87910cda766a8\": rpc error: code = NotFound desc = could not find container \"d65b4bc30d00d053c3f3a76f88997be10ce050c5cab5e68f6fe87910cda766a8\": container with ID starting with d65b4bc30d00d053c3f3a76f88997be10ce050c5cab5e68f6fe87910cda766a8 not found: ID does not exist" Jan 30 17:43:10 crc kubenswrapper[4712]: I0130 17:43:10.171694 4712 scope.go:117] "RemoveContainer" containerID="c6f41209734c98cf9a3f2f5b70cf1034f616e379a1f13da723acc9f22086d42e" Jan 30 17:43:10 crc kubenswrapper[4712]: E0130 17:43:10.171966 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"c6f41209734c98cf9a3f2f5b70cf1034f616e379a1f13da723acc9f22086d42e\": container with ID starting with c6f41209734c98cf9a3f2f5b70cf1034f616e379a1f13da723acc9f22086d42e not found: ID does not exist" containerID="c6f41209734c98cf9a3f2f5b70cf1034f616e379a1f13da723acc9f22086d42e" Jan 30 17:43:10 crc kubenswrapper[4712]: I0130 17:43:10.171994 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6f41209734c98cf9a3f2f5b70cf1034f616e379a1f13da723acc9f22086d42e"} err="failed to get container status \"c6f41209734c98cf9a3f2f5b70cf1034f616e379a1f13da723acc9f22086d42e\": rpc error: code = NotFound desc = could not find container \"c6f41209734c98cf9a3f2f5b70cf1034f616e379a1f13da723acc9f22086d42e\": container with ID starting with c6f41209734c98cf9a3f2f5b70cf1034f616e379a1f13da723acc9f22086d42e not found: ID does not exist" Jan 30 17:43:10 crc kubenswrapper[4712]: I0130 17:43:10.172015 4712 scope.go:117] "RemoveContainer" containerID="b9efcd7757e6c7df1d546a202e51fbf92b05422ad410da20fa1474b7c6892da1" Jan 30 17:43:10 crc kubenswrapper[4712]: E0130 17:43:10.172341 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9efcd7757e6c7df1d546a202e51fbf92b05422ad410da20fa1474b7c6892da1\": container with ID starting with b9efcd7757e6c7df1d546a202e51fbf92b05422ad410da20fa1474b7c6892da1 not found: ID does not exist" containerID="b9efcd7757e6c7df1d546a202e51fbf92b05422ad410da20fa1474b7c6892da1" Jan 30 17:43:10 crc kubenswrapper[4712]: I0130 17:43:10.172369 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9efcd7757e6c7df1d546a202e51fbf92b05422ad410da20fa1474b7c6892da1"} err="failed to get container status \"b9efcd7757e6c7df1d546a202e51fbf92b05422ad410da20fa1474b7c6892da1\": rpc error: code = NotFound desc = could not find container \"b9efcd7757e6c7df1d546a202e51fbf92b05422ad410da20fa1474b7c6892da1\": container with ID starting with b9efcd7757e6c7df1d546a202e51fbf92b05422ad410da20fa1474b7c6892da1 not found: ID does not exist" Jan 30 17:43:10 crc kubenswrapper[4712]: I0130 17:43:10.375923 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gwnk6"] Jan 30 17:43:10 crc kubenswrapper[4712]: I0130 17:43:10.390185 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gwnk6"] Jan 30 17:43:10 crc kubenswrapper[4712]: I0130 17:43:10.779308 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4z84p"] Jan 30 17:43:11 crc kubenswrapper[4712]: I0130 17:43:11.051551 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4z84p" podUID="d3fa6f24-1c6a-431d-a192-20ac6aeeda6a" containerName="registry-server" containerID="cri-o://037c145875ae9ead1f0492ea9a1ffc29119bfbfb99a867f48bf010dec48278ca" gracePeriod=2 Jan 30 17:43:11 crc kubenswrapper[4712]: I0130 17:43:11.810914 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81f042c7-8d37-434d-923f-4c4e64dacec8" path="/var/lib/kubelet/pods/81f042c7-8d37-434d-923f-4c4e64dacec8/volumes" Jan 30 17:43:12 crc kubenswrapper[4712]: I0130 17:43:12.090934 4712 generic.go:334] "Generic (PLEG): container finished" podID="d3fa6f24-1c6a-431d-a192-20ac6aeeda6a" containerID="037c145875ae9ead1f0492ea9a1ffc29119bfbfb99a867f48bf010dec48278ca" exitCode=0 Jan 30 17:43:12 crc kubenswrapper[4712]: 
I0130 17:43:12.091001 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4z84p" event={"ID":"d3fa6f24-1c6a-431d-a192-20ac6aeeda6a","Type":"ContainerDied","Data":"037c145875ae9ead1f0492ea9a1ffc29119bfbfb99a867f48bf010dec48278ca"} Jan 30 17:43:12 crc kubenswrapper[4712]: I0130 17:43:12.091347 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4z84p" event={"ID":"d3fa6f24-1c6a-431d-a192-20ac6aeeda6a","Type":"ContainerDied","Data":"c548b22e9a7cc32eb899d6cb8db258bfb3ffd60b44dceb1158294fb17d14baba"} Jan 30 17:43:12 crc kubenswrapper[4712]: I0130 17:43:12.091365 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c548b22e9a7cc32eb899d6cb8db258bfb3ffd60b44dceb1158294fb17d14baba" Jan 30 17:43:12 crc kubenswrapper[4712]: I0130 17:43:12.149617 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4z84p" Jan 30 17:43:12 crc kubenswrapper[4712]: I0130 17:43:12.314577 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jfnm2\" (UniqueName: \"kubernetes.io/projected/d3fa6f24-1c6a-431d-a192-20ac6aeeda6a-kube-api-access-jfnm2\") pod \"d3fa6f24-1c6a-431d-a192-20ac6aeeda6a\" (UID: \"d3fa6f24-1c6a-431d-a192-20ac6aeeda6a\") " Jan 30 17:43:12 crc kubenswrapper[4712]: I0130 17:43:12.314683 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3fa6f24-1c6a-431d-a192-20ac6aeeda6a-catalog-content\") pod \"d3fa6f24-1c6a-431d-a192-20ac6aeeda6a\" (UID: \"d3fa6f24-1c6a-431d-a192-20ac6aeeda6a\") " Jan 30 17:43:12 crc kubenswrapper[4712]: I0130 17:43:12.314732 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3fa6f24-1c6a-431d-a192-20ac6aeeda6a-utilities\") pod \"d3fa6f24-1c6a-431d-a192-20ac6aeeda6a\" (UID: \"d3fa6f24-1c6a-431d-a192-20ac6aeeda6a\") " Jan 30 17:43:12 crc kubenswrapper[4712]: I0130 17:43:12.315326 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3fa6f24-1c6a-431d-a192-20ac6aeeda6a-utilities" (OuterVolumeSpecName: "utilities") pod "d3fa6f24-1c6a-431d-a192-20ac6aeeda6a" (UID: "d3fa6f24-1c6a-431d-a192-20ac6aeeda6a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:43:12 crc kubenswrapper[4712]: I0130 17:43:12.330447 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3fa6f24-1c6a-431d-a192-20ac6aeeda6a-kube-api-access-jfnm2" (OuterVolumeSpecName: "kube-api-access-jfnm2") pod "d3fa6f24-1c6a-431d-a192-20ac6aeeda6a" (UID: "d3fa6f24-1c6a-431d-a192-20ac6aeeda6a"). InnerVolumeSpecName "kube-api-access-jfnm2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:43:12 crc kubenswrapper[4712]: I0130 17:43:12.385385 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d3fa6f24-1c6a-431d-a192-20ac6aeeda6a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d3fa6f24-1c6a-431d-a192-20ac6aeeda6a" (UID: "d3fa6f24-1c6a-431d-a192-20ac6aeeda6a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:43:12 crc kubenswrapper[4712]: I0130 17:43:12.417019 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jfnm2\" (UniqueName: \"kubernetes.io/projected/d3fa6f24-1c6a-431d-a192-20ac6aeeda6a-kube-api-access-jfnm2\") on node \"crc\" DevicePath \"\"" Jan 30 17:43:12 crc kubenswrapper[4712]: I0130 17:43:12.417070 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d3fa6f24-1c6a-431d-a192-20ac6aeeda6a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:43:12 crc kubenswrapper[4712]: I0130 17:43:12.417084 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d3fa6f24-1c6a-431d-a192-20ac6aeeda6a-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:43:13 crc kubenswrapper[4712]: I0130 17:43:13.101652 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4z84p" Jan 30 17:43:13 crc kubenswrapper[4712]: I0130 17:43:13.141868 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4z84p"] Jan 30 17:43:13 crc kubenswrapper[4712]: I0130 17:43:13.152279 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4z84p"] Jan 30 17:43:13 crc kubenswrapper[4712]: I0130 17:43:13.817452 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3fa6f24-1c6a-431d-a192-20ac6aeeda6a" path="/var/lib/kubelet/pods/d3fa6f24-1c6a-431d-a192-20ac6aeeda6a/volumes" Jan 30 17:43:36 crc kubenswrapper[4712]: I0130 17:43:36.272370 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:43:36 crc kubenswrapper[4712]: I0130 17:43:36.273040 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:44:06 crc kubenswrapper[4712]: I0130 17:44:06.271220 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:44:06 crc kubenswrapper[4712]: I0130 17:44:06.273851 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:44:06 crc kubenswrapper[4712]: I0130 17:44:06.274191 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" Jan 30 17:44:06 crc kubenswrapper[4712]: I0130 17:44:06.275613 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"258988991cc97b72cc046c1bf95884aa854ed690c9651529519cbcfc0e55aa62"} pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 17:44:06 crc kubenswrapper[4712]: I0130 17:44:06.275912 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" containerID="cri-o://258988991cc97b72cc046c1bf95884aa854ed690c9651529519cbcfc0e55aa62" gracePeriod=600 Jan 30 17:44:06 crc kubenswrapper[4712]: I0130 17:44:06.601902 4712 generic.go:334] "Generic (PLEG): container finished" podID="75ff6334-72a0-4748-bba6-0efb493c8033" containerID="258988991cc97b72cc046c1bf95884aa854ed690c9651529519cbcfc0e55aa62" exitCode=0 Jan 30 17:44:06 crc kubenswrapper[4712]: I0130 17:44:06.601954 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerDied","Data":"258988991cc97b72cc046c1bf95884aa854ed690c9651529519cbcfc0e55aa62"} Jan 30 17:44:06 crc kubenswrapper[4712]: I0130 17:44:06.601994 4712 scope.go:117] "RemoveContainer" containerID="0561ffa9d248b1e6773600da541368083c938ae56c58fc79ffe715ad701d3d50" Jan 30 17:44:06 crc kubenswrapper[4712]: E0130 17:44:06.736587 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:44:07 crc kubenswrapper[4712]: I0130 17:44:07.620320 4712 scope.go:117] "RemoveContainer" containerID="258988991cc97b72cc046c1bf95884aa854ed690c9651529519cbcfc0e55aa62" Jan 30 17:44:07 crc kubenswrapper[4712]: E0130 17:44:07.621108 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:44:17 crc kubenswrapper[4712]: I0130 17:44:17.712966 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-fjkzt"] Jan 30 17:44:17 crc kubenswrapper[4712]: E0130 17:44:17.713917 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81f042c7-8d37-434d-923f-4c4e64dacec8" containerName="extract-utilities" Jan 30 17:44:17 crc kubenswrapper[4712]: I0130 17:44:17.713936 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="81f042c7-8d37-434d-923f-4c4e64dacec8" containerName="extract-utilities" Jan 30 17:44:17 crc kubenswrapper[4712]: E0130 17:44:17.713963 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3fa6f24-1c6a-431d-a192-20ac6aeeda6a" containerName="registry-server" Jan 30 17:44:17 crc kubenswrapper[4712]: I0130 17:44:17.713971 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3fa6f24-1c6a-431d-a192-20ac6aeeda6a" 
containerName="registry-server" Jan 30 17:44:17 crc kubenswrapper[4712]: E0130 17:44:17.713985 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81f042c7-8d37-434d-923f-4c4e64dacec8" containerName="registry-server" Jan 30 17:44:17 crc kubenswrapper[4712]: I0130 17:44:17.713994 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="81f042c7-8d37-434d-923f-4c4e64dacec8" containerName="registry-server" Jan 30 17:44:17 crc kubenswrapper[4712]: E0130 17:44:17.714013 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3fa6f24-1c6a-431d-a192-20ac6aeeda6a" containerName="extract-content" Jan 30 17:44:17 crc kubenswrapper[4712]: I0130 17:44:17.714018 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3fa6f24-1c6a-431d-a192-20ac6aeeda6a" containerName="extract-content" Jan 30 17:44:17 crc kubenswrapper[4712]: E0130 17:44:17.714034 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81f042c7-8d37-434d-923f-4c4e64dacec8" containerName="extract-content" Jan 30 17:44:17 crc kubenswrapper[4712]: I0130 17:44:17.714041 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="81f042c7-8d37-434d-923f-4c4e64dacec8" containerName="extract-content" Jan 30 17:44:17 crc kubenswrapper[4712]: E0130 17:44:17.714051 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3fa6f24-1c6a-431d-a192-20ac6aeeda6a" containerName="extract-utilities" Jan 30 17:44:17 crc kubenswrapper[4712]: I0130 17:44:17.714057 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3fa6f24-1c6a-431d-a192-20ac6aeeda6a" containerName="extract-utilities" Jan 30 17:44:17 crc kubenswrapper[4712]: I0130 17:44:17.714238 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="81f042c7-8d37-434d-923f-4c4e64dacec8" containerName="registry-server" Jan 30 17:44:17 crc kubenswrapper[4712]: I0130 17:44:17.714255 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3fa6f24-1c6a-431d-a192-20ac6aeeda6a" containerName="registry-server" Jan 30 17:44:17 crc kubenswrapper[4712]: I0130 17:44:17.723768 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-fjkzt" Jan 30 17:44:17 crc kubenswrapper[4712]: I0130 17:44:17.731856 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fjkzt"] Jan 30 17:44:17 crc kubenswrapper[4712]: I0130 17:44:17.746984 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f9345b-a5fb-4a51-9927-c0a6a8bdda90-utilities\") pod \"certified-operators-fjkzt\" (UID: \"87f9345b-a5fb-4a51-9927-c0a6a8bdda90\") " pod="openshift-marketplace/certified-operators-fjkzt" Jan 30 17:44:17 crc kubenswrapper[4712]: I0130 17:44:17.747058 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw227\" (UniqueName: \"kubernetes.io/projected/87f9345b-a5fb-4a51-9927-c0a6a8bdda90-kube-api-access-gw227\") pod \"certified-operators-fjkzt\" (UID: \"87f9345b-a5fb-4a51-9927-c0a6a8bdda90\") " pod="openshift-marketplace/certified-operators-fjkzt" Jan 30 17:44:17 crc kubenswrapper[4712]: I0130 17:44:17.747238 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f9345b-a5fb-4a51-9927-c0a6a8bdda90-catalog-content\") pod \"certified-operators-fjkzt\" (UID: \"87f9345b-a5fb-4a51-9927-c0a6a8bdda90\") " pod="openshift-marketplace/certified-operators-fjkzt" Jan 30 17:44:17 crc kubenswrapper[4712]: I0130 17:44:17.848278 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f9345b-a5fb-4a51-9927-c0a6a8bdda90-utilities\") pod \"certified-operators-fjkzt\" (UID: \"87f9345b-a5fb-4a51-9927-c0a6a8bdda90\") " pod="openshift-marketplace/certified-operators-fjkzt" Jan 30 17:44:17 crc kubenswrapper[4712]: I0130 17:44:17.848318 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw227\" (UniqueName: \"kubernetes.io/projected/87f9345b-a5fb-4a51-9927-c0a6a8bdda90-kube-api-access-gw227\") pod \"certified-operators-fjkzt\" (UID: \"87f9345b-a5fb-4a51-9927-c0a6a8bdda90\") " pod="openshift-marketplace/certified-operators-fjkzt" Jan 30 17:44:17 crc kubenswrapper[4712]: I0130 17:44:17.848475 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f9345b-a5fb-4a51-9927-c0a6a8bdda90-catalog-content\") pod \"certified-operators-fjkzt\" (UID: \"87f9345b-a5fb-4a51-9927-c0a6a8bdda90\") " pod="openshift-marketplace/certified-operators-fjkzt" Jan 30 17:44:17 crc kubenswrapper[4712]: I0130 17:44:17.849338 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f9345b-a5fb-4a51-9927-c0a6a8bdda90-utilities\") pod \"certified-operators-fjkzt\" (UID: \"87f9345b-a5fb-4a51-9927-c0a6a8bdda90\") " pod="openshift-marketplace/certified-operators-fjkzt" Jan 30 17:44:17 crc kubenswrapper[4712]: I0130 17:44:17.849831 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f9345b-a5fb-4a51-9927-c0a6a8bdda90-catalog-content\") pod \"certified-operators-fjkzt\" (UID: \"87f9345b-a5fb-4a51-9927-c0a6a8bdda90\") " pod="openshift-marketplace/certified-operators-fjkzt" Jan 30 17:44:17 crc kubenswrapper[4712]: I0130 17:44:17.874602 4712 operation_generator.go:637] 
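Every entry in this log shares the same three-part shape: a syslog prefix ("Jan 30 17:44:17 crc kubenswrapper[4712]:"), a klog header (severity letter, MMDD, time, PID, file:line), and a structured message. A small Go parser for that shape, derived purely from the lines above rather than from any kubelet API, can be handy when slicing these logs:

// klog_parse.go — parse the syslog+klog line shape seen in this log.
package main

import (
	"fmt"
	"regexp"
)

// The regex mirrors the entries above; it is not a kubelet interface.
var entry = regexp.MustCompile(
	`^(\w{3} \d+ [\d:]+) (\S+) kubenswrapper\[(\d+)\]: ([IWE])(\d{4}) ([\d:.]+)\s+\d+ ([\w.]+:\d+)\] (.+)$`)

func main() {
	// A complete sample line copied from the log below.
	line := `Jan 30 17:44:44 crc kubenswrapper[4712]: I0130 17:44:44.018325 4712 scope.go:117] "RemoveContainer" containerID="51cbb14ea5d504c2c0da38f7f90d659a6b612ae4013c2941ed1b924cb1b1eead"`
	m := entry.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Println("host:", m[2])     // crc
	fmt.Println("severity:", m[4]) // I = info, E = error
	fmt.Println("source:", m[7])   // scope.go:117
	fmt.Println("message:", m[8])
}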
"MountVolume.SetUp succeeded for volume \"kube-api-access-gw227\" (UniqueName: \"kubernetes.io/projected/87f9345b-a5fb-4a51-9927-c0a6a8bdda90-kube-api-access-gw227\") pod \"certified-operators-fjkzt\" (UID: \"87f9345b-a5fb-4a51-9927-c0a6a8bdda90\") " pod="openshift-marketplace/certified-operators-fjkzt" Jan 30 17:44:18 crc kubenswrapper[4712]: I0130 17:44:18.049518 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fjkzt" Jan 30 17:44:18 crc kubenswrapper[4712]: I0130 17:44:18.581871 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-fjkzt"] Jan 30 17:44:18 crc kubenswrapper[4712]: I0130 17:44:18.732483 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fjkzt" event={"ID":"87f9345b-a5fb-4a51-9927-c0a6a8bdda90","Type":"ContainerStarted","Data":"8a6a519a86f31d9f1da5659b12bb10f6e6148b5e67e124795a27d1fd6bbf969d"} Jan 30 17:44:18 crc kubenswrapper[4712]: I0130 17:44:18.799967 4712 scope.go:117] "RemoveContainer" containerID="258988991cc97b72cc046c1bf95884aa854ed690c9651529519cbcfc0e55aa62" Jan 30 17:44:18 crc kubenswrapper[4712]: E0130 17:44:18.800438 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:44:19 crc kubenswrapper[4712]: I0130 17:44:19.751940 4712 generic.go:334] "Generic (PLEG): container finished" podID="87f9345b-a5fb-4a51-9927-c0a6a8bdda90" containerID="dcb9b28f01614e6ef05ae8898de4b2aa23c1d30280b13649aa87487fc22f9956" exitCode=0 Jan 30 17:44:19 crc kubenswrapper[4712]: I0130 17:44:19.752061 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fjkzt" event={"ID":"87f9345b-a5fb-4a51-9927-c0a6a8bdda90","Type":"ContainerDied","Data":"dcb9b28f01614e6ef05ae8898de4b2aa23c1d30280b13649aa87487fc22f9956"} Jan 30 17:44:23 crc kubenswrapper[4712]: I0130 17:44:23.823680 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fjkzt" event={"ID":"87f9345b-a5fb-4a51-9927-c0a6a8bdda90","Type":"ContainerStarted","Data":"7b774c06b6dfedf0bc8f8dffe8b971aee34a282f7a7caf6743b66f70f122e81c"} Jan 30 17:44:28 crc kubenswrapper[4712]: I0130 17:44:28.870042 4712 generic.go:334] "Generic (PLEG): container finished" podID="87f9345b-a5fb-4a51-9927-c0a6a8bdda90" containerID="7b774c06b6dfedf0bc8f8dffe8b971aee34a282f7a7caf6743b66f70f122e81c" exitCode=0 Jan 30 17:44:28 crc kubenswrapper[4712]: I0130 17:44:28.870557 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fjkzt" event={"ID":"87f9345b-a5fb-4a51-9927-c0a6a8bdda90","Type":"ContainerDied","Data":"7b774c06b6dfedf0bc8f8dffe8b971aee34a282f7a7caf6743b66f70f122e81c"} Jan 30 17:44:30 crc kubenswrapper[4712]: I0130 17:44:30.799483 4712 scope.go:117] "RemoveContainer" containerID="258988991cc97b72cc046c1bf95884aa854ed690c9651529519cbcfc0e55aa62" Jan 30 17:44:30 crc kubenswrapper[4712]: E0130 17:44:30.800277 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:44:30 crc kubenswrapper[4712]: I0130 17:44:30.891207 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fjkzt" event={"ID":"87f9345b-a5fb-4a51-9927-c0a6a8bdda90","Type":"ContainerStarted","Data":"51cbb14ea5d504c2c0da38f7f90d659a6b612ae4013c2941ed1b924cb1b1eead"} Jan 30 17:44:30 crc kubenswrapper[4712]: I0130 17:44:30.920390 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-fjkzt" podStartSLOduration=3.760610786 podStartE2EDuration="13.920368274s" podCreationTimestamp="2026-01-30 17:44:17 +0000 UTC" firstStartedPulling="2026-01-30 17:44:19.755730185 +0000 UTC m=+2996.662739664" lastFinishedPulling="2026-01-30 17:44:29.915487683 +0000 UTC m=+3006.822497152" observedRunningTime="2026-01-30 17:44:30.911250674 +0000 UTC m=+3007.818260153" watchObservedRunningTime="2026-01-30 17:44:30.920368274 +0000 UTC m=+3007.827377733" Jan 30 17:44:38 crc kubenswrapper[4712]: I0130 17:44:38.049668 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-fjkzt" Jan 30 17:44:38 crc kubenswrapper[4712]: I0130 17:44:38.050234 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-fjkzt" Jan 30 17:44:38 crc kubenswrapper[4712]: I0130 17:44:38.143875 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-fjkzt" Jan 30 17:44:39 crc kubenswrapper[4712]: I0130 17:44:39.028474 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-fjkzt" Jan 30 17:44:39 crc kubenswrapper[4712]: I0130 17:44:39.082292 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fjkzt"] Jan 30 17:44:40 crc kubenswrapper[4712]: I0130 17:44:40.982987 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-fjkzt" podUID="87f9345b-a5fb-4a51-9927-c0a6a8bdda90" containerName="registry-server" containerID="cri-o://51cbb14ea5d504c2c0da38f7f90d659a6b612ae4013c2941ed1b924cb1b1eead" gracePeriod=2 Jan 30 17:44:41 crc kubenswrapper[4712]: I0130 17:44:41.998119 4712 generic.go:334] "Generic (PLEG): container finished" podID="87f9345b-a5fb-4a51-9927-c0a6a8bdda90" containerID="51cbb14ea5d504c2c0da38f7f90d659a6b612ae4013c2941ed1b924cb1b1eead" exitCode=0 Jan 30 17:44:41 crc kubenswrapper[4712]: I0130 17:44:41.998199 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fjkzt" event={"ID":"87f9345b-a5fb-4a51-9927-c0a6a8bdda90","Type":"ContainerDied","Data":"51cbb14ea5d504c2c0da38f7f90d659a6b612ae4013c2941ed1b924cb1b1eead"} Jan 30 17:44:43 crc kubenswrapper[4712]: I0130 17:44:43.872122 4712 util.go:48] "No ready sandbox for pod can be found. 
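The "Observed pod startup duration" entry above reports two figures: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). Re-deriving both from the logged timestamps reproduces the logged values to within rounding, as this sketch shows:

// startup_slo.go — re-derive the two startup durations from the
// timestamps logged above (a consistency check, not kubelet code).
package main

import (
	"fmt"
	"time"
)

const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2026-01-30 17:44:17 +0000 UTC")
	firstPull := mustParse("2026-01-30 17:44:19.755730185 +0000 UTC")
	lastPull := mustParse("2026-01-30 17:44:29.915487683 +0000 UTC")
	running := mustParse("2026-01-30 17:44:30.920368274 +0000 UTC")

	e2e := running.Sub(created)        // podStartE2EDuration: 13.920368274s
	pulling := lastPull.Sub(firstPull) // image-pull window: ~10.16s
	slo := e2e - pulling               // podStartSLOduration: ~3.7606s

	fmt.Println(e2e, pulling, slo)
}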
Jan 30 17:44:43 crc kubenswrapper[4712]: I0130 17:44:43.950930 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f9345b-a5fb-4a51-9927-c0a6a8bdda90-utilities\") pod \"87f9345b-a5fb-4a51-9927-c0a6a8bdda90\" (UID: \"87f9345b-a5fb-4a51-9927-c0a6a8bdda90\") "
Jan 30 17:44:43 crc kubenswrapper[4712]: I0130 17:44:43.951008 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f9345b-a5fb-4a51-9927-c0a6a8bdda90-catalog-content\") pod \"87f9345b-a5fb-4a51-9927-c0a6a8bdda90\" (UID: \"87f9345b-a5fb-4a51-9927-c0a6a8bdda90\") "
Jan 30 17:44:43 crc kubenswrapper[4712]: I0130 17:44:43.951205 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gw227\" (UniqueName: \"kubernetes.io/projected/87f9345b-a5fb-4a51-9927-c0a6a8bdda90-kube-api-access-gw227\") pod \"87f9345b-a5fb-4a51-9927-c0a6a8bdda90\" (UID: \"87f9345b-a5fb-4a51-9927-c0a6a8bdda90\") "
Jan 30 17:44:43 crc kubenswrapper[4712]: I0130 17:44:43.952056 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87f9345b-a5fb-4a51-9927-c0a6a8bdda90-utilities" (OuterVolumeSpecName: "utilities") pod "87f9345b-a5fb-4a51-9927-c0a6a8bdda90" (UID: "87f9345b-a5fb-4a51-9927-c0a6a8bdda90"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:44:43 crc kubenswrapper[4712]: I0130 17:44:43.955187 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f9345b-a5fb-4a51-9927-c0a6a8bdda90-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 17:44:43 crc kubenswrapper[4712]: I0130 17:44:43.964129 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87f9345b-a5fb-4a51-9927-c0a6a8bdda90-kube-api-access-gw227" (OuterVolumeSpecName: "kube-api-access-gw227") pod "87f9345b-a5fb-4a51-9927-c0a6a8bdda90" (UID: "87f9345b-a5fb-4a51-9927-c0a6a8bdda90"). InnerVolumeSpecName "kube-api-access-gw227". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:44:44 crc kubenswrapper[4712]: I0130 17:44:44.018242 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-fjkzt" event={"ID":"87f9345b-a5fb-4a51-9927-c0a6a8bdda90","Type":"ContainerDied","Data":"8a6a519a86f31d9f1da5659b12bb10f6e6148b5e67e124795a27d1fd6bbf969d"}
Jan 30 17:44:44 crc kubenswrapper[4712]: I0130 17:44:44.018325 4712 scope.go:117] "RemoveContainer" containerID="51cbb14ea5d504c2c0da38f7f90d659a6b612ae4013c2941ed1b924cb1b1eead"
Jan 30 17:44:44 crc kubenswrapper[4712]: I0130 17:44:44.018569 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-fjkzt"
Jan 30 17:44:44 crc kubenswrapper[4712]: I0130 17:44:44.056829 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gw227\" (UniqueName: \"kubernetes.io/projected/87f9345b-a5fb-4a51-9927-c0a6a8bdda90-kube-api-access-gw227\") on node \"crc\" DevicePath \"\""
Jan 30 17:44:44 crc kubenswrapper[4712]: I0130 17:44:44.507303 4712 scope.go:117] "RemoveContainer" containerID="7b774c06b6dfedf0bc8f8dffe8b971aee34a282f7a7caf6743b66f70f122e81c"
Jan 30 17:44:44 crc kubenswrapper[4712]: I0130 17:44:44.536598 4712 scope.go:117] "RemoveContainer" containerID="dcb9b28f01614e6ef05ae8898de4b2aa23c1d30280b13649aa87487fc22f9956"
Jan 30 17:44:44 crc kubenswrapper[4712]: I0130 17:44:44.545827 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87f9345b-a5fb-4a51-9927-c0a6a8bdda90-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87f9345b-a5fb-4a51-9927-c0a6a8bdda90" (UID: "87f9345b-a5fb-4a51-9927-c0a6a8bdda90"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 17:44:44 crc kubenswrapper[4712]: I0130 17:44:44.567854 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f9345b-a5fb-4a51-9927-c0a6a8bdda90-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 17:44:44 crc kubenswrapper[4712]: I0130 17:44:44.669777 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-fjkzt"]
Jan 30 17:44:44 crc kubenswrapper[4712]: I0130 17:44:44.677856 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-fjkzt"]
Jan 30 17:44:45 crc kubenswrapper[4712]: I0130 17:44:45.799426 4712 scope.go:117] "RemoveContainer" containerID="258988991cc97b72cc046c1bf95884aa854ed690c9651529519cbcfc0e55aa62"
Jan 30 17:44:45 crc kubenswrapper[4712]: E0130 17:44:45.799970 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 17:44:45 crc kubenswrapper[4712]: I0130 17:44:45.813093 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87f9345b-a5fb-4a51-9927-c0a6a8bdda90" path="/var/lib/kubelet/pods/87f9345b-a5fb-4a51-9927-c0a6a8bdda90/volumes"
Jan 30 17:44:56 crc kubenswrapper[4712]: I0130 17:44:56.799699 4712 scope.go:117] "RemoveContainer" containerID="258988991cc97b72cc046c1bf95884aa854ed690c9651529519cbcfc0e55aa62"
Jan 30 17:44:56 crc kubenswrapper[4712]: E0130 17:44:56.800316 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 17:45:00 crc kubenswrapper[4712]: I0130 17:45:00.149227 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496585-x2cfj"]
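Each volume of certified-operators-fjkzt above is torn down in the same three-step order: "UnmountVolume started", then "TearDown succeeded", then "Volume detached", after which kubelet_volumes.go removes the orphaned pod directory. A toy checker for that ordering, written against the event names in this log (a sketch, not kubelet code):

// teardown_order.go — verify the per-volume teardown ordering seen above.
package main

import "fmt"

// The per-volume teardown steps, in the order they appear in the log.
var steps = []string{"UnmountVolume started", "TearDown succeeded", "Volume detached"}

type tracker map[string]int // volume -> number of steps seen so far

func (t tracker) observe(volume, event string) error {
	i := t[volume]
	if i >= len(steps) {
		return fmt.Errorf("%s: event %q after teardown already complete", volume, event)
	}
	if steps[i] != event {
		return fmt.Errorf("%s: got %q, expected %q", volume, event, steps[i])
	}
	t[volume] = i + 1
	return nil
}

func main() {
	t := tracker{}
	// Order taken from the certified-operators-fjkzt teardown above; the
	// three volumes interleave, but each steps through in order.
	events := [][2]string{
		{"utilities", "UnmountVolume started"},
		{"catalog-content", "UnmountVolume started"},
		{"kube-api-access-gw227", "UnmountVolume started"},
		{"utilities", "TearDown succeeded"},
		{"utilities", "Volume detached"},
		{"kube-api-access-gw227", "TearDown succeeded"},
		{"kube-api-access-gw227", "Volume detached"},
		{"catalog-content", "TearDown succeeded"},
		{"catalog-content", "Volume detached"},
	}
	for _, e := range events {
		if err := t.observe(e[0], e[1]); err != nil {
			fmt.Println(err)
		}
	}
	fmt.Println("consistent teardown for", len(t), "volumes")
}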
pods=["openshift-operator-lifecycle-manager/collect-profiles-29496585-x2cfj"] Jan 30 17:45:00 crc kubenswrapper[4712]: E0130 17:45:00.150017 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f9345b-a5fb-4a51-9927-c0a6a8bdda90" containerName="registry-server" Jan 30 17:45:00 crc kubenswrapper[4712]: I0130 17:45:00.150037 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f9345b-a5fb-4a51-9927-c0a6a8bdda90" containerName="registry-server" Jan 30 17:45:00 crc kubenswrapper[4712]: E0130 17:45:00.150072 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f9345b-a5fb-4a51-9927-c0a6a8bdda90" containerName="extract-utilities" Jan 30 17:45:00 crc kubenswrapper[4712]: I0130 17:45:00.150080 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f9345b-a5fb-4a51-9927-c0a6a8bdda90" containerName="extract-utilities" Jan 30 17:45:00 crc kubenswrapper[4712]: E0130 17:45:00.150103 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f9345b-a5fb-4a51-9927-c0a6a8bdda90" containerName="extract-content" Jan 30 17:45:00 crc kubenswrapper[4712]: I0130 17:45:00.150112 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f9345b-a5fb-4a51-9927-c0a6a8bdda90" containerName="extract-content" Jan 30 17:45:00 crc kubenswrapper[4712]: I0130 17:45:00.150320 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="87f9345b-a5fb-4a51-9927-c0a6a8bdda90" containerName="registry-server" Jan 30 17:45:00 crc kubenswrapper[4712]: I0130 17:45:00.151156 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-x2cfj" Jan 30 17:45:00 crc kubenswrapper[4712]: I0130 17:45:00.154540 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 17:45:00 crc kubenswrapper[4712]: I0130 17:45:00.155310 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 17:45:00 crc kubenswrapper[4712]: I0130 17:45:00.172042 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496585-x2cfj"] Jan 30 17:45:00 crc kubenswrapper[4712]: I0130 17:45:00.302345 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6szs\" (UniqueName: \"kubernetes.io/projected/bdc7d161-1ea0-4608-857c-d4c466e90f97-kube-api-access-s6szs\") pod \"collect-profiles-29496585-x2cfj\" (UID: \"bdc7d161-1ea0-4608-857c-d4c466e90f97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-x2cfj" Jan 30 17:45:00 crc kubenswrapper[4712]: I0130 17:45:00.302436 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bdc7d161-1ea0-4608-857c-d4c466e90f97-config-volume\") pod \"collect-profiles-29496585-x2cfj\" (UID: \"bdc7d161-1ea0-4608-857c-d4c466e90f97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-x2cfj" Jan 30 17:45:00 crc kubenswrapper[4712]: I0130 17:45:00.302463 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bdc7d161-1ea0-4608-857c-d4c466e90f97-secret-volume\") pod \"collect-profiles-29496585-x2cfj\" (UID: \"bdc7d161-1ea0-4608-857c-d4c466e90f97\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-x2cfj" Jan 30 17:45:00 crc kubenswrapper[4712]: I0130 17:45:00.404026 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bdc7d161-1ea0-4608-857c-d4c466e90f97-config-volume\") pod \"collect-profiles-29496585-x2cfj\" (UID: \"bdc7d161-1ea0-4608-857c-d4c466e90f97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-x2cfj" Jan 30 17:45:00 crc kubenswrapper[4712]: I0130 17:45:00.404078 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bdc7d161-1ea0-4608-857c-d4c466e90f97-secret-volume\") pod \"collect-profiles-29496585-x2cfj\" (UID: \"bdc7d161-1ea0-4608-857c-d4c466e90f97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-x2cfj" Jan 30 17:45:00 crc kubenswrapper[4712]: I0130 17:45:00.404274 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6szs\" (UniqueName: \"kubernetes.io/projected/bdc7d161-1ea0-4608-857c-d4c466e90f97-kube-api-access-s6szs\") pod \"collect-profiles-29496585-x2cfj\" (UID: \"bdc7d161-1ea0-4608-857c-d4c466e90f97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-x2cfj" Jan 30 17:45:00 crc kubenswrapper[4712]: I0130 17:45:00.404932 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bdc7d161-1ea0-4608-857c-d4c466e90f97-config-volume\") pod \"collect-profiles-29496585-x2cfj\" (UID: \"bdc7d161-1ea0-4608-857c-d4c466e90f97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-x2cfj" Jan 30 17:45:00 crc kubenswrapper[4712]: I0130 17:45:00.417431 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bdc7d161-1ea0-4608-857c-d4c466e90f97-secret-volume\") pod \"collect-profiles-29496585-x2cfj\" (UID: \"bdc7d161-1ea0-4608-857c-d4c466e90f97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-x2cfj" Jan 30 17:45:00 crc kubenswrapper[4712]: I0130 17:45:00.426927 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6szs\" (UniqueName: \"kubernetes.io/projected/bdc7d161-1ea0-4608-857c-d4c466e90f97-kube-api-access-s6szs\") pod \"collect-profiles-29496585-x2cfj\" (UID: \"bdc7d161-1ea0-4608-857c-d4c466e90f97\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-x2cfj" Jan 30 17:45:00 crc kubenswrapper[4712]: I0130 17:45:00.477780 4712 util.go:30] "No sandbox for pod can be found. 
Jan 30 17:45:01 crc kubenswrapper[4712]: I0130 17:45:01.023036 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496585-x2cfj"]
Jan 30 17:45:01 crc kubenswrapper[4712]: I0130 17:45:01.188453 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-x2cfj" event={"ID":"bdc7d161-1ea0-4608-857c-d4c466e90f97","Type":"ContainerStarted","Data":"1b6be0052fcf7fe03e062739fcff00146a422fa2680fb2635379fdb663780714"}
Jan 30 17:45:02 crc kubenswrapper[4712]: I0130 17:45:02.202855 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-x2cfj" event={"ID":"bdc7d161-1ea0-4608-857c-d4c466e90f97","Type":"ContainerStarted","Data":"db61a762e5f3cfe2e14bdba4fde2c01d0ad75327e7cfe193de95fa0ca158fd53"}
Jan 30 17:45:02 crc kubenswrapper[4712]: I0130 17:45:02.233286 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-x2cfj" podStartSLOduration=2.233260651 podStartE2EDuration="2.233260651s" podCreationTimestamp="2026-01-30 17:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 17:45:02.223827724 +0000 UTC m=+3039.130837233" watchObservedRunningTime="2026-01-30 17:45:02.233260651 +0000 UTC m=+3039.140270130"
Jan 30 17:45:03 crc kubenswrapper[4712]: I0130 17:45:03.213393 4712 generic.go:334] "Generic (PLEG): container finished" podID="bdc7d161-1ea0-4608-857c-d4c466e90f97" containerID="db61a762e5f3cfe2e14bdba4fde2c01d0ad75327e7cfe193de95fa0ca158fd53" exitCode=0
Jan 30 17:45:03 crc kubenswrapper[4712]: I0130 17:45:03.213558 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-x2cfj" event={"ID":"bdc7d161-1ea0-4608-857c-d4c466e90f97","Type":"ContainerDied","Data":"db61a762e5f3cfe2e14bdba4fde2c01d0ad75327e7cfe193de95fa0ca158fd53"}
Jan 30 17:45:04 crc kubenswrapper[4712]: I0130 17:45:04.613487 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-x2cfj"
Jan 30 17:45:04 crc kubenswrapper[4712]: I0130 17:45:04.734109 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bdc7d161-1ea0-4608-857c-d4c466e90f97-config-volume\") pod \"bdc7d161-1ea0-4608-857c-d4c466e90f97\" (UID: \"bdc7d161-1ea0-4608-857c-d4c466e90f97\") "
Jan 30 17:45:04 crc kubenswrapper[4712]: I0130 17:45:04.734286 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6szs\" (UniqueName: \"kubernetes.io/projected/bdc7d161-1ea0-4608-857c-d4c466e90f97-kube-api-access-s6szs\") pod \"bdc7d161-1ea0-4608-857c-d4c466e90f97\" (UID: \"bdc7d161-1ea0-4608-857c-d4c466e90f97\") "
Jan 30 17:45:04 crc kubenswrapper[4712]: I0130 17:45:04.734337 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bdc7d161-1ea0-4608-857c-d4c466e90f97-secret-volume\") pod \"bdc7d161-1ea0-4608-857c-d4c466e90f97\" (UID: \"bdc7d161-1ea0-4608-857c-d4c466e90f97\") "
Jan 30 17:45:04 crc kubenswrapper[4712]: I0130 17:45:04.735088 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bdc7d161-1ea0-4608-857c-d4c466e90f97-config-volume" (OuterVolumeSpecName: "config-volume") pod "bdc7d161-1ea0-4608-857c-d4c466e90f97" (UID: "bdc7d161-1ea0-4608-857c-d4c466e90f97"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 17:45:04 crc kubenswrapper[4712]: I0130 17:45:04.743612 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdc7d161-1ea0-4608-857c-d4c466e90f97-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "bdc7d161-1ea0-4608-857c-d4c466e90f97" (UID: "bdc7d161-1ea0-4608-857c-d4c466e90f97"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 17:45:04 crc kubenswrapper[4712]: I0130 17:45:04.746119 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdc7d161-1ea0-4608-857c-d4c466e90f97-kube-api-access-s6szs" (OuterVolumeSpecName: "kube-api-access-s6szs") pod "bdc7d161-1ea0-4608-857c-d4c466e90f97" (UID: "bdc7d161-1ea0-4608-857c-d4c466e90f97"). InnerVolumeSpecName "kube-api-access-s6szs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 17:45:04 crc kubenswrapper[4712]: I0130 17:45:04.836838 4712 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bdc7d161-1ea0-4608-857c-d4c466e90f97-config-volume\") on node \"crc\" DevicePath \"\""
Jan 30 17:45:04 crc kubenswrapper[4712]: I0130 17:45:04.836875 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6szs\" (UniqueName: \"kubernetes.io/projected/bdc7d161-1ea0-4608-857c-d4c466e90f97-kube-api-access-s6szs\") on node \"crc\" DevicePath \"\""
Jan 30 17:45:04 crc kubenswrapper[4712]: I0130 17:45:04.836885 4712 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/bdc7d161-1ea0-4608-857c-d4c466e90f97-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 30 17:45:05 crc kubenswrapper[4712]: I0130 17:45:05.233136 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-x2cfj" event={"ID":"bdc7d161-1ea0-4608-857c-d4c466e90f97","Type":"ContainerDied","Data":"1b6be0052fcf7fe03e062739fcff00146a422fa2680fb2635379fdb663780714"}
Jan 30 17:45:05 crc kubenswrapper[4712]: I0130 17:45:05.233459 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b6be0052fcf7fe03e062739fcff00146a422fa2680fb2635379fdb663780714"
Jan 30 17:45:05 crc kubenswrapper[4712]: I0130 17:45:05.233191 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496585-x2cfj"
Jan 30 17:45:05 crc kubenswrapper[4712]: I0130 17:45:05.313227 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496540-wsbpn"]
Jan 30 17:45:05 crc kubenswrapper[4712]: I0130 17:45:05.320936 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496540-wsbpn"]
Jan 30 17:45:05 crc kubenswrapper[4712]: I0130 17:45:05.809909 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ca2603d-40c8-4dc1-bc32-c4d549a66184" path="/var/lib/kubelet/pods/8ca2603d-40c8-4dc1-bc32-c4d549a66184/volumes"
Jan 30 17:45:08 crc kubenswrapper[4712]: I0130 17:45:08.800288 4712 scope.go:117] "RemoveContainer" containerID="258988991cc97b72cc046c1bf95884aa854ed690c9651529519cbcfc0e55aa62"
Jan 30 17:45:08 crc kubenswrapper[4712]: E0130 17:45:08.801682 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 17:45:21 crc kubenswrapper[4712]: I0130 17:45:21.815337 4712 scope.go:117] "RemoveContainer" containerID="258988991cc97b72cc046c1bf95884aa854ed690c9651529519cbcfc0e55aa62"
Jan 30 17:45:21 crc kubenswrapper[4712]: E0130 17:45:21.816857 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:45:22 crc kubenswrapper[4712]: I0130 17:45:22.405188 4712 generic.go:334] "Generic (PLEG): container finished" podID="96e36eb4-2d2a-4803-a882-ff770ce96ffc" containerID="6023c3fcd6278b1ed289df5a7a1ceedecde9cadf171fde1433140ddcb1b9caaa" exitCode=0 Jan 30 17:45:22 crc kubenswrapper[4712]: I0130 17:45:22.405229 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv" event={"ID":"96e36eb4-2d2a-4803-a882-ff770ce96ffc","Type":"ContainerDied","Data":"6023c3fcd6278b1ed289df5a7a1ceedecde9cadf171fde1433140ddcb1b9caaa"} Jan 30 17:45:23 crc kubenswrapper[4712]: I0130 17:45:23.874623 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv" Jan 30 17:45:23 crc kubenswrapper[4712]: I0130 17:45:23.929619 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pdx4\" (UniqueName: \"kubernetes.io/projected/96e36eb4-2d2a-4803-a882-ff770ce96ffc-kube-api-access-8pdx4\") pod \"96e36eb4-2d2a-4803-a882-ff770ce96ffc\" (UID: \"96e36eb4-2d2a-4803-a882-ff770ce96ffc\") " Jan 30 17:45:23 crc kubenswrapper[4712]: I0130 17:45:23.929699 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/96e36eb4-2d2a-4803-a882-ff770ce96ffc-ceilometer-compute-config-data-0\") pod \"96e36eb4-2d2a-4803-a882-ff770ce96ffc\" (UID: \"96e36eb4-2d2a-4803-a882-ff770ce96ffc\") " Jan 30 17:45:23 crc kubenswrapper[4712]: I0130 17:45:23.929742 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e36eb4-2d2a-4803-a882-ff770ce96ffc-telemetry-combined-ca-bundle\") pod \"96e36eb4-2d2a-4803-a882-ff770ce96ffc\" (UID: \"96e36eb4-2d2a-4803-a882-ff770ce96ffc\") " Jan 30 17:45:23 crc kubenswrapper[4712]: I0130 17:45:23.929783 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96e36eb4-2d2a-4803-a882-ff770ce96ffc-ssh-key-openstack-edpm-ipam\") pod \"96e36eb4-2d2a-4803-a882-ff770ce96ffc\" (UID: \"96e36eb4-2d2a-4803-a882-ff770ce96ffc\") " Jan 30 17:45:23 crc kubenswrapper[4712]: I0130 17:45:23.930222 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/96e36eb4-2d2a-4803-a882-ff770ce96ffc-ceilometer-compute-config-data-1\") pod \"96e36eb4-2d2a-4803-a882-ff770ce96ffc\" (UID: \"96e36eb4-2d2a-4803-a882-ff770ce96ffc\") " Jan 30 17:45:23 crc kubenswrapper[4712]: I0130 17:45:23.930251 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/96e36eb4-2d2a-4803-a882-ff770ce96ffc-ceilometer-compute-config-data-2\") pod \"96e36eb4-2d2a-4803-a882-ff770ce96ffc\" (UID: \"96e36eb4-2d2a-4803-a882-ff770ce96ffc\") " Jan 30 17:45:23 crc kubenswrapper[4712]: I0130 17:45:23.930293 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96e36eb4-2d2a-4803-a882-ff770ce96ffc-inventory\") pod \"96e36eb4-2d2a-4803-a882-ff770ce96ffc\" (UID: \"96e36eb4-2d2a-4803-a882-ff770ce96ffc\") " Jan 30 
17:45:23 crc kubenswrapper[4712]: I0130 17:45:23.943226 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96e36eb4-2d2a-4803-a882-ff770ce96ffc-kube-api-access-8pdx4" (OuterVolumeSpecName: "kube-api-access-8pdx4") pod "96e36eb4-2d2a-4803-a882-ff770ce96ffc" (UID: "96e36eb4-2d2a-4803-a882-ff770ce96ffc"). InnerVolumeSpecName "kube-api-access-8pdx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:45:23 crc kubenswrapper[4712]: I0130 17:45:23.949382 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e36eb4-2d2a-4803-a882-ff770ce96ffc-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "96e36eb4-2d2a-4803-a882-ff770ce96ffc" (UID: "96e36eb4-2d2a-4803-a882-ff770ce96ffc"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:45:23 crc kubenswrapper[4712]: I0130 17:45:23.959806 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e36eb4-2d2a-4803-a882-ff770ce96ffc-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "96e36eb4-2d2a-4803-a882-ff770ce96ffc" (UID: "96e36eb4-2d2a-4803-a882-ff770ce96ffc"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:45:23 crc kubenswrapper[4712]: I0130 17:45:23.962400 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e36eb4-2d2a-4803-a882-ff770ce96ffc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "96e36eb4-2d2a-4803-a882-ff770ce96ffc" (UID: "96e36eb4-2d2a-4803-a882-ff770ce96ffc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:45:23 crc kubenswrapper[4712]: I0130 17:45:23.964424 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e36eb4-2d2a-4803-a882-ff770ce96ffc-inventory" (OuterVolumeSpecName: "inventory") pod "96e36eb4-2d2a-4803-a882-ff770ce96ffc" (UID: "96e36eb4-2d2a-4803-a882-ff770ce96ffc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:45:23 crc kubenswrapper[4712]: I0130 17:45:23.973998 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e36eb4-2d2a-4803-a882-ff770ce96ffc-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "96e36eb4-2d2a-4803-a882-ff770ce96ffc" (UID: "96e36eb4-2d2a-4803-a882-ff770ce96ffc"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:45:23 crc kubenswrapper[4712]: I0130 17:45:23.978284 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96e36eb4-2d2a-4803-a882-ff770ce96ffc-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "96e36eb4-2d2a-4803-a882-ff770ce96ffc" (UID: "96e36eb4-2d2a-4803-a882-ff770ce96ffc"). InnerVolumeSpecName "ceilometer-compute-config-data-2". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 17:45:24 crc kubenswrapper[4712]: I0130 17:45:24.037734 4712 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/96e36eb4-2d2a-4803-a882-ff770ce96ffc-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 30 17:45:24 crc kubenswrapper[4712]: I0130 17:45:24.037766 4712 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96e36eb4-2d2a-4803-a882-ff770ce96ffc-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 17:45:24 crc kubenswrapper[4712]: I0130 17:45:24.037781 4712 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/96e36eb4-2d2a-4803-a882-ff770ce96ffc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 30 17:45:24 crc kubenswrapper[4712]: I0130 17:45:24.037789 4712 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/96e36eb4-2d2a-4803-a882-ff770ce96ffc-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 30 17:45:24 crc kubenswrapper[4712]: I0130 17:45:24.037808 4712 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/96e36eb4-2d2a-4803-a882-ff770ce96ffc-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 30 17:45:24 crc kubenswrapper[4712]: I0130 17:45:24.037819 4712 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/96e36eb4-2d2a-4803-a882-ff770ce96ffc-inventory\") on node \"crc\" DevicePath \"\"" Jan 30 17:45:24 crc kubenswrapper[4712]: I0130 17:45:24.037828 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pdx4\" (UniqueName: \"kubernetes.io/projected/96e36eb4-2d2a-4803-a882-ff770ce96ffc-kube-api-access-8pdx4\") on node \"crc\" DevicePath \"\"" Jan 30 17:45:24 crc kubenswrapper[4712]: I0130 17:45:24.424977 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv" event={"ID":"96e36eb4-2d2a-4803-a882-ff770ce96ffc","Type":"ContainerDied","Data":"305290ebdd935d344c464900765624c4257b244f19537428ef2a9f0b31cc7bec"} Jan 30 17:45:24 crc kubenswrapper[4712]: I0130 17:45:24.425414 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="305290ebdd935d344c464900765624c4257b244f19537428ef2a9f0b31cc7bec" Jan 30 17:45:24 crc kubenswrapper[4712]: I0130 17:45:24.425054 4712 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 17:45:34 crc kubenswrapper[4712]: I0130 17:45:34.800280 4712 scope.go:117] "RemoveContainer" containerID="258988991cc97b72cc046c1bf95884aa854ed690c9651529519cbcfc0e55aa62"
Jan 30 17:45:34 crc kubenswrapper[4712]: E0130 17:45:34.801086 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 17:45:39 crc kubenswrapper[4712]: I0130 17:45:39.363067 4712 scope.go:117] "RemoveContainer" containerID="ba5effc54563181ee3852ad78379920b530a1e62bc07e724c849cc7e59b16add"
Jan 30 17:45:45 crc kubenswrapper[4712]: I0130 17:45:45.801999 4712 scope.go:117] "RemoveContainer" containerID="258988991cc97b72cc046c1bf95884aa854ed690c9651529519cbcfc0e55aa62"
Jan 30 17:45:45 crc kubenswrapper[4712]: E0130 17:45:45.802707 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 17:46:00 crc kubenswrapper[4712]: I0130 17:46:00.799647 4712 scope.go:117] "RemoveContainer" containerID="258988991cc97b72cc046c1bf95884aa854ed690c9651529519cbcfc0e55aa62"
Jan 30 17:46:00 crc kubenswrapper[4712]: E0130 17:46:00.800588 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 17:46:15 crc kubenswrapper[4712]: I0130 17:46:15.799581 4712 scope.go:117] "RemoveContainer" containerID="258988991cc97b72cc046c1bf95884aa854ed690c9651529519cbcfc0e55aa62"
Jan 30 17:46:15 crc kubenswrapper[4712]: E0130 17:46:15.800520 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 17:46:21 crc kubenswrapper[4712]: I0130 17:46:21.832881 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s00-multi-thread-testing"]
Jan 30 17:46:21 crc kubenswrapper[4712]: E0130 17:46:21.833919 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdc7d161-1ea0-4608-857c-d4c466e90f97" containerName="collect-profiles"
Jan 30 17:46:21 crc kubenswrapper[4712]: I0130 17:46:21.833935 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdc7d161-1ea0-4608-857c-d4c466e90f97" containerName="collect-profiles"
Jan 30 17:46:21 crc kubenswrapper[4712]: E0130 17:46:21.833965 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96e36eb4-2d2a-4803-a882-ff770ce96ffc" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Jan 30 17:46:21 crc kubenswrapper[4712]: I0130 17:46:21.833975 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="96e36eb4-2d2a-4803-a882-ff770ce96ffc" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Jan 30 17:46:21 crc kubenswrapper[4712]: I0130 17:46:21.834221 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="96e36eb4-2d2a-4803-a882-ff770ce96ffc" containerName="telemetry-edpm-deployment-openstack-edpm-ipam"
Jan 30 17:46:21 crc kubenswrapper[4712]: I0130 17:46:21.834241 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdc7d161-1ea0-4608-857c-d4c466e90f97" containerName="collect-profiles"
Jan 30 17:46:21 crc kubenswrapper[4712]: I0130 17:46:21.834916 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Jan 30 17:46:21 crc kubenswrapper[4712]: I0130 17:46:21.838020 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0"
Jan 30 17:46:21 crc kubenswrapper[4712]: I0130 17:46:21.838857 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0"
Jan 30 17:46:21 crc kubenswrapper[4712]: I0130 17:46:21.840089 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key"
Jan 30 17:46:21 crc kubenswrapper[4712]: I0130 17:46:21.841490 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-fg2lz"
Jan 30 17:46:21 crc kubenswrapper[4712]: I0130 17:46:21.854332 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-multi-thread-testing"]
Jan 30 17:46:21 crc kubenswrapper[4712]: I0130 17:46:21.965749 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eb9570ef-5465-43b3-8747-1d546402c98a-config-data\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"eb9570ef-5465-43b3-8747-1d546402c98a\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Jan 30 17:46:21 crc kubenswrapper[4712]: I0130 17:46:21.966036 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjtfz\" (UniqueName: \"kubernetes.io/projected/eb9570ef-5465-43b3-8747-1d546402c98a-kube-api-access-qjtfz\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"eb9570ef-5465-43b3-8747-1d546402c98a\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Jan 30 17:46:21 crc kubenswrapper[4712]: I0130 17:46:21.966183 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/eb9570ef-5465-43b3-8747-1d546402c98a-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"eb9570ef-5465-43b3-8747-1d546402c98a\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Jan 30 17:46:21 crc kubenswrapper[4712]: I0130 17:46:21.966453 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb9570ef-5465-43b3-8747-1d546402c98a-ssh-key\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"eb9570ef-5465-43b3-8747-1d546402c98a\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Jan 30 17:46:21 crc kubenswrapper[4712]: I0130 17:46:21.966574 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"eb9570ef-5465-43b3-8747-1d546402c98a\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Jan 30 17:46:21 crc kubenswrapper[4712]: I0130 17:46:21.966669 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/eb9570ef-5465-43b3-8747-1d546402c98a-openstack-config\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"eb9570ef-5465-43b3-8747-1d546402c98a\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Jan 30 17:46:21 crc kubenswrapper[4712]: I0130 17:46:21.966771 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/eb9570ef-5465-43b3-8747-1d546402c98a-ca-certs\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"eb9570ef-5465-43b3-8747-1d546402c98a\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Jan 30 17:46:21 crc kubenswrapper[4712]: I0130 17:46:21.966883 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/eb9570ef-5465-43b3-8747-1d546402c98a-openstack-config-secret\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"eb9570ef-5465-43b3-8747-1d546402c98a\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Jan 30 17:46:21 crc kubenswrapper[4712]: I0130 17:46:21.967181 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/eb9570ef-5465-43b3-8747-1d546402c98a-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"eb9570ef-5465-43b3-8747-1d546402c98a\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Jan 30 17:46:22 crc kubenswrapper[4712]: I0130 17:46:22.068888 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjtfz\" (UniqueName: \"kubernetes.io/projected/eb9570ef-5465-43b3-8747-1d546402c98a-kube-api-access-qjtfz\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"eb9570ef-5465-43b3-8747-1d546402c98a\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Jan 30 17:46:22 crc kubenswrapper[4712]: I0130 17:46:22.069158 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/eb9570ef-5465-43b3-8747-1d546402c98a-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"eb9570ef-5465-43b3-8747-1d546402c98a\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
Jan 30 17:46:22 crc kubenswrapper[4712]: I0130 17:46:22.069195 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb9570ef-5465-43b3-8747-1d546402c98a-ssh-key\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"eb9570ef-5465-43b3-8747-1d546402c98a\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing"
\"kubernetes.io/secret/eb9570ef-5465-43b3-8747-1d546402c98a-ssh-key\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"eb9570ef-5465-43b3-8747-1d546402c98a\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 17:46:22 crc kubenswrapper[4712]: I0130 17:46:22.069223 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"eb9570ef-5465-43b3-8747-1d546402c98a\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 17:46:22 crc kubenswrapper[4712]: I0130 17:46:22.069250 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/eb9570ef-5465-43b3-8747-1d546402c98a-openstack-config\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"eb9570ef-5465-43b3-8747-1d546402c98a\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 17:46:22 crc kubenswrapper[4712]: I0130 17:46:22.069280 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/eb9570ef-5465-43b3-8747-1d546402c98a-ca-certs\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"eb9570ef-5465-43b3-8747-1d546402c98a\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 17:46:22 crc kubenswrapper[4712]: I0130 17:46:22.069304 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/eb9570ef-5465-43b3-8747-1d546402c98a-openstack-config-secret\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"eb9570ef-5465-43b3-8747-1d546402c98a\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 17:46:22 crc kubenswrapper[4712]: I0130 17:46:22.069342 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/eb9570ef-5465-43b3-8747-1d546402c98a-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"eb9570ef-5465-43b3-8747-1d546402c98a\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 17:46:22 crc kubenswrapper[4712]: I0130 17:46:22.069402 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eb9570ef-5465-43b3-8747-1d546402c98a-config-data\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"eb9570ef-5465-43b3-8747-1d546402c98a\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 17:46:22 crc kubenswrapper[4712]: I0130 17:46:22.069675 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/eb9570ef-5465-43b3-8747-1d546402c98a-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"eb9570ef-5465-43b3-8747-1d546402c98a\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 17:46:22 crc kubenswrapper[4712]: I0130 17:46:22.070121 4712 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: 
\"eb9570ef-5465-43b3-8747-1d546402c98a\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 17:46:22 crc kubenswrapper[4712]: I0130 17:46:22.070362 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eb9570ef-5465-43b3-8747-1d546402c98a-config-data\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"eb9570ef-5465-43b3-8747-1d546402c98a\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 17:46:22 crc kubenswrapper[4712]: I0130 17:46:22.070812 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/eb9570ef-5465-43b3-8747-1d546402c98a-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"eb9570ef-5465-43b3-8747-1d546402c98a\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 17:46:22 crc kubenswrapper[4712]: I0130 17:46:22.071025 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/eb9570ef-5465-43b3-8747-1d546402c98a-openstack-config\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"eb9570ef-5465-43b3-8747-1d546402c98a\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 17:46:22 crc kubenswrapper[4712]: I0130 17:46:22.077541 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/eb9570ef-5465-43b3-8747-1d546402c98a-openstack-config-secret\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"eb9570ef-5465-43b3-8747-1d546402c98a\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 17:46:22 crc kubenswrapper[4712]: I0130 17:46:22.078897 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/eb9570ef-5465-43b3-8747-1d546402c98a-ca-certs\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"eb9570ef-5465-43b3-8747-1d546402c98a\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 17:46:22 crc kubenswrapper[4712]: I0130 17:46:22.084596 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb9570ef-5465-43b3-8747-1d546402c98a-ssh-key\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"eb9570ef-5465-43b3-8747-1d546402c98a\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 17:46:22 crc kubenswrapper[4712]: I0130 17:46:22.087681 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjtfz\" (UniqueName: \"kubernetes.io/projected/eb9570ef-5465-43b3-8747-1d546402c98a-kube-api-access-qjtfz\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"eb9570ef-5465-43b3-8747-1d546402c98a\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 17:46:22 crc kubenswrapper[4712]: I0130 17:46:22.103539 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest-s00-multi-thread-testing\" (UID: \"eb9570ef-5465-43b3-8747-1d546402c98a\") " pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 17:46:22 crc kubenswrapper[4712]: I0130 17:46:22.156416 
4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 17:46:22 crc kubenswrapper[4712]: I0130 17:46:22.748472 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s00-multi-thread-testing"] Jan 30 17:46:22 crc kubenswrapper[4712]: I0130 17:46:22.763370 4712 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 17:46:23 crc kubenswrapper[4712]: I0130 17:46:23.202766 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"eb9570ef-5465-43b3-8747-1d546402c98a","Type":"ContainerStarted","Data":"c568450c8bb696bff7f1ba8c8acf95ff450465198a2d6c19be769ae52f3959c0"} Jan 30 17:46:27 crc kubenswrapper[4712]: I0130 17:46:27.799641 4712 scope.go:117] "RemoveContainer" containerID="258988991cc97b72cc046c1bf95884aa854ed690c9651529519cbcfc0e55aa62" Jan 30 17:46:27 crc kubenswrapper[4712]: E0130 17:46:27.800383 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:46:39 crc kubenswrapper[4712]: I0130 17:46:39.502740 4712 scope.go:117] "RemoveContainer" containerID="f7c48acaf3b0ab2408c9187db92eb553921095bee4d1e2b71ccc76c963233f13" Jan 30 17:46:39 crc kubenswrapper[4712]: I0130 17:46:39.541956 4712 scope.go:117] "RemoveContainer" containerID="817f77bf82934d15c5e768fb39825e37652c360d9d3a11b094ea38c5f4f972c3" Jan 30 17:46:39 crc kubenswrapper[4712]: I0130 17:46:39.799544 4712 scope.go:117] "RemoveContainer" containerID="258988991cc97b72cc046c1bf95884aa854ed690c9651529519cbcfc0e55aa62" Jan 30 17:46:39 crc kubenswrapper[4712]: E0130 17:46:39.800347 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:46:51 crc kubenswrapper[4712]: I0130 17:46:51.799836 4712 scope.go:117] "RemoveContainer" containerID="258988991cc97b72cc046c1bf95884aa854ed690c9651529519cbcfc0e55aa62" Jan 30 17:46:51 crc kubenswrapper[4712]: E0130 17:46:51.800703 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:47:03 crc kubenswrapper[4712]: I0130 17:47:03.816861 4712 scope.go:117] "RemoveContainer" containerID="258988991cc97b72cc046c1bf95884aa854ed690c9651529519cbcfc0e55aa62" Jan 30 17:47:03 crc kubenswrapper[4712]: E0130 17:47:03.817611 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:47:16 crc kubenswrapper[4712]: I0130 17:47:16.799632 4712 scope.go:117] "RemoveContainer" containerID="258988991cc97b72cc046c1bf95884aa854ed690c9651529519cbcfc0e55aa62" Jan 30 17:47:16 crc kubenswrapper[4712]: E0130 17:47:16.800604 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:47:31 crc kubenswrapper[4712]: I0130 17:47:31.801133 4712 scope.go:117] "RemoveContainer" containerID="258988991cc97b72cc046c1bf95884aa854ed690c9651529519cbcfc0e55aa62" Jan 30 17:47:31 crc kubenswrapper[4712]: E0130 17:47:31.805738 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:47:39 crc kubenswrapper[4712]: I0130 17:47:39.618459 4712 scope.go:117] "RemoveContainer" containerID="bc5a49587554c456b213c772ce9b7c808ab677aa77200313ea18d0ad62cc44fa" Jan 30 17:47:46 crc kubenswrapper[4712]: I0130 17:47:46.799683 4712 scope.go:117] "RemoveContainer" containerID="258988991cc97b72cc046c1bf95884aa854ed690c9651529519cbcfc0e55aa62" Jan 30 17:47:46 crc kubenswrapper[4712]: E0130 17:47:46.800598 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:47:50 crc kubenswrapper[4712]: E0130 17:47:50.044274 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:b85d0548925081ae8c6bdd697658cec4" Jan 30 17:47:50 crc kubenswrapper[4712]: E0130 17:47:50.044632 4712 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:b85d0548925081ae8c6bdd697658cec4" Jan 30 17:47:50 crc kubenswrapper[4712]: E0130 17:47:50.044841 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:b85d0548925081ae8c6bdd697658cec4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qjtfz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest-s00-multi-thread-testing_openstack(eb9570ef-5465-43b3-8747-1d546402c98a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 30 17:47:50 crc kubenswrapper[4712]: E0130 17:47:50.046096 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" 
pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" podUID="eb9570ef-5465-43b3-8747-1d546402c98a" Jan 30 17:47:50 crc kubenswrapper[4712]: E0130 17:47:50.092921 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-antelope-centos9/openstack-tempest-all:b85d0548925081ae8c6bdd697658cec4\\\"\"" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" podUID="eb9570ef-5465-43b3-8747-1d546402c98a" Jan 30 17:47:58 crc kubenswrapper[4712]: I0130 17:47:58.801135 4712 scope.go:117] "RemoveContainer" containerID="258988991cc97b72cc046c1bf95884aa854ed690c9651529519cbcfc0e55aa62" Jan 30 17:47:58 crc kubenswrapper[4712]: E0130 17:47:58.802147 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:48:05 crc kubenswrapper[4712]: I0130 17:48:05.434992 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 30 17:48:08 crc kubenswrapper[4712]: I0130 17:48:08.269302 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"eb9570ef-5465-43b3-8747-1d546402c98a","Type":"ContainerStarted","Data":"80c7e0d069af7f6959273f8c62cb40ca5256edc75ad08919c53e699d752fc44d"} Jan 30 17:48:08 crc kubenswrapper[4712]: I0130 17:48:08.295282 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" podStartSLOduration=5.627461109 podStartE2EDuration="1m48.295255575s" podCreationTimestamp="2026-01-30 17:46:20 +0000 UTC" firstStartedPulling="2026-01-30 17:46:22.76309651 +0000 UTC m=+3119.670105979" lastFinishedPulling="2026-01-30 17:48:05.430890936 +0000 UTC m=+3222.337900445" observedRunningTime="2026-01-30 17:48:08.285744925 +0000 UTC m=+3225.192754404" watchObservedRunningTime="2026-01-30 17:48:08.295255575 +0000 UTC m=+3225.202265064" Jan 30 17:48:10 crc kubenswrapper[4712]: I0130 17:48:10.800396 4712 scope.go:117] "RemoveContainer" containerID="258988991cc97b72cc046c1bf95884aa854ed690c9651529519cbcfc0e55aa62" Jan 30 17:48:10 crc kubenswrapper[4712]: E0130 17:48:10.800972 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:48:23 crc kubenswrapper[4712]: I0130 17:48:23.814600 4712 scope.go:117] "RemoveContainer" containerID="258988991cc97b72cc046c1bf95884aa854ed690c9651529519cbcfc0e55aa62" Jan 30 17:48:23 crc kubenswrapper[4712]: E0130 17:48:23.815667 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:48:38 crc kubenswrapper[4712]: I0130 17:48:38.800056 4712 scope.go:117] "RemoveContainer" containerID="258988991cc97b72cc046c1bf95884aa854ed690c9651529519cbcfc0e55aa62" Jan 30 17:48:38 crc kubenswrapper[4712]: E0130 17:48:38.800941 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:48:49 crc kubenswrapper[4712]: I0130 17:48:49.800624 4712 scope.go:117] "RemoveContainer" containerID="258988991cc97b72cc046c1bf95884aa854ed690c9651529519cbcfc0e55aa62" Jan 30 17:48:49 crc kubenswrapper[4712]: E0130 17:48:49.801826 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:48:49 crc kubenswrapper[4712]: I0130 17:48:49.952202 4712 scope.go:117] "RemoveContainer" containerID="8724129d08406bdcac91e18b3b8a5bd1c2cc44973cad5b1a89deb93d5239c3b7" Jan 30 17:48:50 crc kubenswrapper[4712]: I0130 17:48:50.024547 4712 scope.go:117] "RemoveContainer" containerID="ad422303b64ad51e9d4abfc38ca13f3b3d7a313d519f517c2b4eab8456903b29" Jan 30 17:48:50 crc kubenswrapper[4712]: I0130 17:48:50.111391 4712 scope.go:117] "RemoveContainer" containerID="037c145875ae9ead1f0492ea9a1ffc29119bfbfb99a867f48bf010dec48278ca" Jan 30 17:49:04 crc kubenswrapper[4712]: I0130 17:49:04.799742 4712 scope.go:117] "RemoveContainer" containerID="258988991cc97b72cc046c1bf95884aa854ed690c9651529519cbcfc0e55aa62" Jan 30 17:49:04 crc kubenswrapper[4712]: E0130 17:49:04.800510 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:49:16 crc kubenswrapper[4712]: I0130 17:49:16.803430 4712 scope.go:117] "RemoveContainer" containerID="258988991cc97b72cc046c1bf95884aa854ed690c9651529519cbcfc0e55aa62" Jan 30 17:49:18 crc kubenswrapper[4712]: I0130 17:49:17.977979 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerStarted","Data":"efb507e91f0d9ac1411e759cd274d5a503bbd0bf68e4ea7c3dc57a196aeb75e0"} Jan 30 17:51:34 crc kubenswrapper[4712]: I0130 17:51:34.478012 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zg4sq"] Jan 30 17:51:34 crc kubenswrapper[4712]: I0130 17:51:34.520247 4712 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zg4sq" Jan 30 17:51:34 crc kubenswrapper[4712]: I0130 17:51:34.622593 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36edfc17-99ca-4e05-bf92-d60315860caf-utilities\") pod \"redhat-operators-zg4sq\" (UID: \"36edfc17-99ca-4e05-bf92-d60315860caf\") " pod="openshift-marketplace/redhat-operators-zg4sq" Jan 30 17:51:34 crc kubenswrapper[4712]: I0130 17:51:34.622739 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7x75v\" (UniqueName: \"kubernetes.io/projected/36edfc17-99ca-4e05-bf92-d60315860caf-kube-api-access-7x75v\") pod \"redhat-operators-zg4sq\" (UID: \"36edfc17-99ca-4e05-bf92-d60315860caf\") " pod="openshift-marketplace/redhat-operators-zg4sq" Jan 30 17:51:34 crc kubenswrapper[4712]: I0130 17:51:34.622767 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36edfc17-99ca-4e05-bf92-d60315860caf-catalog-content\") pod \"redhat-operators-zg4sq\" (UID: \"36edfc17-99ca-4e05-bf92-d60315860caf\") " pod="openshift-marketplace/redhat-operators-zg4sq" Jan 30 17:51:34 crc kubenswrapper[4712]: I0130 17:51:34.726334 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36edfc17-99ca-4e05-bf92-d60315860caf-utilities\") pod \"redhat-operators-zg4sq\" (UID: \"36edfc17-99ca-4e05-bf92-d60315860caf\") " pod="openshift-marketplace/redhat-operators-zg4sq" Jan 30 17:51:34 crc kubenswrapper[4712]: I0130 17:51:34.726572 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7x75v\" (UniqueName: \"kubernetes.io/projected/36edfc17-99ca-4e05-bf92-d60315860caf-kube-api-access-7x75v\") pod \"redhat-operators-zg4sq\" (UID: \"36edfc17-99ca-4e05-bf92-d60315860caf\") " pod="openshift-marketplace/redhat-operators-zg4sq" Jan 30 17:51:34 crc kubenswrapper[4712]: I0130 17:51:34.726628 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36edfc17-99ca-4e05-bf92-d60315860caf-catalog-content\") pod \"redhat-operators-zg4sq\" (UID: \"36edfc17-99ca-4e05-bf92-d60315860caf\") " pod="openshift-marketplace/redhat-operators-zg4sq" Jan 30 17:51:34 crc kubenswrapper[4712]: I0130 17:51:34.758908 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36edfc17-99ca-4e05-bf92-d60315860caf-utilities\") pod \"redhat-operators-zg4sq\" (UID: \"36edfc17-99ca-4e05-bf92-d60315860caf\") " pod="openshift-marketplace/redhat-operators-zg4sq" Jan 30 17:51:34 crc kubenswrapper[4712]: I0130 17:51:34.764237 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36edfc17-99ca-4e05-bf92-d60315860caf-catalog-content\") pod \"redhat-operators-zg4sq\" (UID: \"36edfc17-99ca-4e05-bf92-d60315860caf\") " pod="openshift-marketplace/redhat-operators-zg4sq" Jan 30 17:51:34 crc kubenswrapper[4712]: I0130 17:51:34.804905 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7x75v\" (UniqueName: \"kubernetes.io/projected/36edfc17-99ca-4e05-bf92-d60315860caf-kube-api-access-7x75v\") pod \"redhat-operators-zg4sq\" (UID: 
\"36edfc17-99ca-4e05-bf92-d60315860caf\") " pod="openshift-marketplace/redhat-operators-zg4sq" Jan 30 17:51:34 crc kubenswrapper[4712]: I0130 17:51:34.835674 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zg4sq"] Jan 30 17:51:34 crc kubenswrapper[4712]: I0130 17:51:34.870375 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zg4sq" Jan 30 17:51:36 crc kubenswrapper[4712]: I0130 17:51:36.282391 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:51:36 crc kubenswrapper[4712]: I0130 17:51:36.283763 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:51:38 crc kubenswrapper[4712]: I0130 17:51:38.935412 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zg4sq"] Jan 30 17:51:39 crc kubenswrapper[4712]: I0130 17:51:39.796232 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zg4sq" event={"ID":"36edfc17-99ca-4e05-bf92-d60315860caf","Type":"ContainerStarted","Data":"70c5eee838c709601bf4531c77cd7832ea98479ee60b55e13df09f5f71b81380"} Jan 30 17:51:40 crc kubenswrapper[4712]: I0130 17:51:40.808148 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zg4sq" event={"ID":"36edfc17-99ca-4e05-bf92-d60315860caf","Type":"ContainerDied","Data":"adf23fddaa30f9a5fb4dd35a25d9c2473941de42d48960e58ab5028a910d96e8"} Jan 30 17:51:40 crc kubenswrapper[4712]: I0130 17:51:40.809031 4712 generic.go:334] "Generic (PLEG): container finished" podID="36edfc17-99ca-4e05-bf92-d60315860caf" containerID="adf23fddaa30f9a5fb4dd35a25d9c2473941de42d48960e58ab5028a910d96e8" exitCode=0 Jan 30 17:51:40 crc kubenswrapper[4712]: I0130 17:51:40.811167 4712 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 17:51:58 crc kubenswrapper[4712]: I0130 17:51:58.049943 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-j9bpz" podUID="7d1e2433-a99b-4b29-8f58-e21a7745d1d9" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 17:51:58 crc kubenswrapper[4712]: E0130 17:51:58.382491 4712 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 30 17:51:58 crc kubenswrapper[4712]: E0130 17:51:58.389849 4712 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7x75v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-zg4sq_openshift-marketplace(36edfc17-99ca-4e05-bf92-d60315860caf): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 30 17:51:58 crc kubenswrapper[4712]: E0130 17:51:58.391552 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-zg4sq" podUID="36edfc17-99ca-4e05-bf92-d60315860caf" Jan 30 17:51:58 crc kubenswrapper[4712]: E0130 17:51:58.986052 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zg4sq" podUID="36edfc17-99ca-4e05-bf92-d60315860caf" Jan 30 17:52:06 crc kubenswrapper[4712]: I0130 17:52:06.271483 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:52:06 crc kubenswrapper[4712]: I0130 17:52:06.272186 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:52:13 crc kubenswrapper[4712]: I0130 17:52:13.084040 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zg4sq" event={"ID":"36edfc17-99ca-4e05-bf92-d60315860caf","Type":"ContainerStarted","Data":"7289c888e31b22895a8bbc9126612904a144255a4aa4bab2ecb4ac31da2191f3"} Jan 30 17:52:22 crc kubenswrapper[4712]: I0130 17:52:22.408710 4712 generic.go:334] "Generic (PLEG): container finished" 
podID="36edfc17-99ca-4e05-bf92-d60315860caf" containerID="7289c888e31b22895a8bbc9126612904a144255a4aa4bab2ecb4ac31da2191f3" exitCode=0 Jan 30 17:52:22 crc kubenswrapper[4712]: I0130 17:52:22.409277 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zg4sq" event={"ID":"36edfc17-99ca-4e05-bf92-d60315860caf","Type":"ContainerDied","Data":"7289c888e31b22895a8bbc9126612904a144255a4aa4bab2ecb4ac31da2191f3"} Jan 30 17:52:23 crc kubenswrapper[4712]: I0130 17:52:23.419113 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zg4sq" event={"ID":"36edfc17-99ca-4e05-bf92-d60315860caf","Type":"ContainerStarted","Data":"ed780214005aad39bb8ba6a29a0b2707af45faf688fcde1b78c2a7be95a0d645"} Jan 30 17:52:23 crc kubenswrapper[4712]: I0130 17:52:23.439546 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zg4sq" podStartSLOduration=7.410670484 podStartE2EDuration="49.439095046s" podCreationTimestamp="2026-01-30 17:51:34 +0000 UTC" firstStartedPulling="2026-01-30 17:51:40.809583678 +0000 UTC m=+3437.716593157" lastFinishedPulling="2026-01-30 17:52:22.83800825 +0000 UTC m=+3479.745017719" observedRunningTime="2026-01-30 17:52:23.434831684 +0000 UTC m=+3480.341841173" watchObservedRunningTime="2026-01-30 17:52:23.439095046 +0000 UTC m=+3480.346104515" Jan 30 17:52:24 crc kubenswrapper[4712]: I0130 17:52:24.872545 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zg4sq" Jan 30 17:52:24 crc kubenswrapper[4712]: I0130 17:52:24.873415 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zg4sq" Jan 30 17:52:25 crc kubenswrapper[4712]: I0130 17:52:25.916726 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zg4sq" podUID="36edfc17-99ca-4e05-bf92-d60315860caf" containerName="registry-server" probeResult="failure" output=< Jan 30 17:52:25 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 17:52:25 crc kubenswrapper[4712]: > Jan 30 17:52:35 crc kubenswrapper[4712]: I0130 17:52:35.916958 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zg4sq" podUID="36edfc17-99ca-4e05-bf92-d60315860caf" containerName="registry-server" probeResult="failure" output=< Jan 30 17:52:35 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 17:52:35 crc kubenswrapper[4712]: > Jan 30 17:52:36 crc kubenswrapper[4712]: I0130 17:52:36.271403 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:52:36 crc kubenswrapper[4712]: I0130 17:52:36.271449 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:52:36 crc kubenswrapper[4712]: I0130 17:52:36.271491 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" Jan 30 17:52:36 crc kubenswrapper[4712]: I0130 17:52:36.272228 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"efb507e91f0d9ac1411e759cd274d5a503bbd0bf68e4ea7c3dc57a196aeb75e0"} pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 17:52:36 crc kubenswrapper[4712]: I0130 17:52:36.272300 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" containerID="cri-o://efb507e91f0d9ac1411e759cd274d5a503bbd0bf68e4ea7c3dc57a196aeb75e0" gracePeriod=600 Jan 30 17:52:36 crc kubenswrapper[4712]: I0130 17:52:36.535335 4712 generic.go:334] "Generic (PLEG): container finished" podID="75ff6334-72a0-4748-bba6-0efb493c8033" containerID="efb507e91f0d9ac1411e759cd274d5a503bbd0bf68e4ea7c3dc57a196aeb75e0" exitCode=0 Jan 30 17:52:36 crc kubenswrapper[4712]: I0130 17:52:36.535381 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerDied","Data":"efb507e91f0d9ac1411e759cd274d5a503bbd0bf68e4ea7c3dc57a196aeb75e0"} Jan 30 17:52:36 crc kubenswrapper[4712]: I0130 17:52:36.535717 4712 scope.go:117] "RemoveContainer" containerID="258988991cc97b72cc046c1bf95884aa854ed690c9651529519cbcfc0e55aa62" Jan 30 17:52:37 crc kubenswrapper[4712]: I0130 17:52:37.546004 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerStarted","Data":"f960893331d2846f08481a57a0cdba5af49b5dd727b19f0e09fcdde7d00cb3df"} Jan 30 17:52:44 crc kubenswrapper[4712]: I0130 17:52:44.398642 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5cmzx"] Jan 30 17:52:44 crc kubenswrapper[4712]: I0130 17:52:44.412832 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5cmzx" Jan 30 17:52:44 crc kubenswrapper[4712]: I0130 17:52:44.480692 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5cmzx"] Jan 30 17:52:44 crc kubenswrapper[4712]: I0130 17:52:44.484192 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e6666c3-c8a2-4013-bd07-9500f11c6096-utilities\") pod \"community-operators-5cmzx\" (UID: \"6e6666c3-c8a2-4013-bd07-9500f11c6096\") " pod="openshift-marketplace/community-operators-5cmzx" Jan 30 17:52:44 crc kubenswrapper[4712]: I0130 17:52:44.484374 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e6666c3-c8a2-4013-bd07-9500f11c6096-catalog-content\") pod \"community-operators-5cmzx\" (UID: \"6e6666c3-c8a2-4013-bd07-9500f11c6096\") " pod="openshift-marketplace/community-operators-5cmzx" Jan 30 17:52:44 crc kubenswrapper[4712]: I0130 17:52:44.484468 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l6lm\" (UniqueName: \"kubernetes.io/projected/6e6666c3-c8a2-4013-bd07-9500f11c6096-kube-api-access-6l6lm\") pod \"community-operators-5cmzx\" (UID: \"6e6666c3-c8a2-4013-bd07-9500f11c6096\") " pod="openshift-marketplace/community-operators-5cmzx" Jan 30 17:52:44 crc kubenswrapper[4712]: I0130 17:52:44.586015 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e6666c3-c8a2-4013-bd07-9500f11c6096-utilities\") pod \"community-operators-5cmzx\" (UID: \"6e6666c3-c8a2-4013-bd07-9500f11c6096\") " pod="openshift-marketplace/community-operators-5cmzx" Jan 30 17:52:44 crc kubenswrapper[4712]: I0130 17:52:44.586098 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e6666c3-c8a2-4013-bd07-9500f11c6096-catalog-content\") pod \"community-operators-5cmzx\" (UID: \"6e6666c3-c8a2-4013-bd07-9500f11c6096\") " pod="openshift-marketplace/community-operators-5cmzx" Jan 30 17:52:44 crc kubenswrapper[4712]: I0130 17:52:44.586136 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6l6lm\" (UniqueName: \"kubernetes.io/projected/6e6666c3-c8a2-4013-bd07-9500f11c6096-kube-api-access-6l6lm\") pod \"community-operators-5cmzx\" (UID: \"6e6666c3-c8a2-4013-bd07-9500f11c6096\") " pod="openshift-marketplace/community-operators-5cmzx" Jan 30 17:52:44 crc kubenswrapper[4712]: I0130 17:52:44.595204 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e6666c3-c8a2-4013-bd07-9500f11c6096-utilities\") pod \"community-operators-5cmzx\" (UID: \"6e6666c3-c8a2-4013-bd07-9500f11c6096\") " pod="openshift-marketplace/community-operators-5cmzx" Jan 30 17:52:44 crc kubenswrapper[4712]: I0130 17:52:44.596110 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e6666c3-c8a2-4013-bd07-9500f11c6096-catalog-content\") pod \"community-operators-5cmzx\" (UID: \"6e6666c3-c8a2-4013-bd07-9500f11c6096\") " pod="openshift-marketplace/community-operators-5cmzx" Jan 30 17:52:44 crc kubenswrapper[4712]: I0130 17:52:44.640525 4712 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-6l6lm\" (UniqueName: \"kubernetes.io/projected/6e6666c3-c8a2-4013-bd07-9500f11c6096-kube-api-access-6l6lm\") pod \"community-operators-5cmzx\" (UID: \"6e6666c3-c8a2-4013-bd07-9500f11c6096\") " pod="openshift-marketplace/community-operators-5cmzx" Jan 30 17:52:44 crc kubenswrapper[4712]: I0130 17:52:44.749427 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5cmzx" Jan 30 17:52:45 crc kubenswrapper[4712]: I0130 17:52:45.927394 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zg4sq" podUID="36edfc17-99ca-4e05-bf92-d60315860caf" containerName="registry-server" probeResult="failure" output=< Jan 30 17:52:45 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 17:52:45 crc kubenswrapper[4712]: > Jan 30 17:52:46 crc kubenswrapper[4712]: I0130 17:52:46.216347 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5cmzx"] Jan 30 17:52:46 crc kubenswrapper[4712]: W0130 17:52:46.254692 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e6666c3_c8a2_4013_bd07_9500f11c6096.slice/crio-f74f1b0894fc9f79978b7ff2b7f5e9df9503373dda2eb8f69782670cb2a1c5fa WatchSource:0}: Error finding container f74f1b0894fc9f79978b7ff2b7f5e9df9503373dda2eb8f69782670cb2a1c5fa: Status 404 returned error can't find the container with id f74f1b0894fc9f79978b7ff2b7f5e9df9503373dda2eb8f69782670cb2a1c5fa Jan 30 17:52:46 crc kubenswrapper[4712]: I0130 17:52:46.636901 4712 generic.go:334] "Generic (PLEG): container finished" podID="6e6666c3-c8a2-4013-bd07-9500f11c6096" containerID="362b9249ed01be967c44cd27face5612d06a0000d3513df26ed860f02d50643b" exitCode=0 Jan 30 17:52:46 crc kubenswrapper[4712]: I0130 17:52:46.637015 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5cmzx" event={"ID":"6e6666c3-c8a2-4013-bd07-9500f11c6096","Type":"ContainerDied","Data":"362b9249ed01be967c44cd27face5612d06a0000d3513df26ed860f02d50643b"} Jan 30 17:52:46 crc kubenswrapper[4712]: I0130 17:52:46.637509 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5cmzx" event={"ID":"6e6666c3-c8a2-4013-bd07-9500f11c6096","Type":"ContainerStarted","Data":"f74f1b0894fc9f79978b7ff2b7f5e9df9503373dda2eb8f69782670cb2a1c5fa"} Jan 30 17:52:47 crc kubenswrapper[4712]: I0130 17:52:47.646569 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5cmzx" event={"ID":"6e6666c3-c8a2-4013-bd07-9500f11c6096","Type":"ContainerStarted","Data":"0fcdf1ad83c4da6cd5100e23d5ef2d809a41a9513750393701afd098252cc20a"} Jan 30 17:52:50 crc kubenswrapper[4712]: I0130 17:52:50.678335 4712 generic.go:334] "Generic (PLEG): container finished" podID="6e6666c3-c8a2-4013-bd07-9500f11c6096" containerID="0fcdf1ad83c4da6cd5100e23d5ef2d809a41a9513750393701afd098252cc20a" exitCode=0 Jan 30 17:52:50 crc kubenswrapper[4712]: I0130 17:52:50.678412 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5cmzx" event={"ID":"6e6666c3-c8a2-4013-bd07-9500f11c6096","Type":"ContainerDied","Data":"0fcdf1ad83c4da6cd5100e23d5ef2d809a41a9513750393701afd098252cc20a"} Jan 30 17:52:51 crc kubenswrapper[4712]: I0130 17:52:51.692659 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-5cmzx" event={"ID":"6e6666c3-c8a2-4013-bd07-9500f11c6096","Type":"ContainerStarted","Data":"6fe4e80f071c00e945ab5e7eb6c3c38657ca25bd949b793dac83c537eed5f51f"} Jan 30 17:52:51 crc kubenswrapper[4712]: I0130 17:52:51.731037 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5cmzx" podStartSLOduration=3.227584191 podStartE2EDuration="7.729062852s" podCreationTimestamp="2026-01-30 17:52:44 +0000 UTC" firstStartedPulling="2026-01-30 17:52:46.638893994 +0000 UTC m=+3503.545903463" lastFinishedPulling="2026-01-30 17:52:51.140372645 +0000 UTC m=+3508.047382124" observedRunningTime="2026-01-30 17:52:51.723286903 +0000 UTC m=+3508.630296372" watchObservedRunningTime="2026-01-30 17:52:51.729062852 +0000 UTC m=+3508.636072331" Jan 30 17:52:54 crc kubenswrapper[4712]: I0130 17:52:54.751178 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5cmzx" Jan 30 17:52:54 crc kubenswrapper[4712]: I0130 17:52:54.751733 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5cmzx" Jan 30 17:52:55 crc kubenswrapper[4712]: I0130 17:52:55.817516 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-5cmzx" podUID="6e6666c3-c8a2-4013-bd07-9500f11c6096" containerName="registry-server" probeResult="failure" output=< Jan 30 17:52:55 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 17:52:55 crc kubenswrapper[4712]: > Jan 30 17:52:55 crc kubenswrapper[4712]: I0130 17:52:55.917645 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zg4sq" podUID="36edfc17-99ca-4e05-bf92-d60315860caf" containerName="registry-server" probeResult="failure" output=< Jan 30 17:52:55 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 17:52:55 crc kubenswrapper[4712]: > Jan 30 17:53:00 crc kubenswrapper[4712]: I0130 17:53:00.910589 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-br6mh"] Jan 30 17:53:00 crc kubenswrapper[4712]: I0130 17:53:00.940788 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-br6mh" Jan 30 17:53:00 crc kubenswrapper[4712]: I0130 17:53:00.968383 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-br6mh"] Jan 30 17:53:01 crc kubenswrapper[4712]: I0130 17:53:01.042758 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98fffc02-edac-49f2-8328-95d77acfa779-catalog-content\") pod \"redhat-marketplace-br6mh\" (UID: \"98fffc02-edac-49f2-8328-95d77acfa779\") " pod="openshift-marketplace/redhat-marketplace-br6mh" Jan 30 17:53:01 crc kubenswrapper[4712]: I0130 17:53:01.042834 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98fffc02-edac-49f2-8328-95d77acfa779-utilities\") pod \"redhat-marketplace-br6mh\" (UID: \"98fffc02-edac-49f2-8328-95d77acfa779\") " pod="openshift-marketplace/redhat-marketplace-br6mh" Jan 30 17:53:01 crc kubenswrapper[4712]: I0130 17:53:01.042976 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhzgh\" (UniqueName: \"kubernetes.io/projected/98fffc02-edac-49f2-8328-95d77acfa779-kube-api-access-xhzgh\") pod \"redhat-marketplace-br6mh\" (UID: \"98fffc02-edac-49f2-8328-95d77acfa779\") " pod="openshift-marketplace/redhat-marketplace-br6mh" Jan 30 17:53:01 crc kubenswrapper[4712]: I0130 17:53:01.144938 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhzgh\" (UniqueName: \"kubernetes.io/projected/98fffc02-edac-49f2-8328-95d77acfa779-kube-api-access-xhzgh\") pod \"redhat-marketplace-br6mh\" (UID: \"98fffc02-edac-49f2-8328-95d77acfa779\") " pod="openshift-marketplace/redhat-marketplace-br6mh" Jan 30 17:53:01 crc kubenswrapper[4712]: I0130 17:53:01.145038 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98fffc02-edac-49f2-8328-95d77acfa779-catalog-content\") pod \"redhat-marketplace-br6mh\" (UID: \"98fffc02-edac-49f2-8328-95d77acfa779\") " pod="openshift-marketplace/redhat-marketplace-br6mh" Jan 30 17:53:01 crc kubenswrapper[4712]: I0130 17:53:01.145095 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98fffc02-edac-49f2-8328-95d77acfa779-utilities\") pod \"redhat-marketplace-br6mh\" (UID: \"98fffc02-edac-49f2-8328-95d77acfa779\") " pod="openshift-marketplace/redhat-marketplace-br6mh" Jan 30 17:53:01 crc kubenswrapper[4712]: I0130 17:53:01.152720 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98fffc02-edac-49f2-8328-95d77acfa779-catalog-content\") pod \"redhat-marketplace-br6mh\" (UID: \"98fffc02-edac-49f2-8328-95d77acfa779\") " pod="openshift-marketplace/redhat-marketplace-br6mh" Jan 30 17:53:01 crc kubenswrapper[4712]: I0130 17:53:01.154345 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98fffc02-edac-49f2-8328-95d77acfa779-utilities\") pod \"redhat-marketplace-br6mh\" (UID: \"98fffc02-edac-49f2-8328-95d77acfa779\") " pod="openshift-marketplace/redhat-marketplace-br6mh" Jan 30 17:53:01 crc kubenswrapper[4712]: I0130 17:53:01.220437 4712 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-xhzgh\" (UniqueName: \"kubernetes.io/projected/98fffc02-edac-49f2-8328-95d77acfa779-kube-api-access-xhzgh\") pod \"redhat-marketplace-br6mh\" (UID: \"98fffc02-edac-49f2-8328-95d77acfa779\") " pod="openshift-marketplace/redhat-marketplace-br6mh" Jan 30 17:53:01 crc kubenswrapper[4712]: I0130 17:53:01.295669 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-br6mh" Jan 30 17:53:07 crc kubenswrapper[4712]: I0130 17:53:05.827555 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-5cmzx" podUID="6e6666c3-c8a2-4013-bd07-9500f11c6096" containerName="registry-server" probeResult="failure" output=< Jan 30 17:53:07 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 17:53:07 crc kubenswrapper[4712]: > Jan 30 17:53:07 crc kubenswrapper[4712]: I0130 17:53:05.917671 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zg4sq" podUID="36edfc17-99ca-4e05-bf92-d60315860caf" containerName="registry-server" probeResult="failure" output=< Jan 30 17:53:07 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 17:53:07 crc kubenswrapper[4712]: > Jan 30 17:53:07 crc kubenswrapper[4712]: I0130 17:53:07.618082 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-br6mh"] Jan 30 17:53:07 crc kubenswrapper[4712]: W0130 17:53:07.685624 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod98fffc02_edac_49f2_8328_95d77acfa779.slice/crio-941ce7e399edb871bffc8cb3e594a5a1f8e8afc47eaa45c705b5f96171bf1cda WatchSource:0}: Error finding container 941ce7e399edb871bffc8cb3e594a5a1f8e8afc47eaa45c705b5f96171bf1cda: Status 404 returned error can't find the container with id 941ce7e399edb871bffc8cb3e594a5a1f8e8afc47eaa45c705b5f96171bf1cda Jan 30 17:53:07 crc kubenswrapper[4712]: I0130 17:53:07.856275 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-br6mh" event={"ID":"98fffc02-edac-49f2-8328-95d77acfa779","Type":"ContainerStarted","Data":"941ce7e399edb871bffc8cb3e594a5a1f8e8afc47eaa45c705b5f96171bf1cda"} Jan 30 17:53:08 crc kubenswrapper[4712]: I0130 17:53:08.869602 4712 generic.go:334] "Generic (PLEG): container finished" podID="98fffc02-edac-49f2-8328-95d77acfa779" containerID="2e324d903c7d675025c2185e14031744156a200f01bb1ee6b53ce11dbd9d5c00" exitCode=0 Jan 30 17:53:08 crc kubenswrapper[4712]: I0130 17:53:08.870338 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-br6mh" event={"ID":"98fffc02-edac-49f2-8328-95d77acfa779","Type":"ContainerDied","Data":"2e324d903c7d675025c2185e14031744156a200f01bb1ee6b53ce11dbd9d5c00"} Jan 30 17:53:11 crc kubenswrapper[4712]: I0130 17:53:11.906835 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-br6mh" event={"ID":"98fffc02-edac-49f2-8328-95d77acfa779","Type":"ContainerStarted","Data":"74266b04e4e8e2865104ffe24ba1be0264e2d6e63697520ddd791bd7699b268a"} Jan 30 17:53:12 crc kubenswrapper[4712]: I0130 17:53:12.923021 4712 generic.go:334] "Generic (PLEG): container finished" podID="98fffc02-edac-49f2-8328-95d77acfa779" containerID="74266b04e4e8e2865104ffe24ba1be0264e2d6e63697520ddd791bd7699b268a" exitCode=0 Jan 30 17:53:12 crc kubenswrapper[4712]: I0130 
17:53:12.923083 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-br6mh" event={"ID":"98fffc02-edac-49f2-8328-95d77acfa779","Type":"ContainerDied","Data":"74266b04e4e8e2865104ffe24ba1be0264e2d6e63697520ddd791bd7699b268a"} Jan 30 17:53:14 crc kubenswrapper[4712]: I0130 17:53:14.837048 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5cmzx" Jan 30 17:53:14 crc kubenswrapper[4712]: I0130 17:53:14.901347 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5cmzx" Jan 30 17:53:14 crc kubenswrapper[4712]: I0130 17:53:14.945890 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-br6mh" event={"ID":"98fffc02-edac-49f2-8328-95d77acfa779","Type":"ContainerStarted","Data":"80b4b7410d67a8139e3017a57ab7b4247b29e33a99c85ed9f3039ed7f68de6f8"} Jan 30 17:53:14 crc kubenswrapper[4712]: I0130 17:53:14.966347 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-br6mh" podStartSLOduration=10.154846064000001 podStartE2EDuration="14.964678794s" podCreationTimestamp="2026-01-30 17:53:00 +0000 UTC" firstStartedPulling="2026-01-30 17:53:08.874871482 +0000 UTC m=+3525.781880941" lastFinishedPulling="2026-01-30 17:53:13.684704192 +0000 UTC m=+3530.591713671" observedRunningTime="2026-01-30 17:53:14.964251443 +0000 UTC m=+3531.871260912" watchObservedRunningTime="2026-01-30 17:53:14.964678794 +0000 UTC m=+3531.871688263" Jan 30 17:53:15 crc kubenswrapper[4712]: I0130 17:53:15.918199 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zg4sq" podUID="36edfc17-99ca-4e05-bf92-d60315860caf" containerName="registry-server" probeResult="failure" output=< Jan 30 17:53:15 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 17:53:15 crc kubenswrapper[4712]: > Jan 30 17:53:16 crc kubenswrapper[4712]: I0130 17:53:16.205764 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5cmzx"] Jan 30 17:53:16 crc kubenswrapper[4712]: I0130 17:53:16.208704 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5cmzx" podUID="6e6666c3-c8a2-4013-bd07-9500f11c6096" containerName="registry-server" containerID="cri-o://6fe4e80f071c00e945ab5e7eb6c3c38657ca25bd949b793dac83c537eed5f51f" gracePeriod=2 Jan 30 17:53:16 crc kubenswrapper[4712]: I0130 17:53:16.963781 4712 generic.go:334] "Generic (PLEG): container finished" podID="6e6666c3-c8a2-4013-bd07-9500f11c6096" containerID="6fe4e80f071c00e945ab5e7eb6c3c38657ca25bd949b793dac83c537eed5f51f" exitCode=0 Jan 30 17:53:16 crc kubenswrapper[4712]: I0130 17:53:16.963835 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5cmzx" event={"ID":"6e6666c3-c8a2-4013-bd07-9500f11c6096","Type":"ContainerDied","Data":"6fe4e80f071c00e945ab5e7eb6c3c38657ca25bd949b793dac83c537eed5f51f"} Jan 30 17:53:17 crc kubenswrapper[4712]: I0130 17:53:17.307265 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5cmzx" Jan 30 17:53:17 crc kubenswrapper[4712]: I0130 17:53:17.399078 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6l6lm\" (UniqueName: \"kubernetes.io/projected/6e6666c3-c8a2-4013-bd07-9500f11c6096-kube-api-access-6l6lm\") pod \"6e6666c3-c8a2-4013-bd07-9500f11c6096\" (UID: \"6e6666c3-c8a2-4013-bd07-9500f11c6096\") " Jan 30 17:53:17 crc kubenswrapper[4712]: I0130 17:53:17.399212 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e6666c3-c8a2-4013-bd07-9500f11c6096-utilities\") pod \"6e6666c3-c8a2-4013-bd07-9500f11c6096\" (UID: \"6e6666c3-c8a2-4013-bd07-9500f11c6096\") " Jan 30 17:53:17 crc kubenswrapper[4712]: I0130 17:53:17.399292 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e6666c3-c8a2-4013-bd07-9500f11c6096-catalog-content\") pod \"6e6666c3-c8a2-4013-bd07-9500f11c6096\" (UID: \"6e6666c3-c8a2-4013-bd07-9500f11c6096\") " Jan 30 17:53:17 crc kubenswrapper[4712]: I0130 17:53:17.409557 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e6666c3-c8a2-4013-bd07-9500f11c6096-utilities" (OuterVolumeSpecName: "utilities") pod "6e6666c3-c8a2-4013-bd07-9500f11c6096" (UID: "6e6666c3-c8a2-4013-bd07-9500f11c6096"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:53:17 crc kubenswrapper[4712]: I0130 17:53:17.435916 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e6666c3-c8a2-4013-bd07-9500f11c6096-kube-api-access-6l6lm" (OuterVolumeSpecName: "kube-api-access-6l6lm") pod "6e6666c3-c8a2-4013-bd07-9500f11c6096" (UID: "6e6666c3-c8a2-4013-bd07-9500f11c6096"). InnerVolumeSpecName "kube-api-access-6l6lm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:53:17 crc kubenswrapper[4712]: I0130 17:53:17.502129 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6l6lm\" (UniqueName: \"kubernetes.io/projected/6e6666c3-c8a2-4013-bd07-9500f11c6096-kube-api-access-6l6lm\") on node \"crc\" DevicePath \"\"" Jan 30 17:53:17 crc kubenswrapper[4712]: I0130 17:53:17.502178 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6e6666c3-c8a2-4013-bd07-9500f11c6096-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:53:17 crc kubenswrapper[4712]: I0130 17:53:17.539988 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6e6666c3-c8a2-4013-bd07-9500f11c6096-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6e6666c3-c8a2-4013-bd07-9500f11c6096" (UID: "6e6666c3-c8a2-4013-bd07-9500f11c6096"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:53:17 crc kubenswrapper[4712]: I0130 17:53:17.604951 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6e6666c3-c8a2-4013-bd07-9500f11c6096-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:53:17 crc kubenswrapper[4712]: I0130 17:53:17.976254 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5cmzx" event={"ID":"6e6666c3-c8a2-4013-bd07-9500f11c6096","Type":"ContainerDied","Data":"f74f1b0894fc9f79978b7ff2b7f5e9df9503373dda2eb8f69782670cb2a1c5fa"} Jan 30 17:53:17 crc kubenswrapper[4712]: I0130 17:53:17.976506 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5cmzx" Jan 30 17:53:17 crc kubenswrapper[4712]: I0130 17:53:17.977295 4712 scope.go:117] "RemoveContainer" containerID="6fe4e80f071c00e945ab5e7eb6c3c38657ca25bd949b793dac83c537eed5f51f" Jan 30 17:53:18 crc kubenswrapper[4712]: I0130 17:53:18.011449 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5cmzx"] Jan 30 17:53:18 crc kubenswrapper[4712]: I0130 17:53:18.019656 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5cmzx"] Jan 30 17:53:18 crc kubenswrapper[4712]: I0130 17:53:18.025156 4712 scope.go:117] "RemoveContainer" containerID="0fcdf1ad83c4da6cd5100e23d5ef2d809a41a9513750393701afd098252cc20a" Jan 30 17:53:18 crc kubenswrapper[4712]: I0130 17:53:18.051348 4712 scope.go:117] "RemoveContainer" containerID="362b9249ed01be967c44cd27face5612d06a0000d3513df26ed860f02d50643b" Jan 30 17:53:19 crc kubenswrapper[4712]: I0130 17:53:19.813859 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e6666c3-c8a2-4013-bd07-9500f11c6096" path="/var/lib/kubelet/pods/6e6666c3-c8a2-4013-bd07-9500f11c6096/volumes" Jan 30 17:53:21 crc kubenswrapper[4712]: I0130 17:53:21.296261 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-br6mh" Jan 30 17:53:21 crc kubenswrapper[4712]: I0130 17:53:21.296318 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-br6mh" Jan 30 17:53:22 crc kubenswrapper[4712]: I0130 17:53:22.374988 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-br6mh" podUID="98fffc02-edac-49f2-8328-95d77acfa779" containerName="registry-server" probeResult="failure" output=< Jan 30 17:53:22 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 17:53:22 crc kubenswrapper[4712]: > Jan 30 17:53:25 crc kubenswrapper[4712]: I0130 17:53:25.919736 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zg4sq" podUID="36edfc17-99ca-4e05-bf92-d60315860caf" containerName="registry-server" probeResult="failure" output=< Jan 30 17:53:25 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 17:53:25 crc kubenswrapper[4712]: > Jan 30 17:53:31 crc kubenswrapper[4712]: I0130 17:53:31.349048 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-br6mh" Jan 30 17:53:31 crc kubenswrapper[4712]: I0130 17:53:31.417984 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-br6mh" Jan 30 17:53:32 crc kubenswrapper[4712]: I0130 17:53:32.852456 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-br6mh"] Jan 30 17:53:33 crc kubenswrapper[4712]: I0130 17:53:33.109478 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-br6mh" podUID="98fffc02-edac-49f2-8328-95d77acfa779" containerName="registry-server" containerID="cri-o://80b4b7410d67a8139e3017a57ab7b4247b29e33a99c85ed9f3039ed7f68de6f8" gracePeriod=2 Jan 30 17:53:34 crc kubenswrapper[4712]: I0130 17:53:34.119650 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-br6mh" Jan 30 17:53:34 crc kubenswrapper[4712]: I0130 17:53:34.120298 4712 generic.go:334] "Generic (PLEG): container finished" podID="98fffc02-edac-49f2-8328-95d77acfa779" containerID="80b4b7410d67a8139e3017a57ab7b4247b29e33a99c85ed9f3039ed7f68de6f8" exitCode=0 Jan 30 17:53:34 crc kubenswrapper[4712]: I0130 17:53:34.120325 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-br6mh" event={"ID":"98fffc02-edac-49f2-8328-95d77acfa779","Type":"ContainerDied","Data":"80b4b7410d67a8139e3017a57ab7b4247b29e33a99c85ed9f3039ed7f68de6f8"} Jan 30 17:53:34 crc kubenswrapper[4712]: I0130 17:53:34.121540 4712 scope.go:117] "RemoveContainer" containerID="80b4b7410d67a8139e3017a57ab7b4247b29e33a99c85ed9f3039ed7f68de6f8" Jan 30 17:53:34 crc kubenswrapper[4712]: I0130 17:53:34.121451 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-br6mh" event={"ID":"98fffc02-edac-49f2-8328-95d77acfa779","Type":"ContainerDied","Data":"941ce7e399edb871bffc8cb3e594a5a1f8e8afc47eaa45c705b5f96171bf1cda"} Jan 30 17:53:34 crc kubenswrapper[4712]: I0130 17:53:34.178667 4712 scope.go:117] "RemoveContainer" containerID="74266b04e4e8e2865104ffe24ba1be0264e2d6e63697520ddd791bd7699b268a" Jan 30 17:53:34 crc kubenswrapper[4712]: I0130 17:53:34.218347 4712 scope.go:117] "RemoveContainer" containerID="2e324d903c7d675025c2185e14031744156a200f01bb1ee6b53ce11dbd9d5c00" Jan 30 17:53:34 crc kubenswrapper[4712]: I0130 17:53:34.232557 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98fffc02-edac-49f2-8328-95d77acfa779-utilities\") pod \"98fffc02-edac-49f2-8328-95d77acfa779\" (UID: \"98fffc02-edac-49f2-8328-95d77acfa779\") " Jan 30 17:53:34 crc kubenswrapper[4712]: I0130 17:53:34.232670 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhzgh\" (UniqueName: \"kubernetes.io/projected/98fffc02-edac-49f2-8328-95d77acfa779-kube-api-access-xhzgh\") pod \"98fffc02-edac-49f2-8328-95d77acfa779\" (UID: \"98fffc02-edac-49f2-8328-95d77acfa779\") " Jan 30 17:53:34 crc kubenswrapper[4712]: I0130 17:53:34.232734 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98fffc02-edac-49f2-8328-95d77acfa779-catalog-content\") pod \"98fffc02-edac-49f2-8328-95d77acfa779\" (UID: \"98fffc02-edac-49f2-8328-95d77acfa779\") " Jan 30 17:53:34 crc kubenswrapper[4712]: I0130 17:53:34.239907 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98fffc02-edac-49f2-8328-95d77acfa779-utilities" (OuterVolumeSpecName: "utilities") pod 
"98fffc02-edac-49f2-8328-95d77acfa779" (UID: "98fffc02-edac-49f2-8328-95d77acfa779"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:53:34 crc kubenswrapper[4712]: I0130 17:53:34.260449 4712 scope.go:117] "RemoveContainer" containerID="80b4b7410d67a8139e3017a57ab7b4247b29e33a99c85ed9f3039ed7f68de6f8" Jan 30 17:53:34 crc kubenswrapper[4712]: E0130 17:53:34.273758 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80b4b7410d67a8139e3017a57ab7b4247b29e33a99c85ed9f3039ed7f68de6f8\": container with ID starting with 80b4b7410d67a8139e3017a57ab7b4247b29e33a99c85ed9f3039ed7f68de6f8 not found: ID does not exist" containerID="80b4b7410d67a8139e3017a57ab7b4247b29e33a99c85ed9f3039ed7f68de6f8" Jan 30 17:53:34 crc kubenswrapper[4712]: I0130 17:53:34.273896 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80b4b7410d67a8139e3017a57ab7b4247b29e33a99c85ed9f3039ed7f68de6f8"} err="failed to get container status \"80b4b7410d67a8139e3017a57ab7b4247b29e33a99c85ed9f3039ed7f68de6f8\": rpc error: code = NotFound desc = could not find container \"80b4b7410d67a8139e3017a57ab7b4247b29e33a99c85ed9f3039ed7f68de6f8\": container with ID starting with 80b4b7410d67a8139e3017a57ab7b4247b29e33a99c85ed9f3039ed7f68de6f8 not found: ID does not exist" Jan 30 17:53:34 crc kubenswrapper[4712]: I0130 17:53:34.273984 4712 scope.go:117] "RemoveContainer" containerID="74266b04e4e8e2865104ffe24ba1be0264e2d6e63697520ddd791bd7699b268a" Jan 30 17:53:34 crc kubenswrapper[4712]: E0130 17:53:34.275071 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"74266b04e4e8e2865104ffe24ba1be0264e2d6e63697520ddd791bd7699b268a\": container with ID starting with 74266b04e4e8e2865104ffe24ba1be0264e2d6e63697520ddd791bd7699b268a not found: ID does not exist" containerID="74266b04e4e8e2865104ffe24ba1be0264e2d6e63697520ddd791bd7699b268a" Jan 30 17:53:34 crc kubenswrapper[4712]: I0130 17:53:34.275123 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74266b04e4e8e2865104ffe24ba1be0264e2d6e63697520ddd791bd7699b268a"} err="failed to get container status \"74266b04e4e8e2865104ffe24ba1be0264e2d6e63697520ddd791bd7699b268a\": rpc error: code = NotFound desc = could not find container \"74266b04e4e8e2865104ffe24ba1be0264e2d6e63697520ddd791bd7699b268a\": container with ID starting with 74266b04e4e8e2865104ffe24ba1be0264e2d6e63697520ddd791bd7699b268a not found: ID does not exist" Jan 30 17:53:34 crc kubenswrapper[4712]: I0130 17:53:34.275159 4712 scope.go:117] "RemoveContainer" containerID="2e324d903c7d675025c2185e14031744156a200f01bb1ee6b53ce11dbd9d5c00" Jan 30 17:53:34 crc kubenswrapper[4712]: E0130 17:53:34.275628 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e324d903c7d675025c2185e14031744156a200f01bb1ee6b53ce11dbd9d5c00\": container with ID starting with 2e324d903c7d675025c2185e14031744156a200f01bb1ee6b53ce11dbd9d5c00 not found: ID does not exist" containerID="2e324d903c7d675025c2185e14031744156a200f01bb1ee6b53ce11dbd9d5c00" Jan 30 17:53:34 crc kubenswrapper[4712]: I0130 17:53:34.275735 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e324d903c7d675025c2185e14031744156a200f01bb1ee6b53ce11dbd9d5c00"} err="failed to get container 
status \"2e324d903c7d675025c2185e14031744156a200f01bb1ee6b53ce11dbd9d5c00\": rpc error: code = NotFound desc = could not find container \"2e324d903c7d675025c2185e14031744156a200f01bb1ee6b53ce11dbd9d5c00\": container with ID starting with 2e324d903c7d675025c2185e14031744156a200f01bb1ee6b53ce11dbd9d5c00 not found: ID does not exist" Jan 30 17:53:34 crc kubenswrapper[4712]: I0130 17:53:34.291657 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98fffc02-edac-49f2-8328-95d77acfa779-kube-api-access-xhzgh" (OuterVolumeSpecName: "kube-api-access-xhzgh") pod "98fffc02-edac-49f2-8328-95d77acfa779" (UID: "98fffc02-edac-49f2-8328-95d77acfa779"). InnerVolumeSpecName "kube-api-access-xhzgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:53:34 crc kubenswrapper[4712]: I0130 17:53:34.300645 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98fffc02-edac-49f2-8328-95d77acfa779-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "98fffc02-edac-49f2-8328-95d77acfa779" (UID: "98fffc02-edac-49f2-8328-95d77acfa779"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:53:34 crc kubenswrapper[4712]: I0130 17:53:34.334610 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98fffc02-edac-49f2-8328-95d77acfa779-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:53:34 crc kubenswrapper[4712]: I0130 17:53:34.334646 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhzgh\" (UniqueName: \"kubernetes.io/projected/98fffc02-edac-49f2-8328-95d77acfa779-kube-api-access-xhzgh\") on node \"crc\" DevicePath \"\"" Jan 30 17:53:34 crc kubenswrapper[4712]: I0130 17:53:34.334656 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98fffc02-edac-49f2-8328-95d77acfa779-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:53:35 crc kubenswrapper[4712]: I0130 17:53:35.133090 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-br6mh" Jan 30 17:53:35 crc kubenswrapper[4712]: I0130 17:53:35.168296 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-br6mh"] Jan 30 17:53:35 crc kubenswrapper[4712]: I0130 17:53:35.188778 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-br6mh"] Jan 30 17:53:35 crc kubenswrapper[4712]: I0130 17:53:35.818488 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98fffc02-edac-49f2-8328-95d77acfa779" path="/var/lib/kubelet/pods/98fffc02-edac-49f2-8328-95d77acfa779/volumes" Jan 30 17:53:35 crc kubenswrapper[4712]: I0130 17:53:35.923448 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zg4sq" podUID="36edfc17-99ca-4e05-bf92-d60315860caf" containerName="registry-server" probeResult="failure" output=< Jan 30 17:53:35 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 17:53:35 crc kubenswrapper[4712]: > Jan 30 17:53:44 crc kubenswrapper[4712]: I0130 17:53:44.957909 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zg4sq" Jan 30 17:53:45 crc kubenswrapper[4712]: I0130 17:53:45.009953 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zg4sq" Jan 30 17:53:45 crc kubenswrapper[4712]: I0130 17:53:45.689820 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zg4sq"] Jan 30 17:53:45 crc kubenswrapper[4712]: I0130 17:53:45.823205 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kpb2d"] Jan 30 17:53:45 crc kubenswrapper[4712]: I0130 17:53:45.825013 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kpb2d" podUID="05eaea30-d33b-4173-a1a3-d5a52ea53da9" containerName="registry-server" containerID="cri-o://ed50832769d7bc5e6e03993d5fe9c8d1737e3fb93172cec693284d1a3a0f6fc8" gracePeriod=2 Jan 30 17:53:46 crc kubenswrapper[4712]: I0130 17:53:46.232982 4712 generic.go:334] "Generic (PLEG): container finished" podID="05eaea30-d33b-4173-a1a3-d5a52ea53da9" containerID="ed50832769d7bc5e6e03993d5fe9c8d1737e3fb93172cec693284d1a3a0f6fc8" exitCode=0 Jan 30 17:53:46 crc kubenswrapper[4712]: I0130 17:53:46.233686 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kpb2d" event={"ID":"05eaea30-d33b-4173-a1a3-d5a52ea53da9","Type":"ContainerDied","Data":"ed50832769d7bc5e6e03993d5fe9c8d1737e3fb93172cec693284d1a3a0f6fc8"} Jan 30 17:53:46 crc kubenswrapper[4712]: I0130 17:53:46.785910 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kpb2d" Jan 30 17:53:46 crc kubenswrapper[4712]: I0130 17:53:46.946556 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qvfhz\" (UniqueName: \"kubernetes.io/projected/05eaea30-d33b-4173-a1a3-d5a52ea53da9-kube-api-access-qvfhz\") pod \"05eaea30-d33b-4173-a1a3-d5a52ea53da9\" (UID: \"05eaea30-d33b-4173-a1a3-d5a52ea53da9\") " Jan 30 17:53:46 crc kubenswrapper[4712]: I0130 17:53:46.946964 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05eaea30-d33b-4173-a1a3-d5a52ea53da9-utilities\") pod \"05eaea30-d33b-4173-a1a3-d5a52ea53da9\" (UID: \"05eaea30-d33b-4173-a1a3-d5a52ea53da9\") " Jan 30 17:53:46 crc kubenswrapper[4712]: I0130 17:53:46.947059 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05eaea30-d33b-4173-a1a3-d5a52ea53da9-catalog-content\") pod \"05eaea30-d33b-4173-a1a3-d5a52ea53da9\" (UID: \"05eaea30-d33b-4173-a1a3-d5a52ea53da9\") " Jan 30 17:53:46 crc kubenswrapper[4712]: I0130 17:53:46.951506 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05eaea30-d33b-4173-a1a3-d5a52ea53da9-utilities" (OuterVolumeSpecName: "utilities") pod "05eaea30-d33b-4173-a1a3-d5a52ea53da9" (UID: "05eaea30-d33b-4173-a1a3-d5a52ea53da9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:53:46 crc kubenswrapper[4712]: I0130 17:53:46.960147 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05eaea30-d33b-4173-a1a3-d5a52ea53da9-kube-api-access-qvfhz" (OuterVolumeSpecName: "kube-api-access-qvfhz") pod "05eaea30-d33b-4173-a1a3-d5a52ea53da9" (UID: "05eaea30-d33b-4173-a1a3-d5a52ea53da9"). InnerVolumeSpecName "kube-api-access-qvfhz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:53:47 crc kubenswrapper[4712]: I0130 17:53:47.048673 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qvfhz\" (UniqueName: \"kubernetes.io/projected/05eaea30-d33b-4173-a1a3-d5a52ea53da9-kube-api-access-qvfhz\") on node \"crc\" DevicePath \"\"" Jan 30 17:53:47 crc kubenswrapper[4712]: I0130 17:53:47.048702 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05eaea30-d33b-4173-a1a3-d5a52ea53da9-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:53:47 crc kubenswrapper[4712]: I0130 17:53:47.134558 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05eaea30-d33b-4173-a1a3-d5a52ea53da9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "05eaea30-d33b-4173-a1a3-d5a52ea53da9" (UID: "05eaea30-d33b-4173-a1a3-d5a52ea53da9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:53:47 crc kubenswrapper[4712]: I0130 17:53:47.149962 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05eaea30-d33b-4173-a1a3-d5a52ea53da9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:53:47 crc kubenswrapper[4712]: I0130 17:53:47.268097 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kpb2d" event={"ID":"05eaea30-d33b-4173-a1a3-d5a52ea53da9","Type":"ContainerDied","Data":"4710c4bbe76442ca042202827524611bd292f4dda6918f2090fc85001f4e0d0c"} Jan 30 17:53:47 crc kubenswrapper[4712]: I0130 17:53:47.268163 4712 scope.go:117] "RemoveContainer" containerID="ed50832769d7bc5e6e03993d5fe9c8d1737e3fb93172cec693284d1a3a0f6fc8" Jan 30 17:53:47 crc kubenswrapper[4712]: I0130 17:53:47.268399 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kpb2d" Jan 30 17:53:47 crc kubenswrapper[4712]: I0130 17:53:47.311052 4712 scope.go:117] "RemoveContainer" containerID="abc9c6b4c407bf05d8c0b7e048a4566e1b6f934ebdfc5684e76bda1ffbbbb53a" Jan 30 17:53:47 crc kubenswrapper[4712]: I0130 17:53:47.343609 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kpb2d"] Jan 30 17:53:47 crc kubenswrapper[4712]: I0130 17:53:47.352471 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kpb2d"] Jan 30 17:53:47 crc kubenswrapper[4712]: I0130 17:53:47.386417 4712 scope.go:117] "RemoveContainer" containerID="e1870d98edf84c1ea3ea3a1a8ae3e5ac81764991a56f98a2735934d679b914b2" Jan 30 17:53:47 crc kubenswrapper[4712]: I0130 17:53:47.820094 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05eaea30-d33b-4173-a1a3-d5a52ea53da9" path="/var/lib/kubelet/pods/05eaea30-d33b-4173-a1a3-d5a52ea53da9/volumes" Jan 30 17:54:36 crc kubenswrapper[4712]: I0130 17:54:36.274854 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:54:36 crc kubenswrapper[4712]: I0130 17:54:36.277296 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:55:06 crc kubenswrapper[4712]: I0130 17:55:06.271909 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:55:06 crc kubenswrapper[4712]: I0130 17:55:06.273516 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:55:36 crc kubenswrapper[4712]: I0130 17:55:36.271062 4712 patch_prober.go:28] 
interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 17:55:36 crc kubenswrapper[4712]: I0130 17:55:36.271650 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 17:55:36 crc kubenswrapper[4712]: I0130 17:55:36.273360 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" Jan 30 17:55:36 crc kubenswrapper[4712]: I0130 17:55:36.276257 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f960893331d2846f08481a57a0cdba5af49b5dd727b19f0e09fcdde7d00cb3df"} pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 17:55:36 crc kubenswrapper[4712]: I0130 17:55:36.276934 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" containerID="cri-o://f960893331d2846f08481a57a0cdba5af49b5dd727b19f0e09fcdde7d00cb3df" gracePeriod=600 Jan 30 17:55:36 crc kubenswrapper[4712]: E0130 17:55:36.418019 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:55:36 crc kubenswrapper[4712]: I0130 17:55:36.488775 4712 generic.go:334] "Generic (PLEG): container finished" podID="75ff6334-72a0-4748-bba6-0efb493c8033" containerID="f960893331d2846f08481a57a0cdba5af49b5dd727b19f0e09fcdde7d00cb3df" exitCode=0 Jan 30 17:55:36 crc kubenswrapper[4712]: I0130 17:55:36.488841 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerDied","Data":"f960893331d2846f08481a57a0cdba5af49b5dd727b19f0e09fcdde7d00cb3df"} Jan 30 17:55:36 crc kubenswrapper[4712]: I0130 17:55:36.492672 4712 scope.go:117] "RemoveContainer" containerID="efb507e91f0d9ac1411e759cd274d5a503bbd0bf68e4ea7c3dc57a196aeb75e0" Jan 30 17:55:36 crc kubenswrapper[4712]: I0130 17:55:36.493567 4712 scope.go:117] "RemoveContainer" containerID="f960893331d2846f08481a57a0cdba5af49b5dd727b19f0e09fcdde7d00cb3df" Jan 30 17:55:36 crc kubenswrapper[4712]: E0130 17:55:36.494277 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:55:47 crc kubenswrapper[4712]: I0130 17:55:47.800623 4712 scope.go:117] "RemoveContainer" containerID="f960893331d2846f08481a57a0cdba5af49b5dd727b19f0e09fcdde7d00cb3df" Jan 30 17:55:47 crc kubenswrapper[4712]: E0130 17:55:47.801351 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:55:59 crc kubenswrapper[4712]: I0130 17:55:59.799646 4712 scope.go:117] "RemoveContainer" containerID="f960893331d2846f08481a57a0cdba5af49b5dd727b19f0e09fcdde7d00cb3df" Jan 30 17:55:59 crc kubenswrapper[4712]: E0130 17:55:59.800492 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:56:13 crc kubenswrapper[4712]: I0130 17:56:13.806941 4712 scope.go:117] "RemoveContainer" containerID="f960893331d2846f08481a57a0cdba5af49b5dd727b19f0e09fcdde7d00cb3df" Jan 30 17:56:13 crc kubenswrapper[4712]: E0130 17:56:13.807865 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:56:26 crc kubenswrapper[4712]: I0130 17:56:26.800079 4712 scope.go:117] "RemoveContainer" containerID="f960893331d2846f08481a57a0cdba5af49b5dd727b19f0e09fcdde7d00cb3df" Jan 30 17:56:26 crc kubenswrapper[4712]: E0130 17:56:26.800922 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:56:33 crc kubenswrapper[4712]: I0130 17:56:33.587052 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8gb28"] Jan 30 17:56:33 crc kubenswrapper[4712]: E0130 17:56:33.589865 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05eaea30-d33b-4173-a1a3-d5a52ea53da9" containerName="extract-content" Jan 30 17:56:33 crc kubenswrapper[4712]: I0130 17:56:33.589890 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="05eaea30-d33b-4173-a1a3-d5a52ea53da9" containerName="extract-content" Jan 30 17:56:33 crc kubenswrapper[4712]: E0130 17:56:33.589910 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e6666c3-c8a2-4013-bd07-9500f11c6096" 
containerName="registry-server" Jan 30 17:56:33 crc kubenswrapper[4712]: I0130 17:56:33.589916 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e6666c3-c8a2-4013-bd07-9500f11c6096" containerName="registry-server" Jan 30 17:56:33 crc kubenswrapper[4712]: E0130 17:56:33.589924 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98fffc02-edac-49f2-8328-95d77acfa779" containerName="registry-server" Jan 30 17:56:33 crc kubenswrapper[4712]: I0130 17:56:33.589930 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="98fffc02-edac-49f2-8328-95d77acfa779" containerName="registry-server" Jan 30 17:56:33 crc kubenswrapper[4712]: E0130 17:56:33.589942 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05eaea30-d33b-4173-a1a3-d5a52ea53da9" containerName="extract-utilities" Jan 30 17:56:33 crc kubenswrapper[4712]: I0130 17:56:33.589948 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="05eaea30-d33b-4173-a1a3-d5a52ea53da9" containerName="extract-utilities" Jan 30 17:56:33 crc kubenswrapper[4712]: E0130 17:56:33.589959 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05eaea30-d33b-4173-a1a3-d5a52ea53da9" containerName="registry-server" Jan 30 17:56:33 crc kubenswrapper[4712]: I0130 17:56:33.589967 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="05eaea30-d33b-4173-a1a3-d5a52ea53da9" containerName="registry-server" Jan 30 17:56:33 crc kubenswrapper[4712]: E0130 17:56:33.589994 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e6666c3-c8a2-4013-bd07-9500f11c6096" containerName="extract-utilities" Jan 30 17:56:33 crc kubenswrapper[4712]: I0130 17:56:33.590001 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e6666c3-c8a2-4013-bd07-9500f11c6096" containerName="extract-utilities" Jan 30 17:56:33 crc kubenswrapper[4712]: E0130 17:56:33.590020 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98fffc02-edac-49f2-8328-95d77acfa779" containerName="extract-content" Jan 30 17:56:33 crc kubenswrapper[4712]: I0130 17:56:33.590025 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="98fffc02-edac-49f2-8328-95d77acfa779" containerName="extract-content" Jan 30 17:56:33 crc kubenswrapper[4712]: E0130 17:56:33.590032 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98fffc02-edac-49f2-8328-95d77acfa779" containerName="extract-utilities" Jan 30 17:56:33 crc kubenswrapper[4712]: I0130 17:56:33.590038 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="98fffc02-edac-49f2-8328-95d77acfa779" containerName="extract-utilities" Jan 30 17:56:33 crc kubenswrapper[4712]: E0130 17:56:33.590053 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e6666c3-c8a2-4013-bd07-9500f11c6096" containerName="extract-content" Jan 30 17:56:33 crc kubenswrapper[4712]: I0130 17:56:33.590059 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e6666c3-c8a2-4013-bd07-9500f11c6096" containerName="extract-content" Jan 30 17:56:33 crc kubenswrapper[4712]: I0130 17:56:33.591259 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e6666c3-c8a2-4013-bd07-9500f11c6096" containerName="registry-server" Jan 30 17:56:33 crc kubenswrapper[4712]: I0130 17:56:33.591612 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="05eaea30-d33b-4173-a1a3-d5a52ea53da9" containerName="registry-server" Jan 30 17:56:33 crc kubenswrapper[4712]: I0130 17:56:33.591648 4712 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="98fffc02-edac-49f2-8328-95d77acfa779" containerName="registry-server" Jan 30 17:56:33 crc kubenswrapper[4712]: I0130 17:56:33.597180 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8gb28" Jan 30 17:56:33 crc kubenswrapper[4712]: I0130 17:56:33.678656 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1476d67a-81b2-42ff-9b53-c402934ba5a6-catalog-content\") pod \"certified-operators-8gb28\" (UID: \"1476d67a-81b2-42ff-9b53-c402934ba5a6\") " pod="openshift-marketplace/certified-operators-8gb28" Jan 30 17:56:33 crc kubenswrapper[4712]: I0130 17:56:33.679026 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1476d67a-81b2-42ff-9b53-c402934ba5a6-utilities\") pod \"certified-operators-8gb28\" (UID: \"1476d67a-81b2-42ff-9b53-c402934ba5a6\") " pod="openshift-marketplace/certified-operators-8gb28" Jan 30 17:56:33 crc kubenswrapper[4712]: I0130 17:56:33.679187 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcfff\" (UniqueName: \"kubernetes.io/projected/1476d67a-81b2-42ff-9b53-c402934ba5a6-kube-api-access-dcfff\") pod \"certified-operators-8gb28\" (UID: \"1476d67a-81b2-42ff-9b53-c402934ba5a6\") " pod="openshift-marketplace/certified-operators-8gb28" Jan 30 17:56:33 crc kubenswrapper[4712]: I0130 17:56:33.688680 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8gb28"] Jan 30 17:56:33 crc kubenswrapper[4712]: I0130 17:56:33.780936 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1476d67a-81b2-42ff-9b53-c402934ba5a6-catalog-content\") pod \"certified-operators-8gb28\" (UID: \"1476d67a-81b2-42ff-9b53-c402934ba5a6\") " pod="openshift-marketplace/certified-operators-8gb28" Jan 30 17:56:33 crc kubenswrapper[4712]: I0130 17:56:33.781024 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1476d67a-81b2-42ff-9b53-c402934ba5a6-utilities\") pod \"certified-operators-8gb28\" (UID: \"1476d67a-81b2-42ff-9b53-c402934ba5a6\") " pod="openshift-marketplace/certified-operators-8gb28" Jan 30 17:56:33 crc kubenswrapper[4712]: I0130 17:56:33.781057 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dcfff\" (UniqueName: \"kubernetes.io/projected/1476d67a-81b2-42ff-9b53-c402934ba5a6-kube-api-access-dcfff\") pod \"certified-operators-8gb28\" (UID: \"1476d67a-81b2-42ff-9b53-c402934ba5a6\") " pod="openshift-marketplace/certified-operators-8gb28" Jan 30 17:56:33 crc kubenswrapper[4712]: I0130 17:56:33.787588 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1476d67a-81b2-42ff-9b53-c402934ba5a6-utilities\") pod \"certified-operators-8gb28\" (UID: \"1476d67a-81b2-42ff-9b53-c402934ba5a6\") " pod="openshift-marketplace/certified-operators-8gb28" Jan 30 17:56:33 crc kubenswrapper[4712]: I0130 17:56:33.787825 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1476d67a-81b2-42ff-9b53-c402934ba5a6-catalog-content\") pod \"certified-operators-8gb28\" (UID: 
\"1476d67a-81b2-42ff-9b53-c402934ba5a6\") " pod="openshift-marketplace/certified-operators-8gb28" Jan 30 17:56:33 crc kubenswrapper[4712]: I0130 17:56:33.830097 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dcfff\" (UniqueName: \"kubernetes.io/projected/1476d67a-81b2-42ff-9b53-c402934ba5a6-kube-api-access-dcfff\") pod \"certified-operators-8gb28\" (UID: \"1476d67a-81b2-42ff-9b53-c402934ba5a6\") " pod="openshift-marketplace/certified-operators-8gb28" Jan 30 17:56:33 crc kubenswrapper[4712]: I0130 17:56:33.929279 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8gb28" Jan 30 17:56:35 crc kubenswrapper[4712]: I0130 17:56:35.458564 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8gb28"] Jan 30 17:56:36 crc kubenswrapper[4712]: I0130 17:56:36.069174 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8gb28" event={"ID":"1476d67a-81b2-42ff-9b53-c402934ba5a6","Type":"ContainerStarted","Data":"6f3283f2131c20bc281ed246872b8666a5913b361ce3bb31525c8cdbb4ac763c"} Jan 30 17:56:37 crc kubenswrapper[4712]: I0130 17:56:37.078667 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8gb28" event={"ID":"1476d67a-81b2-42ff-9b53-c402934ba5a6","Type":"ContainerDied","Data":"c955eaf3bcbab829475b563811d1a645fad6e0e40cdffd9bc0256dd0deeb4ff7"} Jan 30 17:56:37 crc kubenswrapper[4712]: I0130 17:56:37.080403 4712 generic.go:334] "Generic (PLEG): container finished" podID="1476d67a-81b2-42ff-9b53-c402934ba5a6" containerID="c955eaf3bcbab829475b563811d1a645fad6e0e40cdffd9bc0256dd0deeb4ff7" exitCode=0 Jan 30 17:56:39 crc kubenswrapper[4712]: I0130 17:56:39.114377 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8gb28" event={"ID":"1476d67a-81b2-42ff-9b53-c402934ba5a6","Type":"ContainerStarted","Data":"caacd5b33502551f0e03e4b563005ae153f760374f39e028ca167d4ac1b5739c"} Jan 30 17:56:40 crc kubenswrapper[4712]: I0130 17:56:40.799572 4712 scope.go:117] "RemoveContainer" containerID="f960893331d2846f08481a57a0cdba5af49b5dd727b19f0e09fcdde7d00cb3df" Jan 30 17:56:40 crc kubenswrapper[4712]: E0130 17:56:40.800211 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:56:41 crc kubenswrapper[4712]: I0130 17:56:41.132141 4712 generic.go:334] "Generic (PLEG): container finished" podID="1476d67a-81b2-42ff-9b53-c402934ba5a6" containerID="caacd5b33502551f0e03e4b563005ae153f760374f39e028ca167d4ac1b5739c" exitCode=0 Jan 30 17:56:41 crc kubenswrapper[4712]: I0130 17:56:41.132214 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8gb28" event={"ID":"1476d67a-81b2-42ff-9b53-c402934ba5a6","Type":"ContainerDied","Data":"caacd5b33502551f0e03e4b563005ae153f760374f39e028ca167d4ac1b5739c"} Jan 30 17:56:41 crc kubenswrapper[4712]: I0130 17:56:41.135491 4712 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 17:56:42 crc kubenswrapper[4712]: I0130 
17:56:42.152455 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8gb28" event={"ID":"1476d67a-81b2-42ff-9b53-c402934ba5a6","Type":"ContainerStarted","Data":"da7d7416d8ce6a13b382c41274ba136230e2b8206e3ea078699bd5c82ae1f739"} Jan 30 17:56:42 crc kubenswrapper[4712]: I0130 17:56:42.184519 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8gb28" podStartSLOduration=4.550785342 podStartE2EDuration="9.183120541s" podCreationTimestamp="2026-01-30 17:56:33 +0000 UTC" firstStartedPulling="2026-01-30 17:56:37.081079372 +0000 UTC m=+3733.988088841" lastFinishedPulling="2026-01-30 17:56:41.713414571 +0000 UTC m=+3738.620424040" observedRunningTime="2026-01-30 17:56:42.18187909 +0000 UTC m=+3739.088888559" watchObservedRunningTime="2026-01-30 17:56:42.183120541 +0000 UTC m=+3739.090130020" Jan 30 17:56:43 crc kubenswrapper[4712]: I0130 17:56:43.930418 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8gb28" Jan 30 17:56:43 crc kubenswrapper[4712]: I0130 17:56:43.930780 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8gb28" Jan 30 17:56:45 crc kubenswrapper[4712]: I0130 17:56:45.024015 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8gb28" podUID="1476d67a-81b2-42ff-9b53-c402934ba5a6" containerName="registry-server" probeResult="failure" output=< Jan 30 17:56:45 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 17:56:45 crc kubenswrapper[4712]: > Jan 30 17:56:54 crc kubenswrapper[4712]: I0130 17:56:54.800204 4712 scope.go:117] "RemoveContainer" containerID="f960893331d2846f08481a57a0cdba5af49b5dd727b19f0e09fcdde7d00cb3df" Jan 30 17:56:54 crc kubenswrapper[4712]: E0130 17:56:54.802457 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:56:54 crc kubenswrapper[4712]: I0130 17:56:54.987226 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8gb28" podUID="1476d67a-81b2-42ff-9b53-c402934ba5a6" containerName="registry-server" probeResult="failure" output=< Jan 30 17:56:54 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 17:56:54 crc kubenswrapper[4712]: > Jan 30 17:57:04 crc kubenswrapper[4712]: I0130 17:57:04.031524 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8gb28" Jan 30 17:57:04 crc kubenswrapper[4712]: I0130 17:57:04.112230 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8gb28" Jan 30 17:57:04 crc kubenswrapper[4712]: I0130 17:57:04.777683 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8gb28"] Jan 30 17:57:05 crc kubenswrapper[4712]: I0130 17:57:05.359757 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8gb28" 
podUID="1476d67a-81b2-42ff-9b53-c402934ba5a6" containerName="registry-server" containerID="cri-o://da7d7416d8ce6a13b382c41274ba136230e2b8206e3ea078699bd5c82ae1f739" gracePeriod=2 Jan 30 17:57:06 crc kubenswrapper[4712]: I0130 17:57:06.368234 4712 generic.go:334] "Generic (PLEG): container finished" podID="1476d67a-81b2-42ff-9b53-c402934ba5a6" containerID="da7d7416d8ce6a13b382c41274ba136230e2b8206e3ea078699bd5c82ae1f739" exitCode=0 Jan 30 17:57:06 crc kubenswrapper[4712]: I0130 17:57:06.368300 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8gb28" event={"ID":"1476d67a-81b2-42ff-9b53-c402934ba5a6","Type":"ContainerDied","Data":"da7d7416d8ce6a13b382c41274ba136230e2b8206e3ea078699bd5c82ae1f739"} Jan 30 17:57:06 crc kubenswrapper[4712]: I0130 17:57:06.479853 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8gb28" Jan 30 17:57:06 crc kubenswrapper[4712]: I0130 17:57:06.679565 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1476d67a-81b2-42ff-9b53-c402934ba5a6-utilities\") pod \"1476d67a-81b2-42ff-9b53-c402934ba5a6\" (UID: \"1476d67a-81b2-42ff-9b53-c402934ba5a6\") " Jan 30 17:57:06 crc kubenswrapper[4712]: I0130 17:57:06.679652 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dcfff\" (UniqueName: \"kubernetes.io/projected/1476d67a-81b2-42ff-9b53-c402934ba5a6-kube-api-access-dcfff\") pod \"1476d67a-81b2-42ff-9b53-c402934ba5a6\" (UID: \"1476d67a-81b2-42ff-9b53-c402934ba5a6\") " Jan 30 17:57:06 crc kubenswrapper[4712]: I0130 17:57:06.679877 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1476d67a-81b2-42ff-9b53-c402934ba5a6-catalog-content\") pod \"1476d67a-81b2-42ff-9b53-c402934ba5a6\" (UID: \"1476d67a-81b2-42ff-9b53-c402934ba5a6\") " Jan 30 17:57:06 crc kubenswrapper[4712]: I0130 17:57:06.684431 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1476d67a-81b2-42ff-9b53-c402934ba5a6-utilities" (OuterVolumeSpecName: "utilities") pod "1476d67a-81b2-42ff-9b53-c402934ba5a6" (UID: "1476d67a-81b2-42ff-9b53-c402934ba5a6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:57:06 crc kubenswrapper[4712]: I0130 17:57:06.709240 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1476d67a-81b2-42ff-9b53-c402934ba5a6-kube-api-access-dcfff" (OuterVolumeSpecName: "kube-api-access-dcfff") pod "1476d67a-81b2-42ff-9b53-c402934ba5a6" (UID: "1476d67a-81b2-42ff-9b53-c402934ba5a6"). InnerVolumeSpecName "kube-api-access-dcfff". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 17:57:06 crc kubenswrapper[4712]: I0130 17:57:06.783209 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1476d67a-81b2-42ff-9b53-c402934ba5a6-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 17:57:06 crc kubenswrapper[4712]: I0130 17:57:06.783242 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dcfff\" (UniqueName: \"kubernetes.io/projected/1476d67a-81b2-42ff-9b53-c402934ba5a6-kube-api-access-dcfff\") on node \"crc\" DevicePath \"\"" Jan 30 17:57:06 crc kubenswrapper[4712]: I0130 17:57:06.799847 4712 scope.go:117] "RemoveContainer" containerID="f960893331d2846f08481a57a0cdba5af49b5dd727b19f0e09fcdde7d00cb3df" Jan 30 17:57:06 crc kubenswrapper[4712]: E0130 17:57:06.800415 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:57:06 crc kubenswrapper[4712]: I0130 17:57:06.810224 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1476d67a-81b2-42ff-9b53-c402934ba5a6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1476d67a-81b2-42ff-9b53-c402934ba5a6" (UID: "1476d67a-81b2-42ff-9b53-c402934ba5a6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 17:57:06 crc kubenswrapper[4712]: I0130 17:57:06.886555 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1476d67a-81b2-42ff-9b53-c402934ba5a6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 17:57:07 crc kubenswrapper[4712]: I0130 17:57:07.380195 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8gb28" event={"ID":"1476d67a-81b2-42ff-9b53-c402934ba5a6","Type":"ContainerDied","Data":"6f3283f2131c20bc281ed246872b8666a5913b361ce3bb31525c8cdbb4ac763c"} Jan 30 17:57:07 crc kubenswrapper[4712]: I0130 17:57:07.380254 4712 scope.go:117] "RemoveContainer" containerID="da7d7416d8ce6a13b382c41274ba136230e2b8206e3ea078699bd5c82ae1f739" Jan 30 17:57:07 crc kubenswrapper[4712]: I0130 17:57:07.380255 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8gb28" Jan 30 17:57:07 crc kubenswrapper[4712]: I0130 17:57:07.442750 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8gb28"] Jan 30 17:57:07 crc kubenswrapper[4712]: I0130 17:57:07.446182 4712 scope.go:117] "RemoveContainer" containerID="caacd5b33502551f0e03e4b563005ae153f760374f39e028ca167d4ac1b5739c" Jan 30 17:57:07 crc kubenswrapper[4712]: I0130 17:57:07.452472 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8gb28"] Jan 30 17:57:07 crc kubenswrapper[4712]: I0130 17:57:07.511037 4712 scope.go:117] "RemoveContainer" containerID="c955eaf3bcbab829475b563811d1a645fad6e0e40cdffd9bc0256dd0deeb4ff7" Jan 30 17:57:07 crc kubenswrapper[4712]: I0130 17:57:07.814479 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1476d67a-81b2-42ff-9b53-c402934ba5a6" path="/var/lib/kubelet/pods/1476d67a-81b2-42ff-9b53-c402934ba5a6/volumes" Jan 30 17:57:20 crc kubenswrapper[4712]: I0130 17:57:20.800283 4712 scope.go:117] "RemoveContainer" containerID="f960893331d2846f08481a57a0cdba5af49b5dd727b19f0e09fcdde7d00cb3df" Jan 30 17:57:20 crc kubenswrapper[4712]: E0130 17:57:20.803964 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:57:34 crc kubenswrapper[4712]: I0130 17:57:34.801309 4712 scope.go:117] "RemoveContainer" containerID="f960893331d2846f08481a57a0cdba5af49b5dd727b19f0e09fcdde7d00cb3df" Jan 30 17:57:34 crc kubenswrapper[4712]: E0130 17:57:34.802438 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:57:45 crc kubenswrapper[4712]: I0130 17:57:45.800159 4712 scope.go:117] "RemoveContainer" containerID="f960893331d2846f08481a57a0cdba5af49b5dd727b19f0e09fcdde7d00cb3df" Jan 30 17:57:45 crc kubenswrapper[4712]: E0130 17:57:45.800942 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:57:56 crc kubenswrapper[4712]: I0130 17:57:56.799702 4712 scope.go:117] "RemoveContainer" containerID="f960893331d2846f08481a57a0cdba5af49b5dd727b19f0e09fcdde7d00cb3df" Jan 30 17:57:56 crc kubenswrapper[4712]: E0130 17:57:56.801322 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:58:08 crc kubenswrapper[4712]: I0130 17:58:08.800640 4712 scope.go:117] "RemoveContainer" containerID="f960893331d2846f08481a57a0cdba5af49b5dd727b19f0e09fcdde7d00cb3df" Jan 30 17:58:08 crc kubenswrapper[4712]: E0130 17:58:08.802439 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:58:19 crc kubenswrapper[4712]: I0130 17:58:19.800636 4712 scope.go:117] "RemoveContainer" containerID="f960893331d2846f08481a57a0cdba5af49b5dd727b19f0e09fcdde7d00cb3df" Jan 30 17:58:19 crc kubenswrapper[4712]: E0130 17:58:19.801294 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:58:22 crc kubenswrapper[4712]: I0130 17:58:22.194959 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="776ccbe0-fd71-4c0d-877e-f0178e4c1262" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 30 17:58:27 crc kubenswrapper[4712]: I0130 17:58:27.165232 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="776ccbe0-fd71-4c0d-877e-f0178e4c1262" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 30 17:58:33 crc kubenswrapper[4712]: I0130 17:58:33.809540 4712 scope.go:117] "RemoveContainer" containerID="f960893331d2846f08481a57a0cdba5af49b5dd727b19f0e09fcdde7d00cb3df" Jan 30 17:58:33 crc kubenswrapper[4712]: E0130 17:58:33.810492 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:58:47 crc kubenswrapper[4712]: I0130 17:58:47.802287 4712 scope.go:117] "RemoveContainer" containerID="f960893331d2846f08481a57a0cdba5af49b5dd727b19f0e09fcdde7d00cb3df" Jan 30 17:58:47 crc kubenswrapper[4712]: E0130 17:58:47.803620 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:58:58 crc kubenswrapper[4712]: I0130 17:58:58.800614 4712 
scope.go:117] "RemoveContainer" containerID="f960893331d2846f08481a57a0cdba5af49b5dd727b19f0e09fcdde7d00cb3df" Jan 30 17:58:58 crc kubenswrapper[4712]: E0130 17:58:58.801724 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:59:09 crc kubenswrapper[4712]: I0130 17:59:09.799933 4712 scope.go:117] "RemoveContainer" containerID="f960893331d2846f08481a57a0cdba5af49b5dd727b19f0e09fcdde7d00cb3df" Jan 30 17:59:09 crc kubenswrapper[4712]: E0130 17:59:09.800605 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:59:24 crc kubenswrapper[4712]: I0130 17:59:24.808702 4712 scope.go:117] "RemoveContainer" containerID="f960893331d2846f08481a57a0cdba5af49b5dd727b19f0e09fcdde7d00cb3df" Jan 30 17:59:24 crc kubenswrapper[4712]: E0130 17:59:24.813120 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:59:35 crc kubenswrapper[4712]: I0130 17:59:35.803671 4712 scope.go:117] "RemoveContainer" containerID="f960893331d2846f08481a57a0cdba5af49b5dd727b19f0e09fcdde7d00cb3df" Jan 30 17:59:35 crc kubenswrapper[4712]: E0130 17:59:35.809575 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 17:59:48 crc kubenswrapper[4712]: I0130 17:59:48.800902 4712 scope.go:117] "RemoveContainer" containerID="f960893331d2846f08481a57a0cdba5af49b5dd727b19f0e09fcdde7d00cb3df" Jan 30 17:59:48 crc kubenswrapper[4712]: E0130 17:59:48.801891 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:00:01 crc kubenswrapper[4712]: I0130 18:00:01.276239 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496600-2z9cv"] Jan 30 18:00:01 crc kubenswrapper[4712]: E0130 18:00:01.282652 4712 
Jan 30 18:00:01 crc kubenswrapper[4712]: I0130 18:00:01.282685 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="1476d67a-81b2-42ff-9b53-c402934ba5a6" containerName="extract-content"
Jan 30 18:00:01 crc kubenswrapper[4712]: E0130 18:00:01.282747 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1476d67a-81b2-42ff-9b53-c402934ba5a6" containerName="extract-utilities"
Jan 30 18:00:01 crc kubenswrapper[4712]: I0130 18:00:01.282757 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="1476d67a-81b2-42ff-9b53-c402934ba5a6" containerName="extract-utilities"
Jan 30 18:00:01 crc kubenswrapper[4712]: E0130 18:00:01.282828 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1476d67a-81b2-42ff-9b53-c402934ba5a6" containerName="registry-server"
Jan 30 18:00:01 crc kubenswrapper[4712]: I0130 18:00:01.282838 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="1476d67a-81b2-42ff-9b53-c402934ba5a6" containerName="registry-server"
Jan 30 18:00:01 crc kubenswrapper[4712]: I0130 18:00:01.286214 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="1476d67a-81b2-42ff-9b53-c402934ba5a6" containerName="registry-server"
Jan 30 18:00:01 crc kubenswrapper[4712]: I0130 18:00:01.334465 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-2z9cv"
Jan 30 18:00:01 crc kubenswrapper[4712]: I0130 18:00:01.391729 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 30 18:00:01 crc kubenswrapper[4712]: I0130 18:00:01.391825 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 30 18:00:01 crc kubenswrapper[4712]: I0130 18:00:01.467690 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496600-2z9cv"]
Jan 30 18:00:01 crc kubenswrapper[4712]: I0130 18:00:01.472300 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cxmt\" (UniqueName: \"kubernetes.io/projected/ad51586a-58c7-4e2e-8098-9e58e9559c5c-kube-api-access-6cxmt\") pod \"collect-profiles-29496600-2z9cv\" (UID: \"ad51586a-58c7-4e2e-8098-9e58e9559c5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-2z9cv"
Jan 30 18:00:01 crc kubenswrapper[4712]: I0130 18:00:01.472456 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad51586a-58c7-4e2e-8098-9e58e9559c5c-secret-volume\") pod \"collect-profiles-29496600-2z9cv\" (UID: \"ad51586a-58c7-4e2e-8098-9e58e9559c5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-2z9cv"
Jan 30 18:00:01 crc kubenswrapper[4712]: I0130 18:00:01.472481 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad51586a-58c7-4e2e-8098-9e58e9559c5c-config-volume\") pod \"collect-profiles-29496600-2z9cv\" (UID: \"ad51586a-58c7-4e2e-8098-9e58e9559c5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-2z9cv"
Jan 30 18:00:01 crc kubenswrapper[4712]: I0130 18:00:01.574416 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad51586a-58c7-4e2e-8098-9e58e9559c5c-secret-volume\") pod \"collect-profiles-29496600-2z9cv\" (UID: \"ad51586a-58c7-4e2e-8098-9e58e9559c5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-2z9cv"
Jan 30 18:00:01 crc kubenswrapper[4712]: I0130 18:00:01.574460 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad51586a-58c7-4e2e-8098-9e58e9559c5c-config-volume\") pod \"collect-profiles-29496600-2z9cv\" (UID: \"ad51586a-58c7-4e2e-8098-9e58e9559c5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-2z9cv"
Jan 30 18:00:01 crc kubenswrapper[4712]: I0130 18:00:01.574576 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6cxmt\" (UniqueName: \"kubernetes.io/projected/ad51586a-58c7-4e2e-8098-9e58e9559c5c-kube-api-access-6cxmt\") pod \"collect-profiles-29496600-2z9cv\" (UID: \"ad51586a-58c7-4e2e-8098-9e58e9559c5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-2z9cv"
Jan 30 18:00:01 crc kubenswrapper[4712]: I0130 18:00:01.594057 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad51586a-58c7-4e2e-8098-9e58e9559c5c-config-volume\") pod \"collect-profiles-29496600-2z9cv\" (UID: \"ad51586a-58c7-4e2e-8098-9e58e9559c5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-2z9cv"
Jan 30 18:00:01 crc kubenswrapper[4712]: I0130 18:00:01.650422 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cxmt\" (UniqueName: \"kubernetes.io/projected/ad51586a-58c7-4e2e-8098-9e58e9559c5c-kube-api-access-6cxmt\") pod \"collect-profiles-29496600-2z9cv\" (UID: \"ad51586a-58c7-4e2e-8098-9e58e9559c5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-2z9cv"
Jan 30 18:00:01 crc kubenswrapper[4712]: I0130 18:00:01.653264 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad51586a-58c7-4e2e-8098-9e58e9559c5c-secret-volume\") pod \"collect-profiles-29496600-2z9cv\" (UID: \"ad51586a-58c7-4e2e-8098-9e58e9559c5c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-2z9cv"
Jan 30 18:00:01 crc kubenswrapper[4712]: I0130 18:00:01.805627 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-2z9cv"
Jan 30 18:00:02 crc kubenswrapper[4712]: I0130 18:00:02.800706 4712 scope.go:117] "RemoveContainer" containerID="f960893331d2846f08481a57a0cdba5af49b5dd727b19f0e09fcdde7d00cb3df"
Jan 30 18:00:02 crc kubenswrapper[4712]: E0130 18:00:02.802619 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 18:00:03 crc kubenswrapper[4712]: I0130 18:00:03.686535 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496600-2z9cv"]
Jan 30 18:00:04 crc kubenswrapper[4712]: I0130 18:00:04.356886 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-2z9cv" event={"ID":"ad51586a-58c7-4e2e-8098-9e58e9559c5c","Type":"ContainerStarted","Data":"340e116b884767f98ef42952e9088e368ff9023cf652523a9cf66aa46a832c2f"}
Jan 30 18:00:04 crc kubenswrapper[4712]: I0130 18:00:04.357220 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-2z9cv" event={"ID":"ad51586a-58c7-4e2e-8098-9e58e9559c5c","Type":"ContainerStarted","Data":"f3b4215d24f0dd2e0465709b829fd25088b0585ee6260af6c7edd84a7aa63082"}
Jan 30 18:00:04 crc kubenswrapper[4712]: I0130 18:00:04.373957 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-2z9cv" podStartSLOduration=4.372653972 podStartE2EDuration="4.372653972s" podCreationTimestamp="2026-01-30 18:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 18:00:04.372083438 +0000 UTC m=+3941.279092907" watchObservedRunningTime="2026-01-30 18:00:04.372653972 +0000 UTC m=+3941.279663441"
Jan 30 18:00:07 crc kubenswrapper[4712]: I0130 18:00:07.163898 4712 patch_prober.go:28] interesting pod/controller-manager-7854896cc8-wc7q4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 18:00:07 crc kubenswrapper[4712]: I0130 18:00:07.167181 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7854896cc8-wc7q4" podUID="48377da3-e59b-4d8e-96df-e71697486469" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:07 crc kubenswrapper[4712]: I0130 18:00:07.163952 4712 patch_prober.go:28] interesting pod/controller-manager-7854896cc8-wc7q4 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 18:00:07 crc kubenswrapper[4712]: I0130 18:00:07.167867 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-7854896cc8-wc7q4" podUID="48377da3-e59b-4d8e-96df-e71697486469" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:07 crc kubenswrapper[4712]: I0130 18:00:07.404516 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-2z9cv" event={"ID":"ad51586a-58c7-4e2e-8098-9e58e9559c5c","Type":"ContainerDied","Data":"340e116b884767f98ef42952e9088e368ff9023cf652523a9cf66aa46a832c2f"}
Jan 30 18:00:07 crc kubenswrapper[4712]: I0130 18:00:07.404129 4712 generic.go:334] "Generic (PLEG): container finished" podID="ad51586a-58c7-4e2e-8098-9e58e9559c5c" containerID="340e116b884767f98ef42952e9088e368ff9023cf652523a9cf66aa46a832c2f" exitCode=0
Jan 30 18:00:07 crc kubenswrapper[4712]: I0130 18:00:07.449120 4712 patch_prober.go:28] interesting pod/route-controller-manager-7449c76d86-5ljsq container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 18:00:07 crc kubenswrapper[4712]: I0130 18:00:07.449163 4712 patch_prober.go:28] interesting pod/route-controller-manager-7449c76d86-5ljsq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 18:00:07 crc kubenswrapper[4712]: I0130 18:00:07.449241 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7449c76d86-5ljsq" podUID="18f1f168-60eb-4666-9d2f-7455021a946c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:07 crc kubenswrapper[4712]: I0130 18:00:07.449181 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-7449c76d86-5ljsq" podUID="18f1f168-60eb-4666-9d2f-7455021a946c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:08 crc kubenswrapper[4712]: I0130 18:00:08.033049 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-j9bpz" podUID="7d1e2433-a99b-4b29-8f58-e21a7745d1d9" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:08 crc kubenswrapper[4712]: I0130 18:00:08.154090 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="a12f0a95-1db0-4dd9-993c-1413c0fa10b0" containerName="galera" probeResult="failure" output="command timed out"
Jan 30 18:00:08 crc kubenswrapper[4712]: I0130 18:00:08.154612 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="a12f0a95-1db0-4dd9-993c-1413c0fa10b0" containerName="galera" probeResult="failure" output="command timed out"
Jan 30 18:00:12 crc kubenswrapper[4712]: I0130 18:00:12.747810 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-2z9cv"
Jan 30 18:00:12 crc kubenswrapper[4712]: I0130 18:00:12.808305 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad51586a-58c7-4e2e-8098-9e58e9559c5c-secret-volume\") pod \"ad51586a-58c7-4e2e-8098-9e58e9559c5c\" (UID: \"ad51586a-58c7-4e2e-8098-9e58e9559c5c\") "
Jan 30 18:00:12 crc kubenswrapper[4712]: I0130 18:00:12.808365 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cxmt\" (UniqueName: \"kubernetes.io/projected/ad51586a-58c7-4e2e-8098-9e58e9559c5c-kube-api-access-6cxmt\") pod \"ad51586a-58c7-4e2e-8098-9e58e9559c5c\" (UID: \"ad51586a-58c7-4e2e-8098-9e58e9559c5c\") "
Jan 30 18:00:12 crc kubenswrapper[4712]: I0130 18:00:12.808428 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad51586a-58c7-4e2e-8098-9e58e9559c5c-config-volume\") pod \"ad51586a-58c7-4e2e-8098-9e58e9559c5c\" (UID: \"ad51586a-58c7-4e2e-8098-9e58e9559c5c\") "
Jan 30 18:00:12 crc kubenswrapper[4712]: I0130 18:00:12.814954 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad51586a-58c7-4e2e-8098-9e58e9559c5c-config-volume" (OuterVolumeSpecName: "config-volume") pod "ad51586a-58c7-4e2e-8098-9e58e9559c5c" (UID: "ad51586a-58c7-4e2e-8098-9e58e9559c5c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 18:00:12 crc kubenswrapper[4712]: I0130 18:00:12.839602 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ad51586a-58c7-4e2e-8098-9e58e9559c5c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ad51586a-58c7-4e2e-8098-9e58e9559c5c" (UID: "ad51586a-58c7-4e2e-8098-9e58e9559c5c"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 18:00:12 crc kubenswrapper[4712]: I0130 18:00:12.847231 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad51586a-58c7-4e2e-8098-9e58e9559c5c-kube-api-access-6cxmt" (OuterVolumeSpecName: "kube-api-access-6cxmt") pod "ad51586a-58c7-4e2e-8098-9e58e9559c5c" (UID: "ad51586a-58c7-4e2e-8098-9e58e9559c5c"). InnerVolumeSpecName "kube-api-access-6cxmt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 18:00:12 crc kubenswrapper[4712]: I0130 18:00:12.911278 4712 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ad51586a-58c7-4e2e-8098-9e58e9559c5c-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 30 18:00:12 crc kubenswrapper[4712]: I0130 18:00:12.911306 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cxmt\" (UniqueName: \"kubernetes.io/projected/ad51586a-58c7-4e2e-8098-9e58e9559c5c-kube-api-access-6cxmt\") on node \"crc\" DevicePath \"\""
Jan 30 18:00:12 crc kubenswrapper[4712]: I0130 18:00:12.911316 4712 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad51586a-58c7-4e2e-8098-9e58e9559c5c-config-volume\") on node \"crc\" DevicePath \"\""
Jan 30 18:00:13 crc kubenswrapper[4712]: I0130 18:00:13.456432 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-2z9cv"
Jan 30 18:00:13 crc kubenswrapper[4712]: I0130 18:00:13.459690 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496600-2z9cv" event={"ID":"ad51586a-58c7-4e2e-8098-9e58e9559c5c","Type":"ContainerDied","Data":"f3b4215d24f0dd2e0465709b829fd25088b0585ee6260af6c7edd84a7aa63082"}
Jan 30 18:00:13 crc kubenswrapper[4712]: I0130 18:00:13.461839 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3b4215d24f0dd2e0465709b829fd25088b0585ee6260af6c7edd84a7aa63082"
Jan 30 18:00:16 crc kubenswrapper[4712]: I0130 18:00:16.149608 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-tfxdt" podUID="2bc54d51-4f21-479f-a89e-1c60a757433f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.57:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:16 crc kubenswrapper[4712]: I0130 18:00:16.235128 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-xfmvz" podUID="e1a1d497-2276-4248-9bca-1c7038430933" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.58:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:16 crc kubenswrapper[4712]: I0130 18:00:16.276019 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lqxpc" podUID="6e263552-c0f6-4f24-879f-79895cdbc953" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.61:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:16 crc kubenswrapper[4712]: I0130 18:00:16.404869 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-l62x6" podUID="d3b1d20e-d20c-40f9-9c2b-314aee2fe51e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:16 crc kubenswrapper[4712]: I0130 18:00:16.576958 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-jkjdt" podUID="cc62b7c7-5521-41df-bf10-d9cc287fbf7f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:16 crc kubenswrapper[4712]: I0130 18:00:16.576996 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-wp89m" podUID="c8354464-6e92-4961-833a-414efe43db13" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.80:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:16 crc kubenswrapper[4712]: I0130 18:00:16.619054 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-7pr55" podUID="b3222b74-686d-4b44-b521-33fb24c0b403" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:16 crc kubenswrapper[4712]: I0130 18:00:16.702005 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-5884d87984-t6bbn" podUID="16cf8838-73f4-4b47-a0a5-0258974c49db" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.55:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:16 crc kubenswrapper[4712]: I0130 18:00:16.702347 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-kj9k8" podUID="1abbe42a-dbb1-4ec5-8318-451adc608b2b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:16 crc kubenswrapper[4712]: I0130 18:00:16.800637 4712 scope.go:117] "RemoveContainer" containerID="f960893331d2846f08481a57a0cdba5af49b5dd727b19f0e09fcdde7d00cb3df"
Jan 30 18:00:16 crc kubenswrapper[4712]: E0130 18:00:16.801774 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 18:00:16 crc kubenswrapper[4712]: I0130 18:00:16.998981 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-4l4j7" podUID="adbd0e89-e0e3-46eb-b2c5-4482cc71deae" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:17 crc kubenswrapper[4712]: I0130 18:00:17.004430 4712 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-dg9bq container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.30:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 18:00:17 crc kubenswrapper[4712]: I0130 18:00:17.004422 4712 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-dg9bq container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 18:00:17 crc kubenswrapper[4712]: I0130 18:00:17.004952 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dg9bq" podUID="d9fce980-8342-4614-8cfe-c8757df49d74" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.30:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:17 crc kubenswrapper[4712]: I0130 18:00:17.004969 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dg9bq" podUID="d9fce980-8342-4614-8cfe-c8757df49d74" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.30:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:17 crc kubenswrapper[4712]: I0130 18:00:17.055963 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rfmgz" podUID="6c041737-6e32-468d-aba7-469207eab526" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:17 crc kubenswrapper[4712]: I0130 18:00:17.082067 4712 patch_prober.go:28] interesting pod/controller-manager-7854896cc8-wc7q4 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 18:00:17 crc kubenswrapper[4712]: I0130 18:00:17.082207 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-7854896cc8-wc7q4" podUID="48377da3-e59b-4d8e-96df-e71697486469" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:17 crc kubenswrapper[4712]: I0130 18:00:17.082049 4712 patch_prober.go:28] interesting pod/controller-manager-7854896cc8-wc7q4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 18:00:17 crc kubenswrapper[4712]: I0130 18:00:17.082516 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7854896cc8-wc7q4" podUID="48377da3-e59b-4d8e-96df-e71697486469" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:17 crc kubenswrapper[4712]: I0130 18:00:17.236981 4712 patch_prober.go:28] interesting pod/router-default-5444994796-qncbs container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 18:00:17 crc kubenswrapper[4712]: I0130 18:00:17.237004 4712 patch_prober.go:28] interesting pod/router-default-5444994796-qncbs container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 18:00:17 crc kubenswrapper[4712]: I0130 18:00:17.237602 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-qncbs" podUID="884f5245-fc6d-42b5-83c2-e3373788e91b" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:17 crc kubenswrapper[4712]: I0130 18:00:17.237546 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-qncbs" podUID="884f5245-fc6d-42b5-83c2-e3373788e91b" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:17 crc kubenswrapper[4712]: I0130 18:00:17.259613 4712 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-swvjp container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 18:00:17 crc kubenswrapper[4712]: I0130 18:00:17.260633 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-swvjp" podUID="16d2b99c-7fc4-4d10-8ebc-1e726485e354" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:17 crc kubenswrapper[4712]: I0130 18:00:17.259638 4712 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-swvjp container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 18:00:17 crc kubenswrapper[4712]: I0130 18:00:17.260902 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-swvjp" podUID="16d2b99c-7fc4-4d10-8ebc-1e726485e354" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:17 crc kubenswrapper[4712]: I0130 18:00:17.322014 4712 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-8m9br container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 18:00:17 crc kubenswrapper[4712]: I0130 18:00:17.322083 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8m9br" podUID="fd5b1abd-3085-42f2-94a1-a9f06129017c" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:17 crc kubenswrapper[4712]: I0130 18:00:17.322275 4712 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-8m9br container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 18:00:17 crc kubenswrapper[4712]: I0130 18:00:17.322298 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8m9br" podUID="fd5b1abd-3085-42f2-94a1-a9f06129017c" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:17 crc kubenswrapper[4712]: I0130 18:00:17.430971 4712 patch_prober.go:28] interesting pod/route-controller-manager-7449c76d86-5ljsq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 18:00:17 crc kubenswrapper[4712]: I0130 18:00:17.431025 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7449c76d86-5ljsq" podUID="18f1f168-60eb-4666-9d2f-7455021a946c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:17 crc kubenswrapper[4712]: I0130 18:00:17.431046 4712 patch_prober.go:28] interesting pod/route-controller-manager-7449c76d86-5ljsq container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 18:00:17 crc kubenswrapper[4712]: I0130 18:00:17.431076 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-58dccfbb96-pxb54" podUID="5fe7be15-f524-46c1-ba58-e2d8ccd001c0" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.47:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:17 crc kubenswrapper[4712]: I0130 18:00:17.431112 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-7449c76d86-5ljsq" podUID="18f1f168-60eb-4666-9d2f-7455021a946c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:17 crc kubenswrapper[4712]: I0130 18:00:17.430971 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-58dccfbb96-pxb54" podUID="5fe7be15-f524-46c1-ba58-e2d8ccd001c0" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.47:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:17 crc kubenswrapper[4712]: I0130 18:00:17.869491 4712 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-k4mgv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 18:00:17 crc kubenswrapper[4712]: I0130 18:00:17.869521 4712 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-k4mgv container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 18:00:17 crc kubenswrapper[4712]: I0130 18:00:17.869564 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-k4mgv" podUID="f757484a-48c2-4b6e-9a6b-1e01fe951ae5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:17 crc kubenswrapper[4712]: I0130 18:00:17.869582 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-k4mgv" podUID="f757484a-48c2-4b6e-9a6b-1e01fe951ae5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:18 crc kubenswrapper[4712]: I0130 18:00:18.098981 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-j9bpz" podUID="7d1e2433-a99b-4b29-8f58-e21a7745d1d9" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:18 crc kubenswrapper[4712]: I0130 18:00:18.098978 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-j9bpz" podUID="7d1e2433-a99b-4b29-8f58-e21a7745d1d9" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:18 crc kubenswrapper[4712]: I0130 18:00:18.098978 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-j9bpz" podUID="7d1e2433-a99b-4b29-8f58-e21a7745d1d9" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:18 crc kubenswrapper[4712]: I0130 18:00:18.155755 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="a12f0a95-1db0-4dd9-993c-1413c0fa10b0" containerName="galera" probeResult="failure" output="command timed out"
Jan 30 18:00:18 crc kubenswrapper[4712]: I0130 18:00:18.174625 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="a12f0a95-1db0-4dd9-993c-1413c0fa10b0" containerName="galera" probeResult="failure" output="command timed out"
Jan 30 18:00:18 crc kubenswrapper[4712]: I0130 18:00:18.181961 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-zg4sq" podUID="36edfc17-99ca-4e05-bf92-d60315860caf" containerName="registry-server" probeResult="failure" output=<
Jan 30 18:00:18 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 18:00:18 crc kubenswrapper[4712]: >
Jan 30 18:00:18 crc kubenswrapper[4712]: I0130 18:00:18.184151 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-zg4sq" podUID="36edfc17-99ca-4e05-bf92-d60315860caf" containerName="registry-server" probeResult="failure" output=<
Jan 30 18:00:18 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 18:00:18 crc kubenswrapper[4712]: >
Jan 30 18:00:20 crc kubenswrapper[4712]: I0130 18:00:20.105354 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-x5k4p" podUID="8610a2e0-98ae-41e2-80a0-c66d693024a0" containerName="registry-server" probeResult="failure" output=<
Jan 30 18:00:20 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 18:00:20 crc kubenswrapper[4712]: >
Jan 30 18:00:20 crc kubenswrapper[4712]: I0130 18:00:20.108046 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-x5k4p" podUID="8610a2e0-98ae-41e2-80a0-c66d693024a0" containerName="registry-server" probeResult="failure" output=<
Jan 30 18:00:20 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 18:00:20 crc kubenswrapper[4712]: >
Jan 30 18:00:20 crc kubenswrapper[4712]: I0130 18:00:20.164200 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="e0e4667e-8702-43ae-b7b7-1aa930f9a3c3" containerName="galera" probeResult="failure" output="command timed out"
Jan 30 18:00:20 crc kubenswrapper[4712]: I0130 18:00:20.164287 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="e0e4667e-8702-43ae-b7b7-1aa930f9a3c3" containerName="galera" probeResult="failure" output="command timed out"
Jan 30 18:00:20 crc kubenswrapper[4712]: I0130 18:00:20.289592 4712 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 18:00:20 crc kubenswrapper[4712]: I0130 18:00:20.289650 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:20 crc kubenswrapper[4712]: I0130 18:00:20.723132 4712 patch_prober.go:28] interesting pod/console-69cbc76644-s6m92 container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.44:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 18:00:20 crc kubenswrapper[4712]: I0130 18:00:20.723207 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-69cbc76644-s6m92" podUID="21b48c74-811e-46ec-a7f4-dbc7702008bf" containerName="console" probeResult="failure" output="Get \"https://10.217.0.44:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:20 crc kubenswrapper[4712]: I0130 18:00:20.843311 4712 patch_prober.go:28] interesting pod/nmstate-webhook-8474b5b9d8-wg6ft container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.38:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 18:00:20 crc kubenswrapper[4712]: I0130 18:00:20.843385 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wg6ft" podUID="32b6f6bb-fadc-43d5-9046-f2ee1a93d325" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.38:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:21 crc kubenswrapper[4712]: I0130 18:00:21.280011 4712 patch_prober.go:28] interesting pod/oauth-openshift-544b887855-ts8md container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 18:00:21 crc kubenswrapper[4712]: I0130 18:00:21.280025 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-2xxnh" podUID="b8cf7519-5513-43e8-98bb-b81e8d7c65e3" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.73:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:21 crc kubenswrapper[4712]: I0130 18:00:21.280077 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-544b887855-ts8md" podUID="385118bd-7569-4940-89a0-ac41cf3395a2" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:21 crc kubenswrapper[4712]: I0130 18:00:21.280103 4712 patch_prober.go:28] interesting pod/oauth-openshift-544b887855-ts8md container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 18:00:21 crc kubenswrapper[4712]: I0130 18:00:21.280144 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-544b887855-ts8md" podUID="385118bd-7569-4940-89a0-ac41cf3395a2" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:21 crc kubenswrapper[4712]: I0130 18:00:21.280196 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-2xxnh" podUID="b8cf7519-5513-43e8-98bb-b81e8d7c65e3" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.73:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:22 crc kubenswrapper[4712]: I0130 18:00:22.006072 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79955696d6-lwlhf" podUID="7b99459b-9311-4260-be34-3de859c1e0b0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:22 crc kubenswrapper[4712]: I0130 18:00:22.006072 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-79955696d6-lwlhf" podUID="7b99459b-9311-4260-be34-3de859c1e0b0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:22 crc kubenswrapper[4712]: I0130 18:00:22.159522 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="776ccbe0-fd71-4c0d-877e-f0178e4c1262" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out"
Jan 30 18:00:22 crc kubenswrapper[4712]: I0130 18:00:22.365000 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drtdz2" podUID="d4821c16-36e6-43c6-91f1-5fdf29b5b88a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:22 crc kubenswrapper[4712]: I0130 18:00:22.365064 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4drtdz2" podUID="d4821c16-36e6-43c6-91f1-5fdf29b5b88a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.84:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:22 crc kubenswrapper[4712]: I0130 18:00:22.542239 4712 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6lnp9 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 18:00:22 crc kubenswrapper[4712]: I0130 18:00:22.542313 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" podUID="a5836457-3db5-41ec-b036-057186d44de8" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:22 crc kubenswrapper[4712]: I0130 18:00:22.542400 4712 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6lnp9 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": context deadline exceeded" start-of-body=
Jan 30 18:00:22 crc kubenswrapper[4712]: I0130 18:00:22.542463 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" podUID="a5836457-3db5-41ec-b036-057186d44de8" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": context deadline exceeded"
Jan 30 18:00:22 crc kubenswrapper[4712]: I0130 18:00:22.872499 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-7h5tl" podUID="d631ea54-82a0-4985-bfe7-776d4764e85e" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 18:00:24 crc kubenswrapper[4712]: I0130 18:00:24.592282 4712 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-gzvld container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 18:00:24 crc kubenswrapper[4712]: I0130 18:00:24.592738 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-gzvld" podUID="29e89539-b787-4a7e-a75a-9dd9216b3649" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:24 crc kubenswrapper[4712]: I0130 18:00:24.707013 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-fp9sk" podUID="240ba5c6-eb36-4da8-913a-f2b61d13293b" containerName="registry-server" probeResult="failure" output=<
Jan 30 18:00:24 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 18:00:24 crc kubenswrapper[4712]: >
Jan 30 18:00:24 crc kubenswrapper[4712]: I0130 18:00:24.736311 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496555-8dxgd"]
Jan 30 18:00:24 crc kubenswrapper[4712]: I0130 18:00:24.779895 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496555-8dxgd"]
Jan 30 18:00:24 crc kubenswrapper[4712]: I0130 18:00:24.983850 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-fp9sk" podUID="240ba5c6-eb36-4da8-913a-f2b61d13293b" containerName="registry-server" probeResult="failure" output=<
Jan 30 18:00:24 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 18:00:24 crc kubenswrapper[4712]: >
Jan 30 18:00:25 crc kubenswrapper[4712]: I0130 18:00:25.357026 4712 patch_prober.go:28] interesting pod/console-operator-58897d9998-t468b container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 18:00:25 crc kubenswrapper[4712]: I0130 18:00:25.357095 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-t468b" podUID="76eb6c29-c75b-4e3a-9c21-04b0a6080fe8" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:25 crc kubenswrapper[4712]: I0130 18:00:25.357368 4712 patch_prober.go:28] interesting pod/console-operator-58897d9998-t468b container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 18:00:25 crc kubenswrapper[4712]: I0130 18:00:25.357388 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-t468b" podUID="76eb6c29-c75b-4e3a-9c21-04b0a6080fe8" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:25 crc kubenswrapper[4712]: I0130 18:00:25.423133 4712 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 18:00:25 crc kubenswrapper[4712]: I0130 18:00:25.423239 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:25 crc kubenswrapper[4712]: I0130 18:00:25.623938 4712 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6lnp9 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 18:00:25 crc kubenswrapper[4712]: I0130 18:00:25.623998 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" podUID="a5836457-3db5-41ec-b036-057186d44de8" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:25 crc kubenswrapper[4712]: I0130 18:00:25.623938 4712 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6lnp9 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 18:00:25 crc kubenswrapper[4712]: I0130 18:00:25.624266 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" podUID="a5836457-3db5-41ec-b036-057186d44de8" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:25 crc kubenswrapper[4712]: I0130 18:00:25.835943 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7b9ab52-8e89-454b-95d3-bd12c0f96ebb" path="/var/lib/kubelet/pods/f7b9ab52-8e89-454b-95d3-bd12c0f96ebb/volumes"
Jan 30 18:00:26 crc kubenswrapper[4712]: I0130 18:00:26.329051 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-xbk9b" podUID="5ccbb7b6-e489-4676-8faa-8a0306776a54" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:26 crc kubenswrapper[4712]: I0130 18:00:26.329185 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-xbk9b" podUID="5ccbb7b6-e489-4676-8faa-8a0306776a54" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:26 crc kubenswrapper[4712]: I0130 18:00:26.381183 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-zg4sq" podUID="36edfc17-99ca-4e05-bf92-d60315860caf" containerName="registry-server" probeResult="failure" output=<
Jan 30 18:00:26 crc kubenswrapper[4712]: timeout: health rpc did not complete within 1s
Jan 30 18:00:26 crc kubenswrapper[4712]: >
Jan 30 18:00:26 crc kubenswrapper[4712]: I0130 18:00:26.383347 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-zg4sq" podUID="36edfc17-99ca-4e05-bf92-d60315860caf" containerName="registry-server" probeResult="failure" output=<
Jan 30 18:00:26 crc kubenswrapper[4712]: timeout: health rpc did not complete within 1s
Jan 30 18:00:26 crc kubenswrapper[4712]: >
Jan 30 18:00:26 crc kubenswrapper[4712]: I0130 18:00:26.404019 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-l62x6" podUID="d3b1d20e-d20c-40f9-9c2b-314aee2fe51e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:26 crc kubenswrapper[4712]: I0130 18:00:26.404034 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-l62x6" podUID="d3b1d20e-d20c-40f9-9c2b-314aee2fe51e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:26 crc kubenswrapper[4712]: I0130 18:00:26.657011 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-jkjdt" podUID="cc62b7c7-5521-41df-bf10-d9cc287fbf7f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:26 crc kubenswrapper[4712]: I0130 18:00:26.657025 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-jkjdt" podUID="cc62b7c7-5521-41df-bf10-d9cc287fbf7f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.74:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:26 crc kubenswrapper[4712]: I0130 18:00:26.739199 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2n8cf" podUID="957cefd9-5116-40c3-aaf4-67ba58319ca1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:26 crc kubenswrapper[4712]: I0130 18:00:26.739278 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2n8cf" podUID="957cefd9-5116-40c3-aaf4-67ba58319ca1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:26 crc kubenswrapper[4712]: I0130 18:00:26.822073 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-5884d87984-t6bbn" podUID="16cf8838-73f4-4b47-a0a5-0258974c49db" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.55:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:26 crc kubenswrapper[4712]: I0130 18:00:26.822118 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-7pr55" podUID="b3222b74-686d-4b44-b521-33fb24c0b403" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:26 crc kubenswrapper[4712]: I0130 18:00:26.822139 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-7pr55" podUID="b3222b74-686d-4b44-b521-33fb24c0b403" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:26 crc kubenswrapper[4712]: I0130 18:00:26.822167 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-5884d87984-t6bbn" podUID="16cf8838-73f4-4b47-a0a5-0258974c49db" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.55:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.176010 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2x2xt" podUID="d37f95a0-af87-4727-83a4-aa6334b0759e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.176168 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-564965969-f4h96" podUID="f0e6edc2-9ad5-44a9-8737-78cfd077f9b1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.90:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.177718 4712 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-dg9bq container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.177753 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dg9bq" podUID="d9fce980-8342-4614-8cfe-c8757df49d74" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.30:5443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.177853 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2x2xt" podUID="d37f95a0-af87-4727-83a4-aa6334b0759e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.177970 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-78v95" podUID="a1f37d35-d806-4c98-bdc5-85163d1b180c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.179689 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-78v95" podUID="a1f37d35-d806-4c98-bdc5-85163d1b180c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/healthz\": context deadline exceeded
(Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.179945 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rfmgz" podUID="6c041737-6e32-468d-aba7-469207eab526" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.180113 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-564965969-f4h96" podUID="f0e6edc2-9ad5-44a9-8737-78cfd077f9b1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.90:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.189012 4712 patch_prober.go:28] interesting pod/controller-manager-7854896cc8-wc7q4 container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.189030 4712 patch_prober.go:28] interesting pod/controller-manager-7854896cc8-wc7q4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.189098 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7854896cc8-wc7q4" podUID="48377da3-e59b-4d8e-96df-e71697486469" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.189020 4712 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-dg9bq container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.30:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.189196 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dg9bq" podUID="d9fce980-8342-4614-8cfe-c8757df49d74" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.30:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.192945 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="776ccbe0-fd71-4c0d-877e-f0178e4c1262" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.189077 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-7854896cc8-wc7q4" podUID="48377da3-e59b-4d8e-96df-e71697486469" containerName="controller-manager" 
probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.198935 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-7854896cc8-wc7q4" Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.224315 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller-manager" containerStatusID={"Type":"cri-o","ID":"05f6854f90ffa10a27ff5351f9fa3c08a2daedb83745bb726fd7c092aaf91363"} pod="openshift-controller-manager/controller-manager-7854896cc8-wc7q4" containerMessage="Container controller-manager failed liveness probe, will be restarted" Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.227530 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7854896cc8-wc7q4" podUID="48377da3-e59b-4d8e-96df-e71697486469" containerName="controller-manager" containerID="cri-o://05f6854f90ffa10a27ff5351f9fa3c08a2daedb83745bb726fd7c092aaf91363" gracePeriod=30 Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.276954 4712 patch_prober.go:28] interesting pod/router-default-5444994796-qncbs container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.276998 4712 patch_prober.go:28] interesting pod/router-default-5444994796-qncbs container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.277012 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-qncbs" podUID="884f5245-fc6d-42b5-83c2-e3373788e91b" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.277046 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-qncbs" podUID="884f5245-fc6d-42b5-83c2-e3373788e91b" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.277095 4712 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-swvjp container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.277151 4712 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-swvjp container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:27 crc 
kubenswrapper[4712]: I0130 18:00:27.277159 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-swvjp" podUID="16d2b99c-7fc4-4d10-8ebc-1e726485e354" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.277188 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-swvjp" podUID="16d2b99c-7fc4-4d10-8ebc-1e726485e354" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.360972 4712 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-xq27f container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.361020 4712 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-8m9br container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.361036 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xq27f" podUID="68eec877-dde8-4b0b-8e78-53a70af78240" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.360971 4712 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-xq27f container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.361080 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8m9br" podUID="fd5b1abd-3085-42f2-94a1-a9f06129017c" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.361099 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xq27f" podUID="68eec877-dde8-4b0b-8e78-53a70af78240" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.361089 4712 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-8m9br 
container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.361131 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8m9br" podUID="fd5b1abd-3085-42f2-94a1-a9f06129017c" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.443021 4712 patch_prober.go:28] interesting pod/route-controller-manager-7449c76d86-5ljsq container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.443078 4712 patch_prober.go:28] interesting pod/route-controller-manager-7449c76d86-5ljsq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.443089 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-7449c76d86-5ljsq" podUID="18f1f168-60eb-4666-9d2f-7455021a946c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.443118 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7449c76d86-5ljsq" podUID="18f1f168-60eb-4666-9d2f-7455021a946c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.443145 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-7449c76d86-5ljsq" Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.443155 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-58dccfbb96-pxb54" podUID="5fe7be15-f524-46c1-ba58-e2d8ccd001c0" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.47:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.443031 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-58dccfbb96-pxb54" podUID="5fe7be15-f524-46c1-ba58-e2d8ccd001c0" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.47:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.452910 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="route-controller-manager" containerStatusID={"Type":"cri-o","ID":"8f32cca356368e1d90f906c7b065989ca60b1ed76d2d68439d0e10e71b432710"} pod="openshift-route-controller-manager/route-controller-manager-7449c76d86-5ljsq" containerMessage="Container route-controller-manager failed liveness probe, will be restarted" Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.452973 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7449c76d86-5ljsq" podUID="18f1f168-60eb-4666-9d2f-7455021a946c" containerName="route-controller-manager" containerID="cri-o://8f32cca356368e1d90f906c7b065989ca60b1ed76d2d68439d0e10e71b432710" gracePeriod=30 Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.871009 4712 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-k4mgv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.871436 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-k4mgv" podUID="f757484a-48c2-4b6e-9a6b-1e01fe951ae5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.871023 4712 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-k4mgv container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:27 crc kubenswrapper[4712]: I0130 18:00:27.871676 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-k4mgv" podUID="f757484a-48c2-4b6e-9a6b-1e01fe951ae5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:28 crc kubenswrapper[4712]: I0130 18:00:28.117014 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-j9bpz" podUID="7d1e2433-a99b-4b29-8f58-e21a7745d1d9" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:28 crc kubenswrapper[4712]: I0130 18:00:28.117062 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-j9bpz" podUID="7d1e2433-a99b-4b29-8f58-e21a7745d1d9" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:28 crc kubenswrapper[4712]: I0130 18:00:28.117353 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-j9bpz" podUID="7d1e2433-a99b-4b29-8f58-e21a7745d1d9" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" Jan 30 18:00:28 crc kubenswrapper[4712]: I0130 18:00:28.117412 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-j9bpz" Jan 30 18:00:28 crc kubenswrapper[4712]: I0130 18:00:28.119143 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr" containerStatusID={"Type":"cri-o","ID":"6be33609f8e8ec7a5896d0a7defbd7680935427d1544f6bd7000f359641cb3c4"} pod="metallb-system/frr-k8s-j9bpz" containerMessage="Container frr failed liveness probe, will be restarted" Jan 30 18:00:28 crc kubenswrapper[4712]: I0130 18:00:28.119245 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-j9bpz" podUID="7d1e2433-a99b-4b29-8f58-e21a7745d1d9" containerName="frr" containerID="cri-o://6be33609f8e8ec7a5896d0a7defbd7680935427d1544f6bd7000f359641cb3c4" gracePeriod=2 Jan 30 18:00:28 crc kubenswrapper[4712]: I0130 18:00:28.156308 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="a12f0a95-1db0-4dd9-993c-1413c0fa10b0" containerName="galera" probeResult="failure" output="command timed out" Jan 30 18:00:28 crc kubenswrapper[4712]: I0130 18:00:28.156387 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-galera-0" Jan 30 18:00:28 crc kubenswrapper[4712]: I0130 18:00:28.156448 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="a12f0a95-1db0-4dd9-993c-1413c0fa10b0" containerName="galera" probeResult="failure" output="command timed out" Jan 30 18:00:28 crc kubenswrapper[4712]: I0130 18:00:28.157689 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 30 18:00:28 crc kubenswrapper[4712]: I0130 18:00:28.161622 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"3d316f18629c5696446d3e76a4fc94419e782ea4a27f59f7fa064eba029285da"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Jan 30 18:00:28 crc kubenswrapper[4712]: I0130 18:00:28.649244 4712 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6lnp9 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:28 crc kubenswrapper[4712]: I0130 18:00:28.649331 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" podUID="a5836457-3db5-41ec-b036-057186d44de8" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:28 crc kubenswrapper[4712]: I0130 18:00:28.649456 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" Jan 30 18:00:28 crc kubenswrapper[4712]: I0130 18:00:28.706083 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vkxrq" podUID="055ca335-cbe6-4ef8-af90-fb2d995a3187" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.49:7572/metrics\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:28 crc kubenswrapper[4712]: I0130 18:00:28.706091 4712 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6lnp9 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:28 crc kubenswrapper[4712]: I0130 18:00:28.706560 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" podUID="a5836457-3db5-41ec-b036-057186d44de8" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:28 crc kubenswrapper[4712]: I0130 18:00:28.706594 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" Jan 30 18:00:28 crc kubenswrapper[4712]: I0130 18:00:28.706145 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vkxrq" podUID="055ca335-cbe6-4ef8-af90-fb2d995a3187" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.49:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:28 crc kubenswrapper[4712]: I0130 18:00:28.738784 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"2e80d1cd02950c7d480bad14a1a609a4d2ac4caf1c989f6682a73e80934209f5"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Jan 30 18:00:28 crc kubenswrapper[4712]: I0130 18:00:28.738890 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" podUID="a5836457-3db5-41ec-b036-057186d44de8" containerName="openshift-config-operator" containerID="cri-o://2e80d1cd02950c7d480bad14a1a609a4d2ac4caf1c989f6682a73e80934209f5" gracePeriod=30 Jan 30 18:00:28 crc kubenswrapper[4712]: I0130 18:00:28.789247 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-gmjr9" podUID="f5e77c2d-c85b-44c7-ae02-074b491daf83" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:28 crc kubenswrapper[4712]: I0130 18:00:28.789659 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-gmjr9" podUID="f5e77c2d-c85b-44c7-ae02-074b491daf83" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:28 crc kubenswrapper[4712]: I0130 18:00:28.806941 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" Jan 30 18:00:29 crc kubenswrapper[4712]: I0130 18:00:29.861233 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j9bpz" 
event={"ID":"7d1e2433-a99b-4b29-8f58-e21a7745d1d9","Type":"ContainerDied","Data":"6be33609f8e8ec7a5896d0a7defbd7680935427d1544f6bd7000f359641cb3c4"} Jan 30 18:00:29 crc kubenswrapper[4712]: I0130 18:00:29.873312 4712 generic.go:334] "Generic (PLEG): container finished" podID="7d1e2433-a99b-4b29-8f58-e21a7745d1d9" containerID="6be33609f8e8ec7a5896d0a7defbd7680935427d1544f6bd7000f359641cb3c4" exitCode=143 Jan 30 18:00:30 crc kubenswrapper[4712]: I0130 18:00:30.152946 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="e0e4667e-8702-43ae-b7b7-1aa930f9a3c3" containerName="galera" probeResult="failure" output="command timed out" Jan 30 18:00:30 crc kubenswrapper[4712]: I0130 18:00:30.155471 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="e0e4667e-8702-43ae-b7b7-1aa930f9a3c3" containerName="galera" probeResult="failure" output="command timed out" Jan 30 18:00:30 crc kubenswrapper[4712]: I0130 18:00:30.320514 4712 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:30 crc kubenswrapper[4712]: I0130 18:00:30.320786 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:30 crc kubenswrapper[4712]: I0130 18:00:30.662200 4712 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6lnp9 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 30 18:00:30 crc kubenswrapper[4712]: I0130 18:00:30.662281 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" podUID="a5836457-3db5-41ec-b036-057186d44de8" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 30 18:00:30 crc kubenswrapper[4712]: I0130 18:00:30.722734 4712 patch_prober.go:28] interesting pod/console-69cbc76644-s6m92 container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.44:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:30 crc kubenswrapper[4712]: I0130 18:00:30.722811 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-69cbc76644-s6m92" podUID="21b48c74-811e-46ec-a7f4-dbc7702008bf" containerName="console" probeResult="failure" output="Get \"https://10.217.0.44:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:30 crc kubenswrapper[4712]: I0130 18:00:30.885051 4712 patch_prober.go:28] interesting pod/nmstate-webhook-8474b5b9d8-wg6ft container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get 
\"https://10.217.0.38:9443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:30 crc kubenswrapper[4712]: I0130 18:00:30.885117 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-wg6ft" podUID="32b6f6bb-fadc-43d5-9046-f2ee1a93d325" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.38:9443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:30 crc kubenswrapper[4712]: I0130 18:00:30.928539 4712 generic.go:334] "Generic (PLEG): container finished" podID="18f1f168-60eb-4666-9d2f-7455021a946c" containerID="8f32cca356368e1d90f906c7b065989ca60b1ed76d2d68439d0e10e71b432710" exitCode=0 Jan 30 18:00:30 crc kubenswrapper[4712]: I0130 18:00:30.928617 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7449c76d86-5ljsq" event={"ID":"18f1f168-60eb-4666-9d2f-7455021a946c","Type":"ContainerDied","Data":"8f32cca356368e1d90f906c7b065989ca60b1ed76d2d68439d0e10e71b432710"} Jan 30 18:00:30 crc kubenswrapper[4712]: I0130 18:00:30.996232 4712 generic.go:334] "Generic (PLEG): container finished" podID="a5836457-3db5-41ec-b036-057186d44de8" containerID="2e80d1cd02950c7d480bad14a1a609a4d2ac4caf1c989f6682a73e80934209f5" exitCode=0 Jan 30 18:00:30 crc kubenswrapper[4712]: I0130 18:00:30.996543 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" event={"ID":"a5836457-3db5-41ec-b036-057186d44de8","Type":"ContainerDied","Data":"2e80d1cd02950c7d480bad14a1a609a4d2ac4caf1c989f6682a73e80934209f5"} Jan 30 18:00:31 crc kubenswrapper[4712]: I0130 18:00:31.157734 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-x5k4p" podUID="8610a2e0-98ae-41e2-80a0-c66d693024a0" containerName="registry-server" probeResult="failure" output="command timed out" Jan 30 18:00:31 crc kubenswrapper[4712]: I0130 18:00:31.160843 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-x5k4p" podUID="8610a2e0-98ae-41e2-80a0-c66d693024a0" containerName="registry-server" probeResult="failure" output="command timed out" Jan 30 18:00:31 crc kubenswrapper[4712]: I0130 18:00:31.278998 4712 patch_prober.go:28] interesting pod/oauth-openshift-544b887855-ts8md container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:31 crc kubenswrapper[4712]: I0130 18:00:31.279046 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-544b887855-ts8md" podUID="385118bd-7569-4940-89a0-ac41cf3395a2" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:31 crc kubenswrapper[4712]: I0130 18:00:31.279043 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-2xxnh" podUID="b8cf7519-5513-43e8-98bb-b81e8d7c65e3" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.73:6080/livez\": context deadline exceeded (Client.Timeout 
exceeded while awaiting headers)" Jan 30 18:00:31 crc kubenswrapper[4712]: I0130 18:00:31.279086 4712 patch_prober.go:28] interesting pod/oauth-openshift-544b887855-ts8md container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:31 crc kubenswrapper[4712]: I0130 18:00:31.279116 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-544b887855-ts8md" podUID="385118bd-7569-4940-89a0-ac41cf3395a2" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.64:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:31 crc kubenswrapper[4712]: I0130 18:00:31.279157 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-2xxnh" podUID="b8cf7519-5513-43e8-98bb-b81e8d7c65e3" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.73:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:31 crc kubenswrapper[4712]: I0130 18:00:31.820136 4712 scope.go:117] "RemoveContainer" containerID="f960893331d2846f08481a57a0cdba5af49b5dd727b19f0e09fcdde7d00cb3df" Jan 30 18:00:31 crc kubenswrapper[4712]: E0130 18:00:31.843514 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:00:31 crc kubenswrapper[4712]: I0130 18:00:31.849614 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-bs7pg" podUID="eaba725b-6442-4a5b-adc9-16047823dc86" containerName="registry-server" probeResult="failure" output=< Jan 30 18:00:31 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 18:00:31 crc kubenswrapper[4712]: > Jan 30 18:00:31 crc kubenswrapper[4712]: I0130 18:00:31.853926 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-bs7pg" podUID="eaba725b-6442-4a5b-adc9-16047823dc86" containerName="registry-server" probeResult="failure" output=< Jan 30 18:00:31 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 18:00:31 crc kubenswrapper[4712]: > Jan 30 18:00:31 crc kubenswrapper[4712]: I0130 18:00:31.854016 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-dnfsb" podUID="7fe1585c-9bff-482c-a2b9-ccbb10a11300" containerName="registry-server" probeResult="failure" output=< Jan 30 18:00:31 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 18:00:31 crc kubenswrapper[4712]: > Jan 30 18:00:31 crc kubenswrapper[4712]: I0130 18:00:31.857399 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-dnfsb" podUID="7fe1585c-9bff-482c-a2b9-ccbb10a11300" containerName="registry-server" probeResult="failure" output=< Jan 30 18:00:31 crc kubenswrapper[4712]: timeout: failed to 
connect service ":50051" within 1s Jan 30 18:00:31 crc kubenswrapper[4712]: > Jan 30 18:00:31 crc kubenswrapper[4712]: I0130 18:00:31.966099 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-79955696d6-lwlhf" podUID="7b99459b-9311-4260-be34-3de859c1e0b0" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.76:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:32 crc kubenswrapper[4712]: I0130 18:00:32.156962 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="776ccbe0-fd71-4c0d-877e-f0178e4c1262" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 30 18:00:32 crc kubenswrapper[4712]: I0130 18:00:32.157364 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Jan 30 18:00:32 crc kubenswrapper[4712]: I0130 18:00:32.219978 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"d7c2847e6873da314843f10f5a1edc47d102f60f3f89eab53cd78ef02a17e642"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Jan 30 18:00:32 crc kubenswrapper[4712]: I0130 18:00:32.350402 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="776ccbe0-fd71-4c0d-877e-f0178e4c1262" containerName="ceilometer-central-agent" containerID="cri-o://d7c2847e6873da314843f10f5a1edc47d102f60f3f89eab53cd78ef02a17e642" gracePeriod=30 Jan 30 18:00:32 crc kubenswrapper[4712]: I0130 18:00:32.892283 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-fp9sk" podUID="240ba5c6-eb36-4da8-913a-f2b61d13293b" containerName="registry-server" probeResult="failure" output=< Jan 30 18:00:32 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 18:00:32 crc kubenswrapper[4712]: > Jan 30 18:00:32 crc kubenswrapper[4712]: I0130 18:00:32.892735 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-fp9sk" podUID="240ba5c6-eb36-4da8-913a-f2b61d13293b" containerName="registry-server" probeResult="failure" output=< Jan 30 18:00:32 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 18:00:32 crc kubenswrapper[4712]: > Jan 30 18:00:32 crc kubenswrapper[4712]: I0130 18:00:32.910111 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-659668d854-w9hqw" podUID="15028a9a-8618-4d65-89ff-d8b06f63821f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.91:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:33 crc kubenswrapper[4712]: I0130 18:00:33.043744 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j9bpz" event={"ID":"7d1e2433-a99b-4b29-8f58-e21a7745d1d9","Type":"ContainerStarted","Data":"99572853c62ac0293a35d773392a52bc463f186d2cb9f154684d6bf19c0302af"} Jan 30 18:00:33 crc kubenswrapper[4712]: I0130 18:00:33.541750 4712 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6lnp9 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get 
\"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 30 18:00:33 crc kubenswrapper[4712]: I0130 18:00:33.541844 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" podUID="a5836457-3db5-41ec-b036-057186d44de8" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 30 18:00:33 crc kubenswrapper[4712]: I0130 18:00:33.909097 4712 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 2.366855576s: [/var/lib/containers/storage/overlay/03c4f2ce78b91ae55779f7bf796f07631e061459aa4e2a0ed869d1a7cba5c825/diff /var/log/pods/openstack_horizon-64655dbc44-pvj2c_6a28b495-ecf0-409e-9558-ee794a46dbd1/horizon/4.log]; will not log again for this container unless duration exceeds 3s Jan 30 18:00:34 crc kubenswrapper[4712]: I0130 18:00:34.068038 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7449c76d86-5ljsq" event={"ID":"18f1f168-60eb-4666-9d2f-7455021a946c","Type":"ContainerStarted","Data":"d38e9beb83ad9e5212948bddadded0067812f5df8650d0f865cb438a95ce330a"} Jan 30 18:00:34 crc kubenswrapper[4712]: I0130 18:00:34.077144 4712 patch_prober.go:28] interesting pod/route-controller-manager-7449c76d86-5ljsq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body= Jan 30 18:00:34 crc kubenswrapper[4712]: I0130 18:00:34.077219 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7449c76d86-5ljsq" podUID="18f1f168-60eb-4666-9d2f-7455021a946c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" Jan 30 18:00:34 crc kubenswrapper[4712]: I0130 18:00:34.094407 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7449c76d86-5ljsq" Jan 30 18:00:34 crc kubenswrapper[4712]: I0130 18:00:34.096130 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" event={"ID":"a5836457-3db5-41ec-b036-057186d44de8","Type":"ContainerStarted","Data":"7ac717060f77f42d57cd4c7d3e9817d7bb2a8cdc6f228a95cc0647d2f24b5238"} Jan 30 18:00:34 crc kubenswrapper[4712]: I0130 18:00:34.096212 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" Jan 30 18:00:34 crc kubenswrapper[4712]: I0130 18:00:34.172365 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-7h5tl" podUID="d631ea54-82a0-4985-bfe7-776d4764e85e" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 18:00:34 crc kubenswrapper[4712]: I0130 18:00:34.639947 4712 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-gzvld container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)" start-of-body= Jan 30 18:00:34 crc kubenswrapper[4712]: I0130 18:00:34.640218 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-gzvld" podUID="29e89539-b787-4a7e-a75a-9dd9216b3649" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:35 crc kubenswrapper[4712]: I0130 18:00:35.104425 4712 patch_prober.go:28] interesting pod/route-controller-manager-7449c76d86-5ljsq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body= Jan 30 18:00:35 crc kubenswrapper[4712]: I0130 18:00:35.104470 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7449c76d86-5ljsq" podUID="18f1f168-60eb-4666-9d2f-7455021a946c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" Jan 30 18:00:35 crc kubenswrapper[4712]: I0130 18:00:35.356971 4712 patch_prober.go:28] interesting pod/console-operator-58897d9998-t468b container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:35 crc kubenswrapper[4712]: I0130 18:00:35.357672 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-t468b" podUID="76eb6c29-c75b-4e3a-9c21-04b0a6080fe8" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:35 crc kubenswrapper[4712]: I0130 18:00:35.356981 4712 patch_prober.go:28] interesting pod/console-operator-58897d9998-t468b container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:35 crc kubenswrapper[4712]: I0130 18:00:35.357763 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-t468b" podUID="76eb6c29-c75b-4e3a-9c21-04b0a6080fe8" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:35 crc kubenswrapper[4712]: I0130 18:00:35.424740 4712 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:35 crc kubenswrapper[4712]: I0130 18:00:35.424844 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get 
\"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:35 crc kubenswrapper[4712]: I0130 18:00:35.869312 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-7h5tl" podUID="d631ea54-82a0-4985-bfe7-776d4764e85e" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 18:00:36 crc kubenswrapper[4712]: I0130 18:00:36.143087 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-tfxdt" podUID="2bc54d51-4f21-479f-a89e-1c60a757433f" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.57:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:36 crc kubenswrapper[4712]: I0130 18:00:36.143457 4712 patch_prober.go:28] interesting pod/route-controller-manager-7449c76d86-5ljsq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body= Jan 30 18:00:36 crc kubenswrapper[4712]: I0130 18:00:36.143522 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7449c76d86-5ljsq" podUID="18f1f168-60eb-4666-9d2f-7455021a946c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" Jan 30 18:00:36 crc kubenswrapper[4712]: I0130 18:00:36.275081 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-2xxnh" podUID="b8cf7519-5513-43e8-98bb-b81e8d7c65e3" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.73:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:36 crc kubenswrapper[4712]: I0130 18:00:36.316206 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-lqxpc" podUID="6e263552-c0f6-4f24-879f-79895cdbc953" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.61:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:36 crc kubenswrapper[4712]: I0130 18:00:36.316407 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-xfmvz" podUID="e1a1d497-2276-4248-9bca-1c7038430933" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.58:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:36 crc kubenswrapper[4712]: I0130 18:00:36.357920 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-xbk9b" podUID="5ccbb7b6-e489-4676-8faa-8a0306776a54" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.75:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:36 crc kubenswrapper[4712]: I0130 18:00:36.405067 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-l62x6" 
podUID="d3b1d20e-d20c-40f9-9c2b-314aee2fe51e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.77:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:36 crc kubenswrapper[4712]: I0130 18:00:36.405210 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-l62x6" Jan 30 18:00:36 crc kubenswrapper[4712]: I0130 18:00:36.406176 4712 patch_prober.go:28] interesting pod/route-controller-manager-7449c76d86-5ljsq container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" start-of-body= Jan 30 18:00:36 crc kubenswrapper[4712]: I0130 18:00:36.406241 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7449c76d86-5ljsq" podUID="18f1f168-60eb-4666-9d2f-7455021a946c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.62:8443/healthz\": dial tcp 10.217.0.62:8443: connect: connection refused" Jan 30 18:00:36 crc kubenswrapper[4712]: I0130 18:00:36.476611 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-z9d9r" podUID="3bfc9890-11b6-4fcf-9458-08dce816b4b9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.78:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:36 crc kubenswrapper[4712]: I0130 18:00:36.535998 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-2n8cf" podUID="957cefd9-5116-40c3-aaf4-67ba58319ca1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.79:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:36 crc kubenswrapper[4712]: I0130 18:00:36.578545 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-7pr55" podUID="b3222b74-686d-4b44-b521-33fb24c0b403" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.81:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:36 crc kubenswrapper[4712]: I0130 18:00:36.578597 4712 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6lnp9 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 30 18:00:36 crc kubenswrapper[4712]: I0130 18:00:36.578649 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" podUID="a5836457-3db5-41ec-b036-057186d44de8" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 30 18:00:36 crc kubenswrapper[4712]: I0130 18:00:36.578736 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-7pr55" Jan 30 18:00:36 crc kubenswrapper[4712]: I0130 18:00:36.661014 4712 patch_prober.go:28] 
interesting pod/downloads-7954f5f757-27wq6 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:36 crc kubenswrapper[4712]: I0130 18:00:36.661063 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-27wq6" podUID="48626025-5e2a-47c8-b317-bcbada105e87" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:36 crc kubenswrapper[4712]: I0130 18:00:36.662086 4712 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6lnp9 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 30 18:00:36 crc kubenswrapper[4712]: I0130 18:00:36.662126 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-kj9k8" podUID="1abbe42a-dbb1-4ec5-8318-451adc608b2b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.82:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:36 crc kubenswrapper[4712]: I0130 18:00:36.662144 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" podUID="a5836457-3db5-41ec-b036-057186d44de8" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 30 18:00:36 crc kubenswrapper[4712]: I0130 18:00:36.662215 4712 patch_prober.go:28] interesting pod/downloads-7954f5f757-27wq6 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.32:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:36 crc kubenswrapper[4712]: I0130 18:00:36.662236 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-27wq6" podUID="48626025-5e2a-47c8-b317-bcbada105e87" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.32:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:36 crc kubenswrapper[4712]: I0130 18:00:36.717905 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="19b27a49-3b3b-434e-b8c7-133e4e120569" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.217:8080/livez\": context deadline exceeded" Jan 30 18:00:36 crc kubenswrapper[4712]: I0130 18:00:36.745038 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-jjb4n" podUID="70ad565b-dc4e-4f67-863a-fd29c88ad39d" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.83:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:36 crc kubenswrapper[4712]: I0130 18:00:36.745381 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-5884d87984-t6bbn" 
podUID="16cf8838-73f4-4b47-a0a5-0258974c49db" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.55:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:36 crc kubenswrapper[4712]: I0130 18:00:36.745476 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-5884d87984-t6bbn" Jan 30 18:00:36 crc kubenswrapper[4712]: I0130 18:00:36.796371 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-l62x6" Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.032094 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-564965969-f4h96" podUID="f0e6edc2-9ad5-44a9-8737-78cfd077f9b1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.90:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.071325 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-j9bpz" Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.072985 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-smj59" podUID="19489158-a72e-4e6d-981a-879b596fb9b8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.85:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.073068 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2x2xt" podUID="d37f95a0-af87-4727-83a4-aa6334b0759e" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.119237 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-78v95" podUID="a1f37d35-d806-4c98-bdc5-85163d1b180c" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.89:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.119325 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-4l4j7" podUID="adbd0e89-e0e3-46eb-b2c5-4482cc71deae" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.86:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.119789 4712 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-dg9bq container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.30:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.119834 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dg9bq" podUID="d9fce980-8342-4614-8cfe-c8757df49d74" 
containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.30:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.119868 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dg9bq" Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.120395 4712 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-dg9bq container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.120441 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dg9bq" podUID="d9fce980-8342-4614-8cfe-c8757df49d74" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.30:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.120775 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rfmgz" podUID="6c041737-6e32-468d-aba7-469207eab526" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.87:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.121074 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dg9bq" Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.121111 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rfmgz" Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.121156 4712 patch_prober.go:28] interesting pod/controller-manager-7854896cc8-wc7q4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.121194 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7854896cc8-wc7q4" podUID="48377da3-e59b-4d8e-96df-e71697486469" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.122113 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-d574845cc-9l79n" podUID="5ad57c84-b9da-4613-92e6-0bfe23a14d69" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.46:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.122631 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="packageserver" 
containerStatusID={"Type":"cri-o","ID":"480b3b4925fcd16a800234c3a5c41abde54bd2a1d5feaf120f78deb8d4ceb84a"} pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dg9bq" containerMessage="Container packageserver failed liveness probe, will be restarted" Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.138024 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-rfmgz" Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.149994 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dg9bq" podUID="d9fce980-8342-4614-8cfe-c8757df49d74" containerName="packageserver" containerID="cri-o://480b3b4925fcd16a800234c3a5c41abde54bd2a1d5feaf120f78deb8d4ceb84a" gracePeriod=30 Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.279058 4712 patch_prober.go:28] interesting pod/router-default-5444994796-qncbs container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.279128 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-qncbs" podUID="884f5245-fc6d-42b5-83c2-e3373788e91b" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.279186 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-ingress/router-default-5444994796-qncbs" Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.279192 4712 patch_prober.go:28] interesting pod/router-default-5444994796-qncbs container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.279291 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-qncbs" podUID="884f5245-fc6d-42b5-83c2-e3373788e91b" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.279368 4712 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-swvjp container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.279383 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-swvjp" podUID="16d2b99c-7fc4-4d10-8ebc-1e726485e354" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.279408 4712 patch_prober.go:28] interesting 
pod/catalog-operator-68c6474976-swvjp container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.279420 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-swvjp" podUID="16d2b99c-7fc4-4d10-8ebc-1e726485e354" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.279562 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-qncbs" Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.279651 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-swvjp" Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.279693 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-swvjp" Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.280127 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"a5b7c3b62998a91649a4ae0c03d3b15baf9f58d81c2d2c8b873de9cf81369dfb"} pod="openshift-ingress/router-default-5444994796-qncbs" containerMessage="Container router failed liveness probe, will be restarted" Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.280171 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-5444994796-qncbs" podUID="884f5245-fc6d-42b5-83c2-e3373788e91b" containerName="router" containerID="cri-o://a5b7c3b62998a91649a4ae0c03d3b15baf9f58d81c2d2c8b873de9cf81369dfb" gracePeriod=10 Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.294764 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="776ccbe0-fd71-4c0d-877e-f0178e4c1262" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out" Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.295673 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="catalog-operator" containerStatusID={"Type":"cri-o","ID":"eed8fbb470f2bafaa86c95e930596c5285808de3cb65807bebf006a35990fc4b"} pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-swvjp" containerMessage="Container catalog-operator failed liveness probe, will be restarted" Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.295749 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-swvjp" podUID="16d2b99c-7fc4-4d10-8ebc-1e726485e354" containerName="catalog-operator" containerID="cri-o://eed8fbb470f2bafaa86c95e930596c5285808de3cb65807bebf006a35990fc4b" gracePeriod=30 Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.361008 4712 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-xq27f container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get 
\"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.361064 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xq27f" podUID="68eec877-dde8-4b0b-8e78-53a70af78240" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.361119 4712 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-xq27f container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.361135 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xq27f" podUID="68eec877-dde8-4b0b-8e78-53a70af78240" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.361162 4712 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-8m9br container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.361183 4712 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-8m9br container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.361190 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8m9br" podUID="fd5b1abd-3085-42f2-94a1-a9f06129017c" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.361198 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8m9br" podUID="fd5b1abd-3085-42f2-94a1-a9f06129017c" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.361222 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8m9br" Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.361366 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8m9br" Jan 30 18:00:37 crc 
kubenswrapper[4712]: I0130 18:00:37.362192 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="olm-operator" containerStatusID={"Type":"cri-o","ID":"faec40c724c3a7c23c63e8dbe05174f9c5993d635a55664f8a413b878e622bde"} pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8m9br" containerMessage="Container olm-operator failed liveness probe, will be restarted" Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.362228 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8m9br" podUID="fd5b1abd-3085-42f2-94a1-a9f06129017c" containerName="olm-operator" containerID="cri-o://faec40c724c3a7c23c63e8dbe05174f9c5993d635a55664f8a413b878e622bde" gracePeriod=30 Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.443077 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-58dccfbb96-pxb54" podUID="5fe7be15-f524-46c1-ba58-e2d8ccd001c0" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.47:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.443179 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-58dccfbb96-pxb54" Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.443103 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-58dccfbb96-pxb54" podUID="5fe7be15-f524-46c1-ba58-e2d8ccd001c0" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.47:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.443269 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/metallb-operator-webhook-server-58dccfbb96-pxb54" Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.444317 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="webhook-server" containerStatusID={"Type":"cri-o","ID":"d2f648970ab6b5218373eaa4eeaf1c04e0ee91c91fa9f9c7540682dfaaaa8a13"} pod="metallb-system/metallb-operator-webhook-server-58dccfbb96-pxb54" containerMessage="Container webhook-server failed liveness probe, will be restarted" Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.444379 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/metallb-operator-webhook-server-58dccfbb96-pxb54" podUID="5fe7be15-f524-46c1-ba58-e2d8ccd001c0" containerName="webhook-server" containerID="cri-o://d2f648970ab6b5218373eaa4eeaf1c04e0ee91c91fa9f9c7540682dfaaaa8a13" gracePeriod=2 Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.492421 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-7pr55" Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.788043 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-5884d87984-t6bbn" podUID="16cf8838-73f4-4b47-a0a5-0258974c49db" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.55:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.870094 4712 patch_prober.go:28] 
interesting pod/marketplace-operator-79b997595-k4mgv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.870167 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-k4mgv" podUID="f757484a-48c2-4b6e-9a6b-1e01fe951ae5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.870232 4712 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-k4mgv container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:37 crc kubenswrapper[4712]: I0130 18:00:37.870247 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-k4mgv" podUID="f757484a-48c2-4b6e-9a6b-1e01fe951ae5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:38 crc kubenswrapper[4712]: I0130 18:00:38.153212 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-k4mgv" Jan 30 18:00:38 crc kubenswrapper[4712]: I0130 18:00:38.153583 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-k4mgv" Jan 30 18:00:38 crc kubenswrapper[4712]: I0130 18:00:38.153937 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="a12f0a95-1db0-4dd9-993c-1413c0fa10b0" containerName="galera" probeResult="failure" output="command timed out" Jan 30 18:00:38 crc kubenswrapper[4712]: I0130 18:00:38.158149 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="marketplace-operator" containerStatusID={"Type":"cri-o","ID":"482cb071017dbe649c256712df62fd07cd771647136f39b5bb50893927b48ca2"} pod="openshift-marketplace/marketplace-operator-79b997595-k4mgv" containerMessage="Container marketplace-operator failed liveness probe, will be restarted" Jan 30 18:00:38 crc kubenswrapper[4712]: I0130 18:00:38.158244 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-k4mgv" podUID="f757484a-48c2-4b6e-9a6b-1e01fe951ae5" containerName="marketplace-operator" containerID="cri-o://482cb071017dbe649c256712df62fd07cd771647136f39b5bb50893927b48ca2" gracePeriod=30 Jan 30 18:00:38 crc kubenswrapper[4712]: I0130 18:00:38.159079 4712 prober.go:107] "Probe failed" probeType="Startup" pod="metallb-system/frr-k8s-j9bpz" podUID="7d1e2433-a99b-4b29-8f58-e21a7745d1d9" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:38 crc kubenswrapper[4712]: I0130 18:00:38.159881 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-j9bpz" podUID="7d1e2433-a99b-4b29-8f58-e21a7745d1d9" 
containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:38 crc kubenswrapper[4712]: I0130 18:00:38.159928 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-j9bpz" Jan 30 18:00:38 crc kubenswrapper[4712]: I0130 18:00:38.165667 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller" containerStatusID={"Type":"cri-o","ID":"b0a357bdb5618102c61d86540e5aa4d38e5f998ae01e8ba51e4b9415e0897e68"} pod="metallb-system/frr-k8s-j9bpz" containerMessage="Container controller failed liveness probe, will be restarted" Jan 30 18:00:38 crc kubenswrapper[4712]: I0130 18:00:38.166500 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-j9bpz" podUID="7d1e2433-a99b-4b29-8f58-e21a7745d1d9" containerName="controller" containerID="cri-o://b0a357bdb5618102c61d86540e5aa4d38e5f998ae01e8ba51e4b9415e0897e68" gracePeriod=2 Jan 30 18:00:38 crc kubenswrapper[4712]: I0130 18:00:38.209098 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-6968d8fdc4-kr8vp" podUID="923ca268-753b-4b59-8c12-9517f5708f65" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.50:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:38 crc kubenswrapper[4712]: I0130 18:00:38.209394 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-j9bpz" podUID="7d1e2433-a99b-4b29-8f58-e21a7745d1d9" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:38 crc kubenswrapper[4712]: I0130 18:00:38.209577 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-j9bpz" Jan 30 18:00:38 crc kubenswrapper[4712]: I0130 18:00:38.308312 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-6968d8fdc4-kr8vp" podUID="923ca268-753b-4b59-8c12-9517f5708f65" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.50:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:38 crc kubenswrapper[4712]: I0130 18:00:38.322139 4712 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-swvjp container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:38 crc kubenswrapper[4712]: I0130 18:00:38.322204 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-swvjp" podUID="16d2b99c-7fc4-4d10-8ebc-1e726485e354" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:38 crc kubenswrapper[4712]: I0130 18:00:38.362572 4712 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-8m9br container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": 
net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:38 crc kubenswrapper[4712]: I0130 18:00:38.362625 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8m9br" podUID="fd5b1abd-3085-42f2-94a1-a9f06129017c" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:38 crc kubenswrapper[4712]: I0130 18:00:38.631587 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-58dccfbb96-pxb54" podUID="5fe7be15-f524-46c1-ba58-e2d8ccd001c0" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.47:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:38 crc kubenswrapper[4712]: I0130 18:00:38.747989 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vkxrq" podUID="055ca335-cbe6-4ef8-af90-fb2d995a3187" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.49:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:38 crc kubenswrapper[4712]: I0130 18:00:38.831020 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-gmjr9" podUID="f5e77c2d-c85b-44c7-ae02-074b491daf83" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:38 crc kubenswrapper[4712]: I0130 18:00:38.831061 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-gmjr9" podUID="f5e77c2d-c85b-44c7-ae02-074b491daf83" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:38 crc kubenswrapper[4712]: I0130 18:00:38.831002 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-vkxrq" podUID="055ca335-cbe6-4ef8-af90-fb2d995a3187" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.49:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:38 crc kubenswrapper[4712]: I0130 18:00:38.875511 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"776ccbe0-fd71-4c0d-877e-f0178e4c1262","Type":"ContainerDied","Data":"d7c2847e6873da314843f10f5a1edc47d102f60f3f89eab53cd78ef02a17e642"} Jan 30 18:00:38 crc kubenswrapper[4712]: I0130 18:00:38.907950 4712 generic.go:334] "Generic (PLEG): container finished" podID="776ccbe0-fd71-4c0d-877e-f0178e4c1262" containerID="d7c2847e6873da314843f10f5a1edc47d102f60f3f89eab53cd78ef02a17e642" exitCode=0 Jan 30 18:00:39 crc kubenswrapper[4712]: I0130 18:00:39.196985 4712 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-k4mgv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:39 crc kubenswrapper[4712]: I0130 18:00:39.197284 4712 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-k4mgv" podUID="f757484a-48c2-4b6e-9a6b-1e01fe951ae5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:39 crc kubenswrapper[4712]: I0130 18:00:39.542056 4712 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6lnp9 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 30 18:00:39 crc kubenswrapper[4712]: I0130 18:00:39.542109 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" podUID="a5836457-3db5-41ec-b036-057186d44de8" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 30 18:00:39 crc kubenswrapper[4712]: I0130 18:00:39.542068 4712 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6lnp9 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body= Jan 30 18:00:39 crc kubenswrapper[4712]: I0130 18:00:39.542191 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" podUID="a5836457-3db5-41ec-b036-057186d44de8" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" Jan 30 18:00:39 crc kubenswrapper[4712]: I0130 18:00:39.918409 4712 generic.go:334] "Generic (PLEG): container finished" podID="16d2b99c-7fc4-4d10-8ebc-1e726485e354" containerID="eed8fbb470f2bafaa86c95e930596c5285808de3cb65807bebf006a35990fc4b" exitCode=0 Jan 30 18:00:39 crc kubenswrapper[4712]: I0130 18:00:39.918482 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-swvjp" event={"ID":"16d2b99c-7fc4-4d10-8ebc-1e726485e354","Type":"ContainerDied","Data":"eed8fbb470f2bafaa86c95e930596c5285808de3cb65807bebf006a35990fc4b"} Jan 30 18:00:39 crc kubenswrapper[4712]: I0130 18:00:39.920785 4712 generic.go:334] "Generic (PLEG): container finished" podID="48377da3-e59b-4d8e-96df-e71697486469" containerID="05f6854f90ffa10a27ff5351f9fa3c08a2daedb83745bb726fd7c092aaf91363" exitCode=0 Jan 30 18:00:39 crc kubenswrapper[4712]: I0130 18:00:39.920874 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7854896cc8-wc7q4" event={"ID":"48377da3-e59b-4d8e-96df-e71697486469","Type":"ContainerDied","Data":"05f6854f90ffa10a27ff5351f9fa3c08a2daedb83745bb726fd7c092aaf91363"} Jan 30 18:00:40 crc kubenswrapper[4712]: I0130 18:00:40.165210 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="e0e4667e-8702-43ae-b7b7-1aa930f9a3c3" containerName="galera" probeResult="failure" output="command timed out" Jan 30 18:00:40 crc kubenswrapper[4712]: I0130 18:00:40.165281 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-zg4sq" 
podUID="36edfc17-99ca-4e05-bf92-d60315860caf" containerName="registry-server" probeResult="failure" output="command timed out" Jan 30 18:00:40 crc kubenswrapper[4712]: I0130 18:00:40.165223 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="e0e4667e-8702-43ae-b7b7-1aa930f9a3c3" containerName="galera" probeResult="failure" output="command timed out" Jan 30 18:00:40 crc kubenswrapper[4712]: I0130 18:00:40.165202 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-zg4sq" podUID="36edfc17-99ca-4e05-bf92-d60315860caf" containerName="registry-server" probeResult="failure" output="command timed out" Jan 30 18:00:40 crc kubenswrapper[4712]: I0130 18:00:40.165354 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 30 18:00:40 crc kubenswrapper[4712]: I0130 18:00:40.165419 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zg4sq" Jan 30 18:00:40 crc kubenswrapper[4712]: I0130 18:00:40.165430 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 30 18:00:40 crc kubenswrapper[4712]: I0130 18:00:40.165438 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/redhat-operators-zg4sq" Jan 30 18:00:40 crc kubenswrapper[4712]: I0130 18:00:40.171175 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"ed780214005aad39bb8ba6a29a0b2707af45faf688fcde1b78c2a7be95a0d645"} pod="openshift-marketplace/redhat-operators-zg4sq" containerMessage="Container registry-server failed liveness probe, will be restarted" Jan 30 18:00:40 crc kubenswrapper[4712]: I0130 18:00:40.171232 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zg4sq" podUID="36edfc17-99ca-4e05-bf92-d60315860caf" containerName="registry-server" containerID="cri-o://ed780214005aad39bb8ba6a29a0b2707af45faf688fcde1b78c2a7be95a0d645" gracePeriod=30 Jan 30 18:00:40 crc kubenswrapper[4712]: I0130 18:00:40.171609 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"70075a4b3de7920625ff31028d71e274c26740ac40037488429efaaac994792a"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed liveness probe, will be restarted" Jan 30 18:00:40 crc kubenswrapper[4712]: E0130 18:00:40.189694 4712 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ed780214005aad39bb8ba6a29a0b2707af45faf688fcde1b78c2a7be95a0d645" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 18:00:40 crc kubenswrapper[4712]: E0130 18:00:40.192227 4712 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="ed780214005aad39bb8ba6a29a0b2707af45faf688fcde1b78c2a7be95a0d645" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 18:00:40 crc kubenswrapper[4712]: E0130 18:00:40.193486 4712 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container 
is stopping, stdout: , stderr: , exit code -1" containerID="ed780214005aad39bb8ba6a29a0b2707af45faf688fcde1b78c2a7be95a0d645" cmd=["grpc_health_probe","-addr=:50051"] Jan 30 18:00:40 crc kubenswrapper[4712]: E0130 18:00:40.193521 4712 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-marketplace/redhat-operators-zg4sq" podUID="36edfc17-99ca-4e05-bf92-d60315860caf" containerName="registry-server" Jan 30 18:00:40 crc kubenswrapper[4712]: I0130 18:00:40.288049 4712 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:40 crc kubenswrapper[4712]: I0130 18:00:40.288687 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:40 crc kubenswrapper[4712]: I0130 18:00:40.288780 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 30 18:00:40 crc kubenswrapper[4712]: I0130 18:00:40.290411 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-scheduler" containerStatusID={"Type":"cri-o","ID":"c4750ebab3eaeb8b0c465d2257c417e68692c999f382e05630a3f317f3f9ea65"} pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" containerMessage="Container kube-scheduler failed liveness probe, will be restarted" Jan 30 18:00:40 crc kubenswrapper[4712]: I0130 18:00:40.290558 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" containerID="cri-o://c4750ebab3eaeb8b0c465d2257c417e68692c999f382e05630a3f317f3f9ea65" gracePeriod=30 Jan 30 18:00:40 crc kubenswrapper[4712]: I0130 18:00:40.933648 4712 generic.go:334] "Generic (PLEG): container finished" podID="5fe7be15-f524-46c1-ba58-e2d8ccd001c0" containerID="d2f648970ab6b5218373eaa4eeaf1c04e0ee91c91fa9f9c7540682dfaaaa8a13" exitCode=0 Jan 30 18:00:40 crc kubenswrapper[4712]: I0130 18:00:40.934254 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-58dccfbb96-pxb54" event={"ID":"5fe7be15-f524-46c1-ba58-e2d8ccd001c0","Type":"ContainerDied","Data":"d2f648970ab6b5218373eaa4eeaf1c04e0ee91c91fa9f9c7540682dfaaaa8a13"} Jan 30 18:00:40 crc kubenswrapper[4712]: I0130 18:00:40.935836 4712 generic.go:334] "Generic (PLEG): container finished" podID="fd5b1abd-3085-42f2-94a1-a9f06129017c" containerID="faec40c724c3a7c23c63e8dbe05174f9c5993d635a55664f8a413b878e622bde" exitCode=0 Jan 30 18:00:40 crc kubenswrapper[4712]: I0130 18:00:40.935895 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8m9br" event={"ID":"fd5b1abd-3085-42f2-94a1-a9f06129017c","Type":"ContainerDied","Data":"faec40c724c3a7c23c63e8dbe05174f9c5993d635a55664f8a413b878e622bde"} Jan 30 18:00:40 crc kubenswrapper[4712]: I0130 
18:00:40.941700 4712 generic.go:334] "Generic (PLEG): container finished" podID="7d1e2433-a99b-4b29-8f58-e21a7745d1d9" containerID="b0a357bdb5618102c61d86540e5aa4d38e5f998ae01e8ba51e4b9415e0897e68" exitCode=0 Jan 30 18:00:40 crc kubenswrapper[4712]: I0130 18:00:40.941781 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j9bpz" event={"ID":"7d1e2433-a99b-4b29-8f58-e21a7745d1d9","Type":"ContainerDied","Data":"b0a357bdb5618102c61d86540e5aa4d38e5f998ae01e8ba51e4b9415e0897e68"} Jan 30 18:00:40 crc kubenswrapper[4712]: I0130 18:00:40.943553 4712 generic.go:334] "Generic (PLEG): container finished" podID="d9fce980-8342-4614-8cfe-c8757df49d74" containerID="480b3b4925fcd16a800234c3a5c41abde54bd2a1d5feaf120f78deb8d4ceb84a" exitCode=0 Jan 30 18:00:40 crc kubenswrapper[4712]: I0130 18:00:40.943584 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dg9bq" event={"ID":"d9fce980-8342-4614-8cfe-c8757df49d74","Type":"ContainerDied","Data":"480b3b4925fcd16a800234c3a5c41abde54bd2a1d5feaf120f78deb8d4ceb84a"} Jan 30 18:00:41 crc kubenswrapper[4712]: I0130 18:00:41.065217 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-dnfsb" podUID="7fe1585c-9bff-482c-a2b9-ccbb10a11300" containerName="registry-server" probeResult="failure" output=< Jan 30 18:00:41 crc kubenswrapper[4712]: timeout: health rpc did not complete within 1s Jan 30 18:00:41 crc kubenswrapper[4712]: > Jan 30 18:00:41 crc kubenswrapper[4712]: I0130 18:00:41.067468 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-bs7pg" podUID="eaba725b-6442-4a5b-adc9-16047823dc86" containerName="registry-server" probeResult="failure" output=< Jan 30 18:00:41 crc kubenswrapper[4712]: timeout: health rpc did not complete within 1s Jan 30 18:00:41 crc kubenswrapper[4712]: > Jan 30 18:00:41 crc kubenswrapper[4712]: I0130 18:00:41.075914 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-dnfsb" podUID="7fe1585c-9bff-482c-a2b9-ccbb10a11300" containerName="registry-server" probeResult="failure" output=< Jan 30 18:00:41 crc kubenswrapper[4712]: timeout: health rpc did not complete within 1s Jan 30 18:00:41 crc kubenswrapper[4712]: > Jan 30 18:00:41 crc kubenswrapper[4712]: I0130 18:00:41.081414 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-bs7pg" podUID="eaba725b-6442-4a5b-adc9-16047823dc86" containerName="registry-server" probeResult="failure" output=< Jan 30 18:00:41 crc kubenswrapper[4712]: timeout: health rpc did not complete within 1s Jan 30 18:00:41 crc kubenswrapper[4712]: > Jan 30 18:00:41 crc kubenswrapper[4712]: I0130 18:00:41.157933 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-x5k4p" podUID="8610a2e0-98ae-41e2-80a0-c66d693024a0" containerName="registry-server" probeResult="failure" output="command timed out" Jan 30 18:00:41 crc kubenswrapper[4712]: I0130 18:00:41.157996 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-index-x5k4p" Jan 30 18:00:41 crc kubenswrapper[4712]: I0130 18:00:41.157931 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="e0e4667e-8702-43ae-b7b7-1aa930f9a3c3" containerName="galera" probeResult="failure" output="command timed out" Jan 30 18:00:41 
crc kubenswrapper[4712]: I0130 18:00:41.158693 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"ccdfd7238be80e868d33acaacf3ac1488f312ac3c32c73ccd616c1e6060ec781"} pod="openstack-operators/openstack-operator-index-x5k4p" containerMessage="Container registry-server failed liveness probe, will be restarted" Jan 30 18:00:41 crc kubenswrapper[4712]: I0130 18:00:41.158742 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-x5k4p" podUID="8610a2e0-98ae-41e2-80a0-c66d693024a0" containerName="registry-server" containerID="cri-o://ccdfd7238be80e868d33acaacf3ac1488f312ac3c32c73ccd616c1e6060ec781" gracePeriod=30 Jan 30 18:00:41 crc kubenswrapper[4712]: I0130 18:00:41.168618 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-x5k4p" podUID="8610a2e0-98ae-41e2-80a0-c66d693024a0" containerName="registry-server" probeResult="failure" output="command timed out" Jan 30 18:00:41 crc kubenswrapper[4712]: I0130 18:00:41.168753 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-x5k4p" Jan 30 18:00:41 crc kubenswrapper[4712]: I0130 18:00:41.702805 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="a12f0a95-1db0-4dd9-993c-1413c0fa10b0" containerName="galera" containerID="cri-o://3d316f18629c5696446d3e76a4fc94419e782ea4a27f59f7fa064eba029285da" gracePeriod=17 Jan 30 18:00:41 crc kubenswrapper[4712]: I0130 18:00:41.954607 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-swvjp" event={"ID":"16d2b99c-7fc4-4d10-8ebc-1e726485e354","Type":"ContainerStarted","Data":"4559284ed9e8cd99ec47ea857904d13349a7b281bcacebdea0d1a6d38e435f1a"} Jan 30 18:00:41 crc kubenswrapper[4712]: I0130 18:00:41.955217 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-swvjp" Jan 30 18:00:41 crc kubenswrapper[4712]: I0130 18:00:41.955424 4712 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-swvjp container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body= Jan 30 18:00:41 crc kubenswrapper[4712]: I0130 18:00:41.955542 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-swvjp" podUID="16d2b99c-7fc4-4d10-8ebc-1e726485e354" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" Jan 30 18:00:41 crc kubenswrapper[4712]: I0130 18:00:41.957571 4712 generic.go:334] "Generic (PLEG): container finished" podID="8610a2e0-98ae-41e2-80a0-c66d693024a0" containerID="ccdfd7238be80e868d33acaacf3ac1488f312ac3c32c73ccd616c1e6060ec781" exitCode=0 Jan 30 18:00:41 crc kubenswrapper[4712]: I0130 18:00:41.957637 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x5k4p" event={"ID":"8610a2e0-98ae-41e2-80a0-c66d693024a0","Type":"ContainerDied","Data":"ccdfd7238be80e868d33acaacf3ac1488f312ac3c32c73ccd616c1e6060ec781"} Jan 30 18:00:41 crc kubenswrapper[4712]: 
I0130 18:00:41.960321 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8m9br" event={"ID":"fd5b1abd-3085-42f2-94a1-a9f06129017c","Type":"ContainerStarted","Data":"5e9940d0245aa7798dc606706d7f431cd5b879bca17a46b91aa68f0ade1cd03c"}
Jan 30 18:00:41 crc kubenswrapper[4712]: I0130 18:00:41.960479 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8m9br"
Jan 30 18:00:41 crc kubenswrapper[4712]: I0130 18:00:41.960955 4712 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-8m9br container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body=
Jan 30 18:00:41 crc kubenswrapper[4712]: I0130 18:00:41.961233 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8m9br" podUID="fd5b1abd-3085-42f2-94a1-a9f06129017c" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused"
Jan 30 18:00:41 crc kubenswrapper[4712]: I0130 18:00:41.966583 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-j9bpz" event={"ID":"7d1e2433-a99b-4b29-8f58-e21a7745d1d9","Type":"ContainerStarted","Data":"52d6b5d8821b970e5b4b7844eddd468fb7b1c6cd27609dae09a368c110a04250"}
Jan 30 18:00:41 crc kubenswrapper[4712]: I0130 18:00:41.966844 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-j9bpz"
Jan 30 18:00:42 crc kubenswrapper[4712]: I0130 18:00:42.013894 4712 generic.go:334] "Generic (PLEG): container finished" podID="36edfc17-99ca-4e05-bf92-d60315860caf" containerID="ed780214005aad39bb8ba6a29a0b2707af45faf688fcde1b78c2a7be95a0d645" exitCode=0
Jan 30 18:00:42 crc kubenswrapper[4712]: I0130 18:00:42.014150 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zg4sq" event={"ID":"36edfc17-99ca-4e05-bf92-d60315860caf","Type":"ContainerDied","Data":"ed780214005aad39bb8ba6a29a0b2707af45faf688fcde1b78c2a7be95a0d645"}
Jan 30 18:00:42 crc kubenswrapper[4712]: I0130 18:00:42.024235 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dg9bq" event={"ID":"d9fce980-8342-4614-8cfe-c8757df49d74","Type":"ContainerStarted","Data":"79c4f613609eb097659d90bfb53b943e1db551e100d3a807a368ed42f376d558"}
Jan 30 18:00:42 crc kubenswrapper[4712]: I0130 18:00:42.024637 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dg9bq"
Jan 30 18:00:42 crc kubenswrapper[4712]: I0130 18:00:42.025164 4712 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-dg9bq container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused" start-of-body=
Jan 30 18:00:42 crc kubenswrapper[4712]: I0130 18:00:42.025292 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dg9bq" podUID="d9fce980-8342-4614-8cfe-c8757df49d74" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused"
Jan 30 18:00:42 crc kubenswrapper[4712]: I0130 18:00:42.027209 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7854896cc8-wc7q4" event={"ID":"48377da3-e59b-4d8e-96df-e71697486469","Type":"ContainerStarted","Data":"9eb2ee6fdbc86abd15062ddce26c53526d0a24a7dae9d16900394bba35a63815"}
Jan 30 18:00:42 crc kubenswrapper[4712]: I0130 18:00:42.027871 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7854896cc8-wc7q4"
Jan 30 18:00:42 crc kubenswrapper[4712]: I0130 18:00:42.028415 4712 patch_prober.go:28] interesting pod/controller-manager-7854896cc8-wc7q4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body=
Jan 30 18:00:42 crc kubenswrapper[4712]: I0130 18:00:42.028556 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7854896cc8-wc7q4" podUID="48377da3-e59b-4d8e-96df-e71697486469" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused"
Jan 30 18:00:42 crc kubenswrapper[4712]: I0130 18:00:42.036468 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-58dccfbb96-pxb54" event={"ID":"5fe7be15-f524-46c1-ba58-e2d8ccd001c0","Type":"ContainerStarted","Data":"f69c5d0ebba97fa8a8e222d57da8098071dde7f8beb0b738c5ab9f22a1cdfd64"}
Jan 30 18:00:42 crc kubenswrapper[4712]: I0130 18:00:42.036606 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-58dccfbb96-pxb54"
Jan 30 18:00:42 crc kubenswrapper[4712]: I0130 18:00:42.049378 4712 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="c4750ebab3eaeb8b0c465d2257c417e68692c999f382e05630a3f317f3f9ea65" exitCode=0
Jan 30 18:00:42 crc kubenswrapper[4712]: I0130 18:00:42.049714 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"c4750ebab3eaeb8b0c465d2257c417e68692c999f382e05630a3f317f3f9ea65"}
Jan 30 18:00:42 crc kubenswrapper[4712]: I0130 18:00:42.054676 4712 generic.go:334] "Generic (PLEG): container finished" podID="f757484a-48c2-4b6e-9a6b-1e01fe951ae5" containerID="482cb071017dbe649c256712df62fd07cd771647136f39b5bb50893927b48ca2" exitCode=0
Jan 30 18:00:42 crc kubenswrapper[4712]: I0130 18:00:42.054721 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k4mgv" event={"ID":"f757484a-48c2-4b6e-9a6b-1e01fe951ae5","Type":"ContainerDied","Data":"482cb071017dbe649c256712df62fd07cd771647136f39b5bb50893927b48ca2"}
Jan 30 18:00:42 crc kubenswrapper[4712]: I0130 18:00:42.054960 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-k4mgv" event={"ID":"f757484a-48c2-4b6e-9a6b-1e01fe951ae5","Type":"ContainerStarted","Data":"d7d0e290a104f5afd493db9a80be414c8e1a8f458a7768dbecbce690bc7e5ba9"}
Jan 30 18:00:42 crc kubenswrapper[4712]: I0130 18:00:42.055480 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-k4mgv"
Jan 30 18:00:42 crc kubenswrapper[4712]: I0130 18:00:42.055696 4712 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-k4mgv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": dial tcp 10.217.0.65:8080: connect: connection refused" start-of-body=
Jan 30 18:00:42 crc kubenswrapper[4712]: I0130 18:00:42.055781 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-k4mgv" podUID="f757484a-48c2-4b6e-9a6b-1e01fe951ae5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": dial tcp 10.217.0.65:8080: connect: connection refused"
Jan 30 18:00:42 crc kubenswrapper[4712]: I0130 18:00:42.137499 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-j9bpz"
Jan 30 18:00:42 crc kubenswrapper[4712]: I0130 18:00:42.541329 4712 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6lnp9 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body=
Jan 30 18:00:42 crc kubenswrapper[4712]: I0130 18:00:42.541659 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" podUID="a5836457-3db5-41ec-b036-057186d44de8" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused"
Jan 30 18:00:42 crc kubenswrapper[4712]: I0130 18:00:42.541374 4712 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6lnp9 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body=
Jan 30 18:00:42 crc kubenswrapper[4712]: I0130 18:00:42.541713 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" podUID="a5836457-3db5-41ec-b036-057186d44de8" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused"
Jan 30 18:00:42 crc kubenswrapper[4712]: I0130 18:00:42.541733 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9"
Jan 30 18:00:42 crc kubenswrapper[4712]: I0130 18:00:42.542476 4712 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-6lnp9 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body=
Jan 30 18:00:42 crc kubenswrapper[4712]: I0130 18:00:42.542483 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"7ac717060f77f42d57cd4c7d3e9817d7bb2a8cdc6f228a95cc0647d2f24b5238"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted"
Jan 30 18:00:42 crc kubenswrapper[4712]: I0130 18:00:42.542670 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" podUID="a5836457-3db5-41ec-b036-057186d44de8" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused"
Jan 30 18:00:42 crc kubenswrapper[4712]: I0130 18:00:42.542560 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" podUID="a5836457-3db5-41ec-b036-057186d44de8" containerName="openshift-config-operator" containerID="cri-o://7ac717060f77f42d57cd4c7d3e9817d7bb2a8cdc6f228a95cc0647d2f24b5238" gracePeriod=30
Jan 30 18:00:42 crc kubenswrapper[4712]: I0130 18:00:42.641176 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="e0e4667e-8702-43ae-b7b7-1aa930f9a3c3" containerName="galera" containerID="cri-o://70075a4b3de7920625ff31028d71e274c26740ac40037488429efaaac994792a" gracePeriod=28
Jan 30 18:00:43 crc kubenswrapper[4712]: I0130 18:00:43.067260 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"776ccbe0-fd71-4c0d-877e-f0178e4c1262","Type":"ContainerStarted","Data":"1b4931016246d937ce1561ca389dbd25c8729651d5d7b1dd7bf9f7190e2a994b"}
Jan 30 18:00:43 crc kubenswrapper[4712]: I0130 18:00:43.072536 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-x5k4p" event={"ID":"8610a2e0-98ae-41e2-80a0-c66d693024a0","Type":"ContainerStarted","Data":"5ed392c7c533e5d405b8e0df25f9fe9f73c4214749daa4fc90e5152e8bd6d397"}
Jan 30 18:00:43 crc kubenswrapper[4712]: I0130 18:00:43.078209 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7777fb866f-6lnp9_a5836457-3db5-41ec-b036-057186d44de8/openshift-config-operator/1.log"
Jan 30 18:00:43 crc kubenswrapper[4712]: I0130 18:00:43.080993 4712 generic.go:334] "Generic (PLEG): container finished" podID="a5836457-3db5-41ec-b036-057186d44de8" containerID="7ac717060f77f42d57cd4c7d3e9817d7bb2a8cdc6f228a95cc0647d2f24b5238" exitCode=2
Jan 30 18:00:43 crc kubenswrapper[4712]: I0130 18:00:43.081092 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" event={"ID":"a5836457-3db5-41ec-b036-057186d44de8","Type":"ContainerDied","Data":"7ac717060f77f42d57cd4c7d3e9817d7bb2a8cdc6f228a95cc0647d2f24b5238"}
Jan 30 18:00:43 crc kubenswrapper[4712]: I0130 18:00:43.081171 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9" event={"ID":"a5836457-3db5-41ec-b036-057186d44de8","Type":"ContainerStarted","Data":"98e0a5974a38bd1afe837f34383fa49120f1915ff987643551650c849f6487f9"}
Jan 30 18:00:43 crc kubenswrapper[4712]: I0130 18:00:43.081595 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9"
Jan 30 18:00:43 crc kubenswrapper[4712]: I0130 18:00:43.084563 4712 scope.go:117] "RemoveContainer" containerID="2e80d1cd02950c7d480bad14a1a609a4d2ac4caf1c989f6682a73e80934209f5"
Jan 30 18:00:43 crc kubenswrapper[4712]: I0130 18:00:43.085850 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zg4sq" event={"ID":"36edfc17-99ca-4e05-bf92-d60315860caf","Type":"ContainerStarted","Data":"c9fc25235ad95c34f1f1581bca4ad05c79cee224b253644219c044abde6f57ae"}
Jan 30 18:00:43 crc kubenswrapper[4712]: I0130 18:00:43.086318 4712 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-k4mgv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": dial tcp 10.217.0.65:8080: connect: connection refused" start-of-body=
Jan 30 18:00:43 crc kubenswrapper[4712]: I0130 18:00:43.086360 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-k4mgv" podUID="f757484a-48c2-4b6e-9a6b-1e01fe951ae5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": dial tcp 10.217.0.65:8080: connect: connection refused"
Jan 30 18:00:43 crc kubenswrapper[4712]: I0130 18:00:43.087001 4712 patch_prober.go:28] interesting pod/controller-manager-7854896cc8-wc7q4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body=
Jan 30 18:00:43 crc kubenswrapper[4712]: I0130 18:00:43.087035 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7854896cc8-wc7q4" podUID="48377da3-e59b-4d8e-96df-e71697486469" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused"
Jan 30 18:00:43 crc kubenswrapper[4712]: I0130 18:00:43.087098 4712 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-dg9bq container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused" start-of-body=
Jan 30 18:00:43 crc kubenswrapper[4712]: I0130 18:00:43.087122 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dg9bq" podUID="d9fce980-8342-4614-8cfe-c8757df49d74" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.30:5443/healthz\": dial tcp 10.217.0.30:5443: connect: connection refused"
Jan 30 18:00:43 crc kubenswrapper[4712]: I0130 18:00:43.087361 4712 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-8m9br container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body=
Jan 30 18:00:43 crc kubenswrapper[4712]: I0130 18:00:43.087385 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8m9br" podUID="fd5b1abd-3085-42f2-94a1-a9f06129017c" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused"
Jan 30 18:00:43 crc kubenswrapper[4712]: I0130 18:00:43.088433 4712 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-swvjp container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused" start-of-body=
Jan 30 18:00:43 crc kubenswrapper[4712]: I0130 18:00:43.088480 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-swvjp" podUID="16d2b99c-7fc4-4d10-8ebc-1e726485e354" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.33:8443/healthz\": dial tcp 10.217.0.33:8443: connect: connection refused"
Jan 30 18:00:43 crc kubenswrapper[4712]: I0130 18:00:43.191675 4712 trace.go:236] Trace[1162532483]: "Calculate volume metrics of registry-storage for pod openshift-image-registry/image-registry-66df7c8f76-fszw7" (30-Jan-2026 18:00:41.722) (total time: 1466ms):
Jan 30 18:00:43 crc kubenswrapper[4712]: Trace[1162532483]: [1.466667551s] [1.466667551s] END
Jan 30 18:00:44 crc kubenswrapper[4712]: I0130 18:00:44.100388 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"75293a1cf41992e612309a618ffcbbda153b211d4008f9d97d2efc666a8d9e56"}
Jan 30 18:00:44 crc kubenswrapper[4712]: I0130 18:00:44.100692 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 30 18:00:44 crc kubenswrapper[4712]: I0130 18:00:44.115334 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-7777fb866f-6lnp9_a5836457-3db5-41ec-b036-057186d44de8/openshift-config-operator/1.log"
Jan 30 18:00:44 crc kubenswrapper[4712]: I0130 18:00:44.583867 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="6e0d9187-34f3-4d93-a189-264ff4cc933d" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 18:00:44 crc kubenswrapper[4712]: I0130 18:00:44.870620 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zg4sq"
Jan 30 18:00:44 crc kubenswrapper[4712]: I0130 18:00:44.870898 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zg4sq"
Jan 30 18:00:45 crc kubenswrapper[4712]: I0130 18:00:45.755175 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-5884d87984-t6bbn"
Jan 30 18:00:45 crc kubenswrapper[4712]: I0130 18:00:45.798257 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-x5k4p"
Jan 30 18:00:45 crc kubenswrapper[4712]: I0130 18:00:45.798297 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-x5k4p"
Jan 30 18:00:46 crc kubenswrapper[4712]: I0130 18:00:46.082131 4712 patch_prober.go:28] interesting pod/controller-manager-7854896cc8-wc7q4 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" start-of-body=
Jan 30 18:00:46 crc kubenswrapper[4712]: I0130 18:00:46.082483 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7854896cc8-wc7q4" podUID="48377da3-e59b-4d8e-96df-e71697486469" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused"
\"https://10.217.0.63:8443/healthz\": dial tcp 10.217.0.63:8443: connect: connection refused" Jan 30 18:00:46 crc kubenswrapper[4712]: I0130 18:00:46.136312 4712 generic.go:334] "Generic (PLEG): container finished" podID="e0e4667e-8702-43ae-b7b7-1aa930f9a3c3" containerID="70075a4b3de7920625ff31028d71e274c26740ac40037488429efaaac994792a" exitCode=0 Jan 30 18:00:46 crc kubenswrapper[4712]: I0130 18:00:46.136356 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e0e4667e-8702-43ae-b7b7-1aa930f9a3c3","Type":"ContainerDied","Data":"70075a4b3de7920625ff31028d71e274c26740ac40037488429efaaac994792a"} Jan 30 18:00:46 crc kubenswrapper[4712]: I0130 18:00:46.263970 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-swvjp" Jan 30 18:00:46 crc kubenswrapper[4712]: I0130 18:00:46.327332 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-8m9br" Jan 30 18:00:46 crc kubenswrapper[4712]: I0130 18:00:46.466498 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7449c76d86-5ljsq" Jan 30 18:00:46 crc kubenswrapper[4712]: I0130 18:00:46.786907 4712 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-k4mgv container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": dial tcp 10.217.0.65:8080: connect: connection refused" start-of-body= Jan 30 18:00:46 crc kubenswrapper[4712]: I0130 18:00:46.787891 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-k4mgv" podUID="f757484a-48c2-4b6e-9a6b-1e01fe951ae5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": dial tcp 10.217.0.65:8080: connect: connection refused" Jan 30 18:00:46 crc kubenswrapper[4712]: I0130 18:00:46.786982 4712 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-k4mgv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.65:8080/healthz\": dial tcp 10.217.0.65:8080: connect: connection refused" start-of-body= Jan 30 18:00:46 crc kubenswrapper[4712]: I0130 18:00:46.788028 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-k4mgv" podUID="f757484a-48c2-4b6e-9a6b-1e01fe951ae5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.65:8080/healthz\": dial tcp 10.217.0.65:8080: connect: connection refused" Jan 30 18:00:46 crc kubenswrapper[4712]: I0130 18:00:46.800422 4712 scope.go:117] "RemoveContainer" containerID="f960893331d2846f08481a57a0cdba5af49b5dd727b19f0e09fcdde7d00cb3df" Jan 30 18:00:46 crc kubenswrapper[4712]: I0130 18:00:46.917194 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zg4sq" podUID="36edfc17-99ca-4e05-bf92-d60315860caf" containerName="registry-server" probeResult="failure" output=< Jan 30 18:00:46 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 18:00:46 crc kubenswrapper[4712]: > Jan 30 18:00:47 crc kubenswrapper[4712]: I0130 18:00:47.006575 4712 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-dg9bq container/packageserver 
namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:47 crc kubenswrapper[4712]: I0130 18:00:47.006638 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dg9bq" podUID="d9fce980-8342-4614-8cfe-c8757df49d74" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.30:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:47 crc kubenswrapper[4712]: I0130 18:00:47.006575 4712 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-dg9bq container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.30:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:47 crc kubenswrapper[4712]: I0130 18:00:47.006718 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dg9bq" podUID="d9fce980-8342-4614-8cfe-c8757df49d74" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.30:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:47 crc kubenswrapper[4712]: E0130 18:00:47.134839 4712 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d316f18629c5696446d3e76a4fc94419e782ea4a27f59f7fa064eba029285da" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 30 18:00:47 crc kubenswrapper[4712]: E0130 18:00:47.139634 4712 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d316f18629c5696446d3e76a4fc94419e782ea4a27f59f7fa064eba029285da" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 30 18:00:47 crc kubenswrapper[4712]: E0130 18:00:47.143705 4712 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="3d316f18629c5696446d3e76a4fc94419e782ea4a27f59f7fa064eba029285da" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 30 18:00:47 crc kubenswrapper[4712]: E0130 18:00:47.143815 4712 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="a12f0a95-1db0-4dd9-993c-1413c0fa10b0" containerName="galera" Jan 30 18:00:47 crc kubenswrapper[4712]: I0130 18:00:47.190704 4712 patch_prober.go:28] interesting pod/router-default-5444994796-qncbs container/router namespace/openshift-ingress: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]backend-http ok Jan 30 18:00:47 crc kubenswrapper[4712]: [+]has-synced ok Jan 30 18:00:47 crc kubenswrapper[4712]: 
[-]process-running failed: reason withheld Jan 30 18:00:47 crc kubenswrapper[4712]: healthz check failed Jan 30 18:00:47 crc kubenswrapper[4712]: I0130 18:00:47.191044 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-qncbs" podUID="884f5245-fc6d-42b5-83c2-e3373788e91b" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 18:00:47 crc kubenswrapper[4712]: I0130 18:00:47.292440 4712 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-xq27f container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:00:47 crc kubenswrapper[4712]: I0130 18:00:47.292504 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xq27f" podUID="68eec877-dde8-4b0b-8e78-53a70af78240" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.27:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:00:47 crc kubenswrapper[4712]: I0130 18:00:47.292580 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xq27f" Jan 30 18:00:47 crc kubenswrapper[4712]: I0130 18:00:47.293635 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-xq27f" Jan 30 18:00:47 crc kubenswrapper[4712]: I0130 18:00:47.448183 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="6e0d9187-34f3-4d93-a189-264ff4cc933d" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 30 18:00:48 crc kubenswrapper[4712]: I0130 18:00:48.174293 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerStarted","Data":"22b31f9d70504060260c699d11b96851d1ee0814fb413eb1537e2d821bb3e3fc"} Jan 30 18:00:48 crc kubenswrapper[4712]: I0130 18:00:48.178866 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5444994796-qncbs_884f5245-fc6d-42b5-83c2-e3373788e91b/router/0.log" Jan 30 18:00:48 crc kubenswrapper[4712]: I0130 18:00:48.178914 4712 generic.go:334] "Generic (PLEG): container finished" podID="884f5245-fc6d-42b5-83c2-e3373788e91b" containerID="a5b7c3b62998a91649a4ae0c03d3b15baf9f58d81c2d2c8b873de9cf81369dfb" exitCode=137 Jan 30 18:00:48 crc kubenswrapper[4712]: I0130 18:00:48.179637 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qncbs" event={"ID":"884f5245-fc6d-42b5-83c2-e3373788e91b","Type":"ContainerDied","Data":"a5b7c3b62998a91649a4ae0c03d3b15baf9f58d81c2d2c8b873de9cf81369dfb"} Jan 30 18:00:48 crc kubenswrapper[4712]: I0130 18:00:48.281141 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openstack-operators/openstack-operator-index-x5k4p" podUID="8610a2e0-98ae-41e2-80a0-c66d693024a0" containerName="registry-server" probeResult="failure" output=< Jan 30 18:00:48 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 18:00:48 crc 
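Editor's note: the router probe output above ("[+]backend-http ok ... [-]process-running failed: reason withheld ... healthz check failed" with HTTP 500) is the aggregated-healthz convention: each named check contributes one [+]/[-] line, any failure fails the whole endpoint, and failure reasons are withheld from the body. A small Go sketch of that pattern, with made-up check implementations:

package main

import (
	"fmt"
	"net/http"
)

type check struct {
	name string
	run  func() error
}

// healthz writes one line per check and answers 500 if any check failed.
func healthz(checks []check) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		var body string
		failed := false
		for _, c := range checks {
			if err := c.run(); err != nil {
				failed = true
				body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
			} else {
				body += fmt.Sprintf("[+]%s ok\n", c.name)
			}
		}
		if failed {
			w.WriteHeader(http.StatusInternalServerError)
			fmt.Fprint(w, body, "healthz check failed\n")
			return
		}
		fmt.Fprint(w, body, "ok\n")
	}
}

func main() {
	http.Handle("/healthz", healthz([]check{
		{"backend-http", func() error { return nil }},
		{"has-synced", func() error { return nil }},
		{"process-running", func() error { return fmt.Errorf("not running") }},
	}))
	http.ListenAndServe(":8080", nil)
}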
Jan 30 18:00:48 crc kubenswrapper[4712]: I0130 18:00:48.551668 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-6lnp9"
Jan 30 18:00:48 crc kubenswrapper[4712]: E0130 18:00:48.933814 4712 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 70075a4b3de7920625ff31028d71e274c26740ac40037488429efaaac994792a is running failed: container process not found" containerID="70075a4b3de7920625ff31028d71e274c26740ac40037488429efaaac994792a" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Jan 30 18:00:48 crc kubenswrapper[4712]: E0130 18:00:48.934479 4712 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 70075a4b3de7920625ff31028d71e274c26740ac40037488429efaaac994792a is running failed: container process not found" containerID="70075a4b3de7920625ff31028d71e274c26740ac40037488429efaaac994792a" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Jan 30 18:00:48 crc kubenswrapper[4712]: E0130 18:00:48.935209 4712 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 70075a4b3de7920625ff31028d71e274c26740ac40037488429efaaac994792a is running failed: container process not found" containerID="70075a4b3de7920625ff31028d71e274c26740ac40037488429efaaac994792a" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Jan 30 18:00:48 crc kubenswrapper[4712]: E0130 18:00:48.935242 4712 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 70075a4b3de7920625ff31028d71e274c26740ac40037488429efaaac994792a is running failed: container process not found" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="e0e4667e-8702-43ae-b7b7-1aa930f9a3c3" containerName="galera"
Jan 30 18:00:49 crc kubenswrapper[4712]: I0130 18:00:49.207383 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"e0e4667e-8702-43ae-b7b7-1aa930f9a3c3","Type":"ContainerStarted","Data":"635fd16e7c5a6fa99077aa824e0a51435a07691ebcee41c007c67f0ae4d0b623"}
Jan 30 18:00:49 crc kubenswrapper[4712]: I0130 18:00:49.212061 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5444994796-qncbs_884f5245-fc6d-42b5-83c2-e3373788e91b/router/0.log"
Jan 30 18:00:49 crc kubenswrapper[4712]: I0130 18:00:49.212247 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qncbs" event={"ID":"884f5245-fc6d-42b5-83c2-e3373788e91b","Type":"ContainerStarted","Data":"8af92c7c225332fda85bcbbd798a5186545e5ba71bf347e5988ef911464f0dcb"}
Jan 30 18:00:49 crc kubenswrapper[4712]: I0130 18:00:49.217238 4712 generic.go:334] "Generic (PLEG): container finished" podID="a12f0a95-1db0-4dd9-993c-1413c0fa10b0" containerID="3d316f18629c5696446d3e76a4fc94419e782ea4a27f59f7fa064eba029285da" exitCode=0
Jan 30 18:00:49 crc kubenswrapper[4712]: I0130 18:00:49.217290 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a12f0a95-1db0-4dd9-993c-1413c0fa10b0","Type":"ContainerDied","Data":"3d316f18629c5696446d3e76a4fc94419e782ea4a27f59f7fa064eba029285da"}
Jan 30 18:00:50 crc kubenswrapper[4712]: I0130 18:00:50.082286 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 18:00:50 crc kubenswrapper[4712]: I0130 18:00:50.123406 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="6e0d9187-34f3-4d93-a189-264ff4cc933d" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 30 18:00:50 crc kubenswrapper[4712]: I0130 18:00:50.123482 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 30 18:00:50 crc kubenswrapper[4712]: I0130 18:00:50.123769 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="776ccbe0-fd71-4c0d-877e-f0178e4c1262" containerName="ceilometer-notification-agent" containerID="cri-o://f55f13c0d18cd219a7583bffee8540f878e6bdf852ba9f3550b2b5613ac4c69f" gracePeriod=30
Jan 30 18:00:50 crc kubenswrapper[4712]: I0130 18:00:50.124430 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"48766373bca0fcf88feabac1d8e74a83dc6fa5e41bb6cf3b2dca237131c2c4bb"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed liveness probe, will be restarted"
Jan 30 18:00:50 crc kubenswrapper[4712]: I0130 18:00:50.124504 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="6e0d9187-34f3-4d93-a189-264ff4cc933d" containerName="cinder-scheduler" containerID="cri-o://48766373bca0fcf88feabac1d8e74a83dc6fa5e41bb6cf3b2dca237131c2c4bb" gracePeriod=30
Jan 30 18:00:50 crc kubenswrapper[4712]: I0130 18:00:50.124641 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="776ccbe0-fd71-4c0d-877e-f0178e4c1262" containerName="proxy-httpd" containerID="cri-o://281a470f955e4312b3cfb290e1593f67506ddd67b7553ed2dbf3fd11ddfab11a" gracePeriod=30
Jan 30 18:00:50 crc kubenswrapper[4712]: I0130 18:00:50.124684 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="776ccbe0-fd71-4c0d-877e-f0178e4c1262" containerName="ceilometer-central-agent" containerID="cri-o://1b4931016246d937ce1561ca389dbd25c8729651d5d7b1dd7bf9f7190e2a994b" gracePeriod=30
Jan 30 18:00:50 crc kubenswrapper[4712]: I0130 18:00:50.124903 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="776ccbe0-fd71-4c0d-877e-f0178e4c1262" containerName="sg-core" containerID="cri-o://eabf7bf98471e0c77ef14ff722d51aad209fee815776510511dbfcf2d5c658f0" gracePeriod=30
Jan 30 18:00:50 crc kubenswrapper[4712]: I0130 18:00:50.195654 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-qncbs"
Jan 30 18:00:50 crc kubenswrapper[4712]: I0130 18:00:50.212604 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-qncbs"
Jan 30 18:00:50 crc kubenswrapper[4712]: I0130 18:00:50.231607 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"a12f0a95-1db0-4dd9-993c-1413c0fa10b0","Type":"ContainerStarted","Data":"fc854c359601cdb8786e686e528d1a4732794e1beeacc4479a67e976e9c9d8c8"}
Jan 30 18:00:50 crc kubenswrapper[4712]: I0130 18:00:50.232563 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-qncbs"
Jan 30 18:00:50 crc kubenswrapper[4712]: I0130 18:00:50.256030 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-qncbs"
Jan 30 18:00:51 crc kubenswrapper[4712]: I0130 18:00:51.245160 4712 generic.go:334] "Generic (PLEG): container finished" podID="776ccbe0-fd71-4c0d-877e-f0178e4c1262" containerID="eabf7bf98471e0c77ef14ff722d51aad209fee815776510511dbfcf2d5c658f0" exitCode=2
Jan 30 18:00:51 crc kubenswrapper[4712]: I0130 18:00:51.246231 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"776ccbe0-fd71-4c0d-877e-f0178e4c1262","Type":"ContainerDied","Data":"eabf7bf98471e0c77ef14ff722d51aad209fee815776510511dbfcf2d5c658f0"}
Jan 30 18:00:52 crc kubenswrapper[4712]: I0130 18:00:52.257842 4712 generic.go:334] "Generic (PLEG): container finished" podID="776ccbe0-fd71-4c0d-877e-f0178e4c1262" containerID="281a470f955e4312b3cfb290e1593f67506ddd67b7553ed2dbf3fd11ddfab11a" exitCode=0
Jan 30 18:00:52 crc kubenswrapper[4712]: I0130 18:00:52.258326 4712 generic.go:334] "Generic (PLEG): container finished" podID="776ccbe0-fd71-4c0d-877e-f0178e4c1262" containerID="f55f13c0d18cd219a7583bffee8540f878e6bdf852ba9f3550b2b5613ac4c69f" exitCode=0
Jan 30 18:00:52 crc kubenswrapper[4712]: I0130 18:00:52.257931 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"776ccbe0-fd71-4c0d-877e-f0178e4c1262","Type":"ContainerDied","Data":"281a470f955e4312b3cfb290e1593f67506ddd67b7553ed2dbf3fd11ddfab11a"}
Jan 30 18:00:52 crc kubenswrapper[4712]: I0130 18:00:52.258439 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"776ccbe0-fd71-4c0d-877e-f0178e4c1262","Type":"ContainerDied","Data":"f55f13c0d18cd219a7583bffee8540f878e6bdf852ba9f3550b2b5613ac4c69f"}
Jan 30 18:00:52 crc kubenswrapper[4712]: I0130 18:00:52.743665 4712 scope.go:117] "RemoveContainer" containerID="2a6e156fa9211e0d06ac89346f88b31d9df99adbdc9fe859db6c85e1c1eeb744"
Jan 30 18:00:55 crc kubenswrapper[4712]: I0130 18:00:55.923353 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zg4sq" podUID="36edfc17-99ca-4e05-bf92-d60315860caf" containerName="registry-server" probeResult="failure" output=<
Jan 30 18:00:55 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 18:00:55 crc kubenswrapper[4712]: >
Jan 30 18:00:56 crc kubenswrapper[4712]: I0130 18:00:56.078228 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-x5k4p"
Jan 30 18:00:56 crc kubenswrapper[4712]: I0130 18:00:56.149820 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7854896cc8-wc7q4"
Jan 30 18:00:56 crc kubenswrapper[4712]: I0130 18:00:56.192177 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-dg9bq"
Jan 30 18:00:56 crc kubenswrapper[4712]: I0130 18:00:56.221875 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-x5k4p"
Jan 30 18:00:56 crc kubenswrapper[4712]: I0130 18:00:56.356074 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-58dccfbb96-pxb54"
Jan 30 18:00:56 crc kubenswrapper[4712]: I0130 18:00:56.893990 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-k4mgv"
Jan 30 18:00:57 crc kubenswrapper[4712]: I0130 18:00:57.009612 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-j9bpz"
Jan 30 18:00:57 crc kubenswrapper[4712]: I0130 18:00:57.119780 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Jan 30 18:00:57 crc kubenswrapper[4712]: I0130 18:00:57.119835 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0"
Jan 30 18:00:57 crc kubenswrapper[4712]: I0130 18:00:57.311699 4712 generic.go:334] "Generic (PLEG): container finished" podID="6e0d9187-34f3-4d93-a189-264ff4cc933d" containerID="48766373bca0fcf88feabac1d8e74a83dc6fa5e41bb6cf3b2dca237131c2c4bb" exitCode=0
Jan 30 18:00:57 crc kubenswrapper[4712]: I0130 18:00:57.311780 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6e0d9187-34f3-4d93-a189-264ff4cc933d","Type":"ContainerDied","Data":"48766373bca0fcf88feabac1d8e74a83dc6fa5e41bb6cf3b2dca237131c2c4bb"}
Jan 30 18:00:57 crc kubenswrapper[4712]: I0130 18:00:57.416214 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0"
Jan 30 18:00:58 crc kubenswrapper[4712]: I0130 18:00:58.110363 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0"
Jan 30 18:00:58 crc kubenswrapper[4712]: I0130 18:00:58.919814 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Jan 30 18:00:58 crc kubenswrapper[4712]: I0130 18:00:58.920121 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Jan 30 18:00:59 crc kubenswrapper[4712]: I0130 18:00:59.098327 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0"
Jan 30 18:00:59 crc kubenswrapper[4712]: I0130 18:00:59.483733 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0"
Jan 30 18:01:00 crc kubenswrapper[4712]: I0130 18:01:00.297002 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29496601-zbw5k"]
Jan 30 18:01:00 crc kubenswrapper[4712]: E0130 18:01:00.304691 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad51586a-58c7-4e2e-8098-9e58e9559c5c" containerName="collect-profiles"
Jan 30 18:01:00 crc kubenswrapper[4712]: I0130 18:01:00.305286 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad51586a-58c7-4e2e-8098-9e58e9559c5c" containerName="collect-profiles"
Jan 30 18:01:00 crc kubenswrapper[4712]: I0130 18:01:00.305610 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad51586a-58c7-4e2e-8098-9e58e9559c5c" containerName="collect-profiles"
Jan 30 18:01:00 crc kubenswrapper[4712]: I0130 18:01:00.330259 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496601-zbw5k"
Jan 30 18:01:00 crc kubenswrapper[4712]: I0130 18:01:00.481210 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74dg4\" (UniqueName: \"kubernetes.io/projected/b95a6570-ac24-45a6-92c0-41f38a9d71da-kube-api-access-74dg4\") pod \"keystone-cron-29496601-zbw5k\" (UID: \"b95a6570-ac24-45a6-92c0-41f38a9d71da\") " pod="openstack/keystone-cron-29496601-zbw5k"
Jan 30 18:01:00 crc kubenswrapper[4712]: I0130 18:01:00.481691 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b95a6570-ac24-45a6-92c0-41f38a9d71da-config-data\") pod \"keystone-cron-29496601-zbw5k\" (UID: \"b95a6570-ac24-45a6-92c0-41f38a9d71da\") " pod="openstack/keystone-cron-29496601-zbw5k"
Jan 30 18:01:00 crc kubenswrapper[4712]: I0130 18:01:00.481768 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b95a6570-ac24-45a6-92c0-41f38a9d71da-combined-ca-bundle\") pod \"keystone-cron-29496601-zbw5k\" (UID: \"b95a6570-ac24-45a6-92c0-41f38a9d71da\") " pod="openstack/keystone-cron-29496601-zbw5k"
Jan 30 18:01:00 crc kubenswrapper[4712]: I0130 18:01:00.481912 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b95a6570-ac24-45a6-92c0-41f38a9d71da-fernet-keys\") pod \"keystone-cron-29496601-zbw5k\" (UID: \"b95a6570-ac24-45a6-92c0-41f38a9d71da\") " pod="openstack/keystone-cron-29496601-zbw5k"
Jan 30 18:01:00 crc kubenswrapper[4712]: I0130 18:01:00.505815 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29496601-zbw5k"]
Jan 30 18:01:00 crc kubenswrapper[4712]: I0130 18:01:00.584246 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74dg4\" (UniqueName: \"kubernetes.io/projected/b95a6570-ac24-45a6-92c0-41f38a9d71da-kube-api-access-74dg4\") pod \"keystone-cron-29496601-zbw5k\" (UID: \"b95a6570-ac24-45a6-92c0-41f38a9d71da\") " pod="openstack/keystone-cron-29496601-zbw5k"
Jan 30 18:01:00 crc kubenswrapper[4712]: I0130 18:01:00.584335 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b95a6570-ac24-45a6-92c0-41f38a9d71da-config-data\") pod \"keystone-cron-29496601-zbw5k\" (UID: \"b95a6570-ac24-45a6-92c0-41f38a9d71da\") " pod="openstack/keystone-cron-29496601-zbw5k"
Jan 30 18:01:00 crc kubenswrapper[4712]: I0130 18:01:00.584378 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b95a6570-ac24-45a6-92c0-41f38a9d71da-combined-ca-bundle\") pod \"keystone-cron-29496601-zbw5k\" (UID: \"b95a6570-ac24-45a6-92c0-41f38a9d71da\") " pod="openstack/keystone-cron-29496601-zbw5k"
Jan 30 18:01:00 crc kubenswrapper[4712]: I0130 18:01:00.584402 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b95a6570-ac24-45a6-92c0-41f38a9d71da-fernet-keys\") pod \"keystone-cron-29496601-zbw5k\" (UID: \"b95a6570-ac24-45a6-92c0-41f38a9d71da\") " pod="openstack/keystone-cron-29496601-zbw5k"
Jan 30 18:01:00 crc kubenswrapper[4712]: I0130 18:01:00.618846 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b95a6570-ac24-45a6-92c0-41f38a9d71da-fernet-keys\") pod \"keystone-cron-29496601-zbw5k\" (UID: \"b95a6570-ac24-45a6-92c0-41f38a9d71da\") " pod="openstack/keystone-cron-29496601-zbw5k"
Jan 30 18:01:00 crc kubenswrapper[4712]: I0130 18:01:00.618941 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b95a6570-ac24-45a6-92c0-41f38a9d71da-config-data\") pod \"keystone-cron-29496601-zbw5k\" (UID: \"b95a6570-ac24-45a6-92c0-41f38a9d71da\") " pod="openstack/keystone-cron-29496601-zbw5k"
Jan 30 18:01:00 crc kubenswrapper[4712]: I0130 18:01:00.635965 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b95a6570-ac24-45a6-92c0-41f38a9d71da-combined-ca-bundle\") pod \"keystone-cron-29496601-zbw5k\" (UID: \"b95a6570-ac24-45a6-92c0-41f38a9d71da\") " pod="openstack/keystone-cron-29496601-zbw5k"
Jan 30 18:01:00 crc kubenswrapper[4712]: I0130 18:01:00.646855 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74dg4\" (UniqueName: \"kubernetes.io/projected/b95a6570-ac24-45a6-92c0-41f38a9d71da-kube-api-access-74dg4\") pod \"keystone-cron-29496601-zbw5k\" (UID: \"b95a6570-ac24-45a6-92c0-41f38a9d71da\") " pod="openstack/keystone-cron-29496601-zbw5k"
Jan 30 18:01:00 crc kubenswrapper[4712]: I0130 18:01:00.699156 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496601-zbw5k"
Jan 30 18:01:02 crc kubenswrapper[4712]: I0130 18:01:02.175734 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="776ccbe0-fd71-4c0d-877e-f0178e4c1262" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.223:3000/\": dial tcp 10.217.0.223:3000: connect: connection refused"
Jan 30 18:01:02 crc kubenswrapper[4712]: I0130 18:01:02.357558 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"6e0d9187-34f3-4d93-a189-264ff4cc933d","Type":"ContainerStarted","Data":"c03dddf2c2e288e5089b6ea3104b29a93df67fbf9e41cab0a54af8109aa1a11a"}
Jan 30 18:01:04 crc kubenswrapper[4712]: I0130 18:01:04.089369 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Jan 30 18:01:05 crc kubenswrapper[4712]: I0130 18:01:05.924487 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zg4sq" podUID="36edfc17-99ca-4e05-bf92-d60315860caf" containerName="registry-server" probeResult="failure" output=<
Jan 30 18:01:05 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 18:01:05 crc kubenswrapper[4712]: >
Jan 30 18:01:06 crc kubenswrapper[4712]: I0130 18:01:06.006346 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29496601-zbw5k"]
Jan 30 18:01:06 crc kubenswrapper[4712]: W0130 18:01:06.056478 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb95a6570_ac24_45a6_92c0_41f38a9d71da.slice/crio-c9170a3502697e741856cdf518259a84c2e396dc21086d88315efa8a5dcd9353 WatchSource:0}: Error finding container c9170a3502697e741856cdf518259a84c2e396dc21086d88315efa8a5dcd9353: Status 404 returned error can't find the container with id c9170a3502697e741856cdf518259a84c2e396dc21086d88315efa8a5dcd9353
Jan 30 18:01:06 crc kubenswrapper[4712]: I0130 18:01:06.391437 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496601-zbw5k" event={"ID":"b95a6570-ac24-45a6-92c0-41f38a9d71da","Type":"ContainerStarted","Data":"c9170a3502697e741856cdf518259a84c2e396dc21086d88315efa8a5dcd9353"}
Jan 30 18:01:07 crc kubenswrapper[4712]: I0130 18:01:07.400792 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496601-zbw5k" event={"ID":"b95a6570-ac24-45a6-92c0-41f38a9d71da","Type":"ContainerStarted","Data":"134aeff3ec1bcb866d15777830f847451f6863c808ca6e599d43500786145bb0"}
Jan 30 18:01:07 crc kubenswrapper[4712]: I0130 18:01:07.503505 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29496601-zbw5k" podStartSLOduration=7.418016349 podStartE2EDuration="7.418016349s" podCreationTimestamp="2026-01-30 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 18:01:07.415490489 +0000 UTC m=+4004.322499958" watchObservedRunningTime="2026-01-30 18:01:07.418016349 +0000 UTC m=+4004.325025818"
Jan 30 18:01:09 crc kubenswrapper[4712]: I0130 18:01:09.136882 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Jan 30 18:01:09 crc kubenswrapper[4712]: I0130 18:01:09.370234 4712 trace.go:236] Trace[338599278]: "Calculate volume metrics of catalog-content for pod openshift-marketplace/certified-operators-bs7pg" (30-Jan-2026 18:01:08.062) (total time: 1301ms):
Jan 30 18:01:09 crc kubenswrapper[4712]: Trace[338599278]: [1.301097518s] [1.301097518s] END
Jan 30 18:01:13 crc kubenswrapper[4712]: I0130 18:01:13.448736 4712 generic.go:334] "Generic (PLEG): container finished" podID="b95a6570-ac24-45a6-92c0-41f38a9d71da" containerID="134aeff3ec1bcb866d15777830f847451f6863c808ca6e599d43500786145bb0" exitCode=0
Jan 30 18:01:13 crc kubenswrapper[4712]: I0130 18:01:13.448866 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496601-zbw5k" event={"ID":"b95a6570-ac24-45a6-92c0-41f38a9d71da","Type":"ContainerDied","Data":"134aeff3ec1bcb866d15777830f847451f6863c808ca6e599d43500786145bb0"}
Jan 30 18:01:14 crc kubenswrapper[4712]: I0130 18:01:14.974642 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496601-zbw5k"
Jan 30 18:01:15 crc kubenswrapper[4712]: I0130 18:01:15.055160 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74dg4\" (UniqueName: \"kubernetes.io/projected/b95a6570-ac24-45a6-92c0-41f38a9d71da-kube-api-access-74dg4\") pod \"b95a6570-ac24-45a6-92c0-41f38a9d71da\" (UID: \"b95a6570-ac24-45a6-92c0-41f38a9d71da\") "
Jan 30 18:01:15 crc kubenswrapper[4712]: I0130 18:01:15.055360 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b95a6570-ac24-45a6-92c0-41f38a9d71da-combined-ca-bundle\") pod \"b95a6570-ac24-45a6-92c0-41f38a9d71da\" (UID: \"b95a6570-ac24-45a6-92c0-41f38a9d71da\") "
Jan 30 18:01:15 crc kubenswrapper[4712]: I0130 18:01:15.055447 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b95a6570-ac24-45a6-92c0-41f38a9d71da-fernet-keys\") pod \"b95a6570-ac24-45a6-92c0-41f38a9d71da\" (UID: \"b95a6570-ac24-45a6-92c0-41f38a9d71da\") "
Jan 30 18:01:15 crc kubenswrapper[4712]: I0130 18:01:15.055492 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b95a6570-ac24-45a6-92c0-41f38a9d71da-config-data\") pod \"b95a6570-ac24-45a6-92c0-41f38a9d71da\" (UID: \"b95a6570-ac24-45a6-92c0-41f38a9d71da\") "
Jan 30 18:01:15 crc kubenswrapper[4712]: I0130 18:01:15.077233 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b95a6570-ac24-45a6-92c0-41f38a9d71da-kube-api-access-74dg4" (OuterVolumeSpecName: "kube-api-access-74dg4") pod "b95a6570-ac24-45a6-92c0-41f38a9d71da" (UID: "b95a6570-ac24-45a6-92c0-41f38a9d71da"). InnerVolumeSpecName "kube-api-access-74dg4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 18:01:15 crc kubenswrapper[4712]: I0130 18:01:15.078965 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b95a6570-ac24-45a6-92c0-41f38a9d71da-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b95a6570-ac24-45a6-92c0-41f38a9d71da" (UID: "b95a6570-ac24-45a6-92c0-41f38a9d71da"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 18:01:15 crc kubenswrapper[4712]: I0130 18:01:15.110842 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b95a6570-ac24-45a6-92c0-41f38a9d71da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b95a6570-ac24-45a6-92c0-41f38a9d71da" (UID: "b95a6570-ac24-45a6-92c0-41f38a9d71da"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 18:01:15 crc kubenswrapper[4712]: I0130 18:01:15.135044 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b95a6570-ac24-45a6-92c0-41f38a9d71da-config-data" (OuterVolumeSpecName: "config-data") pod "b95a6570-ac24-45a6-92c0-41f38a9d71da" (UID: "b95a6570-ac24-45a6-92c0-41f38a9d71da"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 18:01:15 crc kubenswrapper[4712]: I0130 18:01:15.158414 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b95a6570-ac24-45a6-92c0-41f38a9d71da-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 30 18:01:15 crc kubenswrapper[4712]: I0130 18:01:15.158454 4712 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b95a6570-ac24-45a6-92c0-41f38a9d71da-fernet-keys\") on node \"crc\" DevicePath \"\""
Jan 30 18:01:15 crc kubenswrapper[4712]: I0130 18:01:15.158466 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b95a6570-ac24-45a6-92c0-41f38a9d71da-config-data\") on node \"crc\" DevicePath \"\""
Jan 30 18:01:15 crc kubenswrapper[4712]: I0130 18:01:15.158477 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74dg4\" (UniqueName: \"kubernetes.io/projected/b95a6570-ac24-45a6-92c0-41f38a9d71da-kube-api-access-74dg4\") on node \"crc\" DevicePath \"\""
Jan 30 18:01:15 crc kubenswrapper[4712]: I0130 18:01:15.466654 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496601-zbw5k" event={"ID":"b95a6570-ac24-45a6-92c0-41f38a9d71da","Type":"ContainerDied","Data":"c9170a3502697e741856cdf518259a84c2e396dc21086d88315efa8a5dcd9353"}
Jan 30 18:01:15 crc kubenswrapper[4712]: I0130 18:01:15.466856 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496601-zbw5k"
Jan 30 18:01:15 crc kubenswrapper[4712]: I0130 18:01:15.467248 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9170a3502697e741856cdf518259a84c2e396dc21086d88315efa8a5dcd9353"
Jan 30 18:01:15 crc kubenswrapper[4712]: I0130 18:01:15.922874 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zg4sq" podUID="36edfc17-99ca-4e05-bf92-d60315860caf" containerName="registry-server" probeResult="failure" output=<
Jan 30 18:01:15 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 18:01:15 crc kubenswrapper[4712]: >
Jan 30 18:01:19 crc kubenswrapper[4712]: I0130 18:01:19.850512 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-bs7pg" podUID="eaba725b-6442-4a5b-adc9-16047823dc86" containerName="registry-server" probeResult="failure" output=<
Jan 30 18:01:19 crc kubenswrapper[4712]: timeout: health rpc did not complete within 1s
Jan 30 18:01:19 crc kubenswrapper[4712]: >
Jan 30 18:01:21 crc kubenswrapper[4712]: I0130 18:01:21.555643 4712 generic.go:334] "Generic (PLEG): container finished" podID="776ccbe0-fd71-4c0d-877e-f0178e4c1262" containerID="1b4931016246d937ce1561ca389dbd25c8729651d5d7b1dd7bf9f7190e2a994b" exitCode=137
Jan 30 18:01:21 crc kubenswrapper[4712]: I0130 18:01:21.555728 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"776ccbe0-fd71-4c0d-877e-f0178e4c1262","Type":"ContainerDied","Data":"1b4931016246d937ce1561ca389dbd25c8729651d5d7b1dd7bf9f7190e2a994b"}
Jan 30 18:01:21 crc kubenswrapper[4712]: I0130 18:01:21.556187 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"776ccbe0-fd71-4c0d-877e-f0178e4c1262","Type":"ContainerDied","Data":"70a7c822f558fc1e4c0d67cad600c35a67fdbeba5f78d3d89ab7684face1ed99"}
Jan 30 18:01:21 crc kubenswrapper[4712]: I0130 18:01:21.556201 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70a7c822f558fc1e4c0d67cad600c35a67fdbeba5f78d3d89ab7684face1ed99"
kubenswrapper[4712]: I0130 18:01:21.556201 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70a7c822f558fc1e4c0d67cad600c35a67fdbeba5f78d3d89ab7684face1ed99" Jan 30 18:01:21 crc kubenswrapper[4712]: I0130 18:01:21.556218 4712 scope.go:117] "RemoveContainer" containerID="d7c2847e6873da314843f10f5a1edc47d102f60f3f89eab53cd78ef02a17e642" Jan 30 18:01:21 crc kubenswrapper[4712]: I0130 18:01:21.558405 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 30 18:01:21 crc kubenswrapper[4712]: I0130 18:01:21.692815 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9z5f\" (UniqueName: \"kubernetes.io/projected/776ccbe0-fd71-4c0d-877e-f0178e4c1262-kube-api-access-p9z5f\") pod \"776ccbe0-fd71-4c0d-877e-f0178e4c1262\" (UID: \"776ccbe0-fd71-4c0d-877e-f0178e4c1262\") " Jan 30 18:01:21 crc kubenswrapper[4712]: I0130 18:01:21.692883 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/776ccbe0-fd71-4c0d-877e-f0178e4c1262-log-httpd\") pod \"776ccbe0-fd71-4c0d-877e-f0178e4c1262\" (UID: \"776ccbe0-fd71-4c0d-877e-f0178e4c1262\") " Jan 30 18:01:21 crc kubenswrapper[4712]: I0130 18:01:21.692916 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/776ccbe0-fd71-4c0d-877e-f0178e4c1262-sg-core-conf-yaml\") pod \"776ccbe0-fd71-4c0d-877e-f0178e4c1262\" (UID: \"776ccbe0-fd71-4c0d-877e-f0178e4c1262\") " Jan 30 18:01:21 crc kubenswrapper[4712]: I0130 18:01:21.692997 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/776ccbe0-fd71-4c0d-877e-f0178e4c1262-ceilometer-tls-certs\") pod \"776ccbe0-fd71-4c0d-877e-f0178e4c1262\" (UID: \"776ccbe0-fd71-4c0d-877e-f0178e4c1262\") " Jan 30 18:01:21 crc kubenswrapper[4712]: I0130 18:01:21.693044 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/776ccbe0-fd71-4c0d-877e-f0178e4c1262-combined-ca-bundle\") pod \"776ccbe0-fd71-4c0d-877e-f0178e4c1262\" (UID: \"776ccbe0-fd71-4c0d-877e-f0178e4c1262\") " Jan 30 18:01:21 crc kubenswrapper[4712]: I0130 18:01:21.693076 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/776ccbe0-fd71-4c0d-877e-f0178e4c1262-run-httpd\") pod \"776ccbe0-fd71-4c0d-877e-f0178e4c1262\" (UID: \"776ccbe0-fd71-4c0d-877e-f0178e4c1262\") " Jan 30 18:01:21 crc kubenswrapper[4712]: I0130 18:01:21.693102 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/776ccbe0-fd71-4c0d-877e-f0178e4c1262-scripts\") pod \"776ccbe0-fd71-4c0d-877e-f0178e4c1262\" (UID: \"776ccbe0-fd71-4c0d-877e-f0178e4c1262\") " Jan 30 18:01:21 crc kubenswrapper[4712]: I0130 18:01:21.693181 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/776ccbe0-fd71-4c0d-877e-f0178e4c1262-config-data\") pod \"776ccbe0-fd71-4c0d-877e-f0178e4c1262\" (UID: \"776ccbe0-fd71-4c0d-877e-f0178e4c1262\") " Jan 30 18:01:21 crc kubenswrapper[4712]: I0130 18:01:21.702628 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/776ccbe0-fd71-4c0d-877e-f0178e4c1262-scripts" (OuterVolumeSpecName: "scripts") pod "776ccbe0-fd71-4c0d-877e-f0178e4c1262" (UID: "776ccbe0-fd71-4c0d-877e-f0178e4c1262"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:01:21 crc kubenswrapper[4712]: I0130 18:01:21.721602 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/776ccbe0-fd71-4c0d-877e-f0178e4c1262-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "776ccbe0-fd71-4c0d-877e-f0178e4c1262" (UID: "776ccbe0-fd71-4c0d-877e-f0178e4c1262"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:01:21 crc kubenswrapper[4712]: I0130 18:01:21.724301 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/776ccbe0-fd71-4c0d-877e-f0178e4c1262-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "776ccbe0-fd71-4c0d-877e-f0178e4c1262" (UID: "776ccbe0-fd71-4c0d-877e-f0178e4c1262"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:01:21 crc kubenswrapper[4712]: I0130 18:01:21.736325 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/776ccbe0-fd71-4c0d-877e-f0178e4c1262-kube-api-access-p9z5f" (OuterVolumeSpecName: "kube-api-access-p9z5f") pod "776ccbe0-fd71-4c0d-877e-f0178e4c1262" (UID: "776ccbe0-fd71-4c0d-877e-f0178e4c1262"). InnerVolumeSpecName "kube-api-access-p9z5f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:01:21 crc kubenswrapper[4712]: I0130 18:01:21.777043 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/776ccbe0-fd71-4c0d-877e-f0178e4c1262-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "776ccbe0-fd71-4c0d-877e-f0178e4c1262" (UID: "776ccbe0-fd71-4c0d-877e-f0178e4c1262"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:01:21 crc kubenswrapper[4712]: I0130 18:01:21.791577 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/776ccbe0-fd71-4c0d-877e-f0178e4c1262-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "776ccbe0-fd71-4c0d-877e-f0178e4c1262" (UID: "776ccbe0-fd71-4c0d-877e-f0178e4c1262"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:01:21 crc kubenswrapper[4712]: I0130 18:01:21.795765 4712 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/776ccbe0-fd71-4c0d-877e-f0178e4c1262-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:21 crc kubenswrapper[4712]: I0130 18:01:21.795909 4712 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/776ccbe0-fd71-4c0d-877e-f0178e4c1262-scripts\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:21 crc kubenswrapper[4712]: I0130 18:01:21.795995 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9z5f\" (UniqueName: \"kubernetes.io/projected/776ccbe0-fd71-4c0d-877e-f0178e4c1262-kube-api-access-p9z5f\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:21 crc kubenswrapper[4712]: I0130 18:01:21.796066 4712 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/776ccbe0-fd71-4c0d-877e-f0178e4c1262-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:21 crc kubenswrapper[4712]: I0130 18:01:21.796133 4712 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/776ccbe0-fd71-4c0d-877e-f0178e4c1262-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:21 crc kubenswrapper[4712]: I0130 18:01:21.796205 4712 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/776ccbe0-fd71-4c0d-877e-f0178e4c1262-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:21 crc kubenswrapper[4712]: I0130 18:01:21.845041 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/776ccbe0-fd71-4c0d-877e-f0178e4c1262-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "776ccbe0-fd71-4c0d-877e-f0178e4c1262" (UID: "776ccbe0-fd71-4c0d-877e-f0178e4c1262"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:01:21 crc kubenswrapper[4712]: I0130 18:01:21.890265 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/776ccbe0-fd71-4c0d-877e-f0178e4c1262-config-data" (OuterVolumeSpecName: "config-data") pod "776ccbe0-fd71-4c0d-877e-f0178e4c1262" (UID: "776ccbe0-fd71-4c0d-877e-f0178e4c1262"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:01:21 crc kubenswrapper[4712]: I0130 18:01:21.899393 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/776ccbe0-fd71-4c0d-877e-f0178e4c1262-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:21 crc kubenswrapper[4712]: I0130 18:01:21.899489 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/776ccbe0-fd71-4c0d-877e-f0178e4c1262-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 18:01:22 crc kubenswrapper[4712]: I0130 18:01:22.570304 4712 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 18:01:22 crc kubenswrapper[4712]: I0130 18:01:22.625205 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 18:01:22 crc kubenswrapper[4712]: I0130 18:01:22.642033 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 18:01:22 crc kubenswrapper[4712]: I0130 18:01:22.664813 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 30 18:01:22 crc kubenswrapper[4712]: E0130 18:01:22.665188 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="776ccbe0-fd71-4c0d-877e-f0178e4c1262" containerName="ceilometer-central-agent"
Jan 30 18:01:22 crc kubenswrapper[4712]: I0130 18:01:22.665207 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="776ccbe0-fd71-4c0d-877e-f0178e4c1262" containerName="ceilometer-central-agent"
Jan 30 18:01:22 crc kubenswrapper[4712]: E0130 18:01:22.665217 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="776ccbe0-fd71-4c0d-877e-f0178e4c1262" containerName="sg-core"
Jan 30 18:01:22 crc kubenswrapper[4712]: I0130 18:01:22.665224 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="776ccbe0-fd71-4c0d-877e-f0178e4c1262" containerName="sg-core"
Jan 30 18:01:22 crc kubenswrapper[4712]: E0130 18:01:22.665239 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b95a6570-ac24-45a6-92c0-41f38a9d71da" containerName="keystone-cron"
Jan 30 18:01:22 crc kubenswrapper[4712]: I0130 18:01:22.665245 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="b95a6570-ac24-45a6-92c0-41f38a9d71da" containerName="keystone-cron"
Jan 30 18:01:22 crc kubenswrapper[4712]: E0130 18:01:22.665253 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="776ccbe0-fd71-4c0d-877e-f0178e4c1262" containerName="proxy-httpd"
Jan 30 18:01:22 crc kubenswrapper[4712]: I0130 18:01:22.665258 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="776ccbe0-fd71-4c0d-877e-f0178e4c1262" containerName="proxy-httpd"
Jan 30 18:01:22 crc kubenswrapper[4712]: E0130 18:01:22.665279 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="776ccbe0-fd71-4c0d-877e-f0178e4c1262" containerName="ceilometer-notification-agent"
Jan 30 18:01:22 crc kubenswrapper[4712]: I0130 18:01:22.665285 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="776ccbe0-fd71-4c0d-877e-f0178e4c1262" containerName="ceilometer-notification-agent"
Jan 30 18:01:22 crc kubenswrapper[4712]: I0130 18:01:22.665460 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="776ccbe0-fd71-4c0d-877e-f0178e4c1262" containerName="ceilometer-notification-agent"
Jan 30 18:01:22 crc kubenswrapper[4712]: I0130 18:01:22.665474 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="776ccbe0-fd71-4c0d-877e-f0178e4c1262" containerName="sg-core"
Jan 30 18:01:22 crc kubenswrapper[4712]: I0130 18:01:22.665489 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="776ccbe0-fd71-4c0d-877e-f0178e4c1262" containerName="ceilometer-central-agent"
Jan 30 18:01:22 crc kubenswrapper[4712]: I0130 18:01:22.665500 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="b95a6570-ac24-45a6-92c0-41f38a9d71da" containerName="keystone-cron"
Jan 30 18:01:22 crc kubenswrapper[4712]: I0130 18:01:22.665510 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="776ccbe0-fd71-4c0d-877e-f0178e4c1262" containerName="ceilometer-central-agent"
Jan 30 18:01:22 crc kubenswrapper[4712]: I0130 18:01:22.665521 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="776ccbe0-fd71-4c0d-877e-f0178e4c1262" containerName="proxy-httpd"
Jan 30 18:01:22 crc kubenswrapper[4712]: E0130 18:01:22.665703 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="776ccbe0-fd71-4c0d-877e-f0178e4c1262" containerName="ceilometer-central-agent"
Jan 30 18:01:22 crc kubenswrapper[4712]: I0130 18:01:22.665715 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="776ccbe0-fd71-4c0d-877e-f0178e4c1262" containerName="ceilometer-central-agent"
Jan 30 18:01:22 crc kubenswrapper[4712]: I0130 18:01:22.667088 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 30 18:01:22 crc kubenswrapper[4712]: I0130 18:01:22.673351 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Jan 30 18:01:22 crc kubenswrapper[4712]: I0130 18:01:22.673812 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 30 18:01:22 crc kubenswrapper[4712]: I0130 18:01:22.679656 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 30 18:01:22 crc kubenswrapper[4712]: I0130 18:01:22.686479 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 18:01:22 crc kubenswrapper[4712]: I0130 18:01:22.712109 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d28763e8-26ec-4ba2-b944-1c84c2b81bf0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d28763e8-26ec-4ba2-b944-1c84c2b81bf0\") " pod="openstack/ceilometer-0"
Jan 30 18:01:22 crc kubenswrapper[4712]: I0130 18:01:22.712336 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d28763e8-26ec-4ba2-b944-1c84c2b81bf0-log-httpd\") pod \"ceilometer-0\" (UID: \"d28763e8-26ec-4ba2-b944-1c84c2b81bf0\") " pod="openstack/ceilometer-0"
Jan 30 18:01:22 crc kubenswrapper[4712]: I0130 18:01:22.712468 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d28763e8-26ec-4ba2-b944-1c84c2b81bf0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d28763e8-26ec-4ba2-b944-1c84c2b81bf0\") " pod="openstack/ceilometer-0"
Jan 30 18:01:22 crc kubenswrapper[4712]: I0130 18:01:22.712563 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d28763e8-26ec-4ba2-b944-1c84c2b81bf0-scripts\") pod \"ceilometer-0\" (UID: \"d28763e8-26ec-4ba2-b944-1c84c2b81bf0\") " pod="openstack/ceilometer-0"
Jan 30 18:01:22 crc kubenswrapper[4712]: I0130 18:01:22.712644 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d28763e8-26ec-4ba2-b944-1c84c2b81bf0-config-data\") pod \"ceilometer-0\" (UID: \"d28763e8-26ec-4ba2-b944-1c84c2b81bf0\") " pod="openstack/ceilometer-0"
Jan 30 18:01:22 crc kubenswrapper[4712]: I0130 18:01:22.712735 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7nzv\" (UniqueName: \"kubernetes.io/projected/d28763e8-26ec-4ba2-b944-1c84c2b81bf0-kube-api-access-z7nzv\") pod \"ceilometer-0\" (UID: \"d28763e8-26ec-4ba2-b944-1c84c2b81bf0\") " pod="openstack/ceilometer-0"
\"kubernetes.io/projected/d28763e8-26ec-4ba2-b944-1c84c2b81bf0-kube-api-access-z7nzv\") pod \"ceilometer-0\" (UID: \"d28763e8-26ec-4ba2-b944-1c84c2b81bf0\") " pod="openstack/ceilometer-0" Jan 30 18:01:22 crc kubenswrapper[4712]: I0130 18:01:22.712819 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d28763e8-26ec-4ba2-b944-1c84c2b81bf0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d28763e8-26ec-4ba2-b944-1c84c2b81bf0\") " pod="openstack/ceilometer-0" Jan 30 18:01:22 crc kubenswrapper[4712]: I0130 18:01:22.712910 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d28763e8-26ec-4ba2-b944-1c84c2b81bf0-run-httpd\") pod \"ceilometer-0\" (UID: \"d28763e8-26ec-4ba2-b944-1c84c2b81bf0\") " pod="openstack/ceilometer-0" Jan 30 18:01:22 crc kubenswrapper[4712]: I0130 18:01:22.815096 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d28763e8-26ec-4ba2-b944-1c84c2b81bf0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d28763e8-26ec-4ba2-b944-1c84c2b81bf0\") " pod="openstack/ceilometer-0" Jan 30 18:01:22 crc kubenswrapper[4712]: I0130 18:01:22.815147 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d28763e8-26ec-4ba2-b944-1c84c2b81bf0-scripts\") pod \"ceilometer-0\" (UID: \"d28763e8-26ec-4ba2-b944-1c84c2b81bf0\") " pod="openstack/ceilometer-0" Jan 30 18:01:22 crc kubenswrapper[4712]: I0130 18:01:22.815171 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d28763e8-26ec-4ba2-b944-1c84c2b81bf0-config-data\") pod \"ceilometer-0\" (UID: \"d28763e8-26ec-4ba2-b944-1c84c2b81bf0\") " pod="openstack/ceilometer-0" Jan 30 18:01:22 crc kubenswrapper[4712]: I0130 18:01:22.815207 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7nzv\" (UniqueName: \"kubernetes.io/projected/d28763e8-26ec-4ba2-b944-1c84c2b81bf0-kube-api-access-z7nzv\") pod \"ceilometer-0\" (UID: \"d28763e8-26ec-4ba2-b944-1c84c2b81bf0\") " pod="openstack/ceilometer-0" Jan 30 18:01:22 crc kubenswrapper[4712]: I0130 18:01:22.815226 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d28763e8-26ec-4ba2-b944-1c84c2b81bf0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d28763e8-26ec-4ba2-b944-1c84c2b81bf0\") " pod="openstack/ceilometer-0" Jan 30 18:01:22 crc kubenswrapper[4712]: I0130 18:01:22.815261 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d28763e8-26ec-4ba2-b944-1c84c2b81bf0-run-httpd\") pod \"ceilometer-0\" (UID: \"d28763e8-26ec-4ba2-b944-1c84c2b81bf0\") " pod="openstack/ceilometer-0" Jan 30 18:01:22 crc kubenswrapper[4712]: I0130 18:01:22.815317 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d28763e8-26ec-4ba2-b944-1c84c2b81bf0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d28763e8-26ec-4ba2-b944-1c84c2b81bf0\") " pod="openstack/ceilometer-0" Jan 30 18:01:22 crc kubenswrapper[4712]: I0130 18:01:22.815345 4712 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d28763e8-26ec-4ba2-b944-1c84c2b81bf0-log-httpd\") pod \"ceilometer-0\" (UID: \"d28763e8-26ec-4ba2-b944-1c84c2b81bf0\") " pod="openstack/ceilometer-0" Jan 30 18:01:23 crc kubenswrapper[4712]: I0130 18:01:23.090648 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d28763e8-26ec-4ba2-b944-1c84c2b81bf0-log-httpd\") pod \"ceilometer-0\" (UID: \"d28763e8-26ec-4ba2-b944-1c84c2b81bf0\") " pod="openstack/ceilometer-0" Jan 30 18:01:23 crc kubenswrapper[4712]: I0130 18:01:23.095823 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d28763e8-26ec-4ba2-b944-1c84c2b81bf0-run-httpd\") pod \"ceilometer-0\" (UID: \"d28763e8-26ec-4ba2-b944-1c84c2b81bf0\") " pod="openstack/ceilometer-0" Jan 30 18:01:23 crc kubenswrapper[4712]: I0130 18:01:23.105936 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d28763e8-26ec-4ba2-b944-1c84c2b81bf0-config-data\") pod \"ceilometer-0\" (UID: \"d28763e8-26ec-4ba2-b944-1c84c2b81bf0\") " pod="openstack/ceilometer-0" Jan 30 18:01:23 crc kubenswrapper[4712]: I0130 18:01:23.106462 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d28763e8-26ec-4ba2-b944-1c84c2b81bf0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d28763e8-26ec-4ba2-b944-1c84c2b81bf0\") " pod="openstack/ceilometer-0" Jan 30 18:01:23 crc kubenswrapper[4712]: I0130 18:01:23.107310 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d28763e8-26ec-4ba2-b944-1c84c2b81bf0-scripts\") pod \"ceilometer-0\" (UID: \"d28763e8-26ec-4ba2-b944-1c84c2b81bf0\") " pod="openstack/ceilometer-0" Jan 30 18:01:23 crc kubenswrapper[4712]: I0130 18:01:23.110565 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/d28763e8-26ec-4ba2-b944-1c84c2b81bf0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"d28763e8-26ec-4ba2-b944-1c84c2b81bf0\") " pod="openstack/ceilometer-0" Jan 30 18:01:23 crc kubenswrapper[4712]: I0130 18:01:23.112155 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d28763e8-26ec-4ba2-b944-1c84c2b81bf0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d28763e8-26ec-4ba2-b944-1c84c2b81bf0\") " pod="openstack/ceilometer-0" Jan 30 18:01:23 crc kubenswrapper[4712]: I0130 18:01:23.127319 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7nzv\" (UniqueName: \"kubernetes.io/projected/d28763e8-26ec-4ba2-b944-1c84c2b81bf0-kube-api-access-z7nzv\") pod \"ceilometer-0\" (UID: \"d28763e8-26ec-4ba2-b944-1c84c2b81bf0\") " pod="openstack/ceilometer-0" Jan 30 18:01:23 crc kubenswrapper[4712]: I0130 18:01:23.289281 4712 util.go:30] "No sandbox for pod can be found. 
Jan 30 18:01:23 crc kubenswrapper[4712]: I0130 18:01:23.830647 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="776ccbe0-fd71-4c0d-877e-f0178e4c1262" path="/var/lib/kubelet/pods/776ccbe0-fd71-4c0d-877e-f0178e4c1262/volumes"
Jan 30 18:01:24 crc kubenswrapper[4712]: I0130 18:01:24.092722 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 30 18:01:24 crc kubenswrapper[4712]: W0130 18:01:24.117351 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd28763e8_26ec_4ba2_b944_1c84c2b81bf0.slice/crio-942cecd951cc5522a814ea078e459b02baff146dbb58b494c40bcf7d60ad685d WatchSource:0}: Error finding container 942cecd951cc5522a814ea078e459b02baff146dbb58b494c40bcf7d60ad685d: Status 404 returned error can't find the container with id 942cecd951cc5522a814ea078e459b02baff146dbb58b494c40bcf7d60ad685d
Jan 30 18:01:24 crc kubenswrapper[4712]: I0130 18:01:24.587662 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d28763e8-26ec-4ba2-b944-1c84c2b81bf0","Type":"ContainerStarted","Data":"942cecd951cc5522a814ea078e459b02baff146dbb58b494c40bcf7d60ad685d"}
Jan 30 18:01:25 crc kubenswrapper[4712]: I0130 18:01:25.921830 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zg4sq" podUID="36edfc17-99ca-4e05-bf92-d60315860caf" containerName="registry-server" probeResult="failure" output=<
Jan 30 18:01:25 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 18:01:25 crc kubenswrapper[4712]: >
Jan 30 18:01:26 crc kubenswrapper[4712]: I0130 18:01:26.604022 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d28763e8-26ec-4ba2-b944-1c84c2b81bf0","Type":"ContainerStarted","Data":"b99144acaa4f4f4500134697e3aa6cdec4481b21459258f31c947b7467fd36ce"}
Jan 30 18:01:28 crc kubenswrapper[4712]: I0130 18:01:28.622622 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d28763e8-26ec-4ba2-b944-1c84c2b81bf0","Type":"ContainerStarted","Data":"74b9af8889ca4abf34dc8eca0ed571dc4738274609432146cb2dd5aa26983898"}
Jan 30 18:01:29 crc kubenswrapper[4712]: I0130 18:01:29.634745 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d28763e8-26ec-4ba2-b944-1c84c2b81bf0","Type":"ContainerStarted","Data":"d2dc803f2291fed77047ec50e50c081613f93c5630ea34f4752202d1ff5e7640"}
Jan 30 18:01:33 crc kubenswrapper[4712]: I0130 18:01:33.671702 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d28763e8-26ec-4ba2-b944-1c84c2b81bf0","Type":"ContainerStarted","Data":"fcad61363fe4fa48fae14aea00a2c3909e42cd9f8ee78cafabc29159537c14ba"}
Jan 30 18:01:33 crc kubenswrapper[4712]: I0130 18:01:33.672158 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 30 18:01:34 crc kubenswrapper[4712]: I0130 18:01:34.427117 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 30 18:01:34 crc kubenswrapper[4712]: I0130 18:01:34.451245 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.339652717 podStartE2EDuration="12.451222628s" podCreationTimestamp="2026-01-30 18:01:22 +0000 UTC" firstStartedPulling="2026-01-30 18:01:24.1210987 +0000 UTC m=+4021.028108159" lastFinishedPulling="2026-01-30 18:01:33.232668601 +0000 UTC m=+4030.139678070" observedRunningTime="2026-01-30 18:01:33.69907252 +0000 UTC m=+4030.606081989" watchObservedRunningTime="2026-01-30 18:01:34.451222628 +0000 UTC m=+4031.358232107"
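The pod_startup_latency_tracker record above carries enough data to check its own numbers: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration comes out as that end-to-end figure with the image-pull window removed. A back-of-envelope check in Go using the monotonic (m=+...) readings; the subtraction rule is an assumption about how the tracker derives the SLO figure, though it reproduces the logged value exactly:

package main

import "fmt"

func main() {
	// Monotonic m=+ offsets (seconds) copied from the record above.
	const (
		firstStartedPulling = 4021.028108159
		lastFinishedPulling = 4030.139678070
		podStartE2EDuration = 12.451222628 // watchObservedRunningTime - podCreationTimestamp
	)
	pull := lastFinishedPulling - firstStartedPulling
	fmt.Printf("pull=%.9fs slo=%.9fs\n", pull, podStartE2EDuration-pull)
	// pull=9.111569911s slo=3.339652717s, matching podStartSLOduration=3.339652717.
}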
Jan 30 18:01:35 crc kubenswrapper[4712]: I0130 18:01:35.921665 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zg4sq" podUID="36edfc17-99ca-4e05-bf92-d60315860caf" containerName="registry-server" probeResult="failure" output=<
Jan 30 18:01:35 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 18:01:35 crc kubenswrapper[4712]: >
Jan 30 18:01:45 crc kubenswrapper[4712]: I0130 18:01:45.932531 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zg4sq" podUID="36edfc17-99ca-4e05-bf92-d60315860caf" containerName="registry-server" probeResult="failure" output=<
Jan 30 18:01:45 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 18:01:45 crc kubenswrapper[4712]: >
Jan 30 18:01:52 crc kubenswrapper[4712]: I0130 18:01:52.886198 4712 scope.go:117] "RemoveContainer" containerID="eabf7bf98471e0c77ef14ff722d51aad209fee815776510511dbfcf2d5c658f0"
Jan 30 18:01:52 crc kubenswrapper[4712]: I0130 18:01:52.955533 4712 scope.go:117] "RemoveContainer" containerID="281a470f955e4312b3cfb290e1593f67506ddd67b7553ed2dbf3fd11ddfab11a"
Jan 30 18:01:53 crc kubenswrapper[4712]: I0130 18:01:53.002158 4712 scope.go:117] "RemoveContainer" containerID="f55f13c0d18cd219a7583bffee8540f878e6bdf852ba9f3550b2b5613ac4c69f"
Jan 30 18:01:53 crc kubenswrapper[4712]: I0130 18:01:53.303504 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 30 18:01:55 crc kubenswrapper[4712]: I0130 18:01:55.915782 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zg4sq" podUID="36edfc17-99ca-4e05-bf92-d60315860caf" containerName="registry-server" probeResult="failure" output=<
Jan 30 18:01:55 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 18:01:55 crc kubenswrapper[4712]: >
Jan 30 18:02:05 crc kubenswrapper[4712]: I0130 18:02:05.925965 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zg4sq" podUID="36edfc17-99ca-4e05-bf92-d60315860caf" containerName="registry-server" probeResult="failure" output=<
Jan 30 18:02:05 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 18:02:05 crc kubenswrapper[4712]: >
Jan 30 18:02:15 crc kubenswrapper[4712]: I0130 18:02:15.926479 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zg4sq" podUID="36edfc17-99ca-4e05-bf92-d60315860caf" containerName="registry-server" probeResult="failure" output=<
Jan 30 18:02:15 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 18:02:15 crc kubenswrapper[4712]: >
Jan 30 18:02:15 crc kubenswrapper[4712]: I0130 18:02:15.927291 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zg4sq"
Jan 30 18:02:15 crc kubenswrapper[4712]: I0130 18:02:15.989688 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"c9fc25235ad95c34f1f1581bca4ad05c79cee224b253644219c044abde6f57ae"} pod="openshift-marketplace/redhat-operators-zg4sq" containerMessage="Container registry-server failed startup probe, will be restarted"
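The restart decision above follows a long run of identical startup-probe failures: the probe cannot reach the registry-server's gRPC endpoint on :50051 inside its 1s budget, and the "timeout: failed to connect service" text is the probe's own output (grpc-health-probe style). A standalone Go sketch of an equivalent check; the address, the 1s budget, and the insecure transport are taken or inferred from the log, not from the pod spec:

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	healthpb "google.golang.org/grpc/health/grpc_health_v1"
)

func main() {
	// 1s budget, as in the logged "within 1s".
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	// Blocking dial so a dead or wedged server surfaces as a timeout here,
	// which is what the failing probes above keep reporting.
	conn, err := grpc.DialContext(ctx, "localhost:50051",
		grpc.WithTransportCredentials(insecure.NewCredentials()), grpc.WithBlock())
	if err != nil {
		fmt.Println("probe failure:", err) // recorded by kubelet as probeResult="failure"
		return
	}
	defer conn.Close()

	resp, err := healthpb.NewHealthClient(conn).Check(ctx, &healthpb.HealthCheckRequest{})
	if err != nil {
		fmt.Println("probe failure:", err)
		return
	}
	fmt.Println("serving status:", resp.GetStatus())
}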
containerStatusID={"Type":"cri-o","ID":"c9fc25235ad95c34f1f1581bca4ad05c79cee224b253644219c044abde6f57ae"} pod="openshift-marketplace/redhat-operators-zg4sq" containerMessage="Container registry-server failed startup probe, will be restarted" Jan 30 18:02:15 crc kubenswrapper[4712]: I0130 18:02:15.990273 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zg4sq" podUID="36edfc17-99ca-4e05-bf92-d60315860caf" containerName="registry-server" containerID="cri-o://c9fc25235ad95c34f1f1581bca4ad05c79cee224b253644219c044abde6f57ae" gracePeriod=30 Jan 30 18:02:19 crc kubenswrapper[4712]: I0130 18:02:19.618803 4712 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 18:02:20 crc kubenswrapper[4712]: I0130 18:02:20.211373 4712 generic.go:334] "Generic (PLEG): container finished" podID="36edfc17-99ca-4e05-bf92-d60315860caf" containerID="c9fc25235ad95c34f1f1581bca4ad05c79cee224b253644219c044abde6f57ae" exitCode=0 Jan 30 18:02:20 crc kubenswrapper[4712]: I0130 18:02:20.211447 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zg4sq" event={"ID":"36edfc17-99ca-4e05-bf92-d60315860caf","Type":"ContainerDied","Data":"c9fc25235ad95c34f1f1581bca4ad05c79cee224b253644219c044abde6f57ae"} Jan 30 18:02:20 crc kubenswrapper[4712]: I0130 18:02:20.211727 4712 scope.go:117] "RemoveContainer" containerID="ed780214005aad39bb8ba6a29a0b2707af45faf688fcde1b78c2a7be95a0d645" Jan 30 18:02:21 crc kubenswrapper[4712]: I0130 18:02:21.222252 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zg4sq" event={"ID":"36edfc17-99ca-4e05-bf92-d60315860caf","Type":"ContainerStarted","Data":"f61a177955e9dc63896025d067209116852c027ac869f084fa33f544af5a2969"} Jan 30 18:02:21 crc kubenswrapper[4712]: I0130 18:02:21.380857 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wkmdv"] Jan 30 18:02:21 crc kubenswrapper[4712]: I0130 18:02:21.392047 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wkmdv" Jan 30 18:02:21 crc kubenswrapper[4712]: I0130 18:02:21.402211 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wkmdv"] Jan 30 18:02:21 crc kubenswrapper[4712]: I0130 18:02:21.523560 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4ed8d96-e4f2-4bb6-b744-13638b5e6b21-catalog-content\") pod \"redhat-operators-wkmdv\" (UID: \"c4ed8d96-e4f2-4bb6-b744-13638b5e6b21\") " pod="openshift-marketplace/redhat-operators-wkmdv" Jan 30 18:02:21 crc kubenswrapper[4712]: I0130 18:02:21.523766 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vszh9\" (UniqueName: \"kubernetes.io/projected/c4ed8d96-e4f2-4bb6-b744-13638b5e6b21-kube-api-access-vszh9\") pod \"redhat-operators-wkmdv\" (UID: \"c4ed8d96-e4f2-4bb6-b744-13638b5e6b21\") " pod="openshift-marketplace/redhat-operators-wkmdv" Jan 30 18:02:21 crc kubenswrapper[4712]: I0130 18:02:21.523933 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4ed8d96-e4f2-4bb6-b744-13638b5e6b21-utilities\") pod \"redhat-operators-wkmdv\" (UID: \"c4ed8d96-e4f2-4bb6-b744-13638b5e6b21\") " pod="openshift-marketplace/redhat-operators-wkmdv" Jan 30 18:02:21 crc kubenswrapper[4712]: I0130 18:02:21.625051 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4ed8d96-e4f2-4bb6-b744-13638b5e6b21-utilities\") pod \"redhat-operators-wkmdv\" (UID: \"c4ed8d96-e4f2-4bb6-b744-13638b5e6b21\") " pod="openshift-marketplace/redhat-operators-wkmdv" Jan 30 18:02:21 crc kubenswrapper[4712]: I0130 18:02:21.625343 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4ed8d96-e4f2-4bb6-b744-13638b5e6b21-catalog-content\") pod \"redhat-operators-wkmdv\" (UID: \"c4ed8d96-e4f2-4bb6-b744-13638b5e6b21\") " pod="openshift-marketplace/redhat-operators-wkmdv" Jan 30 18:02:21 crc kubenswrapper[4712]: I0130 18:02:21.625509 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vszh9\" (UniqueName: \"kubernetes.io/projected/c4ed8d96-e4f2-4bb6-b744-13638b5e6b21-kube-api-access-vszh9\") pod \"redhat-operators-wkmdv\" (UID: \"c4ed8d96-e4f2-4bb6-b744-13638b5e6b21\") " pod="openshift-marketplace/redhat-operators-wkmdv" Jan 30 18:02:21 crc kubenswrapper[4712]: I0130 18:02:21.631934 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4ed8d96-e4f2-4bb6-b744-13638b5e6b21-catalog-content\") pod \"redhat-operators-wkmdv\" (UID: \"c4ed8d96-e4f2-4bb6-b744-13638b5e6b21\") " pod="openshift-marketplace/redhat-operators-wkmdv" Jan 30 18:02:21 crc kubenswrapper[4712]: I0130 18:02:21.632106 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4ed8d96-e4f2-4bb6-b744-13638b5e6b21-utilities\") pod \"redhat-operators-wkmdv\" (UID: \"c4ed8d96-e4f2-4bb6-b744-13638b5e6b21\") " pod="openshift-marketplace/redhat-operators-wkmdv" Jan 30 18:02:21 crc kubenswrapper[4712]: I0130 18:02:21.661389 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-vszh9\" (UniqueName: \"kubernetes.io/projected/c4ed8d96-e4f2-4bb6-b744-13638b5e6b21-kube-api-access-vszh9\") pod \"redhat-operators-wkmdv\" (UID: \"c4ed8d96-e4f2-4bb6-b744-13638b5e6b21\") " pod="openshift-marketplace/redhat-operators-wkmdv" Jan 30 18:02:21 crc kubenswrapper[4712]: I0130 18:02:21.713472 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wkmdv" Jan 30 18:02:23 crc kubenswrapper[4712]: I0130 18:02:23.281846 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wkmdv"] Jan 30 18:02:24 crc kubenswrapper[4712]: I0130 18:02:24.291668 4712 generic.go:334] "Generic (PLEG): container finished" podID="c4ed8d96-e4f2-4bb6-b744-13638b5e6b21" containerID="5bfc8a00f14f7b6fc71907a9bdd80736117dd63441d66eee7b21f0d64020c886" exitCode=0 Jan 30 18:02:24 crc kubenswrapper[4712]: I0130 18:02:24.291921 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkmdv" event={"ID":"c4ed8d96-e4f2-4bb6-b744-13638b5e6b21","Type":"ContainerDied","Data":"5bfc8a00f14f7b6fc71907a9bdd80736117dd63441d66eee7b21f0d64020c886"} Jan 30 18:02:24 crc kubenswrapper[4712]: I0130 18:02:24.291947 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkmdv" event={"ID":"c4ed8d96-e4f2-4bb6-b744-13638b5e6b21","Type":"ContainerStarted","Data":"0dd121533b55dc399c7c7d325e19b8e8adccd702d44d408d36d37719e36da024"} Jan 30 18:02:24 crc kubenswrapper[4712]: I0130 18:02:24.873826 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zg4sq" Jan 30 18:02:24 crc kubenswrapper[4712]: I0130 18:02:24.873904 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zg4sq" Jan 30 18:02:25 crc kubenswrapper[4712]: I0130 18:02:25.933737 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zg4sq" podUID="36edfc17-99ca-4e05-bf92-d60315860caf" containerName="registry-server" probeResult="failure" output=< Jan 30 18:02:25 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 18:02:25 crc kubenswrapper[4712]: > Jan 30 18:02:26 crc kubenswrapper[4712]: I0130 18:02:26.313786 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkmdv" event={"ID":"c4ed8d96-e4f2-4bb6-b744-13638b5e6b21","Type":"ContainerStarted","Data":"af7f7b163fc99f61a66866e345c2297cb131b66f737c46a2e0ee038f40bcb054"} Jan 30 18:02:33 crc kubenswrapper[4712]: I0130 18:02:33.384284 4712 generic.go:334] "Generic (PLEG): container finished" podID="c4ed8d96-e4f2-4bb6-b744-13638b5e6b21" containerID="af7f7b163fc99f61a66866e345c2297cb131b66f737c46a2e0ee038f40bcb054" exitCode=0 Jan 30 18:02:33 crc kubenswrapper[4712]: I0130 18:02:33.384365 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkmdv" event={"ID":"c4ed8d96-e4f2-4bb6-b744-13638b5e6b21","Type":"ContainerDied","Data":"af7f7b163fc99f61a66866e345c2297cb131b66f737c46a2e0ee038f40bcb054"} Jan 30 18:02:35 crc kubenswrapper[4712]: I0130 18:02:35.417110 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkmdv" event={"ID":"c4ed8d96-e4f2-4bb6-b744-13638b5e6b21","Type":"ContainerStarted","Data":"042501d40e2b42b1678a0d9b4438ed9744fc8d66805d78bf16705772a16d281a"} Jan 30 18:02:35 crc 
Jan 30 18:02:36 crc kubenswrapper[4712]: I0130 18:02:36.211233 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zg4sq" podUID="36edfc17-99ca-4e05-bf92-d60315860caf" containerName="registry-server" probeResult="failure" output=<
Jan 30 18:02:36 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 18:02:36 crc kubenswrapper[4712]: >
Jan 30 18:02:41 crc kubenswrapper[4712]: I0130 18:02:41.714102 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-wkmdv"
Jan 30 18:02:41 crc kubenswrapper[4712]: I0130 18:02:41.714677 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wkmdv"
Jan 30 18:02:42 crc kubenswrapper[4712]: I0130 18:02:42.767578 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wkmdv" podUID="c4ed8d96-e4f2-4bb6-b744-13638b5e6b21" containerName="registry-server" probeResult="failure" output=<
Jan 30 18:02:42 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 18:02:42 crc kubenswrapper[4712]: >
Jan 30 18:02:46 crc kubenswrapper[4712]: I0130 18:02:46.430294 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zg4sq" podUID="36edfc17-99ca-4e05-bf92-d60315860caf" containerName="registry-server" probeResult="failure" output=<
Jan 30 18:02:46 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 18:02:46 crc kubenswrapper[4712]: >
Jan 30 18:02:52 crc kubenswrapper[4712]: I0130 18:02:52.770181 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wkmdv" podUID="c4ed8d96-e4f2-4bb6-b744-13638b5e6b21" containerName="registry-server" probeResult="failure" output=<
Jan 30 18:02:52 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 18:02:52 crc kubenswrapper[4712]: >
Jan 30 18:02:55 crc kubenswrapper[4712]: I0130 18:02:55.931935 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zg4sq" podUID="36edfc17-99ca-4e05-bf92-d60315860caf" containerName="registry-server" probeResult="failure" output=<
Jan 30 18:02:55 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 18:02:55 crc kubenswrapper[4712]: >
Jan 30 18:03:02 crc kubenswrapper[4712]: I0130 18:03:02.767230 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wkmdv" podUID="c4ed8d96-e4f2-4bb6-b744-13638b5e6b21" containerName="registry-server" probeResult="failure" output=<
Jan 30 18:03:02 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 18:03:02 crc kubenswrapper[4712]: >
Jan 30 18:03:05 crc kubenswrapper[4712]: I0130 18:03:05.937152 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zg4sq" podUID="36edfc17-99ca-4e05-bf92-d60315860caf" containerName="registry-server" probeResult="failure" output=<
Jan 30 18:03:05 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 18:03:05 crc kubenswrapper[4712]: >
Jan 30 18:03:06 crc kubenswrapper[4712]: I0130 18:03:06.272118 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 18:03:06 crc kubenswrapper[4712]: I0130 18:03:06.273158 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 18:03:12 crc kubenswrapper[4712]: I0130 18:03:12.765527 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wkmdv" podUID="c4ed8d96-e4f2-4bb6-b744-13638b5e6b21" containerName="registry-server" probeResult="failure" output=<
Jan 30 18:03:12 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 18:03:12 crc kubenswrapper[4712]: >
Jan 30 18:03:15 crc kubenswrapper[4712]: I0130 18:03:15.934144 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zg4sq" podUID="36edfc17-99ca-4e05-bf92-d60315860caf" containerName="registry-server" probeResult="failure" output=<
Jan 30 18:03:15 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 18:03:15 crc kubenswrapper[4712]: >
Jan 30 18:03:22 crc kubenswrapper[4712]: I0130 18:03:22.759337 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wkmdv" podUID="c4ed8d96-e4f2-4bb6-b744-13638b5e6b21" containerName="registry-server" probeResult="failure" output=<
Jan 30 18:03:22 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 18:03:22 crc kubenswrapper[4712]: >
Jan 30 18:03:25 crc kubenswrapper[4712]: I0130 18:03:25.971372 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zg4sq" podUID="36edfc17-99ca-4e05-bf92-d60315860caf" containerName="registry-server" probeResult="failure" output=<
Jan 30 18:03:25 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 18:03:25 crc kubenswrapper[4712]: >
Jan 30 18:03:33 crc kubenswrapper[4712]: I0130 18:03:33.392651 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wkmdv" podUID="c4ed8d96-e4f2-4bb6-b744-13638b5e6b21" containerName="registry-server" probeResult="failure" output=<
Jan 30 18:03:33 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 18:03:33 crc kubenswrapper[4712]: >
Jan 30 18:03:35 crc kubenswrapper[4712]: I0130 18:03:35.924403 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zg4sq" podUID="36edfc17-99ca-4e05-bf92-d60315860caf" containerName="registry-server" probeResult="failure" output=<
Jan 30 18:03:35 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 18:03:35 crc kubenswrapper[4712]: >
Jan 30 18:03:36 crc kubenswrapper[4712]: I0130 18:03:36.271991 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 18:03:36 crc kubenswrapper[4712]: I0130 18:03:36.272129 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 18:03:42 crc kubenswrapper[4712]: I0130 18:03:42.759590 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-wkmdv" podUID="c4ed8d96-e4f2-4bb6-b744-13638b5e6b21" containerName="registry-server" probeResult="failure" output=<
Jan 30 18:03:42 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 18:03:42 crc kubenswrapper[4712]: >
Jan 30 18:03:44 crc kubenswrapper[4712]: I0130 18:03:44.948856 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zg4sq"
Jan 30 18:03:45 crc kubenswrapper[4712]: I0130 18:03:45.002569 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zg4sq"
Jan 30 18:03:51 crc kubenswrapper[4712]: I0130 18:03:51.768800 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wkmdv"
Jan 30 18:03:51 crc kubenswrapper[4712]: I0130 18:03:51.828668 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wkmdv"
Jan 30 18:03:52 crc kubenswrapper[4712]: I0130 18:03:52.741636 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wkmdv"]
Jan 30 18:03:53 crc kubenswrapper[4712]: I0130 18:03:53.202153 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wkmdv" podUID="c4ed8d96-e4f2-4bb6-b744-13638b5e6b21" containerName="registry-server" containerID="cri-o://042501d40e2b42b1678a0d9b4438ed9744fc8d66805d78bf16705772a16d281a" gracePeriod=2
Jan 30 18:03:54 crc kubenswrapper[4712]: I0130 18:03:54.176433 4712 generic.go:334] "Generic (PLEG): container finished" podID="c4ed8d96-e4f2-4bb6-b744-13638b5e6b21" containerID="042501d40e2b42b1678a0d9b4438ed9744fc8d66805d78bf16705772a16d281a" exitCode=0
Jan 30 18:03:54 crc kubenswrapper[4712]: I0130 18:03:54.176512 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkmdv" event={"ID":"c4ed8d96-e4f2-4bb6-b744-13638b5e6b21","Type":"ContainerDied","Data":"042501d40e2b42b1678a0d9b4438ed9744fc8d66805d78bf16705772a16d281a"}
Jan 30 18:03:54 crc kubenswrapper[4712]: I0130 18:03:54.612398 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wkmdv"
Jan 30 18:03:54 crc kubenswrapper[4712]: I0130 18:03:54.683680 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vszh9\" (UniqueName: \"kubernetes.io/projected/c4ed8d96-e4f2-4bb6-b744-13638b5e6b21-kube-api-access-vszh9\") pod \"c4ed8d96-e4f2-4bb6-b744-13638b5e6b21\" (UID: \"c4ed8d96-e4f2-4bb6-b744-13638b5e6b21\") "
Jan 30 18:03:54 crc kubenswrapper[4712]: I0130 18:03:54.683746 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4ed8d96-e4f2-4bb6-b744-13638b5e6b21-catalog-content\") pod \"c4ed8d96-e4f2-4bb6-b744-13638b5e6b21\" (UID: \"c4ed8d96-e4f2-4bb6-b744-13638b5e6b21\") "
Jan 30 18:03:54 crc kubenswrapper[4712]: I0130 18:03:54.683929 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4ed8d96-e4f2-4bb6-b744-13638b5e6b21-utilities\") pod \"c4ed8d96-e4f2-4bb6-b744-13638b5e6b21\" (UID: \"c4ed8d96-e4f2-4bb6-b744-13638b5e6b21\") "
Jan 30 18:03:54 crc kubenswrapper[4712]: I0130 18:03:54.692924 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4ed8d96-e4f2-4bb6-b744-13638b5e6b21-utilities" (OuterVolumeSpecName: "utilities") pod "c4ed8d96-e4f2-4bb6-b744-13638b5e6b21" (UID: "c4ed8d96-e4f2-4bb6-b744-13638b5e6b21"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 18:03:54 crc kubenswrapper[4712]: I0130 18:03:54.736749 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4ed8d96-e4f2-4bb6-b744-13638b5e6b21-kube-api-access-vszh9" (OuterVolumeSpecName: "kube-api-access-vszh9") pod "c4ed8d96-e4f2-4bb6-b744-13638b5e6b21" (UID: "c4ed8d96-e4f2-4bb6-b744-13638b5e6b21"). InnerVolumeSpecName "kube-api-access-vszh9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 18:03:54 crc kubenswrapper[4712]: I0130 18:03:54.785720 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4ed8d96-e4f2-4bb6-b744-13638b5e6b21-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 18:03:54 crc kubenswrapper[4712]: I0130 18:03:54.785752 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vszh9\" (UniqueName: \"kubernetes.io/projected/c4ed8d96-e4f2-4bb6-b744-13638b5e6b21-kube-api-access-vszh9\") on node \"crc\" DevicePath \"\""
Jan 30 18:03:55 crc kubenswrapper[4712]: I0130 18:03:55.017480 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4ed8d96-e4f2-4bb6-b744-13638b5e6b21-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4ed8d96-e4f2-4bb6-b744-13638b5e6b21" (UID: "c4ed8d96-e4f2-4bb6-b744-13638b5e6b21"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 18:03:55 crc kubenswrapper[4712]: I0130 18:03:55.093638 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4ed8d96-e4f2-4bb6-b744-13638b5e6b21-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 18:03:55 crc kubenswrapper[4712]: I0130 18:03:55.186285 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wkmdv" event={"ID":"c4ed8d96-e4f2-4bb6-b744-13638b5e6b21","Type":"ContainerDied","Data":"0dd121533b55dc399c7c7d325e19b8e8adccd702d44d408d36d37719e36da024"}
Jan 30 18:03:55 crc kubenswrapper[4712]: I0130 18:03:55.186332 4712 scope.go:117] "RemoveContainer" containerID="042501d40e2b42b1678a0d9b4438ed9744fc8d66805d78bf16705772a16d281a"
Jan 30 18:03:55 crc kubenswrapper[4712]: I0130 18:03:55.186331 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wkmdv"
Jan 30 18:03:55 crc kubenswrapper[4712]: I0130 18:03:55.222315 4712 scope.go:117] "RemoveContainer" containerID="af7f7b163fc99f61a66866e345c2297cb131b66f737c46a2e0ee038f40bcb054"
Jan 30 18:03:55 crc kubenswrapper[4712]: I0130 18:03:55.233557 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wkmdv"]
Jan 30 18:03:55 crc kubenswrapper[4712]: I0130 18:03:55.245646 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wkmdv"]
Jan 30 18:03:55 crc kubenswrapper[4712]: I0130 18:03:55.257013 4712 scope.go:117] "RemoveContainer" containerID="5bfc8a00f14f7b6fc71907a9bdd80736117dd63441d66eee7b21f0d64020c886"
Jan 30 18:03:55 crc kubenswrapper[4712]: I0130 18:03:55.819340 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4ed8d96-e4f2-4bb6-b744-13638b5e6b21" path="/var/lib/kubelet/pods/c4ed8d96-e4f2-4bb6-b744-13638b5e6b21/volumes"
Jan 30 18:04:06 crc kubenswrapper[4712]: I0130 18:04:06.272574 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 18:04:06 crc kubenswrapper[4712]: I0130 18:04:06.273546 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 18:04:06 crc kubenswrapper[4712]: I0130 18:04:06.273723 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7"
Jan 30 18:04:06 crc kubenswrapper[4712]: I0130 18:04:06.275479 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"22b31f9d70504060260c699d11b96851d1ee0814fb413eb1537e2d821bb3e3fc"} pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 18:04:06 crc kubenswrapper[4712]: I0130 18:04:06.275686 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" containerID="cri-o://22b31f9d70504060260c699d11b96851d1ee0814fb413eb1537e2d821bb3e3fc" gracePeriod=600
pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" containerID="cri-o://22b31f9d70504060260c699d11b96851d1ee0814fb413eb1537e2d821bb3e3fc" gracePeriod=600 Jan 30 18:04:07 crc kubenswrapper[4712]: I0130 18:04:07.318906 4712 generic.go:334] "Generic (PLEG): container finished" podID="75ff6334-72a0-4748-bba6-0efb493c8033" containerID="22b31f9d70504060260c699d11b96851d1ee0814fb413eb1537e2d821bb3e3fc" exitCode=0 Jan 30 18:04:07 crc kubenswrapper[4712]: I0130 18:04:07.318973 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerDied","Data":"22b31f9d70504060260c699d11b96851d1ee0814fb413eb1537e2d821bb3e3fc"} Jan 30 18:04:07 crc kubenswrapper[4712]: I0130 18:04:07.319259 4712 scope.go:117] "RemoveContainer" containerID="f960893331d2846f08481a57a0cdba5af49b5dd727b19f0e09fcdde7d00cb3df" Jan 30 18:04:08 crc kubenswrapper[4712]: I0130 18:04:08.327100 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerStarted","Data":"36685c63c851c1a3a115e3df54e26c25f9f2f445ab9f6ac1396828806b9f609c"} Jan 30 18:06:36 crc kubenswrapper[4712]: I0130 18:06:36.271390 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 18:06:36 crc kubenswrapper[4712]: I0130 18:06:36.272090 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 18:06:53 crc kubenswrapper[4712]: I0130 18:06:53.511214 4712 scope.go:117] "RemoveContainer" containerID="1b4931016246d937ce1561ca389dbd25c8729651d5d7b1dd7bf9f7190e2a994b" Jan 30 18:07:06 crc kubenswrapper[4712]: I0130 18:07:06.272041 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 18:07:06 crc kubenswrapper[4712]: I0130 18:07:06.272688 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 18:07:36 crc kubenswrapper[4712]: I0130 18:07:36.271157 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 18:07:36 crc kubenswrapper[4712]: I0130 18:07:36.271978 4712 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 18:07:36 crc kubenswrapper[4712]: I0130 18:07:36.272045 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" Jan 30 18:07:36 crc kubenswrapper[4712]: I0130 18:07:36.273126 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"36685c63c851c1a3a115e3df54e26c25f9f2f445ab9f6ac1396828806b9f609c"} pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 18:07:36 crc kubenswrapper[4712]: I0130 18:07:36.273199 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" containerID="cri-o://36685c63c851c1a3a115e3df54e26c25f9f2f445ab9f6ac1396828806b9f609c" gracePeriod=600 Jan 30 18:07:37 crc kubenswrapper[4712]: E0130 18:07:37.455045 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:07:37 crc kubenswrapper[4712]: I0130 18:07:37.688856 4712 generic.go:334] "Generic (PLEG): container finished" podID="75ff6334-72a0-4748-bba6-0efb493c8033" containerID="36685c63c851c1a3a115e3df54e26c25f9f2f445ab9f6ac1396828806b9f609c" exitCode=0 Jan 30 18:07:37 crc kubenswrapper[4712]: I0130 18:07:37.688914 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerDied","Data":"36685c63c851c1a3a115e3df54e26c25f9f2f445ab9f6ac1396828806b9f609c"} Jan 30 18:07:37 crc kubenswrapper[4712]: I0130 18:07:37.688977 4712 scope.go:117] "RemoveContainer" containerID="22b31f9d70504060260c699d11b96851d1ee0814fb413eb1537e2d821bb3e3fc" Jan 30 18:07:37 crc kubenswrapper[4712]: I0130 18:07:37.690395 4712 scope.go:117] "RemoveContainer" containerID="36685c63c851c1a3a115e3df54e26c25f9f2f445ab9f6ac1396828806b9f609c" Jan 30 18:07:37 crc kubenswrapper[4712]: E0130 18:07:37.691110 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:07:48 crc kubenswrapper[4712]: I0130 18:07:48.800376 4712 scope.go:117] "RemoveContainer" containerID="36685c63c851c1a3a115e3df54e26c25f9f2f445ab9f6ac1396828806b9f609c" Jan 30 18:07:48 crc kubenswrapper[4712]: E0130 18:07:48.801778 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:07:59 crc kubenswrapper[4712]: I0130 18:07:59.801383 4712 scope.go:117] "RemoveContainer" containerID="36685c63c851c1a3a115e3df54e26c25f9f2f445ab9f6ac1396828806b9f609c" Jan 30 18:07:59 crc kubenswrapper[4712]: E0130 18:07:59.802134 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:08:10 crc kubenswrapper[4712]: I0130 18:08:10.800111 4712 scope.go:117] "RemoveContainer" containerID="36685c63c851c1a3a115e3df54e26c25f9f2f445ab9f6ac1396828806b9f609c" Jan 30 18:08:10 crc kubenswrapper[4712]: E0130 18:08:10.801060 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:08:26 crc kubenswrapper[4712]: I0130 18:08:26.800521 4712 scope.go:117] "RemoveContainer" containerID="36685c63c851c1a3a115e3df54e26c25f9f2f445ab9f6ac1396828806b9f609c" Jan 30 18:08:26 crc kubenswrapper[4712]: E0130 18:08:26.801341 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:08:37 crc kubenswrapper[4712]: I0130 18:08:37.804037 4712 scope.go:117] "RemoveContainer" containerID="36685c63c851c1a3a115e3df54e26c25f9f2f445ab9f6ac1396828806b9f609c" Jan 30 18:08:37 crc kubenswrapper[4712]: E0130 18:08:37.805122 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:08:50 crc kubenswrapper[4712]: I0130 18:08:50.800015 4712 scope.go:117] "RemoveContainer" containerID="36685c63c851c1a3a115e3df54e26c25f9f2f445ab9f6ac1396828806b9f609c" Jan 30 18:08:50 crc kubenswrapper[4712]: E0130 18:08:50.800717 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Jan 30 18:09:01 crc kubenswrapper[4712]: I0130 18:09:01.799939 4712 scope.go:117] "RemoveContainer" containerID="36685c63c851c1a3a115e3df54e26c25f9f2f445ab9f6ac1396828806b9f609c"
Jan 30 18:09:01 crc kubenswrapper[4712]: E0130 18:09:01.802592 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 18:09:12 crc kubenswrapper[4712]: I0130 18:09:12.800409 4712 scope.go:117] "RemoveContainer" containerID="36685c63c851c1a3a115e3df54e26c25f9f2f445ab9f6ac1396828806b9f609c"
Jan 30 18:09:12 crc kubenswrapper[4712]: E0130 18:09:12.801635 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 18:09:26 crc kubenswrapper[4712]: I0130 18:09:26.799680 4712 scope.go:117] "RemoveContainer" containerID="36685c63c851c1a3a115e3df54e26c25f9f2f445ab9f6ac1396828806b9f609c"
Jan 30 18:09:26 crc kubenswrapper[4712]: E0130 18:09:26.801920 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 18:09:37 crc kubenswrapper[4712]: I0130 18:09:37.800652 4712 scope.go:117] "RemoveContainer" containerID="36685c63c851c1a3a115e3df54e26c25f9f2f445ab9f6ac1396828806b9f609c"
Jan 30 18:09:37 crc kubenswrapper[4712]: E0130 18:09:37.802162 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 18:09:52 crc kubenswrapper[4712]: I0130 18:09:52.800710 4712 scope.go:117] "RemoveContainer" containerID="36685c63c851c1a3a115e3df54e26c25f9f2f445ab9f6ac1396828806b9f609c"
Jan 30 18:09:52 crc kubenswrapper[4712]: E0130 18:09:52.801672 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
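The repeating "back-off 5m0s" errors mean the kubelet is refusing to restart the container until its crash backoff expires; the kubelet doubles that backoff on each failed restart up to a cap, and 5m0s is the cap this pod has reached. A sketch of the doubling, assuming the conventional 10s base (the base is not visible in this log, only the cap):

package main

import (
	"fmt"
	"time"
)

// crashBackoff returns the wait before restart attempt n, doubling from
// base until it reaches limit. The logged "back-off 5m0s" means this pod
// has already hit the cap.
func crashBackoff(n int, base, limit time.Duration) time.Duration {
	d := base
	for i := 0; i < n; i++ {
		d *= 2
		if d >= limit {
			return limit
		}
	}
	return d
}

func main() {
	for n := 0; n <= 6; n++ {
		fmt.Printf("restart %d: wait %v\n", n, crashBackoff(n, 10*time.Second, 5*time.Minute))
	}
	// restart 0: 10s, 1: 20s, 2: 40s, 3: 1m20s, 4: 2m40s, 5: 5m0s, 6: 5m0s
}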
podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:10:03 crc kubenswrapper[4712]: I0130 18:10:03.644284 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g628g"] Jan 30 18:10:03 crc kubenswrapper[4712]: E0130 18:10:03.701048 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4ed8d96-e4f2-4bb6-b744-13638b5e6b21" containerName="extract-utilities" Jan 30 18:10:03 crc kubenswrapper[4712]: I0130 18:10:03.701092 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4ed8d96-e4f2-4bb6-b744-13638b5e6b21" containerName="extract-utilities" Jan 30 18:10:03 crc kubenswrapper[4712]: E0130 18:10:03.701154 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4ed8d96-e4f2-4bb6-b744-13638b5e6b21" containerName="extract-content" Jan 30 18:10:03 crc kubenswrapper[4712]: I0130 18:10:03.701160 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4ed8d96-e4f2-4bb6-b744-13638b5e6b21" containerName="extract-content" Jan 30 18:10:03 crc kubenswrapper[4712]: E0130 18:10:03.701189 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4ed8d96-e4f2-4bb6-b744-13638b5e6b21" containerName="registry-server" Jan 30 18:10:03 crc kubenswrapper[4712]: I0130 18:10:03.701197 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4ed8d96-e4f2-4bb6-b744-13638b5e6b21" containerName="registry-server" Jan 30 18:10:03 crc kubenswrapper[4712]: I0130 18:10:03.702183 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4ed8d96-e4f2-4bb6-b744-13638b5e6b21" containerName="registry-server" Jan 30 18:10:03 crc kubenswrapper[4712]: I0130 18:10:03.706079 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g628g"] Jan 30 18:10:03 crc kubenswrapper[4712]: I0130 18:10:03.706178 4712 util.go:30] "No sandbox for pod can be found. 
Jan 30 18:10:03 crc kubenswrapper[4712]: I0130 18:10:03.805607 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jx9l\" (UniqueName: \"kubernetes.io/projected/eb8543d0-adef-4838-80a4-0e9409025a63-kube-api-access-4jx9l\") pod \"community-operators-g628g\" (UID: \"eb8543d0-adef-4838-80a4-0e9409025a63\") " pod="openshift-marketplace/community-operators-g628g"
Jan 30 18:10:03 crc kubenswrapper[4712]: I0130 18:10:03.805822 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb8543d0-adef-4838-80a4-0e9409025a63-utilities\") pod \"community-operators-g628g\" (UID: \"eb8543d0-adef-4838-80a4-0e9409025a63\") " pod="openshift-marketplace/community-operators-g628g"
Jan 30 18:10:03 crc kubenswrapper[4712]: I0130 18:10:03.806253 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb8543d0-adef-4838-80a4-0e9409025a63-catalog-content\") pod \"community-operators-g628g\" (UID: \"eb8543d0-adef-4838-80a4-0e9409025a63\") " pod="openshift-marketplace/community-operators-g628g"
Jan 30 18:10:03 crc kubenswrapper[4712]: I0130 18:10:03.907997 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb8543d0-adef-4838-80a4-0e9409025a63-utilities\") pod \"community-operators-g628g\" (UID: \"eb8543d0-adef-4838-80a4-0e9409025a63\") " pod="openshift-marketplace/community-operators-g628g"
Jan 30 18:10:03 crc kubenswrapper[4712]: I0130 18:10:03.908125 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb8543d0-adef-4838-80a4-0e9409025a63-catalog-content\") pod \"community-operators-g628g\" (UID: \"eb8543d0-adef-4838-80a4-0e9409025a63\") " pod="openshift-marketplace/community-operators-g628g"
Jan 30 18:10:03 crc kubenswrapper[4712]: I0130 18:10:03.908248 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jx9l\" (UniqueName: \"kubernetes.io/projected/eb8543d0-adef-4838-80a4-0e9409025a63-kube-api-access-4jx9l\") pod \"community-operators-g628g\" (UID: \"eb8543d0-adef-4838-80a4-0e9409025a63\") " pod="openshift-marketplace/community-operators-g628g"
Jan 30 18:10:03 crc kubenswrapper[4712]: I0130 18:10:03.908913 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb8543d0-adef-4838-80a4-0e9409025a63-catalog-content\") pod \"community-operators-g628g\" (UID: \"eb8543d0-adef-4838-80a4-0e9409025a63\") " pod="openshift-marketplace/community-operators-g628g"
Jan 30 18:10:03 crc kubenswrapper[4712]: I0130 18:10:03.909261 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb8543d0-adef-4838-80a4-0e9409025a63-utilities\") pod \"community-operators-g628g\" (UID: \"eb8543d0-adef-4838-80a4-0e9409025a63\") " pod="openshift-marketplace/community-operators-g628g"
Jan 30 18:10:04 crc kubenswrapper[4712]: I0130 18:10:04.054822 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jx9l\" (UniqueName: \"kubernetes.io/projected/eb8543d0-adef-4838-80a4-0e9409025a63-kube-api-access-4jx9l\") pod \"community-operators-g628g\" (UID: \"eb8543d0-adef-4838-80a4-0e9409025a63\") " pod="openshift-marketplace/community-operators-g628g"
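The reconciler_common.go lines above show the volume manager's fixed per-volume ordering: VerifyControllerAttachedVolume first, then MountVolume, then the MountVolume.SetUp confirmation. A compressed sketch of that reconcile pass in Go; the Volume type and its fields are hypothetical stand-ins for the kubelet's desired-state-of-world entries:

package main

import "fmt"

// Volume is a hypothetical stand-in for a desired-state volume entry
// (e.g. "utilities", "catalog-content", "kube-api-access-4jx9l").
type Volume struct {
	Name     string
	Attached bool // controller-attach verified
	Mounted  bool // MountVolume.SetUp completed
}

// reconcile walks desired volumes in the same order the log shows:
// verify attachment first, then mount anything attached but unmounted.
func reconcile(desired []*Volume) {
	for _, v := range desired {
		if !v.Attached {
			fmt.Printf("VerifyControllerAttachedVolume started for volume %q\n", v.Name)
			v.Attached = true // empty-dir and projected volumes verify trivially
		}
	}
	for _, v := range desired {
		if v.Attached && !v.Mounted {
			fmt.Printf("MountVolume started for volume %q\n", v.Name)
			v.Mounted = true
			fmt.Printf("MountVolume.SetUp succeeded for volume %q\n", v.Name)
		}
	}
}

func main() {
	reconcile([]*Volume{{Name: "utilities"}, {Name: "catalog-content"}, {Name: "kube-api-access-4jx9l"}})
}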
\"community-operators-g628g\" (UID: \"eb8543d0-adef-4838-80a4-0e9409025a63\") " pod="openshift-marketplace/community-operators-g628g" Jan 30 18:10:04 crc kubenswrapper[4712]: I0130 18:10:04.333306 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g628g" Jan 30 18:10:05 crc kubenswrapper[4712]: I0130 18:10:05.143460 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g628g"] Jan 30 18:10:05 crc kubenswrapper[4712]: I0130 18:10:05.358121 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g628g" event={"ID":"eb8543d0-adef-4838-80a4-0e9409025a63","Type":"ContainerStarted","Data":"87f651aeeab6ee9946b2b6c19ad12fd40eb83100863133ff18ccda03bb67f926"} Jan 30 18:10:06 crc kubenswrapper[4712]: I0130 18:10:06.368145 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g628g" event={"ID":"eb8543d0-adef-4838-80a4-0e9409025a63","Type":"ContainerStarted","Data":"d964cc9e35a744780d999370548f5ac045df40e5ddd9c49210874b7f1f301647"} Jan 30 18:10:07 crc kubenswrapper[4712]: I0130 18:10:07.379740 4712 generic.go:334] "Generic (PLEG): container finished" podID="eb8543d0-adef-4838-80a4-0e9409025a63" containerID="d964cc9e35a744780d999370548f5ac045df40e5ddd9c49210874b7f1f301647" exitCode=0 Jan 30 18:10:07 crc kubenswrapper[4712]: I0130 18:10:07.379839 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g628g" event={"ID":"eb8543d0-adef-4838-80a4-0e9409025a63","Type":"ContainerDied","Data":"d964cc9e35a744780d999370548f5ac045df40e5ddd9c49210874b7f1f301647"} Jan 30 18:10:07 crc kubenswrapper[4712]: I0130 18:10:07.382043 4712 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 18:10:07 crc kubenswrapper[4712]: I0130 18:10:07.515299 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-gqsn2"] Jan 30 18:10:07 crc kubenswrapper[4712]: I0130 18:10:07.517638 4712 util.go:30] "No sandbox for pod can be found. 
Jan 30 18:10:07 crc kubenswrapper[4712]: I0130 18:10:07.532448 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gqsn2"]
Jan 30 18:10:07 crc kubenswrapper[4712]: I0130 18:10:07.686613 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/571ea0a8-0198-45bb-b9af-aa3142d8a48a-catalog-content\") pod \"redhat-marketplace-gqsn2\" (UID: \"571ea0a8-0198-45bb-b9af-aa3142d8a48a\") " pod="openshift-marketplace/redhat-marketplace-gqsn2"
Jan 30 18:10:07 crc kubenswrapper[4712]: I0130 18:10:07.686699 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l99b2\" (UniqueName: \"kubernetes.io/projected/571ea0a8-0198-45bb-b9af-aa3142d8a48a-kube-api-access-l99b2\") pod \"redhat-marketplace-gqsn2\" (UID: \"571ea0a8-0198-45bb-b9af-aa3142d8a48a\") " pod="openshift-marketplace/redhat-marketplace-gqsn2"
Jan 30 18:10:07 crc kubenswrapper[4712]: I0130 18:10:07.687102 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/571ea0a8-0198-45bb-b9af-aa3142d8a48a-utilities\") pod \"redhat-marketplace-gqsn2\" (UID: \"571ea0a8-0198-45bb-b9af-aa3142d8a48a\") " pod="openshift-marketplace/redhat-marketplace-gqsn2"
Jan 30 18:10:07 crc kubenswrapper[4712]: I0130 18:10:07.788876 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/571ea0a8-0198-45bb-b9af-aa3142d8a48a-catalog-content\") pod \"redhat-marketplace-gqsn2\" (UID: \"571ea0a8-0198-45bb-b9af-aa3142d8a48a\") " pod="openshift-marketplace/redhat-marketplace-gqsn2"
Jan 30 18:10:07 crc kubenswrapper[4712]: I0130 18:10:07.789012 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l99b2\" (UniqueName: \"kubernetes.io/projected/571ea0a8-0198-45bb-b9af-aa3142d8a48a-kube-api-access-l99b2\") pod \"redhat-marketplace-gqsn2\" (UID: \"571ea0a8-0198-45bb-b9af-aa3142d8a48a\") " pod="openshift-marketplace/redhat-marketplace-gqsn2"
Jan 30 18:10:07 crc kubenswrapper[4712]: I0130 18:10:07.789199 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/571ea0a8-0198-45bb-b9af-aa3142d8a48a-utilities\") pod \"redhat-marketplace-gqsn2\" (UID: \"571ea0a8-0198-45bb-b9af-aa3142d8a48a\") " pod="openshift-marketplace/redhat-marketplace-gqsn2"
Jan 30 18:10:07 crc kubenswrapper[4712]: I0130 18:10:07.789271 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/571ea0a8-0198-45bb-b9af-aa3142d8a48a-catalog-content\") pod \"redhat-marketplace-gqsn2\" (UID: \"571ea0a8-0198-45bb-b9af-aa3142d8a48a\") " pod="openshift-marketplace/redhat-marketplace-gqsn2"
Jan 30 18:10:07 crc kubenswrapper[4712]: I0130 18:10:07.789610 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/571ea0a8-0198-45bb-b9af-aa3142d8a48a-utilities\") pod \"redhat-marketplace-gqsn2\" (UID: \"571ea0a8-0198-45bb-b9af-aa3142d8a48a\") " pod="openshift-marketplace/redhat-marketplace-gqsn2"
Jan 30 18:10:07 crc kubenswrapper[4712]: I0130 18:10:07.802034 4712 scope.go:117] "RemoveContainer" containerID="36685c63c851c1a3a115e3df54e26c25f9f2f445ab9f6ac1396828806b9f609c"
containerID="36685c63c851c1a3a115e3df54e26c25f9f2f445ab9f6ac1396828806b9f609c" Jan 30 18:10:07 crc kubenswrapper[4712]: E0130 18:10:07.802359 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:10:07 crc kubenswrapper[4712]: I0130 18:10:07.831696 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l99b2\" (UniqueName: \"kubernetes.io/projected/571ea0a8-0198-45bb-b9af-aa3142d8a48a-kube-api-access-l99b2\") pod \"redhat-marketplace-gqsn2\" (UID: \"571ea0a8-0198-45bb-b9af-aa3142d8a48a\") " pod="openshift-marketplace/redhat-marketplace-gqsn2" Jan 30 18:10:07 crc kubenswrapper[4712]: I0130 18:10:07.839884 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gqsn2" Jan 30 18:10:08 crc kubenswrapper[4712]: I0130 18:10:08.353857 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-gqsn2"] Jan 30 18:10:08 crc kubenswrapper[4712]: W0130 18:10:08.360514 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod571ea0a8_0198_45bb_b9af_aa3142d8a48a.slice/crio-71f90cca3d193922d64979f43fb52fb9896eefe3d298518d94993cdc1768398d WatchSource:0}: Error finding container 71f90cca3d193922d64979f43fb52fb9896eefe3d298518d94993cdc1768398d: Status 404 returned error can't find the container with id 71f90cca3d193922d64979f43fb52fb9896eefe3d298518d94993cdc1768398d Jan 30 18:10:08 crc kubenswrapper[4712]: I0130 18:10:08.390885 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gqsn2" event={"ID":"571ea0a8-0198-45bb-b9af-aa3142d8a48a","Type":"ContainerStarted","Data":"71f90cca3d193922d64979f43fb52fb9896eefe3d298518d94993cdc1768398d"} Jan 30 18:10:09 crc kubenswrapper[4712]: I0130 18:10:09.401287 4712 generic.go:334] "Generic (PLEG): container finished" podID="571ea0a8-0198-45bb-b9af-aa3142d8a48a" containerID="4b9eb2d0c6d93f96c7d9673fc6eced606f5f404f2b315c5f7452ae7a78c54f15" exitCode=0 Jan 30 18:10:09 crc kubenswrapper[4712]: I0130 18:10:09.401361 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gqsn2" event={"ID":"571ea0a8-0198-45bb-b9af-aa3142d8a48a","Type":"ContainerDied","Data":"4b9eb2d0c6d93f96c7d9673fc6eced606f5f404f2b315c5f7452ae7a78c54f15"} Jan 30 18:10:09 crc kubenswrapper[4712]: I0130 18:10:09.404610 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g628g" event={"ID":"eb8543d0-adef-4838-80a4-0e9409025a63","Type":"ContainerStarted","Data":"7a660352d98ee3aa8f80b3f649e79ac6fcff2b67df3930d4c545d4432b1d3947"} Jan 30 18:10:12 crc kubenswrapper[4712]: I0130 18:10:12.448527 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gqsn2" event={"ID":"571ea0a8-0198-45bb-b9af-aa3142d8a48a","Type":"ContainerStarted","Data":"c4e2eef893f8739fdbd32359d013a15011aeeb465eb001367fe854d89fec872b"} Jan 30 18:10:17 crc kubenswrapper[4712]: I0130 18:10:17.493219 4712 generic.go:334] "Generic (PLEG): container finished" 
podID="eb8543d0-adef-4838-80a4-0e9409025a63" containerID="7a660352d98ee3aa8f80b3f649e79ac6fcff2b67df3930d4c545d4432b1d3947" exitCode=0 Jan 30 18:10:17 crc kubenswrapper[4712]: I0130 18:10:17.493841 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g628g" event={"ID":"eb8543d0-adef-4838-80a4-0e9409025a63","Type":"ContainerDied","Data":"7a660352d98ee3aa8f80b3f649e79ac6fcff2b67df3930d4c545d4432b1d3947"} Jan 30 18:10:18 crc kubenswrapper[4712]: I0130 18:10:18.504746 4712 generic.go:334] "Generic (PLEG): container finished" podID="571ea0a8-0198-45bb-b9af-aa3142d8a48a" containerID="c4e2eef893f8739fdbd32359d013a15011aeeb465eb001367fe854d89fec872b" exitCode=0 Jan 30 18:10:18 crc kubenswrapper[4712]: I0130 18:10:18.504831 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gqsn2" event={"ID":"571ea0a8-0198-45bb-b9af-aa3142d8a48a","Type":"ContainerDied","Data":"c4e2eef893f8739fdbd32359d013a15011aeeb465eb001367fe854d89fec872b"} Jan 30 18:10:19 crc kubenswrapper[4712]: I0130 18:10:19.515721 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g628g" event={"ID":"eb8543d0-adef-4838-80a4-0e9409025a63","Type":"ContainerStarted","Data":"65236a8fe8754b088560a961fdbe88023d6960ea4006d5484efb20400b87b883"} Jan 30 18:10:19 crc kubenswrapper[4712]: I0130 18:10:19.543080 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g628g" podStartSLOduration=5.380661693 podStartE2EDuration="16.543061765s" podCreationTimestamp="2026-01-30 18:10:03 +0000 UTC" firstStartedPulling="2026-01-30 18:10:07.381786422 +0000 UTC m=+4544.288795891" lastFinishedPulling="2026-01-30 18:10:18.544186494 +0000 UTC m=+4555.451195963" observedRunningTime="2026-01-30 18:10:19.53500205 +0000 UTC m=+4556.442011519" watchObservedRunningTime="2026-01-30 18:10:19.543061765 +0000 UTC m=+4556.450071234" Jan 30 18:10:20 crc kubenswrapper[4712]: I0130 18:10:20.526148 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gqsn2" event={"ID":"571ea0a8-0198-45bb-b9af-aa3142d8a48a","Type":"ContainerStarted","Data":"0105d6c29c05ea0643ce49bbbe3dc06b9503a6b4cebbf9aadcb95cefaa0048a8"} Jan 30 18:10:20 crc kubenswrapper[4712]: I0130 18:10:20.553950 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-gqsn2" podStartSLOduration=3.261763794 podStartE2EDuration="13.553931577s" podCreationTimestamp="2026-01-30 18:10:07 +0000 UTC" firstStartedPulling="2026-01-30 18:10:09.402766708 +0000 UTC m=+4546.309776177" lastFinishedPulling="2026-01-30 18:10:19.694934491 +0000 UTC m=+4556.601943960" observedRunningTime="2026-01-30 18:10:20.550639377 +0000 UTC m=+4557.457648846" watchObservedRunningTime="2026-01-30 18:10:20.553931577 +0000 UTC m=+4557.460941056" Jan 30 18:10:22 crc kubenswrapper[4712]: I0130 18:10:22.800558 4712 scope.go:117] "RemoveContainer" containerID="36685c63c851c1a3a115e3df54e26c25f9f2f445ab9f6ac1396828806b9f609c" Jan 30 18:10:22 crc kubenswrapper[4712]: E0130 18:10:22.801112 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:10:23 crc kubenswrapper[4712]: I0130 18:10:23.549786 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-v252w"] Jan 30 18:10:23 crc kubenswrapper[4712]: I0130 18:10:23.551936 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v252w" Jan 30 18:10:23 crc kubenswrapper[4712]: I0130 18:10:23.574749 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v252w"] Jan 30 18:10:24 crc kubenswrapper[4712]: I0130 18:10:24.017286 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx4p7\" (UniqueName: \"kubernetes.io/projected/34da1034-8f9a-4385-abcf-cfeb79c6460b-kube-api-access-nx4p7\") pod \"certified-operators-v252w\" (UID: \"34da1034-8f9a-4385-abcf-cfeb79c6460b\") " pod="openshift-marketplace/certified-operators-v252w" Jan 30 18:10:24 crc kubenswrapper[4712]: I0130 18:10:24.017657 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34da1034-8f9a-4385-abcf-cfeb79c6460b-catalog-content\") pod \"certified-operators-v252w\" (UID: \"34da1034-8f9a-4385-abcf-cfeb79c6460b\") " pod="openshift-marketplace/certified-operators-v252w" Jan 30 18:10:24 crc kubenswrapper[4712]: I0130 18:10:24.017729 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34da1034-8f9a-4385-abcf-cfeb79c6460b-utilities\") pod \"certified-operators-v252w\" (UID: \"34da1034-8f9a-4385-abcf-cfeb79c6460b\") " pod="openshift-marketplace/certified-operators-v252w" Jan 30 18:10:24 crc kubenswrapper[4712]: I0130 18:10:24.119183 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx4p7\" (UniqueName: \"kubernetes.io/projected/34da1034-8f9a-4385-abcf-cfeb79c6460b-kube-api-access-nx4p7\") pod \"certified-operators-v252w\" (UID: \"34da1034-8f9a-4385-abcf-cfeb79c6460b\") " pod="openshift-marketplace/certified-operators-v252w" Jan 30 18:10:24 crc kubenswrapper[4712]: I0130 18:10:24.119253 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34da1034-8f9a-4385-abcf-cfeb79c6460b-catalog-content\") pod \"certified-operators-v252w\" (UID: \"34da1034-8f9a-4385-abcf-cfeb79c6460b\") " pod="openshift-marketplace/certified-operators-v252w" Jan 30 18:10:24 crc kubenswrapper[4712]: I0130 18:10:24.119333 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34da1034-8f9a-4385-abcf-cfeb79c6460b-utilities\") pod \"certified-operators-v252w\" (UID: \"34da1034-8f9a-4385-abcf-cfeb79c6460b\") " pod="openshift-marketplace/certified-operators-v252w" Jan 30 18:10:24 crc kubenswrapper[4712]: I0130 18:10:24.119724 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34da1034-8f9a-4385-abcf-cfeb79c6460b-catalog-content\") pod \"certified-operators-v252w\" (UID: \"34da1034-8f9a-4385-abcf-cfeb79c6460b\") " pod="openshift-marketplace/certified-operators-v252w" Jan 30 18:10:24 crc kubenswrapper[4712]: I0130 18:10:24.121753 4712 
Jan 30 18:10:24 crc kubenswrapper[4712]: I0130 18:10:24.149880 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx4p7\" (UniqueName: \"kubernetes.io/projected/34da1034-8f9a-4385-abcf-cfeb79c6460b-kube-api-access-nx4p7\") pod \"certified-operators-v252w\" (UID: \"34da1034-8f9a-4385-abcf-cfeb79c6460b\") " pod="openshift-marketplace/certified-operators-v252w"
Jan 30 18:10:24 crc kubenswrapper[4712]: I0130 18:10:24.174518 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v252w"
Jan 30 18:10:24 crc kubenswrapper[4712]: I0130 18:10:24.334043 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g628g"
Jan 30 18:10:24 crc kubenswrapper[4712]: I0130 18:10:24.334443 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-g628g"
Jan 30 18:10:25 crc kubenswrapper[4712]: I0130 18:10:25.380590 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-g628g" podUID="eb8543d0-adef-4838-80a4-0e9409025a63" containerName="registry-server" probeResult="failure" output=<
Jan 30 18:10:25 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 18:10:25 crc kubenswrapper[4712]: >
Jan 30 18:10:25 crc kubenswrapper[4712]: I0130 18:10:25.410634 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v252w"]
Jan 30 18:10:25 crc kubenswrapper[4712]: I0130 18:10:25.584081 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v252w" event={"ID":"34da1034-8f9a-4385-abcf-cfeb79c6460b","Type":"ContainerStarted","Data":"9fb4be1be83f0fb2b212a227975fc8cc4831f5e5a975ea0f64e4f46f8e8c9989"}
Jan 30 18:10:26 crc kubenswrapper[4712]: I0130 18:10:26.595616 4712 generic.go:334] "Generic (PLEG): container finished" podID="34da1034-8f9a-4385-abcf-cfeb79c6460b" containerID="5451432a205e2612513eb0f4ce510f857b962828c3e43908ffbc207d738dfb9b" exitCode=0
Jan 30 18:10:26 crc kubenswrapper[4712]: I0130 18:10:26.595680 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v252w" event={"ID":"34da1034-8f9a-4385-abcf-cfeb79c6460b","Type":"ContainerDied","Data":"5451432a205e2612513eb0f4ce510f857b962828c3e43908ffbc207d738dfb9b"}
Jan 30 18:10:27 crc kubenswrapper[4712]: I0130 18:10:27.840859 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-gqsn2"
Jan 30 18:10:27 crc kubenswrapper[4712]: I0130 18:10:27.842754 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-gqsn2"
Jan 30 18:10:28 crc kubenswrapper[4712]: I0130 18:10:28.616059 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v252w" event={"ID":"34da1034-8f9a-4385-abcf-cfeb79c6460b","Type":"ContainerStarted","Data":"16a6beabd8e8e6191ba5c3070b963e89f087e1c32b9745679c92fcb76644c5e7"}
Jan 30 18:10:28 crc kubenswrapper[4712]: I0130 18:10:28.915831 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-gqsn2" podUID="571ea0a8-0198-45bb-b9af-aa3142d8a48a" containerName="registry-server" probeResult="failure" output=<
Jan 30 18:10:28 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 18:10:28 crc kubenswrapper[4712]: >
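The probe output "timeout: failed to connect service \":50051\" within 1s" shows the registry pods' startup probe trying to reach the gRPC registry-server on port 50051 with a one-second budget; the pods keep failing it while their catalog is still being extracted. A minimal check with the same shape, using a plain TCP dial (the real probe command speaks the gRPC health protocol, which this sketch does not):

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	addr := "127.0.0.1:50051" // assumed address; the log only shows ":50051"
	conn, err := net.DialTimeout("tcp", addr, 1*time.Second)
	if err != nil {
		// Matches the spirit of the logged failure output.
		fmt.Printf("timeout: failed to connect service %q within 1s: %v\n", addr, err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("service reachable")
}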
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-gqsn2" podUID="571ea0a8-0198-45bb-b9af-aa3142d8a48a" containerName="registry-server" probeResult="failure" output=< Jan 30 18:10:28 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 18:10:28 crc kubenswrapper[4712]: > Jan 30 18:10:32 crc kubenswrapper[4712]: I0130 18:10:32.650748 4712 generic.go:334] "Generic (PLEG): container finished" podID="34da1034-8f9a-4385-abcf-cfeb79c6460b" containerID="16a6beabd8e8e6191ba5c3070b963e89f087e1c32b9745679c92fcb76644c5e7" exitCode=0 Jan 30 18:10:32 crc kubenswrapper[4712]: I0130 18:10:32.652325 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v252w" event={"ID":"34da1034-8f9a-4385-abcf-cfeb79c6460b","Type":"ContainerDied","Data":"16a6beabd8e8e6191ba5c3070b963e89f087e1c32b9745679c92fcb76644c5e7"} Jan 30 18:10:34 crc kubenswrapper[4712]: I0130 18:10:34.685659 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v252w" event={"ID":"34da1034-8f9a-4385-abcf-cfeb79c6460b","Type":"ContainerStarted","Data":"3e5219e17cc583b3e8d0c4d1066424beb73e1978bae747bcc77d02ba20095c9d"} Jan 30 18:10:34 crc kubenswrapper[4712]: I0130 18:10:34.719481 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-v252w" podStartSLOduration=5.180231234 podStartE2EDuration="11.719458351s" podCreationTimestamp="2026-01-30 18:10:23 +0000 UTC" firstStartedPulling="2026-01-30 18:10:26.599040741 +0000 UTC m=+4563.506050220" lastFinishedPulling="2026-01-30 18:10:33.138267828 +0000 UTC m=+4570.045277337" observedRunningTime="2026-01-30 18:10:34.7107533 +0000 UTC m=+4571.617762759" watchObservedRunningTime="2026-01-30 18:10:34.719458351 +0000 UTC m=+4571.626467820" Jan 30 18:10:34 crc kubenswrapper[4712]: I0130 18:10:34.800966 4712 scope.go:117] "RemoveContainer" containerID="36685c63c851c1a3a115e3df54e26c25f9f2f445ab9f6ac1396828806b9f609c" Jan 30 18:10:34 crc kubenswrapper[4712]: E0130 18:10:34.801159 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:10:35 crc kubenswrapper[4712]: I0130 18:10:35.379754 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-g628g" podUID="eb8543d0-adef-4838-80a4-0e9409025a63" containerName="registry-server" probeResult="failure" output=< Jan 30 18:10:35 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 18:10:35 crc kubenswrapper[4712]: > Jan 30 18:10:37 crc kubenswrapper[4712]: I0130 18:10:37.962177 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-gqsn2" Jan 30 18:10:38 crc kubenswrapper[4712]: I0130 18:10:38.497461 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-gqsn2" Jan 30 18:10:38 crc kubenswrapper[4712]: I0130 18:10:38.552413 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gqsn2"] Jan 30 18:10:39 crc 
Jan 30 18:10:40 crc kubenswrapper[4712]: I0130 18:10:40.752524 4712 generic.go:334] "Generic (PLEG): container finished" podID="571ea0a8-0198-45bb-b9af-aa3142d8a48a" containerID="0105d6c29c05ea0643ce49bbbe3dc06b9503a6b4cebbf9aadcb95cefaa0048a8" exitCode=0
Jan 30 18:10:40 crc kubenswrapper[4712]: I0130 18:10:40.752580 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gqsn2" event={"ID":"571ea0a8-0198-45bb-b9af-aa3142d8a48a","Type":"ContainerDied","Data":"0105d6c29c05ea0643ce49bbbe3dc06b9503a6b4cebbf9aadcb95cefaa0048a8"}
Jan 30 18:10:40 crc kubenswrapper[4712]: I0130 18:10:40.856741 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gqsn2"
Jan 30 18:10:41 crc kubenswrapper[4712]: I0130 18:10:41.000986 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/571ea0a8-0198-45bb-b9af-aa3142d8a48a-catalog-content\") pod \"571ea0a8-0198-45bb-b9af-aa3142d8a48a\" (UID: \"571ea0a8-0198-45bb-b9af-aa3142d8a48a\") "
Jan 30 18:10:41 crc kubenswrapper[4712]: I0130 18:10:41.001070 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l99b2\" (UniqueName: \"kubernetes.io/projected/571ea0a8-0198-45bb-b9af-aa3142d8a48a-kube-api-access-l99b2\") pod \"571ea0a8-0198-45bb-b9af-aa3142d8a48a\" (UID: \"571ea0a8-0198-45bb-b9af-aa3142d8a48a\") "
Jan 30 18:10:41 crc kubenswrapper[4712]: I0130 18:10:41.001190 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/571ea0a8-0198-45bb-b9af-aa3142d8a48a-utilities\") pod \"571ea0a8-0198-45bb-b9af-aa3142d8a48a\" (UID: \"571ea0a8-0198-45bb-b9af-aa3142d8a48a\") "
Jan 30 18:10:41 crc kubenswrapper[4712]: I0130 18:10:41.001907 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/571ea0a8-0198-45bb-b9af-aa3142d8a48a-utilities" (OuterVolumeSpecName: "utilities") pod "571ea0a8-0198-45bb-b9af-aa3142d8a48a" (UID: "571ea0a8-0198-45bb-b9af-aa3142d8a48a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 18:10:41 crc kubenswrapper[4712]: I0130 18:10:41.019899 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/571ea0a8-0198-45bb-b9af-aa3142d8a48a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "571ea0a8-0198-45bb-b9af-aa3142d8a48a" (UID: "571ea0a8-0198-45bb-b9af-aa3142d8a48a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 18:10:41 crc kubenswrapper[4712]: I0130 18:10:41.021206 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/571ea0a8-0198-45bb-b9af-aa3142d8a48a-kube-api-access-l99b2" (OuterVolumeSpecName: "kube-api-access-l99b2") pod "571ea0a8-0198-45bb-b9af-aa3142d8a48a" (UID: "571ea0a8-0198-45bb-b9af-aa3142d8a48a"). InnerVolumeSpecName "kube-api-access-l99b2". PluginName "kubernetes.io/projected", VolumeGidValue ""
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:10:41 crc kubenswrapper[4712]: I0130 18:10:41.103398 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/571ea0a8-0198-45bb-b9af-aa3142d8a48a-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 18:10:41 crc kubenswrapper[4712]: I0130 18:10:41.103606 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/571ea0a8-0198-45bb-b9af-aa3142d8a48a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 18:10:41 crc kubenswrapper[4712]: I0130 18:10:41.103664 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l99b2\" (UniqueName: \"kubernetes.io/projected/571ea0a8-0198-45bb-b9af-aa3142d8a48a-kube-api-access-l99b2\") on node \"crc\" DevicePath \"\"" Jan 30 18:10:41 crc kubenswrapper[4712]: I0130 18:10:41.765490 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-gqsn2" event={"ID":"571ea0a8-0198-45bb-b9af-aa3142d8a48a","Type":"ContainerDied","Data":"71f90cca3d193922d64979f43fb52fb9896eefe3d298518d94993cdc1768398d"} Jan 30 18:10:41 crc kubenswrapper[4712]: I0130 18:10:41.765556 4712 scope.go:117] "RemoveContainer" containerID="0105d6c29c05ea0643ce49bbbe3dc06b9503a6b4cebbf9aadcb95cefaa0048a8" Jan 30 18:10:41 crc kubenswrapper[4712]: I0130 18:10:41.765558 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-gqsn2" Jan 30 18:10:41 crc kubenswrapper[4712]: I0130 18:10:41.790633 4712 scope.go:117] "RemoveContainer" containerID="c4e2eef893f8739fdbd32359d013a15011aeeb465eb001367fe854d89fec872b" Jan 30 18:10:41 crc kubenswrapper[4712]: I0130 18:10:41.811437 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-gqsn2"] Jan 30 18:10:41 crc kubenswrapper[4712]: I0130 18:10:41.826153 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-gqsn2"] Jan 30 18:10:41 crc kubenswrapper[4712]: I0130 18:10:41.830641 4712 scope.go:117] "RemoveContainer" containerID="4b9eb2d0c6d93f96c7d9673fc6eced606f5f404f2b315c5f7452ae7a78c54f15" Jan 30 18:10:43 crc kubenswrapper[4712]: I0130 18:10:43.813783 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="571ea0a8-0198-45bb-b9af-aa3142d8a48a" path="/var/lib/kubelet/pods/571ea0a8-0198-45bb-b9af-aa3142d8a48a/volumes" Jan 30 18:10:44 crc kubenswrapper[4712]: I0130 18:10:44.175597 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-v252w" Jan 30 18:10:44 crc kubenswrapper[4712]: I0130 18:10:44.175854 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-v252w" Jan 30 18:10:44 crc kubenswrapper[4712]: I0130 18:10:44.380388 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g628g" Jan 30 18:10:44 crc kubenswrapper[4712]: I0130 18:10:44.441069 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g628g" Jan 30 18:10:45 crc kubenswrapper[4712]: I0130 18:10:45.242944 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-v252w" podUID="34da1034-8f9a-4385-abcf-cfeb79c6460b" containerName="registry-server" probeResult="failure" 
Jan 30 18:10:45 crc kubenswrapper[4712]: I0130 18:10:45.614432 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g628g"]
Jan 30 18:10:45 crc kubenswrapper[4712]: I0130 18:10:45.799962 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-g628g" podUID="eb8543d0-adef-4838-80a4-0e9409025a63" containerName="registry-server" containerID="cri-o://65236a8fe8754b088560a961fdbe88023d6960ea4006d5484efb20400b87b883" gracePeriod=2
Jan 30 18:10:46 crc kubenswrapper[4712]: I0130 18:10:46.669410 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g628g"
Jan 30 18:10:46 crc kubenswrapper[4712]: I0130 18:10:46.810747 4712 generic.go:334] "Generic (PLEG): container finished" podID="eb8543d0-adef-4838-80a4-0e9409025a63" containerID="65236a8fe8754b088560a961fdbe88023d6960ea4006d5484efb20400b87b883" exitCode=0
Jan 30 18:10:46 crc kubenswrapper[4712]: I0130 18:10:46.810847 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g628g"
Jan 30 18:10:46 crc kubenswrapper[4712]: I0130 18:10:46.810843 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g628g" event={"ID":"eb8543d0-adef-4838-80a4-0e9409025a63","Type":"ContainerDied","Data":"65236a8fe8754b088560a961fdbe88023d6960ea4006d5484efb20400b87b883"}
Jan 30 18:10:46 crc kubenswrapper[4712]: I0130 18:10:46.811685 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g628g" event={"ID":"eb8543d0-adef-4838-80a4-0e9409025a63","Type":"ContainerDied","Data":"87f651aeeab6ee9946b2b6c19ad12fd40eb83100863133ff18ccda03bb67f926"}
Jan 30 18:10:46 crc kubenswrapper[4712]: I0130 18:10:46.812027 4712 scope.go:117] "RemoveContainer" containerID="65236a8fe8754b088560a961fdbe88023d6960ea4006d5484efb20400b87b883"
Jan 30 18:10:46 crc kubenswrapper[4712]: I0130 18:10:46.812416 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb8543d0-adef-4838-80a4-0e9409025a63-utilities\") pod \"eb8543d0-adef-4838-80a4-0e9409025a63\" (UID: \"eb8543d0-adef-4838-80a4-0e9409025a63\") "
Jan 30 18:10:46 crc kubenswrapper[4712]: I0130 18:10:46.812513 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jx9l\" (UniqueName: \"kubernetes.io/projected/eb8543d0-adef-4838-80a4-0e9409025a63-kube-api-access-4jx9l\") pod \"eb8543d0-adef-4838-80a4-0e9409025a63\" (UID: \"eb8543d0-adef-4838-80a4-0e9409025a63\") "
Jan 30 18:10:46 crc kubenswrapper[4712]: I0130 18:10:46.812690 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb8543d0-adef-4838-80a4-0e9409025a63-catalog-content\") pod \"eb8543d0-adef-4838-80a4-0e9409025a63\" (UID: \"eb8543d0-adef-4838-80a4-0e9409025a63\") "
Jan 30 18:10:46 crc kubenswrapper[4712]: I0130 18:10:46.813273 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb8543d0-adef-4838-80a4-0e9409025a63-utilities" (OuterVolumeSpecName: "utilities") pod "eb8543d0-adef-4838-80a4-0e9409025a63" (UID: "eb8543d0-adef-4838-80a4-0e9409025a63"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
"eb8543d0-adef-4838-80a4-0e9409025a63"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:10:46 crc kubenswrapper[4712]: I0130 18:10:46.833437 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb8543d0-adef-4838-80a4-0e9409025a63-kube-api-access-4jx9l" (OuterVolumeSpecName: "kube-api-access-4jx9l") pod "eb8543d0-adef-4838-80a4-0e9409025a63" (UID: "eb8543d0-adef-4838-80a4-0e9409025a63"). InnerVolumeSpecName "kube-api-access-4jx9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:10:46 crc kubenswrapper[4712]: I0130 18:10:46.870724 4712 scope.go:117] "RemoveContainer" containerID="7a660352d98ee3aa8f80b3f649e79ac6fcff2b67df3930d4c545d4432b1d3947" Jan 30 18:10:46 crc kubenswrapper[4712]: I0130 18:10:46.877355 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb8543d0-adef-4838-80a4-0e9409025a63-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb8543d0-adef-4838-80a4-0e9409025a63" (UID: "eb8543d0-adef-4838-80a4-0e9409025a63"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:10:46 crc kubenswrapper[4712]: I0130 18:10:46.895061 4712 scope.go:117] "RemoveContainer" containerID="d964cc9e35a744780d999370548f5ac045df40e5ddd9c49210874b7f1f301647" Jan 30 18:10:46 crc kubenswrapper[4712]: I0130 18:10:46.917961 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb8543d0-adef-4838-80a4-0e9409025a63-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 18:10:46 crc kubenswrapper[4712]: I0130 18:10:46.918000 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jx9l\" (UniqueName: \"kubernetes.io/projected/eb8543d0-adef-4838-80a4-0e9409025a63-kube-api-access-4jx9l\") on node \"crc\" DevicePath \"\"" Jan 30 18:10:46 crc kubenswrapper[4712]: I0130 18:10:46.918020 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb8543d0-adef-4838-80a4-0e9409025a63-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 18:10:46 crc kubenswrapper[4712]: I0130 18:10:46.941426 4712 scope.go:117] "RemoveContainer" containerID="65236a8fe8754b088560a961fdbe88023d6960ea4006d5484efb20400b87b883" Jan 30 18:10:46 crc kubenswrapper[4712]: E0130 18:10:46.944271 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65236a8fe8754b088560a961fdbe88023d6960ea4006d5484efb20400b87b883\": container with ID starting with 65236a8fe8754b088560a961fdbe88023d6960ea4006d5484efb20400b87b883 not found: ID does not exist" containerID="65236a8fe8754b088560a961fdbe88023d6960ea4006d5484efb20400b87b883" Jan 30 18:10:46 crc kubenswrapper[4712]: I0130 18:10:46.944986 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65236a8fe8754b088560a961fdbe88023d6960ea4006d5484efb20400b87b883"} err="failed to get container status \"65236a8fe8754b088560a961fdbe88023d6960ea4006d5484efb20400b87b883\": rpc error: code = NotFound desc = could not find container \"65236a8fe8754b088560a961fdbe88023d6960ea4006d5484efb20400b87b883\": container with ID starting with 65236a8fe8754b088560a961fdbe88023d6960ea4006d5484efb20400b87b883 not found: ID does not exist" Jan 30 18:10:46 crc kubenswrapper[4712]: I0130 18:10:46.945031 4712 scope.go:117] "RemoveContainer" 
containerID="7a660352d98ee3aa8f80b3f649e79ac6fcff2b67df3930d4c545d4432b1d3947" Jan 30 18:10:46 crc kubenswrapper[4712]: E0130 18:10:46.945362 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a660352d98ee3aa8f80b3f649e79ac6fcff2b67df3930d4c545d4432b1d3947\": container with ID starting with 7a660352d98ee3aa8f80b3f649e79ac6fcff2b67df3930d4c545d4432b1d3947 not found: ID does not exist" containerID="7a660352d98ee3aa8f80b3f649e79ac6fcff2b67df3930d4c545d4432b1d3947" Jan 30 18:10:46 crc kubenswrapper[4712]: I0130 18:10:46.945391 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a660352d98ee3aa8f80b3f649e79ac6fcff2b67df3930d4c545d4432b1d3947"} err="failed to get container status \"7a660352d98ee3aa8f80b3f649e79ac6fcff2b67df3930d4c545d4432b1d3947\": rpc error: code = NotFound desc = could not find container \"7a660352d98ee3aa8f80b3f649e79ac6fcff2b67df3930d4c545d4432b1d3947\": container with ID starting with 7a660352d98ee3aa8f80b3f649e79ac6fcff2b67df3930d4c545d4432b1d3947 not found: ID does not exist" Jan 30 18:10:46 crc kubenswrapper[4712]: I0130 18:10:46.945410 4712 scope.go:117] "RemoveContainer" containerID="d964cc9e35a744780d999370548f5ac045df40e5ddd9c49210874b7f1f301647" Jan 30 18:10:46 crc kubenswrapper[4712]: E0130 18:10:46.945659 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d964cc9e35a744780d999370548f5ac045df40e5ddd9c49210874b7f1f301647\": container with ID starting with d964cc9e35a744780d999370548f5ac045df40e5ddd9c49210874b7f1f301647 not found: ID does not exist" containerID="d964cc9e35a744780d999370548f5ac045df40e5ddd9c49210874b7f1f301647" Jan 30 18:10:46 crc kubenswrapper[4712]: I0130 18:10:46.945697 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d964cc9e35a744780d999370548f5ac045df40e5ddd9c49210874b7f1f301647"} err="failed to get container status \"d964cc9e35a744780d999370548f5ac045df40e5ddd9c49210874b7f1f301647\": rpc error: code = NotFound desc = could not find container \"d964cc9e35a744780d999370548f5ac045df40e5ddd9c49210874b7f1f301647\": container with ID starting with d964cc9e35a744780d999370548f5ac045df40e5ddd9c49210874b7f1f301647 not found: ID does not exist" Jan 30 18:10:47 crc kubenswrapper[4712]: I0130 18:10:47.144040 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g628g"] Jan 30 18:10:47 crc kubenswrapper[4712]: I0130 18:10:47.152293 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-g628g"] Jan 30 18:10:47 crc kubenswrapper[4712]: I0130 18:10:47.816267 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb8543d0-adef-4838-80a4-0e9409025a63" path="/var/lib/kubelet/pods/eb8543d0-adef-4838-80a4-0e9409025a63/volumes" Jan 30 18:10:49 crc kubenswrapper[4712]: I0130 18:10:49.799162 4712 scope.go:117] "RemoveContainer" containerID="36685c63c851c1a3a115e3df54e26c25f9f2f445ab9f6ac1396828806b9f609c" Jan 30 18:10:49 crc kubenswrapper[4712]: E0130 18:10:49.799786 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:10:54 crc kubenswrapper[4712]: I0130 18:10:54.243741 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-v252w" Jan 30 18:10:54 crc kubenswrapper[4712]: I0130 18:10:54.375186 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-v252w" Jan 30 18:10:54 crc kubenswrapper[4712]: I0130 18:10:54.497450 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v252w"] Jan 30 18:10:55 crc kubenswrapper[4712]: I0130 18:10:55.914620 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-v252w" podUID="34da1034-8f9a-4385-abcf-cfeb79c6460b" containerName="registry-server" containerID="cri-o://3e5219e17cc583b3e8d0c4d1066424beb73e1978bae747bcc77d02ba20095c9d" gracePeriod=2 Jan 30 18:10:56 crc kubenswrapper[4712]: I0130 18:10:56.395741 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v252w" Jan 30 18:10:56 crc kubenswrapper[4712]: I0130 18:10:56.508604 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nx4p7\" (UniqueName: \"kubernetes.io/projected/34da1034-8f9a-4385-abcf-cfeb79c6460b-kube-api-access-nx4p7\") pod \"34da1034-8f9a-4385-abcf-cfeb79c6460b\" (UID: \"34da1034-8f9a-4385-abcf-cfeb79c6460b\") " Jan 30 18:10:56 crc kubenswrapper[4712]: I0130 18:10:56.508685 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34da1034-8f9a-4385-abcf-cfeb79c6460b-utilities\") pod \"34da1034-8f9a-4385-abcf-cfeb79c6460b\" (UID: \"34da1034-8f9a-4385-abcf-cfeb79c6460b\") " Jan 30 18:10:56 crc kubenswrapper[4712]: I0130 18:10:56.509387 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34da1034-8f9a-4385-abcf-cfeb79c6460b-utilities" (OuterVolumeSpecName: "utilities") pod "34da1034-8f9a-4385-abcf-cfeb79c6460b" (UID: "34da1034-8f9a-4385-abcf-cfeb79c6460b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:10:56 crc kubenswrapper[4712]: I0130 18:10:56.509549 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34da1034-8f9a-4385-abcf-cfeb79c6460b-catalog-content\") pod \"34da1034-8f9a-4385-abcf-cfeb79c6460b\" (UID: \"34da1034-8f9a-4385-abcf-cfeb79c6460b\") " Jan 30 18:10:56 crc kubenswrapper[4712]: I0130 18:10:56.510062 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/34da1034-8f9a-4385-abcf-cfeb79c6460b-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 18:10:56 crc kubenswrapper[4712]: I0130 18:10:56.514355 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34da1034-8f9a-4385-abcf-cfeb79c6460b-kube-api-access-nx4p7" (OuterVolumeSpecName: "kube-api-access-nx4p7") pod "34da1034-8f9a-4385-abcf-cfeb79c6460b" (UID: "34da1034-8f9a-4385-abcf-cfeb79c6460b"). InnerVolumeSpecName "kube-api-access-nx4p7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:10:56 crc kubenswrapper[4712]: I0130 18:10:56.569601 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34da1034-8f9a-4385-abcf-cfeb79c6460b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "34da1034-8f9a-4385-abcf-cfeb79c6460b" (UID: "34da1034-8f9a-4385-abcf-cfeb79c6460b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:10:56 crc kubenswrapper[4712]: I0130 18:10:56.611920 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nx4p7\" (UniqueName: \"kubernetes.io/projected/34da1034-8f9a-4385-abcf-cfeb79c6460b-kube-api-access-nx4p7\") on node \"crc\" DevicePath \"\"" Jan 30 18:10:56 crc kubenswrapper[4712]: I0130 18:10:56.611952 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/34da1034-8f9a-4385-abcf-cfeb79c6460b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 18:10:56 crc kubenswrapper[4712]: I0130 18:10:56.925240 4712 generic.go:334] "Generic (PLEG): container finished" podID="34da1034-8f9a-4385-abcf-cfeb79c6460b" containerID="3e5219e17cc583b3e8d0c4d1066424beb73e1978bae747bcc77d02ba20095c9d" exitCode=0 Jan 30 18:10:56 crc kubenswrapper[4712]: I0130 18:10:56.925322 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v252w" event={"ID":"34da1034-8f9a-4385-abcf-cfeb79c6460b","Type":"ContainerDied","Data":"3e5219e17cc583b3e8d0c4d1066424beb73e1978bae747bcc77d02ba20095c9d"} Jan 30 18:10:56 crc kubenswrapper[4712]: I0130 18:10:56.925362 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v252w" event={"ID":"34da1034-8f9a-4385-abcf-cfeb79c6460b","Type":"ContainerDied","Data":"9fb4be1be83f0fb2b212a227975fc8cc4831f5e5a975ea0f64e4f46f8e8c9989"} Jan 30 18:10:56 crc kubenswrapper[4712]: I0130 18:10:56.925395 4712 scope.go:117] "RemoveContainer" containerID="3e5219e17cc583b3e8d0c4d1066424beb73e1978bae747bcc77d02ba20095c9d" Jan 30 18:10:56 crc kubenswrapper[4712]: I0130 18:10:56.926188 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-v252w" Jan 30 18:10:56 crc kubenswrapper[4712]: I0130 18:10:56.949558 4712 scope.go:117] "RemoveContainer" containerID="16a6beabd8e8e6191ba5c3070b963e89f087e1c32b9745679c92fcb76644c5e7" Jan 30 18:10:56 crc kubenswrapper[4712]: I0130 18:10:56.975377 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v252w"] Jan 30 18:10:56 crc kubenswrapper[4712]: I0130 18:10:56.978345 4712 scope.go:117] "RemoveContainer" containerID="5451432a205e2612513eb0f4ce510f857b962828c3e43908ffbc207d738dfb9b" Jan 30 18:10:56 crc kubenswrapper[4712]: I0130 18:10:56.985024 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-v252w"] Jan 30 18:10:57 crc kubenswrapper[4712]: I0130 18:10:57.054416 4712 scope.go:117] "RemoveContainer" containerID="3e5219e17cc583b3e8d0c4d1066424beb73e1978bae747bcc77d02ba20095c9d" Jan 30 18:10:57 crc kubenswrapper[4712]: E0130 18:10:57.056263 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e5219e17cc583b3e8d0c4d1066424beb73e1978bae747bcc77d02ba20095c9d\": container with ID starting with 3e5219e17cc583b3e8d0c4d1066424beb73e1978bae747bcc77d02ba20095c9d not found: ID does not exist" containerID="3e5219e17cc583b3e8d0c4d1066424beb73e1978bae747bcc77d02ba20095c9d" Jan 30 18:10:57 crc kubenswrapper[4712]: I0130 18:10:57.056320 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e5219e17cc583b3e8d0c4d1066424beb73e1978bae747bcc77d02ba20095c9d"} err="failed to get container status \"3e5219e17cc583b3e8d0c4d1066424beb73e1978bae747bcc77d02ba20095c9d\": rpc error: code = NotFound desc = could not find container \"3e5219e17cc583b3e8d0c4d1066424beb73e1978bae747bcc77d02ba20095c9d\": container with ID starting with 3e5219e17cc583b3e8d0c4d1066424beb73e1978bae747bcc77d02ba20095c9d not found: ID does not exist" Jan 30 18:10:57 crc kubenswrapper[4712]: I0130 18:10:57.056344 4712 scope.go:117] "RemoveContainer" containerID="16a6beabd8e8e6191ba5c3070b963e89f087e1c32b9745679c92fcb76644c5e7" Jan 30 18:10:57 crc kubenswrapper[4712]: E0130 18:10:57.056724 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16a6beabd8e8e6191ba5c3070b963e89f087e1c32b9745679c92fcb76644c5e7\": container with ID starting with 16a6beabd8e8e6191ba5c3070b963e89f087e1c32b9745679c92fcb76644c5e7 not found: ID does not exist" containerID="16a6beabd8e8e6191ba5c3070b963e89f087e1c32b9745679c92fcb76644c5e7" Jan 30 18:10:57 crc kubenswrapper[4712]: I0130 18:10:57.056749 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16a6beabd8e8e6191ba5c3070b963e89f087e1c32b9745679c92fcb76644c5e7"} err="failed to get container status \"16a6beabd8e8e6191ba5c3070b963e89f087e1c32b9745679c92fcb76644c5e7\": rpc error: code = NotFound desc = could not find container \"16a6beabd8e8e6191ba5c3070b963e89f087e1c32b9745679c92fcb76644c5e7\": container with ID starting with 16a6beabd8e8e6191ba5c3070b963e89f087e1c32b9745679c92fcb76644c5e7 not found: ID does not exist" Jan 30 18:10:57 crc kubenswrapper[4712]: I0130 18:10:57.056768 4712 scope.go:117] "RemoveContainer" containerID="5451432a205e2612513eb0f4ce510f857b962828c3e43908ffbc207d738dfb9b" Jan 30 18:10:57 crc kubenswrapper[4712]: E0130 18:10:57.057110 4712 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5451432a205e2612513eb0f4ce510f857b962828c3e43908ffbc207d738dfb9b\": container with ID starting with 5451432a205e2612513eb0f4ce510f857b962828c3e43908ffbc207d738dfb9b not found: ID does not exist" containerID="5451432a205e2612513eb0f4ce510f857b962828c3e43908ffbc207d738dfb9b" Jan 30 18:10:57 crc kubenswrapper[4712]: I0130 18:10:57.057150 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5451432a205e2612513eb0f4ce510f857b962828c3e43908ffbc207d738dfb9b"} err="failed to get container status \"5451432a205e2612513eb0f4ce510f857b962828c3e43908ffbc207d738dfb9b\": rpc error: code = NotFound desc = could not find container \"5451432a205e2612513eb0f4ce510f857b962828c3e43908ffbc207d738dfb9b\": container with ID starting with 5451432a205e2612513eb0f4ce510f857b962828c3e43908ffbc207d738dfb9b not found: ID does not exist" Jan 30 18:10:57 crc kubenswrapper[4712]: I0130 18:10:57.814419 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34da1034-8f9a-4385-abcf-cfeb79c6460b" path="/var/lib/kubelet/pods/34da1034-8f9a-4385-abcf-cfeb79c6460b/volumes" Jan 30 18:11:01 crc kubenswrapper[4712]: I0130 18:11:01.800590 4712 scope.go:117] "RemoveContainer" containerID="36685c63c851c1a3a115e3df54e26c25f9f2f445ab9f6ac1396828806b9f609c" Jan 30 18:11:01 crc kubenswrapper[4712]: E0130 18:11:01.801524 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:11:14 crc kubenswrapper[4712]: I0130 18:11:14.799894 4712 scope.go:117] "RemoveContainer" containerID="36685c63c851c1a3a115e3df54e26c25f9f2f445ab9f6ac1396828806b9f609c" Jan 30 18:11:14 crc kubenswrapper[4712]: E0130 18:11:14.800669 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:11:29 crc kubenswrapper[4712]: I0130 18:11:29.799552 4712 scope.go:117] "RemoveContainer" containerID="36685c63c851c1a3a115e3df54e26c25f9f2f445ab9f6ac1396828806b9f609c" Jan 30 18:11:29 crc kubenswrapper[4712]: E0130 18:11:29.800365 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:11:41 crc kubenswrapper[4712]: I0130 18:11:41.799913 4712 scope.go:117] "RemoveContainer" containerID="36685c63c851c1a3a115e3df54e26c25f9f2f445ab9f6ac1396828806b9f609c" Jan 30 18:11:41 crc kubenswrapper[4712]: E0130 18:11:41.800894 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:11:55 crc kubenswrapper[4712]: I0130 18:11:55.802178 4712 scope.go:117] "RemoveContainer" containerID="36685c63c851c1a3a115e3df54e26c25f9f2f445ab9f6ac1396828806b9f609c" Jan 30 18:11:55 crc kubenswrapper[4712]: E0130 18:11:55.803241 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:12:06 crc kubenswrapper[4712]: I0130 18:12:06.799819 4712 scope.go:117] "RemoveContainer" containerID="36685c63c851c1a3a115e3df54e26c25f9f2f445ab9f6ac1396828806b9f609c" Jan 30 18:12:06 crc kubenswrapper[4712]: E0130 18:12:06.800629 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:12:21 crc kubenswrapper[4712]: I0130 18:12:21.800485 4712 scope.go:117] "RemoveContainer" containerID="36685c63c851c1a3a115e3df54e26c25f9f2f445ab9f6ac1396828806b9f609c" Jan 30 18:12:21 crc kubenswrapper[4712]: E0130 18:12:21.801187 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:12:36 crc kubenswrapper[4712]: I0130 18:12:36.800537 4712 scope.go:117] "RemoveContainer" containerID="36685c63c851c1a3a115e3df54e26c25f9f2f445ab9f6ac1396828806b9f609c" Jan 30 18:12:36 crc kubenswrapper[4712]: E0130 18:12:36.816153 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:12:50 crc kubenswrapper[4712]: I0130 18:12:50.800212 4712 scope.go:117] "RemoveContainer" containerID="36685c63c851c1a3a115e3df54e26c25f9f2f445ab9f6ac1396828806b9f609c" Jan 30 18:12:51 crc kubenswrapper[4712]: I0130 18:12:51.136059 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerStarted","Data":"a613ce570dedd7e82d331a9312c3a642ab2bdc059aaa85e6bd55148e1a6cbede"} Jan 30 
18:13:16 crc kubenswrapper[4712]: I0130 18:13:16.433293 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-7sbdz"] Jan 30 18:13:16 crc kubenswrapper[4712]: E0130 18:13:16.434877 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb8543d0-adef-4838-80a4-0e9409025a63" containerName="extract-utilities" Jan 30 18:13:16 crc kubenswrapper[4712]: I0130 18:13:16.434894 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb8543d0-adef-4838-80a4-0e9409025a63" containerName="extract-utilities" Jan 30 18:13:16 crc kubenswrapper[4712]: E0130 18:13:16.434904 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb8543d0-adef-4838-80a4-0e9409025a63" containerName="extract-content" Jan 30 18:13:16 crc kubenswrapper[4712]: I0130 18:13:16.434910 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb8543d0-adef-4838-80a4-0e9409025a63" containerName="extract-content" Jan 30 18:13:16 crc kubenswrapper[4712]: E0130 18:13:16.434922 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34da1034-8f9a-4385-abcf-cfeb79c6460b" containerName="extract-content" Jan 30 18:13:16 crc kubenswrapper[4712]: I0130 18:13:16.434929 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="34da1034-8f9a-4385-abcf-cfeb79c6460b" containerName="extract-content" Jan 30 18:13:16 crc kubenswrapper[4712]: E0130 18:13:16.434938 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="571ea0a8-0198-45bb-b9af-aa3142d8a48a" containerName="registry-server" Jan 30 18:13:16 crc kubenswrapper[4712]: I0130 18:13:16.434943 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="571ea0a8-0198-45bb-b9af-aa3142d8a48a" containerName="registry-server" Jan 30 18:13:16 crc kubenswrapper[4712]: E0130 18:13:16.434955 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="571ea0a8-0198-45bb-b9af-aa3142d8a48a" containerName="extract-content" Jan 30 18:13:16 crc kubenswrapper[4712]: I0130 18:13:16.434961 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="571ea0a8-0198-45bb-b9af-aa3142d8a48a" containerName="extract-content" Jan 30 18:13:16 crc kubenswrapper[4712]: E0130 18:13:16.434985 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb8543d0-adef-4838-80a4-0e9409025a63" containerName="registry-server" Jan 30 18:13:16 crc kubenswrapper[4712]: I0130 18:13:16.434991 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb8543d0-adef-4838-80a4-0e9409025a63" containerName="registry-server" Jan 30 18:13:16 crc kubenswrapper[4712]: E0130 18:13:16.435001 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34da1034-8f9a-4385-abcf-cfeb79c6460b" containerName="registry-server" Jan 30 18:13:16 crc kubenswrapper[4712]: I0130 18:13:16.435007 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="34da1034-8f9a-4385-abcf-cfeb79c6460b" containerName="registry-server" Jan 30 18:13:16 crc kubenswrapper[4712]: E0130 18:13:16.435022 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34da1034-8f9a-4385-abcf-cfeb79c6460b" containerName="extract-utilities" Jan 30 18:13:16 crc kubenswrapper[4712]: I0130 18:13:16.435028 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="34da1034-8f9a-4385-abcf-cfeb79c6460b" containerName="extract-utilities" Jan 30 18:13:16 crc kubenswrapper[4712]: E0130 18:13:16.435034 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="571ea0a8-0198-45bb-b9af-aa3142d8a48a" containerName="extract-utilities" Jan 
30 18:13:16 crc kubenswrapper[4712]: I0130 18:13:16.435041 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="571ea0a8-0198-45bb-b9af-aa3142d8a48a" containerName="extract-utilities" Jan 30 18:13:16 crc kubenswrapper[4712]: I0130 18:13:16.435214 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="571ea0a8-0198-45bb-b9af-aa3142d8a48a" containerName="registry-server" Jan 30 18:13:16 crc kubenswrapper[4712]: I0130 18:13:16.435223 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="34da1034-8f9a-4385-abcf-cfeb79c6460b" containerName="registry-server" Jan 30 18:13:16 crc kubenswrapper[4712]: I0130 18:13:16.435235 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb8543d0-adef-4838-80a4-0e9409025a63" containerName="registry-server" Jan 30 18:13:16 crc kubenswrapper[4712]: I0130 18:13:16.436541 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7sbdz" Jan 30 18:13:16 crc kubenswrapper[4712]: I0130 18:13:16.453320 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7sbdz"] Jan 30 18:13:16 crc kubenswrapper[4712]: I0130 18:13:16.492756 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wknmn\" (UniqueName: \"kubernetes.io/projected/21ad8671-fb57-43c4-bde6-68d8e38db90f-kube-api-access-wknmn\") pod \"redhat-operators-7sbdz\" (UID: \"21ad8671-fb57-43c4-bde6-68d8e38db90f\") " pod="openshift-marketplace/redhat-operators-7sbdz" Jan 30 18:13:16 crc kubenswrapper[4712]: I0130 18:13:16.492908 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21ad8671-fb57-43c4-bde6-68d8e38db90f-utilities\") pod \"redhat-operators-7sbdz\" (UID: \"21ad8671-fb57-43c4-bde6-68d8e38db90f\") " pod="openshift-marketplace/redhat-operators-7sbdz" Jan 30 18:13:16 crc kubenswrapper[4712]: I0130 18:13:16.493027 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21ad8671-fb57-43c4-bde6-68d8e38db90f-catalog-content\") pod \"redhat-operators-7sbdz\" (UID: \"21ad8671-fb57-43c4-bde6-68d8e38db90f\") " pod="openshift-marketplace/redhat-operators-7sbdz" Jan 30 18:13:16 crc kubenswrapper[4712]: I0130 18:13:16.595506 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wknmn\" (UniqueName: \"kubernetes.io/projected/21ad8671-fb57-43c4-bde6-68d8e38db90f-kube-api-access-wknmn\") pod \"redhat-operators-7sbdz\" (UID: \"21ad8671-fb57-43c4-bde6-68d8e38db90f\") " pod="openshift-marketplace/redhat-operators-7sbdz" Jan 30 18:13:16 crc kubenswrapper[4712]: I0130 18:13:16.595961 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21ad8671-fb57-43c4-bde6-68d8e38db90f-utilities\") pod \"redhat-operators-7sbdz\" (UID: \"21ad8671-fb57-43c4-bde6-68d8e38db90f\") " pod="openshift-marketplace/redhat-operators-7sbdz" Jan 30 18:13:16 crc kubenswrapper[4712]: I0130 18:13:16.596128 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21ad8671-fb57-43c4-bde6-68d8e38db90f-catalog-content\") pod \"redhat-operators-7sbdz\" (UID: \"21ad8671-fb57-43c4-bde6-68d8e38db90f\") " 
pod="openshift-marketplace/redhat-operators-7sbdz" Jan 30 18:13:16 crc kubenswrapper[4712]: I0130 18:13:16.596465 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21ad8671-fb57-43c4-bde6-68d8e38db90f-utilities\") pod \"redhat-operators-7sbdz\" (UID: \"21ad8671-fb57-43c4-bde6-68d8e38db90f\") " pod="openshift-marketplace/redhat-operators-7sbdz" Jan 30 18:13:16 crc kubenswrapper[4712]: I0130 18:13:16.596643 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21ad8671-fb57-43c4-bde6-68d8e38db90f-catalog-content\") pod \"redhat-operators-7sbdz\" (UID: \"21ad8671-fb57-43c4-bde6-68d8e38db90f\") " pod="openshift-marketplace/redhat-operators-7sbdz" Jan 30 18:13:16 crc kubenswrapper[4712]: I0130 18:13:16.629578 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wknmn\" (UniqueName: \"kubernetes.io/projected/21ad8671-fb57-43c4-bde6-68d8e38db90f-kube-api-access-wknmn\") pod \"redhat-operators-7sbdz\" (UID: \"21ad8671-fb57-43c4-bde6-68d8e38db90f\") " pod="openshift-marketplace/redhat-operators-7sbdz" Jan 30 18:13:16 crc kubenswrapper[4712]: I0130 18:13:16.767227 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7sbdz" Jan 30 18:13:17 crc kubenswrapper[4712]: I0130 18:13:17.440945 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-7sbdz"] Jan 30 18:13:17 crc kubenswrapper[4712]: W0130 18:13:17.458851 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21ad8671_fb57_43c4_bde6_68d8e38db90f.slice/crio-bab78184d79d0542a2eb53e75a3414cfe91b1b09a59b5b3356260be6b00bd2c8 WatchSource:0}: Error finding container bab78184d79d0542a2eb53e75a3414cfe91b1b09a59b5b3356260be6b00bd2c8: Status 404 returned error can't find the container with id bab78184d79d0542a2eb53e75a3414cfe91b1b09a59b5b3356260be6b00bd2c8 Jan 30 18:13:18 crc kubenswrapper[4712]: I0130 18:13:18.419721 4712 generic.go:334] "Generic (PLEG): container finished" podID="21ad8671-fb57-43c4-bde6-68d8e38db90f" containerID="08bc75f20e650d0b1397b83c5041ae3e9a6fdec99e05ad1591b35bec53843c92" exitCode=0 Jan 30 18:13:18 crc kubenswrapper[4712]: I0130 18:13:18.419840 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7sbdz" event={"ID":"21ad8671-fb57-43c4-bde6-68d8e38db90f","Type":"ContainerDied","Data":"08bc75f20e650d0b1397b83c5041ae3e9a6fdec99e05ad1591b35bec53843c92"} Jan 30 18:13:18 crc kubenswrapper[4712]: I0130 18:13:18.420048 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7sbdz" event={"ID":"21ad8671-fb57-43c4-bde6-68d8e38db90f","Type":"ContainerStarted","Data":"bab78184d79d0542a2eb53e75a3414cfe91b1b09a59b5b3356260be6b00bd2c8"} Jan 30 18:13:24 crc kubenswrapper[4712]: I0130 18:13:24.486411 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7sbdz" event={"ID":"21ad8671-fb57-43c4-bde6-68d8e38db90f","Type":"ContainerStarted","Data":"809966743fa2fec8ba1344738ec48d586c42d853f9dbfe21b2e3b21bf3504a6f"} Jan 30 18:13:31 crc kubenswrapper[4712]: I0130 18:13:31.553313 4712 generic.go:334] "Generic (PLEG): container finished" podID="21ad8671-fb57-43c4-bde6-68d8e38db90f" 
containerID="809966743fa2fec8ba1344738ec48d586c42d853f9dbfe21b2e3b21bf3504a6f" exitCode=0 Jan 30 18:13:31 crc kubenswrapper[4712]: I0130 18:13:31.553492 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7sbdz" event={"ID":"21ad8671-fb57-43c4-bde6-68d8e38db90f","Type":"ContainerDied","Data":"809966743fa2fec8ba1344738ec48d586c42d853f9dbfe21b2e3b21bf3504a6f"} Jan 30 18:13:32 crc kubenswrapper[4712]: I0130 18:13:32.567623 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7sbdz" event={"ID":"21ad8671-fb57-43c4-bde6-68d8e38db90f","Type":"ContainerStarted","Data":"afc95f867556c5d73d2aa49ab5753ad23fabf97355fcb4880775291ec9848983"} Jan 30 18:13:32 crc kubenswrapper[4712]: I0130 18:13:32.597067 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-7sbdz" podStartSLOduration=2.92001407 podStartE2EDuration="16.597050079s" podCreationTimestamp="2026-01-30 18:13:16 +0000 UTC" firstStartedPulling="2026-01-30 18:13:18.421066132 +0000 UTC m=+4735.328075601" lastFinishedPulling="2026-01-30 18:13:32.098102101 +0000 UTC m=+4749.005111610" observedRunningTime="2026-01-30 18:13:32.591712049 +0000 UTC m=+4749.498721518" watchObservedRunningTime="2026-01-30 18:13:32.597050079 +0000 UTC m=+4749.504059548" Jan 30 18:13:36 crc kubenswrapper[4712]: I0130 18:13:36.771490 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-7sbdz" Jan 30 18:13:36 crc kubenswrapper[4712]: I0130 18:13:36.774146 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-7sbdz" Jan 30 18:13:37 crc kubenswrapper[4712]: I0130 18:13:37.845280 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7sbdz" podUID="21ad8671-fb57-43c4-bde6-68d8e38db90f" containerName="registry-server" probeResult="failure" output=< Jan 30 18:13:37 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 18:13:37 crc kubenswrapper[4712]: > Jan 30 18:13:47 crc kubenswrapper[4712]: I0130 18:13:47.819656 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7sbdz" podUID="21ad8671-fb57-43c4-bde6-68d8e38db90f" containerName="registry-server" probeResult="failure" output=< Jan 30 18:13:47 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 18:13:47 crc kubenswrapper[4712]: > Jan 30 18:13:48 crc kubenswrapper[4712]: I0130 18:13:48.153877 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="a12f0a95-1db0-4dd9-993c-1413c0fa10b0" containerName="galera" probeResult="failure" output="command timed out" Jan 30 18:13:48 crc kubenswrapper[4712]: I0130 18:13:48.160461 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="a12f0a95-1db0-4dd9-993c-1413c0fa10b0" containerName="galera" probeResult="failure" output="command timed out" Jan 30 18:13:50 crc kubenswrapper[4712]: I0130 18:13:50.153374 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="e0e4667e-8702-43ae-b7b7-1aa930f9a3c3" containerName="galera" probeResult="failure" output="command timed out" Jan 30 18:13:50 crc kubenswrapper[4712]: I0130 18:13:50.153916 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" 
podUID="e0e4667e-8702-43ae-b7b7-1aa930f9a3c3" containerName="galera" probeResult="failure" output="command timed out" Jan 30 18:14:01 crc kubenswrapper[4712]: I0130 18:14:01.254951 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="d28763e8-26ec-4ba2-b944-1c84c2b81bf0" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 30 18:14:01 crc kubenswrapper[4712]: I0130 18:14:01.311417 4712 trace.go:236] Trace[1714234166]: "Calculate volume metrics of v4-0-config-system-service-ca for pod openshift-authentication/oauth-openshift-544b887855-ts8md" (30-Jan-2026 18:13:47.738) (total time: 13523ms): Jan 30 18:14:01 crc kubenswrapper[4712]: Trace[1714234166]: [13.523153325s] [13.523153325s] END Jan 30 18:14:01 crc kubenswrapper[4712]: I0130 18:14:01.377172 4712 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-r5sv7 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 30 18:14:01 crc kubenswrapper[4712]: I0130 18:14:01.395945 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-r5sv7" podUID="28bc8c3c-aa7e-4430-acf7-30ddf2ed9e24" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 30 18:14:01 crc kubenswrapper[4712]: I0130 18:14:01.560363 4712 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 12.544752955s: [/var/lib/containers/storage/overlay/704fda5991cf90f059ed3a32ddeecf476c1a971b7397bf755d642fa88e7cdb32/diff /var/log/pods/openstack_neutron-7f6ddf59f7-2n5p6_1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e/neutron-httpd/0.log]; will not log again for this container unless duration exceeds 2s Jan 30 18:14:02 crc kubenswrapper[4712]: I0130 18:14:02.899974 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7sbdz" podUID="21ad8671-fb57-43c4-bde6-68d8e38db90f" containerName="registry-server" probeResult="failure" output=< Jan 30 18:14:02 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 18:14:02 crc kubenswrapper[4712]: > Jan 30 18:14:07 crc kubenswrapper[4712]: I0130 18:14:07.827018 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7sbdz" podUID="21ad8671-fb57-43c4-bde6-68d8e38db90f" containerName="registry-server" probeResult="failure" output=< Jan 30 18:14:07 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 18:14:07 crc kubenswrapper[4712]: > Jan 30 18:14:17 crc kubenswrapper[4712]: I0130 18:14:17.839289 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7sbdz" podUID="21ad8671-fb57-43c4-bde6-68d8e38db90f" containerName="registry-server" probeResult="failure" output=< Jan 30 18:14:17 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 18:14:17 crc kubenswrapper[4712]: > Jan 30 18:14:27 crc kubenswrapper[4712]: I0130 18:14:27.829856 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-7sbdz" podUID="21ad8671-fb57-43c4-bde6-68d8e38db90f" containerName="registry-server" probeResult="failure" output=< Jan 30 18:14:27 crc kubenswrapper[4712]: 
timeout: failed to connect service ":50051" within 1s Jan 30 18:14:27 crc kubenswrapper[4712]: > Jan 30 18:14:36 crc kubenswrapper[4712]: I0130 18:14:36.861121 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-7sbdz" Jan 30 18:14:36 crc kubenswrapper[4712]: I0130 18:14:36.927324 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-7sbdz" Jan 30 18:14:37 crc kubenswrapper[4712]: I0130 18:14:37.116835 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7sbdz"] Jan 30 18:14:38 crc kubenswrapper[4712]: I0130 18:14:38.120449 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-7sbdz" podUID="21ad8671-fb57-43c4-bde6-68d8e38db90f" containerName="registry-server" containerID="cri-o://afc95f867556c5d73d2aa49ab5753ad23fabf97355fcb4880775291ec9848983" gracePeriod=2 Jan 30 18:14:39 crc kubenswrapper[4712]: I0130 18:14:39.125207 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-7sbdz" Jan 30 18:14:39 crc kubenswrapper[4712]: I0130 18:14:39.133031 4712 generic.go:334] "Generic (PLEG): container finished" podID="21ad8671-fb57-43c4-bde6-68d8e38db90f" containerID="afc95f867556c5d73d2aa49ab5753ad23fabf97355fcb4880775291ec9848983" exitCode=0 Jan 30 18:14:39 crc kubenswrapper[4712]: I0130 18:14:39.133088 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7sbdz" event={"ID":"21ad8671-fb57-43c4-bde6-68d8e38db90f","Type":"ContainerDied","Data":"afc95f867556c5d73d2aa49ab5753ad23fabf97355fcb4880775291ec9848983"} Jan 30 18:14:39 crc kubenswrapper[4712]: I0130 18:14:39.133119 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-7sbdz" Jan 30 18:14:39 crc kubenswrapper[4712]: I0130 18:14:39.133260 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-7sbdz" event={"ID":"21ad8671-fb57-43c4-bde6-68d8e38db90f","Type":"ContainerDied","Data":"bab78184d79d0542a2eb53e75a3414cfe91b1b09a59b5b3356260be6b00bd2c8"} Jan 30 18:14:39 crc kubenswrapper[4712]: I0130 18:14:39.133930 4712 scope.go:117] "RemoveContainer" containerID="afc95f867556c5d73d2aa49ab5753ad23fabf97355fcb4880775291ec9848983" Jan 30 18:14:39 crc kubenswrapper[4712]: I0130 18:14:39.172522 4712 scope.go:117] "RemoveContainer" containerID="809966743fa2fec8ba1344738ec48d586c42d853f9dbfe21b2e3b21bf3504a6f" Jan 30 18:14:39 crc kubenswrapper[4712]: I0130 18:14:39.200108 4712 scope.go:117] "RemoveContainer" containerID="08bc75f20e650d0b1397b83c5041ae3e9a6fdec99e05ad1591b35bec53843c92" Jan 30 18:14:39 crc kubenswrapper[4712]: I0130 18:14:39.208181 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wknmn\" (UniqueName: \"kubernetes.io/projected/21ad8671-fb57-43c4-bde6-68d8e38db90f-kube-api-access-wknmn\") pod \"21ad8671-fb57-43c4-bde6-68d8e38db90f\" (UID: \"21ad8671-fb57-43c4-bde6-68d8e38db90f\") " Jan 30 18:14:39 crc kubenswrapper[4712]: I0130 18:14:39.208239 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21ad8671-fb57-43c4-bde6-68d8e38db90f-catalog-content\") pod \"21ad8671-fb57-43c4-bde6-68d8e38db90f\" (UID: \"21ad8671-fb57-43c4-bde6-68d8e38db90f\") " Jan 30 18:14:39 crc kubenswrapper[4712]: I0130 18:14:39.208494 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21ad8671-fb57-43c4-bde6-68d8e38db90f-utilities\") pod \"21ad8671-fb57-43c4-bde6-68d8e38db90f\" (UID: \"21ad8671-fb57-43c4-bde6-68d8e38db90f\") " Jan 30 18:14:39 crc kubenswrapper[4712]: I0130 18:14:39.209288 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21ad8671-fb57-43c4-bde6-68d8e38db90f-utilities" (OuterVolumeSpecName: "utilities") pod "21ad8671-fb57-43c4-bde6-68d8e38db90f" (UID: "21ad8671-fb57-43c4-bde6-68d8e38db90f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:14:39 crc kubenswrapper[4712]: I0130 18:14:39.209490 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/21ad8671-fb57-43c4-bde6-68d8e38db90f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 18:14:39 crc kubenswrapper[4712]: I0130 18:14:39.219543 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21ad8671-fb57-43c4-bde6-68d8e38db90f-kube-api-access-wknmn" (OuterVolumeSpecName: "kube-api-access-wknmn") pod "21ad8671-fb57-43c4-bde6-68d8e38db90f" (UID: "21ad8671-fb57-43c4-bde6-68d8e38db90f"). InnerVolumeSpecName "kube-api-access-wknmn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:14:39 crc kubenswrapper[4712]: I0130 18:14:39.287242 4712 scope.go:117] "RemoveContainer" containerID="afc95f867556c5d73d2aa49ab5753ad23fabf97355fcb4880775291ec9848983" Jan 30 18:14:39 crc kubenswrapper[4712]: E0130 18:14:39.291731 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afc95f867556c5d73d2aa49ab5753ad23fabf97355fcb4880775291ec9848983\": container with ID starting with afc95f867556c5d73d2aa49ab5753ad23fabf97355fcb4880775291ec9848983 not found: ID does not exist" containerID="afc95f867556c5d73d2aa49ab5753ad23fabf97355fcb4880775291ec9848983" Jan 30 18:14:39 crc kubenswrapper[4712]: I0130 18:14:39.292711 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afc95f867556c5d73d2aa49ab5753ad23fabf97355fcb4880775291ec9848983"} err="failed to get container status \"afc95f867556c5d73d2aa49ab5753ad23fabf97355fcb4880775291ec9848983\": rpc error: code = NotFound desc = could not find container \"afc95f867556c5d73d2aa49ab5753ad23fabf97355fcb4880775291ec9848983\": container with ID starting with afc95f867556c5d73d2aa49ab5753ad23fabf97355fcb4880775291ec9848983 not found: ID does not exist" Jan 30 18:14:39 crc kubenswrapper[4712]: I0130 18:14:39.292756 4712 scope.go:117] "RemoveContainer" containerID="809966743fa2fec8ba1344738ec48d586c42d853f9dbfe21b2e3b21bf3504a6f" Jan 30 18:14:39 crc kubenswrapper[4712]: E0130 18:14:39.293273 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"809966743fa2fec8ba1344738ec48d586c42d853f9dbfe21b2e3b21bf3504a6f\": container with ID starting with 809966743fa2fec8ba1344738ec48d586c42d853f9dbfe21b2e3b21bf3504a6f not found: ID does not exist" containerID="809966743fa2fec8ba1344738ec48d586c42d853f9dbfe21b2e3b21bf3504a6f" Jan 30 18:14:39 crc kubenswrapper[4712]: I0130 18:14:39.293313 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"809966743fa2fec8ba1344738ec48d586c42d853f9dbfe21b2e3b21bf3504a6f"} err="failed to get container status \"809966743fa2fec8ba1344738ec48d586c42d853f9dbfe21b2e3b21bf3504a6f\": rpc error: code = NotFound desc = could not find container \"809966743fa2fec8ba1344738ec48d586c42d853f9dbfe21b2e3b21bf3504a6f\": container with ID starting with 809966743fa2fec8ba1344738ec48d586c42d853f9dbfe21b2e3b21bf3504a6f not found: ID does not exist" Jan 30 18:14:39 crc kubenswrapper[4712]: I0130 18:14:39.293339 4712 scope.go:117] "RemoveContainer" containerID="08bc75f20e650d0b1397b83c5041ae3e9a6fdec99e05ad1591b35bec53843c92" Jan 30 18:14:39 crc kubenswrapper[4712]: E0130 18:14:39.293878 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08bc75f20e650d0b1397b83c5041ae3e9a6fdec99e05ad1591b35bec53843c92\": container with ID starting with 08bc75f20e650d0b1397b83c5041ae3e9a6fdec99e05ad1591b35bec53843c92 not found: ID does not exist" containerID="08bc75f20e650d0b1397b83c5041ae3e9a6fdec99e05ad1591b35bec53843c92" Jan 30 18:14:39 crc kubenswrapper[4712]: I0130 18:14:39.293906 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08bc75f20e650d0b1397b83c5041ae3e9a6fdec99e05ad1591b35bec53843c92"} err="failed to get container status \"08bc75f20e650d0b1397b83c5041ae3e9a6fdec99e05ad1591b35bec53843c92\": rpc error: code = NotFound desc = could not 
find container \"08bc75f20e650d0b1397b83c5041ae3e9a6fdec99e05ad1591b35bec53843c92\": container with ID starting with 08bc75f20e650d0b1397b83c5041ae3e9a6fdec99e05ad1591b35bec53843c92 not found: ID does not exist" Jan 30 18:14:39 crc kubenswrapper[4712]: I0130 18:14:39.311386 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wknmn\" (UniqueName: \"kubernetes.io/projected/21ad8671-fb57-43c4-bde6-68d8e38db90f-kube-api-access-wknmn\") on node \"crc\" DevicePath \"\"" Jan 30 18:14:39 crc kubenswrapper[4712]: I0130 18:14:39.390725 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21ad8671-fb57-43c4-bde6-68d8e38db90f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "21ad8671-fb57-43c4-bde6-68d8e38db90f" (UID: "21ad8671-fb57-43c4-bde6-68d8e38db90f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:14:39 crc kubenswrapper[4712]: I0130 18:14:39.413561 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/21ad8671-fb57-43c4-bde6-68d8e38db90f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 18:14:39 crc kubenswrapper[4712]: I0130 18:14:39.476186 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-7sbdz"] Jan 30 18:14:39 crc kubenswrapper[4712]: I0130 18:14:39.484913 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-7sbdz"] Jan 30 18:14:39 crc kubenswrapper[4712]: I0130 18:14:39.811503 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21ad8671-fb57-43c4-bde6-68d8e38db90f" path="/var/lib/kubelet/pods/21ad8671-fb57-43c4-bde6-68d8e38db90f/volumes" Jan 30 18:15:00 crc kubenswrapper[4712]: I0130 18:15:00.413171 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496615-94qjb"] Jan 30 18:15:00 crc kubenswrapper[4712]: E0130 18:15:00.414331 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21ad8671-fb57-43c4-bde6-68d8e38db90f" containerName="registry-server" Jan 30 18:15:00 crc kubenswrapper[4712]: I0130 18:15:00.414349 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="21ad8671-fb57-43c4-bde6-68d8e38db90f" containerName="registry-server" Jan 30 18:15:00 crc kubenswrapper[4712]: E0130 18:15:00.414376 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21ad8671-fb57-43c4-bde6-68d8e38db90f" containerName="extract-utilities" Jan 30 18:15:00 crc kubenswrapper[4712]: I0130 18:15:00.414385 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="21ad8671-fb57-43c4-bde6-68d8e38db90f" containerName="extract-utilities" Jan 30 18:15:00 crc kubenswrapper[4712]: E0130 18:15:00.414400 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21ad8671-fb57-43c4-bde6-68d8e38db90f" containerName="extract-content" Jan 30 18:15:00 crc kubenswrapper[4712]: I0130 18:15:00.414407 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="21ad8671-fb57-43c4-bde6-68d8e38db90f" containerName="extract-content" Jan 30 18:15:00 crc kubenswrapper[4712]: I0130 18:15:00.414629 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="21ad8671-fb57-43c4-bde6-68d8e38db90f" containerName="registry-server" Jan 30 18:15:00 crc kubenswrapper[4712]: I0130 18:15:00.415829 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-94qjb" Jan 30 18:15:00 crc kubenswrapper[4712]: I0130 18:15:00.423809 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496615-94qjb"] Jan 30 18:15:00 crc kubenswrapper[4712]: I0130 18:15:00.443875 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bd24b2c-f1ed-45ed-a37f-d1c813be0529-secret-volume\") pod \"collect-profiles-29496615-94qjb\" (UID: \"2bd24b2c-f1ed-45ed-a37f-d1c813be0529\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-94qjb" Jan 30 18:15:00 crc kubenswrapper[4712]: I0130 18:15:00.443977 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kghld\" (UniqueName: \"kubernetes.io/projected/2bd24b2c-f1ed-45ed-a37f-d1c813be0529-kube-api-access-kghld\") pod \"collect-profiles-29496615-94qjb\" (UID: \"2bd24b2c-f1ed-45ed-a37f-d1c813be0529\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-94qjb" Jan 30 18:15:00 crc kubenswrapper[4712]: I0130 18:15:00.444094 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bd24b2c-f1ed-45ed-a37f-d1c813be0529-config-volume\") pod \"collect-profiles-29496615-94qjb\" (UID: \"2bd24b2c-f1ed-45ed-a37f-d1c813be0529\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-94qjb" Jan 30 18:15:00 crc kubenswrapper[4712]: I0130 18:15:00.454822 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 18:15:00 crc kubenswrapper[4712]: I0130 18:15:00.460466 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 18:15:00 crc kubenswrapper[4712]: I0130 18:15:00.546880 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bd24b2c-f1ed-45ed-a37f-d1c813be0529-secret-volume\") pod \"collect-profiles-29496615-94qjb\" (UID: \"2bd24b2c-f1ed-45ed-a37f-d1c813be0529\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-94qjb" Jan 30 18:15:00 crc kubenswrapper[4712]: I0130 18:15:00.547080 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kghld\" (UniqueName: \"kubernetes.io/projected/2bd24b2c-f1ed-45ed-a37f-d1c813be0529-kube-api-access-kghld\") pod \"collect-profiles-29496615-94qjb\" (UID: \"2bd24b2c-f1ed-45ed-a37f-d1c813be0529\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-94qjb" Jan 30 18:15:00 crc kubenswrapper[4712]: I0130 18:15:00.547283 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bd24b2c-f1ed-45ed-a37f-d1c813be0529-config-volume\") pod \"collect-profiles-29496615-94qjb\" (UID: \"2bd24b2c-f1ed-45ed-a37f-d1c813be0529\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-94qjb" Jan 30 18:15:00 crc kubenswrapper[4712]: I0130 18:15:00.548471 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bd24b2c-f1ed-45ed-a37f-d1c813be0529-config-volume\") pod 
\"collect-profiles-29496615-94qjb\" (UID: \"2bd24b2c-f1ed-45ed-a37f-d1c813be0529\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-94qjb" Jan 30 18:15:00 crc kubenswrapper[4712]: I0130 18:15:00.555570 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bd24b2c-f1ed-45ed-a37f-d1c813be0529-secret-volume\") pod \"collect-profiles-29496615-94qjb\" (UID: \"2bd24b2c-f1ed-45ed-a37f-d1c813be0529\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-94qjb" Jan 30 18:15:00 crc kubenswrapper[4712]: I0130 18:15:00.573483 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kghld\" (UniqueName: \"kubernetes.io/projected/2bd24b2c-f1ed-45ed-a37f-d1c813be0529-kube-api-access-kghld\") pod \"collect-profiles-29496615-94qjb\" (UID: \"2bd24b2c-f1ed-45ed-a37f-d1c813be0529\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-94qjb" Jan 30 18:15:00 crc kubenswrapper[4712]: I0130 18:15:00.770692 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-94qjb" Jan 30 18:15:01 crc kubenswrapper[4712]: I0130 18:15:01.569824 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496615-94qjb"] Jan 30 18:15:02 crc kubenswrapper[4712]: I0130 18:15:02.371227 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-94qjb" event={"ID":"2bd24b2c-f1ed-45ed-a37f-d1c813be0529","Type":"ContainerStarted","Data":"3b63c289d37f5c252950c610141e43151a089f80f0567604448a51a360c5523f"} Jan 30 18:15:03 crc kubenswrapper[4712]: I0130 18:15:03.382691 4712 generic.go:334] "Generic (PLEG): container finished" podID="2bd24b2c-f1ed-45ed-a37f-d1c813be0529" containerID="813ba8517838b1246180816cafded484d13b6970804e12f70189d1bc488b7d8a" exitCode=0 Jan 30 18:15:03 crc kubenswrapper[4712]: I0130 18:15:03.382883 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-94qjb" event={"ID":"2bd24b2c-f1ed-45ed-a37f-d1c813be0529","Type":"ContainerDied","Data":"813ba8517838b1246180816cafded484d13b6970804e12f70189d1bc488b7d8a"} Jan 30 18:15:04 crc kubenswrapper[4712]: I0130 18:15:04.716747 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-94qjb" Jan 30 18:15:04 crc kubenswrapper[4712]: I0130 18:15:04.752763 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bd24b2c-f1ed-45ed-a37f-d1c813be0529-secret-volume\") pod \"2bd24b2c-f1ed-45ed-a37f-d1c813be0529\" (UID: \"2bd24b2c-f1ed-45ed-a37f-d1c813be0529\") " Jan 30 18:15:04 crc kubenswrapper[4712]: I0130 18:15:04.752836 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kghld\" (UniqueName: \"kubernetes.io/projected/2bd24b2c-f1ed-45ed-a37f-d1c813be0529-kube-api-access-kghld\") pod \"2bd24b2c-f1ed-45ed-a37f-d1c813be0529\" (UID: \"2bd24b2c-f1ed-45ed-a37f-d1c813be0529\") " Jan 30 18:15:04 crc kubenswrapper[4712]: I0130 18:15:04.753009 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bd24b2c-f1ed-45ed-a37f-d1c813be0529-config-volume\") pod \"2bd24b2c-f1ed-45ed-a37f-d1c813be0529\" (UID: \"2bd24b2c-f1ed-45ed-a37f-d1c813be0529\") " Jan 30 18:15:04 crc kubenswrapper[4712]: I0130 18:15:04.753753 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bd24b2c-f1ed-45ed-a37f-d1c813be0529-config-volume" (OuterVolumeSpecName: "config-volume") pod "2bd24b2c-f1ed-45ed-a37f-d1c813be0529" (UID: "2bd24b2c-f1ed-45ed-a37f-d1c813be0529"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 18:15:04 crc kubenswrapper[4712]: I0130 18:15:04.754698 4712 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bd24b2c-f1ed-45ed-a37f-d1c813be0529-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 18:15:04 crc kubenswrapper[4712]: I0130 18:15:04.759935 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bd24b2c-f1ed-45ed-a37f-d1c813be0529-kube-api-access-kghld" (OuterVolumeSpecName: "kube-api-access-kghld") pod "2bd24b2c-f1ed-45ed-a37f-d1c813be0529" (UID: "2bd24b2c-f1ed-45ed-a37f-d1c813be0529"). InnerVolumeSpecName "kube-api-access-kghld". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:15:04 crc kubenswrapper[4712]: I0130 18:15:04.760008 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bd24b2c-f1ed-45ed-a37f-d1c813be0529-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2bd24b2c-f1ed-45ed-a37f-d1c813be0529" (UID: "2bd24b2c-f1ed-45ed-a37f-d1c813be0529"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:15:04 crc kubenswrapper[4712]: I0130 18:15:04.855585 4712 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bd24b2c-f1ed-45ed-a37f-d1c813be0529-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 18:15:04 crc kubenswrapper[4712]: I0130 18:15:04.855607 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kghld\" (UniqueName: \"kubernetes.io/projected/2bd24b2c-f1ed-45ed-a37f-d1c813be0529-kube-api-access-kghld\") on node \"crc\" DevicePath \"\"" Jan 30 18:15:05 crc kubenswrapper[4712]: I0130 18:15:05.398989 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-94qjb" event={"ID":"2bd24b2c-f1ed-45ed-a37f-d1c813be0529","Type":"ContainerDied","Data":"3b63c289d37f5c252950c610141e43151a089f80f0567604448a51a360c5523f"} Jan 30 18:15:05 crc kubenswrapper[4712]: I0130 18:15:05.399071 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496615-94qjb" Jan 30 18:15:05 crc kubenswrapper[4712]: I0130 18:15:05.399021 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b63c289d37f5c252950c610141e43151a089f80f0567604448a51a360c5523f" Jan 30 18:15:05 crc kubenswrapper[4712]: I0130 18:15:05.819740 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496570-qjd59"] Jan 30 18:15:05 crc kubenswrapper[4712]: I0130 18:15:05.828062 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496570-qjd59"] Jan 30 18:15:06 crc kubenswrapper[4712]: I0130 18:15:06.270637 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 18:15:06 crc kubenswrapper[4712]: I0130 18:15:06.270709 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 18:15:07 crc kubenswrapper[4712]: I0130 18:15:07.820461 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdadb9b1-d191-4f65-980f-c8681e9981d4" path="/var/lib/kubelet/pods/fdadb9b1-d191-4f65-980f-c8681e9981d4/volumes" Jan 30 18:15:36 crc kubenswrapper[4712]: I0130 18:15:36.270904 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 18:15:36 crc kubenswrapper[4712]: I0130 18:15:36.271532 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 18:16:02 crc kubenswrapper[4712]: 
I0130 18:16:02.144917 4712 scope.go:117] "RemoveContainer" containerID="2b1534ef9dbd422de81819a030f5230f577f951672338f34c06d021d9afd453d" Jan 30 18:16:06 crc kubenswrapper[4712]: I0130 18:16:06.272163 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 18:16:06 crc kubenswrapper[4712]: I0130 18:16:06.272772 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 18:16:06 crc kubenswrapper[4712]: I0130 18:16:06.272851 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" Jan 30 18:16:06 crc kubenswrapper[4712]: I0130 18:16:06.273716 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a613ce570dedd7e82d331a9312c3a642ab2bdc059aaa85e6bd55148e1a6cbede"} pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 18:16:06 crc kubenswrapper[4712]: I0130 18:16:06.273826 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" containerID="cri-o://a613ce570dedd7e82d331a9312c3a642ab2bdc059aaa85e6bd55148e1a6cbede" gracePeriod=600 Jan 30 18:16:07 crc kubenswrapper[4712]: I0130 18:16:07.084318 4712 generic.go:334] "Generic (PLEG): container finished" podID="75ff6334-72a0-4748-bba6-0efb493c8033" containerID="a613ce570dedd7e82d331a9312c3a642ab2bdc059aaa85e6bd55148e1a6cbede" exitCode=0 Jan 30 18:16:07 crc kubenswrapper[4712]: I0130 18:16:07.084645 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerDied","Data":"a613ce570dedd7e82d331a9312c3a642ab2bdc059aaa85e6bd55148e1a6cbede"} Jan 30 18:16:07 crc kubenswrapper[4712]: I0130 18:16:07.085369 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerStarted","Data":"76577cffe485f3a449b32da5d60588a8b4d1ef9c0eb69faaa1476746a138bdd4"} Jan 30 18:16:07 crc kubenswrapper[4712]: I0130 18:16:07.085467 4712 scope.go:117] "RemoveContainer" containerID="36685c63c851c1a3a115e3df54e26c25f9f2f445ab9f6ac1396828806b9f609c" Jan 30 18:18:06 crc kubenswrapper[4712]: I0130 18:18:06.270575 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 18:18:06 crc kubenswrapper[4712]: I0130 18:18:06.272130 4712 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 18:18:36 crc kubenswrapper[4712]: I0130 18:18:36.271504 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 18:18:36 crc kubenswrapper[4712]: I0130 18:18:36.272134 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 18:19:06 crc kubenswrapper[4712]: I0130 18:19:06.271041 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 18:19:06 crc kubenswrapper[4712]: I0130 18:19:06.271852 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 18:19:06 crc kubenswrapper[4712]: I0130 18:19:06.271930 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" Jan 30 18:19:06 crc kubenswrapper[4712]: I0130 18:19:06.273128 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"76577cffe485f3a449b32da5d60588a8b4d1ef9c0eb69faaa1476746a138bdd4"} pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 18:19:06 crc kubenswrapper[4712]: I0130 18:19:06.273226 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" containerID="cri-o://76577cffe485f3a449b32da5d60588a8b4d1ef9c0eb69faaa1476746a138bdd4" gracePeriod=600 Jan 30 18:19:06 crc kubenswrapper[4712]: E0130 18:19:06.419903 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:19:07 crc kubenswrapper[4712]: I0130 18:19:07.073065 4712 generic.go:334] "Generic (PLEG): container finished" podID="75ff6334-72a0-4748-bba6-0efb493c8033" containerID="76577cffe485f3a449b32da5d60588a8b4d1ef9c0eb69faaa1476746a138bdd4" exitCode=0 Jan 
30 18:19:07 crc kubenswrapper[4712]: I0130 18:19:07.073133 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerDied","Data":"76577cffe485f3a449b32da5d60588a8b4d1ef9c0eb69faaa1476746a138bdd4"} Jan 30 18:19:07 crc kubenswrapper[4712]: I0130 18:19:07.073381 4712 scope.go:117] "RemoveContainer" containerID="a613ce570dedd7e82d331a9312c3a642ab2bdc059aaa85e6bd55148e1a6cbede" Jan 30 18:19:07 crc kubenswrapper[4712]: I0130 18:19:07.074041 4712 scope.go:117] "RemoveContainer" containerID="76577cffe485f3a449b32da5d60588a8b4d1ef9c0eb69faaa1476746a138bdd4" Jan 30 18:19:07 crc kubenswrapper[4712]: E0130 18:19:07.074660 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:19:19 crc kubenswrapper[4712]: I0130 18:19:19.799386 4712 scope.go:117] "RemoveContainer" containerID="76577cffe485f3a449b32da5d60588a8b4d1ef9c0eb69faaa1476746a138bdd4" Jan 30 18:19:19 crc kubenswrapper[4712]: E0130 18:19:19.800176 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:19:30 crc kubenswrapper[4712]: I0130 18:19:30.799950 4712 scope.go:117] "RemoveContainer" containerID="76577cffe485f3a449b32da5d60588a8b4d1ef9c0eb69faaa1476746a138bdd4" Jan 30 18:19:30 crc kubenswrapper[4712]: E0130 18:19:30.801214 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:19:41 crc kubenswrapper[4712]: I0130 18:19:41.799392 4712 scope.go:117] "RemoveContainer" containerID="76577cffe485f3a449b32da5d60588a8b4d1ef9c0eb69faaa1476746a138bdd4" Jan 30 18:19:41 crc kubenswrapper[4712]: E0130 18:19:41.800324 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:19:55 crc kubenswrapper[4712]: I0130 18:19:55.799521 4712 scope.go:117] "RemoveContainer" containerID="76577cffe485f3a449b32da5d60588a8b4d1ef9c0eb69faaa1476746a138bdd4" Jan 30 18:19:55 crc kubenswrapper[4712]: E0130 18:19:55.801829 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:20:08 crc kubenswrapper[4712]: I0130 18:20:08.799250 4712 scope.go:117] "RemoveContainer" containerID="76577cffe485f3a449b32da5d60588a8b4d1ef9c0eb69faaa1476746a138bdd4" Jan 30 18:20:08 crc kubenswrapper[4712]: E0130 18:20:08.800195 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:20:23 crc kubenswrapper[4712]: I0130 18:20:23.800057 4712 scope.go:117] "RemoveContainer" containerID="76577cffe485f3a449b32da5d60588a8b4d1ef9c0eb69faaa1476746a138bdd4" Jan 30 18:20:23 crc kubenswrapper[4712]: E0130 18:20:23.800767 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:20:37 crc kubenswrapper[4712]: I0130 18:20:37.806155 4712 scope.go:117] "RemoveContainer" containerID="76577cffe485f3a449b32da5d60588a8b4d1ef9c0eb69faaa1476746a138bdd4" Jan 30 18:20:37 crc kubenswrapper[4712]: E0130 18:20:37.806988 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:20:52 crc kubenswrapper[4712]: I0130 18:20:52.799573 4712 scope.go:117] "RemoveContainer" containerID="76577cffe485f3a449b32da5d60588a8b4d1ef9c0eb69faaa1476746a138bdd4" Jan 30 18:20:52 crc kubenswrapper[4712]: E0130 18:20:52.800467 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:21:06 crc kubenswrapper[4712]: I0130 18:21:06.799931 4712 scope.go:117] "RemoveContainer" containerID="76577cffe485f3a449b32da5d60588a8b4d1ef9c0eb69faaa1476746a138bdd4" Jan 30 18:21:06 crc kubenswrapper[4712]: E0130 18:21:06.800926 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:21:17 crc kubenswrapper[4712]: I0130 18:21:17.800175 4712 scope.go:117] "RemoveContainer" containerID="76577cffe485f3a449b32da5d60588a8b4d1ef9c0eb69faaa1476746a138bdd4" Jan 30 18:21:17 crc kubenswrapper[4712]: E0130 18:21:17.801556 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:21:30 crc kubenswrapper[4712]: I0130 18:21:30.800082 4712 scope.go:117] "RemoveContainer" containerID="76577cffe485f3a449b32da5d60588a8b4d1ef9c0eb69faaa1476746a138bdd4" Jan 30 18:21:30 crc kubenswrapper[4712]: E0130 18:21:30.802264 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:21:43 crc kubenswrapper[4712]: I0130 18:21:43.816320 4712 scope.go:117] "RemoveContainer" containerID="76577cffe485f3a449b32da5d60588a8b4d1ef9c0eb69faaa1476746a138bdd4" Jan 30 18:21:43 crc kubenswrapper[4712]: E0130 18:21:43.817351 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:21:45 crc kubenswrapper[4712]: I0130 18:21:45.326473 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2dg4h"] Jan 30 18:21:45 crc kubenswrapper[4712]: E0130 18:21:45.327383 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bd24b2c-f1ed-45ed-a37f-d1c813be0529" containerName="collect-profiles" Jan 30 18:21:45 crc kubenswrapper[4712]: I0130 18:21:45.327404 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bd24b2c-f1ed-45ed-a37f-d1c813be0529" containerName="collect-profiles" Jan 30 18:21:45 crc kubenswrapper[4712]: I0130 18:21:45.327692 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bd24b2c-f1ed-45ed-a37f-d1c813be0529" containerName="collect-profiles" Jan 30 18:21:45 crc kubenswrapper[4712]: I0130 18:21:45.365775 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2dg4h"] Jan 30 18:21:45 crc kubenswrapper[4712]: I0130 18:21:45.365927 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2dg4h" Jan 30 18:21:45 crc kubenswrapper[4712]: I0130 18:21:45.476192 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0c9f825-8d6b-4ba0-88bd-725249b771b4-catalog-content\") pod \"redhat-marketplace-2dg4h\" (UID: \"f0c9f825-8d6b-4ba0-88bd-725249b771b4\") " pod="openshift-marketplace/redhat-marketplace-2dg4h" Jan 30 18:21:45 crc kubenswrapper[4712]: I0130 18:21:45.476234 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hplz\" (UniqueName: \"kubernetes.io/projected/f0c9f825-8d6b-4ba0-88bd-725249b771b4-kube-api-access-5hplz\") pod \"redhat-marketplace-2dg4h\" (UID: \"f0c9f825-8d6b-4ba0-88bd-725249b771b4\") " pod="openshift-marketplace/redhat-marketplace-2dg4h" Jan 30 18:21:45 crc kubenswrapper[4712]: I0130 18:21:45.476330 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0c9f825-8d6b-4ba0-88bd-725249b771b4-utilities\") pod \"redhat-marketplace-2dg4h\" (UID: \"f0c9f825-8d6b-4ba0-88bd-725249b771b4\") " pod="openshift-marketplace/redhat-marketplace-2dg4h" Jan 30 18:21:45 crc kubenswrapper[4712]: I0130 18:21:45.578789 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0c9f825-8d6b-4ba0-88bd-725249b771b4-utilities\") pod \"redhat-marketplace-2dg4h\" (UID: \"f0c9f825-8d6b-4ba0-88bd-725249b771b4\") " pod="openshift-marketplace/redhat-marketplace-2dg4h" Jan 30 18:21:45 crc kubenswrapper[4712]: I0130 18:21:45.578995 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0c9f825-8d6b-4ba0-88bd-725249b771b4-catalog-content\") pod \"redhat-marketplace-2dg4h\" (UID: \"f0c9f825-8d6b-4ba0-88bd-725249b771b4\") " pod="openshift-marketplace/redhat-marketplace-2dg4h" Jan 30 18:21:45 crc kubenswrapper[4712]: I0130 18:21:45.579024 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5hplz\" (UniqueName: \"kubernetes.io/projected/f0c9f825-8d6b-4ba0-88bd-725249b771b4-kube-api-access-5hplz\") pod \"redhat-marketplace-2dg4h\" (UID: \"f0c9f825-8d6b-4ba0-88bd-725249b771b4\") " pod="openshift-marketplace/redhat-marketplace-2dg4h" Jan 30 18:21:45 crc kubenswrapper[4712]: I0130 18:21:45.580032 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0c9f825-8d6b-4ba0-88bd-725249b771b4-utilities\") pod \"redhat-marketplace-2dg4h\" (UID: \"f0c9f825-8d6b-4ba0-88bd-725249b771b4\") " pod="openshift-marketplace/redhat-marketplace-2dg4h" Jan 30 18:21:45 crc kubenswrapper[4712]: I0130 18:21:45.580317 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0c9f825-8d6b-4ba0-88bd-725249b771b4-catalog-content\") pod \"redhat-marketplace-2dg4h\" (UID: \"f0c9f825-8d6b-4ba0-88bd-725249b771b4\") " pod="openshift-marketplace/redhat-marketplace-2dg4h" Jan 30 18:21:45 crc kubenswrapper[4712]: I0130 18:21:45.603749 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hplz\" (UniqueName: \"kubernetes.io/projected/f0c9f825-8d6b-4ba0-88bd-725249b771b4-kube-api-access-5hplz\") pod 
\"redhat-marketplace-2dg4h\" (UID: \"f0c9f825-8d6b-4ba0-88bd-725249b771b4\") " pod="openshift-marketplace/redhat-marketplace-2dg4h" Jan 30 18:21:45 crc kubenswrapper[4712]: I0130 18:21:45.689448 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2dg4h" Jan 30 18:21:46 crc kubenswrapper[4712]: I0130 18:21:46.169129 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2dg4h"] Jan 30 18:21:46 crc kubenswrapper[4712]: I0130 18:21:46.694214 4712 generic.go:334] "Generic (PLEG): container finished" podID="f0c9f825-8d6b-4ba0-88bd-725249b771b4" containerID="e685911ce1e3721ff19ac27d1b3fe35741545ec59dfe2dd05bd4ac98d4d8b0fa" exitCode=0 Jan 30 18:21:46 crc kubenswrapper[4712]: I0130 18:21:46.694456 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2dg4h" event={"ID":"f0c9f825-8d6b-4ba0-88bd-725249b771b4","Type":"ContainerDied","Data":"e685911ce1e3721ff19ac27d1b3fe35741545ec59dfe2dd05bd4ac98d4d8b0fa"} Jan 30 18:21:46 crc kubenswrapper[4712]: I0130 18:21:46.697171 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2dg4h" event={"ID":"f0c9f825-8d6b-4ba0-88bd-725249b771b4","Type":"ContainerStarted","Data":"b1978c716c4f12963d869d834ec1c4e8b6f409b67f0a6a6508a7aad15fe818ba"} Jan 30 18:21:46 crc kubenswrapper[4712]: I0130 18:21:46.711252 4712 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 18:21:50 crc kubenswrapper[4712]: I0130 18:21:50.060436 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lvl5k"] Jan 30 18:21:50 crc kubenswrapper[4712]: I0130 18:21:50.063372 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lvl5k" Jan 30 18:21:50 crc kubenswrapper[4712]: I0130 18:21:50.181100 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c963fbfd-7368-4d21-afa7-c97374117e6d-catalog-content\") pod \"community-operators-lvl5k\" (UID: \"c963fbfd-7368-4d21-afa7-c97374117e6d\") " pod="openshift-marketplace/community-operators-lvl5k" Jan 30 18:21:50 crc kubenswrapper[4712]: I0130 18:21:50.181432 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fzxk\" (UniqueName: \"kubernetes.io/projected/c963fbfd-7368-4d21-afa7-c97374117e6d-kube-api-access-2fzxk\") pod \"community-operators-lvl5k\" (UID: \"c963fbfd-7368-4d21-afa7-c97374117e6d\") " pod="openshift-marketplace/community-operators-lvl5k" Jan 30 18:21:50 crc kubenswrapper[4712]: I0130 18:21:50.181512 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c963fbfd-7368-4d21-afa7-c97374117e6d-utilities\") pod \"community-operators-lvl5k\" (UID: \"c963fbfd-7368-4d21-afa7-c97374117e6d\") " pod="openshift-marketplace/community-operators-lvl5k" Jan 30 18:21:50 crc kubenswrapper[4712]: I0130 18:21:50.223847 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lvl5k"] Jan 30 18:21:50 crc kubenswrapper[4712]: I0130 18:21:50.283006 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c963fbfd-7368-4d21-afa7-c97374117e6d-utilities\") pod \"community-operators-lvl5k\" (UID: \"c963fbfd-7368-4d21-afa7-c97374117e6d\") " pod="openshift-marketplace/community-operators-lvl5k" Jan 30 18:21:50 crc kubenswrapper[4712]: I0130 18:21:50.283237 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c963fbfd-7368-4d21-afa7-c97374117e6d-catalog-content\") pod \"community-operators-lvl5k\" (UID: \"c963fbfd-7368-4d21-afa7-c97374117e6d\") " pod="openshift-marketplace/community-operators-lvl5k" Jan 30 18:21:50 crc kubenswrapper[4712]: I0130 18:21:50.283265 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fzxk\" (UniqueName: \"kubernetes.io/projected/c963fbfd-7368-4d21-afa7-c97374117e6d-kube-api-access-2fzxk\") pod \"community-operators-lvl5k\" (UID: \"c963fbfd-7368-4d21-afa7-c97374117e6d\") " pod="openshift-marketplace/community-operators-lvl5k" Jan 30 18:21:50 crc kubenswrapper[4712]: I0130 18:21:50.283991 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c963fbfd-7368-4d21-afa7-c97374117e6d-utilities\") pod \"community-operators-lvl5k\" (UID: \"c963fbfd-7368-4d21-afa7-c97374117e6d\") " pod="openshift-marketplace/community-operators-lvl5k" Jan 30 18:21:50 crc kubenswrapper[4712]: I0130 18:21:50.284205 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c963fbfd-7368-4d21-afa7-c97374117e6d-catalog-content\") pod \"community-operators-lvl5k\" (UID: \"c963fbfd-7368-4d21-afa7-c97374117e6d\") " pod="openshift-marketplace/community-operators-lvl5k" Jan 30 18:21:50 crc kubenswrapper[4712]: I0130 18:21:50.364083 4712 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2fzxk\" (UniqueName: \"kubernetes.io/projected/c963fbfd-7368-4d21-afa7-c97374117e6d-kube-api-access-2fzxk\") pod \"community-operators-lvl5k\" (UID: \"c963fbfd-7368-4d21-afa7-c97374117e6d\") " pod="openshift-marketplace/community-operators-lvl5k" Jan 30 18:21:50 crc kubenswrapper[4712]: I0130 18:21:50.388059 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lvl5k" Jan 30 18:21:50 crc kubenswrapper[4712]: I0130 18:21:50.969379 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lvl5k"] Jan 30 18:21:50 crc kubenswrapper[4712]: W0130 18:21:50.973938 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc963fbfd_7368_4d21_afa7_c97374117e6d.slice/crio-1e108f308cad91722e8d60d39ab5f92627f2591651bc8ceaf5a028353f087af9 WatchSource:0}: Error finding container 1e108f308cad91722e8d60d39ab5f92627f2591651bc8ceaf5a028353f087af9: Status 404 returned error can't find the container with id 1e108f308cad91722e8d60d39ab5f92627f2591651bc8ceaf5a028353f087af9 Jan 30 18:21:51 crc kubenswrapper[4712]: I0130 18:21:51.761129 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2dg4h" event={"ID":"f0c9f825-8d6b-4ba0-88bd-725249b771b4","Type":"ContainerStarted","Data":"c8c13e73cf9b2f660007eb1d106f34efb72f3ce021ae30617d8e5fb19e4c3c6c"} Jan 30 18:21:51 crc kubenswrapper[4712]: I0130 18:21:51.765255 4712 generic.go:334] "Generic (PLEG): container finished" podID="c963fbfd-7368-4d21-afa7-c97374117e6d" containerID="f2a9b4bfb3d045d532edcdaa5b53b5b8ccf36774a73a70e0e8d84f67bb709d79" exitCode=0 Jan 30 18:21:51 crc kubenswrapper[4712]: I0130 18:21:51.765375 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvl5k" event={"ID":"c963fbfd-7368-4d21-afa7-c97374117e6d","Type":"ContainerDied","Data":"f2a9b4bfb3d045d532edcdaa5b53b5b8ccf36774a73a70e0e8d84f67bb709d79"} Jan 30 18:21:51 crc kubenswrapper[4712]: I0130 18:21:51.765508 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvl5k" event={"ID":"c963fbfd-7368-4d21-afa7-c97374117e6d","Type":"ContainerStarted","Data":"1e108f308cad91722e8d60d39ab5f92627f2591651bc8ceaf5a028353f087af9"} Jan 30 18:21:54 crc kubenswrapper[4712]: I0130 18:21:54.801874 4712 scope.go:117] "RemoveContainer" containerID="76577cffe485f3a449b32da5d60588a8b4d1ef9c0eb69faaa1476746a138bdd4" Jan 30 18:21:54 crc kubenswrapper[4712]: E0130 18:21:54.802507 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:21:55 crc kubenswrapper[4712]: I0130 18:21:55.813453 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvl5k" event={"ID":"c963fbfd-7368-4d21-afa7-c97374117e6d","Type":"ContainerStarted","Data":"81ec93add773cdd9df29fdae761632c06294de43267e777a43796b4a6ece3d6a"} Jan 30 18:21:55 crc kubenswrapper[4712]: I0130 18:21:55.815163 4712 generic.go:334] "Generic (PLEG): container finished" 
podID="f0c9f825-8d6b-4ba0-88bd-725249b771b4" containerID="c8c13e73cf9b2f660007eb1d106f34efb72f3ce021ae30617d8e5fb19e4c3c6c" exitCode=0 Jan 30 18:21:55 crc kubenswrapper[4712]: I0130 18:21:55.815192 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2dg4h" event={"ID":"f0c9f825-8d6b-4ba0-88bd-725249b771b4","Type":"ContainerDied","Data":"c8c13e73cf9b2f660007eb1d106f34efb72f3ce021ae30617d8e5fb19e4c3c6c"} Jan 30 18:22:05 crc kubenswrapper[4712]: I0130 18:22:05.800377 4712 scope.go:117] "RemoveContainer" containerID="76577cffe485f3a449b32da5d60588a8b4d1ef9c0eb69faaa1476746a138bdd4" Jan 30 18:22:05 crc kubenswrapper[4712]: E0130 18:22:05.801377 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:22:11 crc kubenswrapper[4712]: I0130 18:22:11.322062 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2dg4h" event={"ID":"f0c9f825-8d6b-4ba0-88bd-725249b771b4","Type":"ContainerStarted","Data":"92b581675061a3ea1ef41cedbed074310cb203f83263207bf1915c4512321dcb"} Jan 30 18:22:12 crc kubenswrapper[4712]: I0130 18:22:12.361423 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2dg4h" podStartSLOduration=4.30911811 podStartE2EDuration="27.361402326s" podCreationTimestamp="2026-01-30 18:21:45 +0000 UTC" firstStartedPulling="2026-01-30 18:21:46.696438725 +0000 UTC m=+5243.603448194" lastFinishedPulling="2026-01-30 18:22:09.748722941 +0000 UTC m=+5266.655732410" observedRunningTime="2026-01-30 18:22:12.352348865 +0000 UTC m=+5269.259358344" watchObservedRunningTime="2026-01-30 18:22:12.361402326 +0000 UTC m=+5269.268411795" Jan 30 18:22:15 crc kubenswrapper[4712]: I0130 18:22:15.690629 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2dg4h" Jan 30 18:22:15 crc kubenswrapper[4712]: I0130 18:22:15.692223 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2dg4h" Jan 30 18:22:16 crc kubenswrapper[4712]: I0130 18:22:16.364258 4712 generic.go:334] "Generic (PLEG): container finished" podID="c963fbfd-7368-4d21-afa7-c97374117e6d" containerID="81ec93add773cdd9df29fdae761632c06294de43267e777a43796b4a6ece3d6a" exitCode=0 Jan 30 18:22:16 crc kubenswrapper[4712]: I0130 18:22:16.364454 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvl5k" event={"ID":"c963fbfd-7368-4d21-afa7-c97374117e6d","Type":"ContainerDied","Data":"81ec93add773cdd9df29fdae761632c06294de43267e777a43796b4a6ece3d6a"} Jan 30 18:22:16 crc kubenswrapper[4712]: I0130 18:22:16.735480 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-2dg4h" podUID="f0c9f825-8d6b-4ba0-88bd-725249b771b4" containerName="registry-server" probeResult="failure" output=< Jan 30 18:22:16 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 18:22:16 crc kubenswrapper[4712]: > Jan 30 18:22:18 crc kubenswrapper[4712]: I0130 18:22:18.490268 4712 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openshift-marketplace/certified-operators-zjql7"] Jan 30 18:22:18 crc kubenswrapper[4712]: I0130 18:22:18.492983 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zjql7" Jan 30 18:22:18 crc kubenswrapper[4712]: I0130 18:22:18.509647 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zjql7"] Jan 30 18:22:18 crc kubenswrapper[4712]: I0130 18:22:18.602931 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc23b185-1914-452b-96da-df52fba4612a-utilities\") pod \"certified-operators-zjql7\" (UID: \"cc23b185-1914-452b-96da-df52fba4612a\") " pod="openshift-marketplace/certified-operators-zjql7" Jan 30 18:22:18 crc kubenswrapper[4712]: I0130 18:22:18.603254 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc23b185-1914-452b-96da-df52fba4612a-catalog-content\") pod \"certified-operators-zjql7\" (UID: \"cc23b185-1914-452b-96da-df52fba4612a\") " pod="openshift-marketplace/certified-operators-zjql7" Jan 30 18:22:18 crc kubenswrapper[4712]: I0130 18:22:18.603490 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8n7r\" (UniqueName: \"kubernetes.io/projected/cc23b185-1914-452b-96da-df52fba4612a-kube-api-access-h8n7r\") pod \"certified-operators-zjql7\" (UID: \"cc23b185-1914-452b-96da-df52fba4612a\") " pod="openshift-marketplace/certified-operators-zjql7" Jan 30 18:22:18 crc kubenswrapper[4712]: I0130 18:22:18.705613 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8n7r\" (UniqueName: \"kubernetes.io/projected/cc23b185-1914-452b-96da-df52fba4612a-kube-api-access-h8n7r\") pod \"certified-operators-zjql7\" (UID: \"cc23b185-1914-452b-96da-df52fba4612a\") " pod="openshift-marketplace/certified-operators-zjql7" Jan 30 18:22:18 crc kubenswrapper[4712]: I0130 18:22:18.705717 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc23b185-1914-452b-96da-df52fba4612a-utilities\") pod \"certified-operators-zjql7\" (UID: \"cc23b185-1914-452b-96da-df52fba4612a\") " pod="openshift-marketplace/certified-operators-zjql7" Jan 30 18:22:18 crc kubenswrapper[4712]: I0130 18:22:18.705752 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc23b185-1914-452b-96da-df52fba4612a-catalog-content\") pod \"certified-operators-zjql7\" (UID: \"cc23b185-1914-452b-96da-df52fba4612a\") " pod="openshift-marketplace/certified-operators-zjql7" Jan 30 18:22:18 crc kubenswrapper[4712]: I0130 18:22:18.738949 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc23b185-1914-452b-96da-df52fba4612a-utilities\") pod \"certified-operators-zjql7\" (UID: \"cc23b185-1914-452b-96da-df52fba4612a\") " pod="openshift-marketplace/certified-operators-zjql7" Jan 30 18:22:18 crc kubenswrapper[4712]: I0130 18:22:18.750204 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8n7r\" (UniqueName: \"kubernetes.io/projected/cc23b185-1914-452b-96da-df52fba4612a-kube-api-access-h8n7r\") pod \"certified-operators-zjql7\" 
(UID: \"cc23b185-1914-452b-96da-df52fba4612a\") " pod="openshift-marketplace/certified-operators-zjql7" Jan 30 18:22:18 crc kubenswrapper[4712]: I0130 18:22:18.799897 4712 scope.go:117] "RemoveContainer" containerID="76577cffe485f3a449b32da5d60588a8b4d1ef9c0eb69faaa1476746a138bdd4" Jan 30 18:22:18 crc kubenswrapper[4712]: E0130 18:22:18.800369 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:22:18 crc kubenswrapper[4712]: I0130 18:22:18.836390 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc23b185-1914-452b-96da-df52fba4612a-catalog-content\") pod \"certified-operators-zjql7\" (UID: \"cc23b185-1914-452b-96da-df52fba4612a\") " pod="openshift-marketplace/certified-operators-zjql7" Jan 30 18:22:19 crc kubenswrapper[4712]: I0130 18:22:19.112723 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zjql7" Jan 30 18:22:24 crc kubenswrapper[4712]: I0130 18:22:24.947173 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvl5k" event={"ID":"c963fbfd-7368-4d21-afa7-c97374117e6d","Type":"ContainerStarted","Data":"bf370a71a6333447274d23a944cae66b8514d339a575cbef052b0b3d0b3a4df5"} Jan 30 18:22:25 crc kubenswrapper[4712]: I0130 18:22:25.593012 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lvl5k" podStartSLOduration=5.770840454 podStartE2EDuration="35.592996829s" podCreationTimestamp="2026-01-30 18:21:50 +0000 UTC" firstStartedPulling="2026-01-30 18:21:51.768203868 +0000 UTC m=+5248.675213337" lastFinishedPulling="2026-01-30 18:22:21.590360203 +0000 UTC m=+5278.497369712" observedRunningTime="2026-01-30 18:22:24.981025463 +0000 UTC m=+5281.888034932" watchObservedRunningTime="2026-01-30 18:22:25.592996829 +0000 UTC m=+5282.500006298" Jan 30 18:22:25 crc kubenswrapper[4712]: I0130 18:22:25.597545 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zjql7"] Jan 30 18:22:25 crc kubenswrapper[4712]: I0130 18:22:25.958781 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zjql7" event={"ID":"cc23b185-1914-452b-96da-df52fba4612a","Type":"ContainerStarted","Data":"de4b3d67334d8a5192c82c4d16426a3a99c4719fa6d8e4799babf849e28f62af"} Jan 30 18:22:25 crc kubenswrapper[4712]: I0130 18:22:25.959144 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zjql7" event={"ID":"cc23b185-1914-452b-96da-df52fba4612a","Type":"ContainerStarted","Data":"2b5219fcda84f2e3f836a5fd084bf0eb39b20b402abadc829c23c339dcdbca26"} Jan 30 18:22:26 crc kubenswrapper[4712]: I0130 18:22:26.754557 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-2dg4h" podUID="f0c9f825-8d6b-4ba0-88bd-725249b771b4" containerName="registry-server" probeResult="failure" output=< Jan 30 18:22:26 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 18:22:26 crc kubenswrapper[4712]: 
> Jan 30 18:22:26 crc kubenswrapper[4712]: I0130 18:22:26.969969 4712 generic.go:334] "Generic (PLEG): container finished" podID="cc23b185-1914-452b-96da-df52fba4612a" containerID="de4b3d67334d8a5192c82c4d16426a3a99c4719fa6d8e4799babf849e28f62af" exitCode=0 Jan 30 18:22:26 crc kubenswrapper[4712]: I0130 18:22:26.970046 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zjql7" event={"ID":"cc23b185-1914-452b-96da-df52fba4612a","Type":"ContainerDied","Data":"de4b3d67334d8a5192c82c4d16426a3a99c4719fa6d8e4799babf849e28f62af"} Jan 30 18:22:30 crc kubenswrapper[4712]: I0130 18:22:30.388856 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lvl5k" Jan 30 18:22:30 crc kubenswrapper[4712]: I0130 18:22:30.389767 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lvl5k" Jan 30 18:22:31 crc kubenswrapper[4712]: I0130 18:22:31.449813 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-lvl5k" podUID="c963fbfd-7368-4d21-afa7-c97374117e6d" containerName="registry-server" probeResult="failure" output=< Jan 30 18:22:31 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 18:22:31 crc kubenswrapper[4712]: > Jan 30 18:22:31 crc kubenswrapper[4712]: I0130 18:22:31.800191 4712 scope.go:117] "RemoveContainer" containerID="76577cffe485f3a449b32da5d60588a8b4d1ef9c0eb69faaa1476746a138bdd4" Jan 30 18:22:31 crc kubenswrapper[4712]: E0130 18:22:31.800532 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:22:32 crc kubenswrapper[4712]: I0130 18:22:32.025733 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zjql7" event={"ID":"cc23b185-1914-452b-96da-df52fba4612a","Type":"ContainerStarted","Data":"6af76a3819f8a488f9cdf2836cd18d4ddf0b6cb411f816a1af3a2d88277be936"} Jan 30 18:22:36 crc kubenswrapper[4712]: I0130 18:22:36.737420 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-2dg4h" podUID="f0c9f825-8d6b-4ba0-88bd-725249b771b4" containerName="registry-server" probeResult="failure" output=< Jan 30 18:22:36 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 18:22:36 crc kubenswrapper[4712]: > Jan 30 18:22:41 crc kubenswrapper[4712]: I0130 18:22:41.153560 4712 generic.go:334] "Generic (PLEG): container finished" podID="cc23b185-1914-452b-96da-df52fba4612a" containerID="6af76a3819f8a488f9cdf2836cd18d4ddf0b6cb411f816a1af3a2d88277be936" exitCode=0 Jan 30 18:22:41 crc kubenswrapper[4712]: I0130 18:22:41.154057 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zjql7" event={"ID":"cc23b185-1914-452b-96da-df52fba4612a","Type":"ContainerDied","Data":"6af76a3819f8a488f9cdf2836cd18d4ddf0b6cb411f816a1af3a2d88277be936"} Jan 30 18:22:41 crc kubenswrapper[4712]: I0130 18:22:41.437349 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-lvl5k" 
podUID="c963fbfd-7368-4d21-afa7-c97374117e6d" containerName="registry-server" probeResult="failure" output=< Jan 30 18:22:41 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 18:22:41 crc kubenswrapper[4712]: > Jan 30 18:22:43 crc kubenswrapper[4712]: I0130 18:22:43.187171 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zjql7" event={"ID":"cc23b185-1914-452b-96da-df52fba4612a","Type":"ContainerStarted","Data":"d821e149abb2b05d0ab2f324e417b3ef4127d9f1aea4ab0858f1c6b3564e6be6"} Jan 30 18:22:43 crc kubenswrapper[4712]: I0130 18:22:43.215666 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zjql7" podStartSLOduration=10.172385343 podStartE2EDuration="25.215649316s" podCreationTimestamp="2026-01-30 18:22:18 +0000 UTC" firstStartedPulling="2026-01-30 18:22:26.972464485 +0000 UTC m=+5283.879473954" lastFinishedPulling="2026-01-30 18:22:42.015728448 +0000 UTC m=+5298.922737927" observedRunningTime="2026-01-30 18:22:43.211007984 +0000 UTC m=+5300.118017453" watchObservedRunningTime="2026-01-30 18:22:43.215649316 +0000 UTC m=+5300.122658775" Jan 30 18:22:44 crc kubenswrapper[4712]: I0130 18:22:44.800412 4712 scope.go:117] "RemoveContainer" containerID="76577cffe485f3a449b32da5d60588a8b4d1ef9c0eb69faaa1476746a138bdd4" Jan 30 18:22:44 crc kubenswrapper[4712]: E0130 18:22:44.800768 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:22:46 crc kubenswrapper[4712]: I0130 18:22:46.743471 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-2dg4h" podUID="f0c9f825-8d6b-4ba0-88bd-725249b771b4" containerName="registry-server" probeResult="failure" output=< Jan 30 18:22:46 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 18:22:46 crc kubenswrapper[4712]: > Jan 30 18:22:49 crc kubenswrapper[4712]: I0130 18:22:49.113971 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zjql7" Jan 30 18:22:49 crc kubenswrapper[4712]: I0130 18:22:49.114350 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zjql7" Jan 30 18:22:50 crc kubenswrapper[4712]: I0130 18:22:50.392020 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-zjql7" podUID="cc23b185-1914-452b-96da-df52fba4612a" containerName="registry-server" probeResult="failure" output=< Jan 30 18:22:50 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 18:22:50 crc kubenswrapper[4712]: > Jan 30 18:22:51 crc kubenswrapper[4712]: I0130 18:22:51.455226 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-lvl5k" podUID="c963fbfd-7368-4d21-afa7-c97374117e6d" containerName="registry-server" probeResult="failure" output=< Jan 30 18:22:51 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 18:22:51 crc kubenswrapper[4712]: > Jan 30 18:22:55 crc 
kubenswrapper[4712]: I0130 18:22:55.740102 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2dg4h" Jan 30 18:22:55 crc kubenswrapper[4712]: I0130 18:22:55.803696 4712 scope.go:117] "RemoveContainer" containerID="76577cffe485f3a449b32da5d60588a8b4d1ef9c0eb69faaa1476746a138bdd4" Jan 30 18:22:55 crc kubenswrapper[4712]: E0130 18:22:55.804256 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:22:55 crc kubenswrapper[4712]: I0130 18:22:55.817440 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2dg4h" Jan 30 18:22:55 crc kubenswrapper[4712]: I0130 18:22:55.993976 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2dg4h"] Jan 30 18:22:57 crc kubenswrapper[4712]: I0130 18:22:57.323905 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2dg4h" podUID="f0c9f825-8d6b-4ba0-88bd-725249b771b4" containerName="registry-server" containerID="cri-o://92b581675061a3ea1ef41cedbed074310cb203f83263207bf1915c4512321dcb" gracePeriod=2 Jan 30 18:22:57 crc kubenswrapper[4712]: I0130 18:22:57.912546 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2dg4h" Jan 30 18:22:58 crc kubenswrapper[4712]: I0130 18:22:58.074998 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0c9f825-8d6b-4ba0-88bd-725249b771b4-catalog-content\") pod \"f0c9f825-8d6b-4ba0-88bd-725249b771b4\" (UID: \"f0c9f825-8d6b-4ba0-88bd-725249b771b4\") " Jan 30 18:22:58 crc kubenswrapper[4712]: I0130 18:22:58.075360 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0c9f825-8d6b-4ba0-88bd-725249b771b4-utilities\") pod \"f0c9f825-8d6b-4ba0-88bd-725249b771b4\" (UID: \"f0c9f825-8d6b-4ba0-88bd-725249b771b4\") " Jan 30 18:22:58 crc kubenswrapper[4712]: I0130 18:22:58.075405 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hplz\" (UniqueName: \"kubernetes.io/projected/f0c9f825-8d6b-4ba0-88bd-725249b771b4-kube-api-access-5hplz\") pod \"f0c9f825-8d6b-4ba0-88bd-725249b771b4\" (UID: \"f0c9f825-8d6b-4ba0-88bd-725249b771b4\") " Jan 30 18:22:58 crc kubenswrapper[4712]: I0130 18:22:58.077281 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0c9f825-8d6b-4ba0-88bd-725249b771b4-utilities" (OuterVolumeSpecName: "utilities") pod "f0c9f825-8d6b-4ba0-88bd-725249b771b4" (UID: "f0c9f825-8d6b-4ba0-88bd-725249b771b4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:22:58 crc kubenswrapper[4712]: I0130 18:22:58.087017 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0c9f825-8d6b-4ba0-88bd-725249b771b4-kube-api-access-5hplz" (OuterVolumeSpecName: "kube-api-access-5hplz") pod "f0c9f825-8d6b-4ba0-88bd-725249b771b4" (UID: "f0c9f825-8d6b-4ba0-88bd-725249b771b4"). InnerVolumeSpecName "kube-api-access-5hplz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:22:58 crc kubenswrapper[4712]: I0130 18:22:58.106062 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0c9f825-8d6b-4ba0-88bd-725249b771b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f0c9f825-8d6b-4ba0-88bd-725249b771b4" (UID: "f0c9f825-8d6b-4ba0-88bd-725249b771b4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:22:58 crc kubenswrapper[4712]: I0130 18:22:58.178499 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0c9f825-8d6b-4ba0-88bd-725249b771b4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 18:22:58 crc kubenswrapper[4712]: I0130 18:22:58.178548 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0c9f825-8d6b-4ba0-88bd-725249b771b4-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 18:22:58 crc kubenswrapper[4712]: I0130 18:22:58.178563 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hplz\" (UniqueName: \"kubernetes.io/projected/f0c9f825-8d6b-4ba0-88bd-725249b771b4-kube-api-access-5hplz\") on node \"crc\" DevicePath \"\"" Jan 30 18:22:58 crc kubenswrapper[4712]: I0130 18:22:58.335173 4712 generic.go:334] "Generic (PLEG): container finished" podID="f0c9f825-8d6b-4ba0-88bd-725249b771b4" containerID="92b581675061a3ea1ef41cedbed074310cb203f83263207bf1915c4512321dcb" exitCode=0 Jan 30 18:22:58 crc kubenswrapper[4712]: I0130 18:22:58.335254 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2dg4h" Jan 30 18:22:58 crc kubenswrapper[4712]: I0130 18:22:58.335239 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2dg4h" event={"ID":"f0c9f825-8d6b-4ba0-88bd-725249b771b4","Type":"ContainerDied","Data":"92b581675061a3ea1ef41cedbed074310cb203f83263207bf1915c4512321dcb"} Jan 30 18:22:58 crc kubenswrapper[4712]: I0130 18:22:58.336337 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2dg4h" event={"ID":"f0c9f825-8d6b-4ba0-88bd-725249b771b4","Type":"ContainerDied","Data":"b1978c716c4f12963d869d834ec1c4e8b6f409b67f0a6a6508a7aad15fe818ba"} Jan 30 18:22:58 crc kubenswrapper[4712]: I0130 18:22:58.336409 4712 scope.go:117] "RemoveContainer" containerID="92b581675061a3ea1ef41cedbed074310cb203f83263207bf1915c4512321dcb" Jan 30 18:22:58 crc kubenswrapper[4712]: I0130 18:22:58.375948 4712 scope.go:117] "RemoveContainer" containerID="c8c13e73cf9b2f660007eb1d106f34efb72f3ce021ae30617d8e5fb19e4c3c6c" Jan 30 18:22:58 crc kubenswrapper[4712]: I0130 18:22:58.379661 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2dg4h"] Jan 30 18:22:58 crc kubenswrapper[4712]: I0130 18:22:58.394713 4712 scope.go:117] "RemoveContainer" containerID="e685911ce1e3721ff19ac27d1b3fe35741545ec59dfe2dd05bd4ac98d4d8b0fa" Jan 30 18:22:58 crc kubenswrapper[4712]: I0130 18:22:58.410309 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2dg4h"] Jan 30 18:22:58 crc kubenswrapper[4712]: I0130 18:22:58.448661 4712 scope.go:117] "RemoveContainer" containerID="92b581675061a3ea1ef41cedbed074310cb203f83263207bf1915c4512321dcb" Jan 30 18:22:58 crc kubenswrapper[4712]: E0130 18:22:58.449197 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92b581675061a3ea1ef41cedbed074310cb203f83263207bf1915c4512321dcb\": container with ID starting with 92b581675061a3ea1ef41cedbed074310cb203f83263207bf1915c4512321dcb not found: ID does not exist" containerID="92b581675061a3ea1ef41cedbed074310cb203f83263207bf1915c4512321dcb" Jan 30 18:22:58 crc kubenswrapper[4712]: I0130 18:22:58.449251 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92b581675061a3ea1ef41cedbed074310cb203f83263207bf1915c4512321dcb"} err="failed to get container status \"92b581675061a3ea1ef41cedbed074310cb203f83263207bf1915c4512321dcb\": rpc error: code = NotFound desc = could not find container \"92b581675061a3ea1ef41cedbed074310cb203f83263207bf1915c4512321dcb\": container with ID starting with 92b581675061a3ea1ef41cedbed074310cb203f83263207bf1915c4512321dcb not found: ID does not exist" Jan 30 18:22:58 crc kubenswrapper[4712]: I0130 18:22:58.449274 4712 scope.go:117] "RemoveContainer" containerID="c8c13e73cf9b2f660007eb1d106f34efb72f3ce021ae30617d8e5fb19e4c3c6c" Jan 30 18:22:58 crc kubenswrapper[4712]: E0130 18:22:58.449595 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8c13e73cf9b2f660007eb1d106f34efb72f3ce021ae30617d8e5fb19e4c3c6c\": container with ID starting with c8c13e73cf9b2f660007eb1d106f34efb72f3ce021ae30617d8e5fb19e4c3c6c not found: ID does not exist" containerID="c8c13e73cf9b2f660007eb1d106f34efb72f3ce021ae30617d8e5fb19e4c3c6c" Jan 30 18:22:58 crc kubenswrapper[4712]: I0130 18:22:58.449635 4712 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8c13e73cf9b2f660007eb1d106f34efb72f3ce021ae30617d8e5fb19e4c3c6c"} err="failed to get container status \"c8c13e73cf9b2f660007eb1d106f34efb72f3ce021ae30617d8e5fb19e4c3c6c\": rpc error: code = NotFound desc = could not find container \"c8c13e73cf9b2f660007eb1d106f34efb72f3ce021ae30617d8e5fb19e4c3c6c\": container with ID starting with c8c13e73cf9b2f660007eb1d106f34efb72f3ce021ae30617d8e5fb19e4c3c6c not found: ID does not exist" Jan 30 18:22:58 crc kubenswrapper[4712]: I0130 18:22:58.449649 4712 scope.go:117] "RemoveContainer" containerID="e685911ce1e3721ff19ac27d1b3fe35741545ec59dfe2dd05bd4ac98d4d8b0fa" Jan 30 18:22:58 crc kubenswrapper[4712]: E0130 18:22:58.450002 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e685911ce1e3721ff19ac27d1b3fe35741545ec59dfe2dd05bd4ac98d4d8b0fa\": container with ID starting with e685911ce1e3721ff19ac27d1b3fe35741545ec59dfe2dd05bd4ac98d4d8b0fa not found: ID does not exist" containerID="e685911ce1e3721ff19ac27d1b3fe35741545ec59dfe2dd05bd4ac98d4d8b0fa" Jan 30 18:22:58 crc kubenswrapper[4712]: I0130 18:22:58.450035 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e685911ce1e3721ff19ac27d1b3fe35741545ec59dfe2dd05bd4ac98d4d8b0fa"} err="failed to get container status \"e685911ce1e3721ff19ac27d1b3fe35741545ec59dfe2dd05bd4ac98d4d8b0fa\": rpc error: code = NotFound desc = could not find container \"e685911ce1e3721ff19ac27d1b3fe35741545ec59dfe2dd05bd4ac98d4d8b0fa\": container with ID starting with e685911ce1e3721ff19ac27d1b3fe35741545ec59dfe2dd05bd4ac98d4d8b0fa not found: ID does not exist" Jan 30 18:22:59 crc kubenswrapper[4712]: I0130 18:22:59.819063 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0c9f825-8d6b-4ba0-88bd-725249b771b4" path="/var/lib/kubelet/pods/f0c9f825-8d6b-4ba0-88bd-725249b771b4/volumes" Jan 30 18:23:00 crc kubenswrapper[4712]: I0130 18:23:00.155978 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-zjql7" podUID="cc23b185-1914-452b-96da-df52fba4612a" containerName="registry-server" probeResult="failure" output=< Jan 30 18:23:00 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 18:23:00 crc kubenswrapper[4712]: > Jan 30 18:23:00 crc kubenswrapper[4712]: I0130 18:23:00.453958 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lvl5k" Jan 30 18:23:00 crc kubenswrapper[4712]: I0130 18:23:00.538163 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lvl5k" Jan 30 18:23:01 crc kubenswrapper[4712]: I0130 18:23:01.388816 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lvl5k"] Jan 30 18:23:02 crc kubenswrapper[4712]: I0130 18:23:02.371873 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lvl5k" podUID="c963fbfd-7368-4d21-afa7-c97374117e6d" containerName="registry-server" containerID="cri-o://bf370a71a6333447274d23a944cae66b8514d339a575cbef052b0b3d0b3a4df5" gracePeriod=2 Jan 30 18:23:03 crc kubenswrapper[4712]: I0130 18:23:03.389682 4712 generic.go:334] "Generic (PLEG): container finished" podID="c963fbfd-7368-4d21-afa7-c97374117e6d" 
containerID="bf370a71a6333447274d23a944cae66b8514d339a575cbef052b0b3d0b3a4df5" exitCode=0 Jan 30 18:23:03 crc kubenswrapper[4712]: I0130 18:23:03.389725 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvl5k" event={"ID":"c963fbfd-7368-4d21-afa7-c97374117e6d","Type":"ContainerDied","Data":"bf370a71a6333447274d23a944cae66b8514d339a575cbef052b0b3d0b3a4df5"} Jan 30 18:23:03 crc kubenswrapper[4712]: I0130 18:23:03.812116 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lvl5k" Jan 30 18:23:03 crc kubenswrapper[4712]: I0130 18:23:03.996060 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c963fbfd-7368-4d21-afa7-c97374117e6d-utilities\") pod \"c963fbfd-7368-4d21-afa7-c97374117e6d\" (UID: \"c963fbfd-7368-4d21-afa7-c97374117e6d\") " Jan 30 18:23:03 crc kubenswrapper[4712]: I0130 18:23:03.996187 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c963fbfd-7368-4d21-afa7-c97374117e6d-catalog-content\") pod \"c963fbfd-7368-4d21-afa7-c97374117e6d\" (UID: \"c963fbfd-7368-4d21-afa7-c97374117e6d\") " Jan 30 18:23:03 crc kubenswrapper[4712]: I0130 18:23:03.996302 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fzxk\" (UniqueName: \"kubernetes.io/projected/c963fbfd-7368-4d21-afa7-c97374117e6d-kube-api-access-2fzxk\") pod \"c963fbfd-7368-4d21-afa7-c97374117e6d\" (UID: \"c963fbfd-7368-4d21-afa7-c97374117e6d\") " Jan 30 18:23:03 crc kubenswrapper[4712]: I0130 18:23:03.997307 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c963fbfd-7368-4d21-afa7-c97374117e6d-utilities" (OuterVolumeSpecName: "utilities") pod "c963fbfd-7368-4d21-afa7-c97374117e6d" (UID: "c963fbfd-7368-4d21-afa7-c97374117e6d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:23:04 crc kubenswrapper[4712]: I0130 18:23:04.005065 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c963fbfd-7368-4d21-afa7-c97374117e6d-kube-api-access-2fzxk" (OuterVolumeSpecName: "kube-api-access-2fzxk") pod "c963fbfd-7368-4d21-afa7-c97374117e6d" (UID: "c963fbfd-7368-4d21-afa7-c97374117e6d"). InnerVolumeSpecName "kube-api-access-2fzxk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:23:04 crc kubenswrapper[4712]: I0130 18:23:04.049191 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c963fbfd-7368-4d21-afa7-c97374117e6d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c963fbfd-7368-4d21-afa7-c97374117e6d" (UID: "c963fbfd-7368-4d21-afa7-c97374117e6d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:23:04 crc kubenswrapper[4712]: I0130 18:23:04.099624 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c963fbfd-7368-4d21-afa7-c97374117e6d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 18:23:04 crc kubenswrapper[4712]: I0130 18:23:04.099666 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fzxk\" (UniqueName: \"kubernetes.io/projected/c963fbfd-7368-4d21-afa7-c97374117e6d-kube-api-access-2fzxk\") on node \"crc\" DevicePath \"\"" Jan 30 18:23:04 crc kubenswrapper[4712]: I0130 18:23:04.099683 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c963fbfd-7368-4d21-afa7-c97374117e6d-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 18:23:04 crc kubenswrapper[4712]: I0130 18:23:04.401272 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lvl5k" event={"ID":"c963fbfd-7368-4d21-afa7-c97374117e6d","Type":"ContainerDied","Data":"1e108f308cad91722e8d60d39ab5f92627f2591651bc8ceaf5a028353f087af9"} Jan 30 18:23:04 crc kubenswrapper[4712]: I0130 18:23:04.402470 4712 scope.go:117] "RemoveContainer" containerID="bf370a71a6333447274d23a944cae66b8514d339a575cbef052b0b3d0b3a4df5" Jan 30 18:23:04 crc kubenswrapper[4712]: I0130 18:23:04.401334 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lvl5k" Jan 30 18:23:04 crc kubenswrapper[4712]: I0130 18:23:04.431464 4712 scope.go:117] "RemoveContainer" containerID="81ec93add773cdd9df29fdae761632c06294de43267e777a43796b4a6ece3d6a" Jan 30 18:23:04 crc kubenswrapper[4712]: I0130 18:23:04.447966 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lvl5k"] Jan 30 18:23:04 crc kubenswrapper[4712]: I0130 18:23:04.460817 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lvl5k"] Jan 30 18:23:04 crc kubenswrapper[4712]: I0130 18:23:04.467141 4712 scope.go:117] "RemoveContainer" containerID="f2a9b4bfb3d045d532edcdaa5b53b5b8ccf36774a73a70e0e8d84f67bb709d79" Jan 30 18:23:05 crc kubenswrapper[4712]: I0130 18:23:05.818182 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c963fbfd-7368-4d21-afa7-c97374117e6d" path="/var/lib/kubelet/pods/c963fbfd-7368-4d21-afa7-c97374117e6d/volumes" Jan 30 18:23:09 crc kubenswrapper[4712]: I0130 18:23:09.159822 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zjql7" Jan 30 18:23:09 crc kubenswrapper[4712]: I0130 18:23:09.212881 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zjql7" Jan 30 18:23:09 crc kubenswrapper[4712]: I0130 18:23:09.402516 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zjql7"] Jan 30 18:23:09 crc kubenswrapper[4712]: I0130 18:23:09.799834 4712 scope.go:117] "RemoveContainer" containerID="76577cffe485f3a449b32da5d60588a8b4d1ef9c0eb69faaa1476746a138bdd4" Jan 30 18:23:09 crc kubenswrapper[4712]: E0130 18:23:09.800090 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:23:10 crc kubenswrapper[4712]: I0130 18:23:10.461313 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zjql7" podUID="cc23b185-1914-452b-96da-df52fba4612a" containerName="registry-server" containerID="cri-o://d821e149abb2b05d0ab2f324e417b3ef4127d9f1aea4ab0858f1c6b3564e6be6" gracePeriod=2 Jan 30 18:23:11 crc kubenswrapper[4712]: I0130 18:23:11.477548 4712 generic.go:334] "Generic (PLEG): container finished" podID="cc23b185-1914-452b-96da-df52fba4612a" containerID="d821e149abb2b05d0ab2f324e417b3ef4127d9f1aea4ab0858f1c6b3564e6be6" exitCode=0 Jan 30 18:23:11 crc kubenswrapper[4712]: I0130 18:23:11.478992 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zjql7" event={"ID":"cc23b185-1914-452b-96da-df52fba4612a","Type":"ContainerDied","Data":"d821e149abb2b05d0ab2f324e417b3ef4127d9f1aea4ab0858f1c6b3564e6be6"} Jan 30 18:23:11 crc kubenswrapper[4712]: I0130 18:23:11.754022 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zjql7" Jan 30 18:23:11 crc kubenswrapper[4712]: I0130 18:23:11.896153 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc23b185-1914-452b-96da-df52fba4612a-catalog-content\") pod \"cc23b185-1914-452b-96da-df52fba4612a\" (UID: \"cc23b185-1914-452b-96da-df52fba4612a\") " Jan 30 18:23:11 crc kubenswrapper[4712]: I0130 18:23:11.896213 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8n7r\" (UniqueName: \"kubernetes.io/projected/cc23b185-1914-452b-96da-df52fba4612a-kube-api-access-h8n7r\") pod \"cc23b185-1914-452b-96da-df52fba4612a\" (UID: \"cc23b185-1914-452b-96da-df52fba4612a\") " Jan 30 18:23:11 crc kubenswrapper[4712]: I0130 18:23:11.896361 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc23b185-1914-452b-96da-df52fba4612a-utilities\") pod \"cc23b185-1914-452b-96da-df52fba4612a\" (UID: \"cc23b185-1914-452b-96da-df52fba4612a\") " Jan 30 18:23:11 crc kubenswrapper[4712]: I0130 18:23:11.897014 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc23b185-1914-452b-96da-df52fba4612a-utilities" (OuterVolumeSpecName: "utilities") pod "cc23b185-1914-452b-96da-df52fba4612a" (UID: "cc23b185-1914-452b-96da-df52fba4612a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:23:11 crc kubenswrapper[4712]: I0130 18:23:11.902969 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc23b185-1914-452b-96da-df52fba4612a-kube-api-access-h8n7r" (OuterVolumeSpecName: "kube-api-access-h8n7r") pod "cc23b185-1914-452b-96da-df52fba4612a" (UID: "cc23b185-1914-452b-96da-df52fba4612a"). InnerVolumeSpecName "kube-api-access-h8n7r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:23:11 crc kubenswrapper[4712]: I0130 18:23:11.948926 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc23b185-1914-452b-96da-df52fba4612a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc23b185-1914-452b-96da-df52fba4612a" (UID: "cc23b185-1914-452b-96da-df52fba4612a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:23:11 crc kubenswrapper[4712]: I0130 18:23:11.998940 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc23b185-1914-452b-96da-df52fba4612a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 18:23:11 crc kubenswrapper[4712]: I0130 18:23:11.998971 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h8n7r\" (UniqueName: \"kubernetes.io/projected/cc23b185-1914-452b-96da-df52fba4612a-kube-api-access-h8n7r\") on node \"crc\" DevicePath \"\"" Jan 30 18:23:11 crc kubenswrapper[4712]: I0130 18:23:11.998982 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc23b185-1914-452b-96da-df52fba4612a-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 18:23:12 crc kubenswrapper[4712]: I0130 18:23:12.508385 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zjql7" event={"ID":"cc23b185-1914-452b-96da-df52fba4612a","Type":"ContainerDied","Data":"2b5219fcda84f2e3f836a5fd084bf0eb39b20b402abadc829c23c339dcdbca26"} Jan 30 18:23:12 crc kubenswrapper[4712]: I0130 18:23:12.508774 4712 scope.go:117] "RemoveContainer" containerID="d821e149abb2b05d0ab2f324e417b3ef4127d9f1aea4ab0858f1c6b3564e6be6" Jan 30 18:23:12 crc kubenswrapper[4712]: I0130 18:23:12.509895 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zjql7" Jan 30 18:23:12 crc kubenswrapper[4712]: I0130 18:23:12.547236 4712 scope.go:117] "RemoveContainer" containerID="6af76a3819f8a488f9cdf2836cd18d4ddf0b6cb411f816a1af3a2d88277be936" Jan 30 18:23:12 crc kubenswrapper[4712]: I0130 18:23:12.555856 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zjql7"] Jan 30 18:23:12 crc kubenswrapper[4712]: I0130 18:23:12.565782 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zjql7"] Jan 30 18:23:12 crc kubenswrapper[4712]: I0130 18:23:12.667826 4712 scope.go:117] "RemoveContainer" containerID="de4b3d67334d8a5192c82c4d16426a3a99c4719fa6d8e4799babf849e28f62af" Jan 30 18:23:13 crc kubenswrapper[4712]: I0130 18:23:13.812028 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc23b185-1914-452b-96da-df52fba4612a" path="/var/lib/kubelet/pods/cc23b185-1914-452b-96da-df52fba4612a/volumes" Jan 30 18:23:22 crc kubenswrapper[4712]: I0130 18:23:22.800318 4712 scope.go:117] "RemoveContainer" containerID="76577cffe485f3a449b32da5d60588a8b4d1ef9c0eb69faaa1476746a138bdd4" Jan 30 18:23:22 crc kubenswrapper[4712]: E0130 18:23:22.801895 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:23:37 crc kubenswrapper[4712]: I0130 18:23:37.799544 4712 scope.go:117] "RemoveContainer" containerID="76577cffe485f3a449b32da5d60588a8b4d1ef9c0eb69faaa1476746a138bdd4" Jan 30 18:23:37 crc kubenswrapper[4712]: E0130 18:23:37.800217 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:23:49 crc kubenswrapper[4712]: I0130 18:23:49.799773 4712 scope.go:117] "RemoveContainer" containerID="76577cffe485f3a449b32da5d60588a8b4d1ef9c0eb69faaa1476746a138bdd4" Jan 30 18:23:49 crc kubenswrapper[4712]: E0130 18:23:49.801050 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:24:01 crc kubenswrapper[4712]: I0130 18:24:01.800337 4712 scope.go:117] "RemoveContainer" containerID="76577cffe485f3a449b32da5d60588a8b4d1ef9c0eb69faaa1476746a138bdd4" Jan 30 18:24:01 crc kubenswrapper[4712]: E0130 18:24:01.801311 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:24:12 crc kubenswrapper[4712]: I0130 18:24:12.800458 4712 scope.go:117] "RemoveContainer" containerID="76577cffe485f3a449b32da5d60588a8b4d1ef9c0eb69faaa1476746a138bdd4" Jan 30 18:24:14 crc kubenswrapper[4712]: I0130 18:24:14.119838 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerStarted","Data":"19cd334db0c07e9254330b34d193469bef11596397bc0aa84782ec7894bac5b8"} Jan 30 18:25:23 crc kubenswrapper[4712]: I0130 18:25:23.899460 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dwt29"] Jan 30 18:25:23 crc kubenswrapper[4712]: E0130 18:25:23.900337 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0c9f825-8d6b-4ba0-88bd-725249b771b4" containerName="extract-utilities" Jan 30 18:25:23 crc kubenswrapper[4712]: I0130 18:25:23.900355 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0c9f825-8d6b-4ba0-88bd-725249b771b4" containerName="extract-utilities" Jan 30 18:25:23 crc kubenswrapper[4712]: E0130 18:25:23.900377 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc23b185-1914-452b-96da-df52fba4612a" containerName="registry-server" Jan 30 18:25:23 crc kubenswrapper[4712]: I0130 18:25:23.900386 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc23b185-1914-452b-96da-df52fba4612a" containerName="registry-server" Jan 30 18:25:23 crc kubenswrapper[4712]: E0130 18:25:23.900400 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc23b185-1914-452b-96da-df52fba4612a" containerName="extract-utilities" Jan 30 18:25:23 crc kubenswrapper[4712]: I0130 18:25:23.900407 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc23b185-1914-452b-96da-df52fba4612a" containerName="extract-utilities" Jan 30 18:25:23 crc kubenswrapper[4712]: E0130 18:25:23.900431 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0c9f825-8d6b-4ba0-88bd-725249b771b4" containerName="extract-content" Jan 30 18:25:23 crc kubenswrapper[4712]: I0130 18:25:23.900437 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0c9f825-8d6b-4ba0-88bd-725249b771b4" containerName="extract-content" Jan 30 18:25:23 crc kubenswrapper[4712]: E0130 18:25:23.900450 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c963fbfd-7368-4d21-afa7-c97374117e6d" containerName="extract-utilities" Jan 30 18:25:23 crc kubenswrapper[4712]: I0130 18:25:23.900458 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="c963fbfd-7368-4d21-afa7-c97374117e6d" containerName="extract-utilities" Jan 30 18:25:23 crc kubenswrapper[4712]: E0130 18:25:23.900469 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c963fbfd-7368-4d21-afa7-c97374117e6d" containerName="registry-server" Jan 30 18:25:23 crc kubenswrapper[4712]: I0130 18:25:23.900474 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="c963fbfd-7368-4d21-afa7-c97374117e6d" containerName="registry-server" Jan 30 18:25:23 crc kubenswrapper[4712]: E0130 18:25:23.900482 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc23b185-1914-452b-96da-df52fba4612a" containerName="extract-content" Jan 30 18:25:23 crc kubenswrapper[4712]: I0130 
18:25:23.900488 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc23b185-1914-452b-96da-df52fba4612a" containerName="extract-content" Jan 30 18:25:23 crc kubenswrapper[4712]: E0130 18:25:23.900494 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c963fbfd-7368-4d21-afa7-c97374117e6d" containerName="extract-content" Jan 30 18:25:23 crc kubenswrapper[4712]: I0130 18:25:23.900500 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="c963fbfd-7368-4d21-afa7-c97374117e6d" containerName="extract-content" Jan 30 18:25:23 crc kubenswrapper[4712]: E0130 18:25:23.900512 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0c9f825-8d6b-4ba0-88bd-725249b771b4" containerName="registry-server" Jan 30 18:25:23 crc kubenswrapper[4712]: I0130 18:25:23.900517 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0c9f825-8d6b-4ba0-88bd-725249b771b4" containerName="registry-server" Jan 30 18:25:23 crc kubenswrapper[4712]: I0130 18:25:23.900685 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0c9f825-8d6b-4ba0-88bd-725249b771b4" containerName="registry-server" Jan 30 18:25:23 crc kubenswrapper[4712]: I0130 18:25:23.900707 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc23b185-1914-452b-96da-df52fba4612a" containerName="registry-server" Jan 30 18:25:23 crc kubenswrapper[4712]: I0130 18:25:23.900719 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="c963fbfd-7368-4d21-afa7-c97374117e6d" containerName="registry-server" Jan 30 18:25:23 crc kubenswrapper[4712]: I0130 18:25:23.908615 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dwt29" Jan 30 18:25:23 crc kubenswrapper[4712]: I0130 18:25:23.927940 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dwt29"] Jan 30 18:25:24 crc kubenswrapper[4712]: I0130 18:25:24.099930 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtsff\" (UniqueName: \"kubernetes.io/projected/1958e74d-075f-44d3-a2dd-4cce43c764c4-kube-api-access-dtsff\") pod \"redhat-operators-dwt29\" (UID: \"1958e74d-075f-44d3-a2dd-4cce43c764c4\") " pod="openshift-marketplace/redhat-operators-dwt29" Jan 30 18:25:24 crc kubenswrapper[4712]: I0130 18:25:24.099986 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1958e74d-075f-44d3-a2dd-4cce43c764c4-catalog-content\") pod \"redhat-operators-dwt29\" (UID: \"1958e74d-075f-44d3-a2dd-4cce43c764c4\") " pod="openshift-marketplace/redhat-operators-dwt29" Jan 30 18:25:24 crc kubenswrapper[4712]: I0130 18:25:24.100357 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1958e74d-075f-44d3-a2dd-4cce43c764c4-utilities\") pod \"redhat-operators-dwt29\" (UID: \"1958e74d-075f-44d3-a2dd-4cce43c764c4\") " pod="openshift-marketplace/redhat-operators-dwt29" Jan 30 18:25:24 crc kubenswrapper[4712]: I0130 18:25:24.202677 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1958e74d-075f-44d3-a2dd-4cce43c764c4-utilities\") pod \"redhat-operators-dwt29\" (UID: \"1958e74d-075f-44d3-a2dd-4cce43c764c4\") " pod="openshift-marketplace/redhat-operators-dwt29" Jan 30 18:25:24 crc 
kubenswrapper[4712]: I0130 18:25:24.202967 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtsff\" (UniqueName: \"kubernetes.io/projected/1958e74d-075f-44d3-a2dd-4cce43c764c4-kube-api-access-dtsff\") pod \"redhat-operators-dwt29\" (UID: \"1958e74d-075f-44d3-a2dd-4cce43c764c4\") " pod="openshift-marketplace/redhat-operators-dwt29" Jan 30 18:25:24 crc kubenswrapper[4712]: I0130 18:25:24.203007 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1958e74d-075f-44d3-a2dd-4cce43c764c4-catalog-content\") pod \"redhat-operators-dwt29\" (UID: \"1958e74d-075f-44d3-a2dd-4cce43c764c4\") " pod="openshift-marketplace/redhat-operators-dwt29" Jan 30 18:25:24 crc kubenswrapper[4712]: I0130 18:25:24.203257 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1958e74d-075f-44d3-a2dd-4cce43c764c4-utilities\") pod \"redhat-operators-dwt29\" (UID: \"1958e74d-075f-44d3-a2dd-4cce43c764c4\") " pod="openshift-marketplace/redhat-operators-dwt29" Jan 30 18:25:24 crc kubenswrapper[4712]: I0130 18:25:24.203776 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1958e74d-075f-44d3-a2dd-4cce43c764c4-catalog-content\") pod \"redhat-operators-dwt29\" (UID: \"1958e74d-075f-44d3-a2dd-4cce43c764c4\") " pod="openshift-marketplace/redhat-operators-dwt29" Jan 30 18:25:24 crc kubenswrapper[4712]: I0130 18:25:24.231699 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtsff\" (UniqueName: \"kubernetes.io/projected/1958e74d-075f-44d3-a2dd-4cce43c764c4-kube-api-access-dtsff\") pod \"redhat-operators-dwt29\" (UID: \"1958e74d-075f-44d3-a2dd-4cce43c764c4\") " pod="openshift-marketplace/redhat-operators-dwt29" Jan 30 18:25:24 crc kubenswrapper[4712]: I0130 18:25:24.240720 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dwt29" Jan 30 18:25:24 crc kubenswrapper[4712]: I0130 18:25:24.786516 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dwt29"] Jan 30 18:25:25 crc kubenswrapper[4712]: I0130 18:25:25.865222 4712 generic.go:334] "Generic (PLEG): container finished" podID="1958e74d-075f-44d3-a2dd-4cce43c764c4" containerID="caf422974e0f16b19d947fb5a4a0ff09d62b89427a026fd09001fa8c337225c4" exitCode=0 Jan 30 18:25:25 crc kubenswrapper[4712]: I0130 18:25:25.865308 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dwt29" event={"ID":"1958e74d-075f-44d3-a2dd-4cce43c764c4","Type":"ContainerDied","Data":"caf422974e0f16b19d947fb5a4a0ff09d62b89427a026fd09001fa8c337225c4"} Jan 30 18:25:25 crc kubenswrapper[4712]: I0130 18:25:25.865468 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dwt29" event={"ID":"1958e74d-075f-44d3-a2dd-4cce43c764c4","Type":"ContainerStarted","Data":"4a8c1b0de5f62820d04bc5e08a9574b97c87ed8ddd541628a17ea5a66a729302"} Jan 30 18:25:27 crc kubenswrapper[4712]: I0130 18:25:27.887165 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dwt29" event={"ID":"1958e74d-075f-44d3-a2dd-4cce43c764c4","Type":"ContainerStarted","Data":"7ad5be31a253b3773728edb6c6515a14151757d496f776aa9a384a629ff9f9d1"} Jan 30 18:25:40 crc kubenswrapper[4712]: I0130 18:25:40.033869 4712 generic.go:334] "Generic (PLEG): container finished" podID="1958e74d-075f-44d3-a2dd-4cce43c764c4" containerID="7ad5be31a253b3773728edb6c6515a14151757d496f776aa9a384a629ff9f9d1" exitCode=0 Jan 30 18:25:40 crc kubenswrapper[4712]: I0130 18:25:40.034032 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dwt29" event={"ID":"1958e74d-075f-44d3-a2dd-4cce43c764c4","Type":"ContainerDied","Data":"7ad5be31a253b3773728edb6c6515a14151757d496f776aa9a384a629ff9f9d1"} Jan 30 18:25:42 crc kubenswrapper[4712]: I0130 18:25:42.054776 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dwt29" event={"ID":"1958e74d-075f-44d3-a2dd-4cce43c764c4","Type":"ContainerStarted","Data":"26672f014a71093c32ac12fe44dd270fda348732cc0f24860dac4d367440db31"} Jan 30 18:25:42 crc kubenswrapper[4712]: I0130 18:25:42.083025 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dwt29" podStartSLOduration=4.151733703 podStartE2EDuration="19.082981051s" podCreationTimestamp="2026-01-30 18:25:23 +0000 UTC" firstStartedPulling="2026-01-30 18:25:25.86735114 +0000 UTC m=+5462.774360609" lastFinishedPulling="2026-01-30 18:25:40.798598468 +0000 UTC m=+5477.705607957" observedRunningTime="2026-01-30 18:25:42.0747022 +0000 UTC m=+5478.981711679" watchObservedRunningTime="2026-01-30 18:25:42.082981051 +0000 UTC m=+5478.989990550" Jan 30 18:25:44 crc kubenswrapper[4712]: I0130 18:25:44.241790 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dwt29" Jan 30 18:25:44 crc kubenswrapper[4712]: I0130 18:25:44.243378 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dwt29" Jan 30 18:25:45 crc kubenswrapper[4712]: I0130 18:25:45.293925 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dwt29" 
podUID="1958e74d-075f-44d3-a2dd-4cce43c764c4" containerName="registry-server" probeResult="failure" output=< Jan 30 18:25:45 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 18:25:45 crc kubenswrapper[4712]: > Jan 30 18:25:55 crc kubenswrapper[4712]: I0130 18:25:55.292579 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dwt29" podUID="1958e74d-075f-44d3-a2dd-4cce43c764c4" containerName="registry-server" probeResult="failure" output=< Jan 30 18:25:55 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 18:25:55 crc kubenswrapper[4712]: > Jan 30 18:26:05 crc kubenswrapper[4712]: I0130 18:26:05.297445 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dwt29" podUID="1958e74d-075f-44d3-a2dd-4cce43c764c4" containerName="registry-server" probeResult="failure" output=< Jan 30 18:26:05 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 18:26:05 crc kubenswrapper[4712]: > Jan 30 18:26:15 crc kubenswrapper[4712]: I0130 18:26:15.289879 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dwt29" podUID="1958e74d-075f-44d3-a2dd-4cce43c764c4" containerName="registry-server" probeResult="failure" output=< Jan 30 18:26:15 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 18:26:15 crc kubenswrapper[4712]: > Jan 30 18:26:25 crc kubenswrapper[4712]: I0130 18:26:25.310658 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dwt29" podUID="1958e74d-075f-44d3-a2dd-4cce43c764c4" containerName="registry-server" probeResult="failure" output=< Jan 30 18:26:25 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 18:26:25 crc kubenswrapper[4712]: > Jan 30 18:26:35 crc kubenswrapper[4712]: I0130 18:26:35.292113 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dwt29" podUID="1958e74d-075f-44d3-a2dd-4cce43c764c4" containerName="registry-server" probeResult="failure" output=< Jan 30 18:26:35 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 18:26:35 crc kubenswrapper[4712]: > Jan 30 18:26:36 crc kubenswrapper[4712]: I0130 18:26:36.271116 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 18:26:36 crc kubenswrapper[4712]: I0130 18:26:36.272330 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 18:26:44 crc kubenswrapper[4712]: I0130 18:26:44.290648 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dwt29" Jan 30 18:26:44 crc kubenswrapper[4712]: I0130 18:26:44.353641 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dwt29" Jan 30 18:26:44 crc kubenswrapper[4712]: I0130 18:26:44.528955 4712 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/redhat-operators-dwt29"] Jan 30 18:26:45 crc kubenswrapper[4712]: I0130 18:26:45.702951 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dwt29" podUID="1958e74d-075f-44d3-a2dd-4cce43c764c4" containerName="registry-server" containerID="cri-o://26672f014a71093c32ac12fe44dd270fda348732cc0f24860dac4d367440db31" gracePeriod=2 Jan 30 18:26:46 crc kubenswrapper[4712]: I0130 18:26:46.601955 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dwt29" Jan 30 18:26:46 crc kubenswrapper[4712]: I0130 18:26:46.711224 4712 generic.go:334] "Generic (PLEG): container finished" podID="1958e74d-075f-44d3-a2dd-4cce43c764c4" containerID="26672f014a71093c32ac12fe44dd270fda348732cc0f24860dac4d367440db31" exitCode=0 Jan 30 18:26:46 crc kubenswrapper[4712]: I0130 18:26:46.711276 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dwt29" Jan 30 18:26:46 crc kubenswrapper[4712]: I0130 18:26:46.711269 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dwt29" event={"ID":"1958e74d-075f-44d3-a2dd-4cce43c764c4","Type":"ContainerDied","Data":"26672f014a71093c32ac12fe44dd270fda348732cc0f24860dac4d367440db31"} Jan 30 18:26:46 crc kubenswrapper[4712]: I0130 18:26:46.711485 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dwt29" event={"ID":"1958e74d-075f-44d3-a2dd-4cce43c764c4","Type":"ContainerDied","Data":"4a8c1b0de5f62820d04bc5e08a9574b97c87ed8ddd541628a17ea5a66a729302"} Jan 30 18:26:46 crc kubenswrapper[4712]: I0130 18:26:46.711528 4712 scope.go:117] "RemoveContainer" containerID="26672f014a71093c32ac12fe44dd270fda348732cc0f24860dac4d367440db31" Jan 30 18:26:46 crc kubenswrapper[4712]: I0130 18:26:46.729430 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1958e74d-075f-44d3-a2dd-4cce43c764c4-utilities\") pod \"1958e74d-075f-44d3-a2dd-4cce43c764c4\" (UID: \"1958e74d-075f-44d3-a2dd-4cce43c764c4\") " Jan 30 18:26:46 crc kubenswrapper[4712]: I0130 18:26:46.729586 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtsff\" (UniqueName: \"kubernetes.io/projected/1958e74d-075f-44d3-a2dd-4cce43c764c4-kube-api-access-dtsff\") pod \"1958e74d-075f-44d3-a2dd-4cce43c764c4\" (UID: \"1958e74d-075f-44d3-a2dd-4cce43c764c4\") " Jan 30 18:26:46 crc kubenswrapper[4712]: I0130 18:26:46.729712 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1958e74d-075f-44d3-a2dd-4cce43c764c4-catalog-content\") pod \"1958e74d-075f-44d3-a2dd-4cce43c764c4\" (UID: \"1958e74d-075f-44d3-a2dd-4cce43c764c4\") " Jan 30 18:26:46 crc kubenswrapper[4712]: I0130 18:26:46.730601 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1958e74d-075f-44d3-a2dd-4cce43c764c4-utilities" (OuterVolumeSpecName: "utilities") pod "1958e74d-075f-44d3-a2dd-4cce43c764c4" (UID: "1958e74d-075f-44d3-a2dd-4cce43c764c4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:26:46 crc kubenswrapper[4712]: I0130 18:26:46.748526 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1958e74d-075f-44d3-a2dd-4cce43c764c4-kube-api-access-dtsff" (OuterVolumeSpecName: "kube-api-access-dtsff") pod "1958e74d-075f-44d3-a2dd-4cce43c764c4" (UID: "1958e74d-075f-44d3-a2dd-4cce43c764c4"). InnerVolumeSpecName "kube-api-access-dtsff". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:26:46 crc kubenswrapper[4712]: I0130 18:26:46.752548 4712 scope.go:117] "RemoveContainer" containerID="7ad5be31a253b3773728edb6c6515a14151757d496f776aa9a384a629ff9f9d1" Jan 30 18:26:46 crc kubenswrapper[4712]: I0130 18:26:46.836504 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1958e74d-075f-44d3-a2dd-4cce43c764c4-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 18:26:46 crc kubenswrapper[4712]: I0130 18:26:46.836866 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtsff\" (UniqueName: \"kubernetes.io/projected/1958e74d-075f-44d3-a2dd-4cce43c764c4-kube-api-access-dtsff\") on node \"crc\" DevicePath \"\"" Jan 30 18:26:46 crc kubenswrapper[4712]: I0130 18:26:46.837018 4712 scope.go:117] "RemoveContainer" containerID="caf422974e0f16b19d947fb5a4a0ff09d62b89427a026fd09001fa8c337225c4" Jan 30 18:26:46 crc kubenswrapper[4712]: I0130 18:26:46.870483 4712 scope.go:117] "RemoveContainer" containerID="26672f014a71093c32ac12fe44dd270fda348732cc0f24860dac4d367440db31" Jan 30 18:26:46 crc kubenswrapper[4712]: E0130 18:26:46.870882 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26672f014a71093c32ac12fe44dd270fda348732cc0f24860dac4d367440db31\": container with ID starting with 26672f014a71093c32ac12fe44dd270fda348732cc0f24860dac4d367440db31 not found: ID does not exist" containerID="26672f014a71093c32ac12fe44dd270fda348732cc0f24860dac4d367440db31" Jan 30 18:26:46 crc kubenswrapper[4712]: I0130 18:26:46.870923 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26672f014a71093c32ac12fe44dd270fda348732cc0f24860dac4d367440db31"} err="failed to get container status \"26672f014a71093c32ac12fe44dd270fda348732cc0f24860dac4d367440db31\": rpc error: code = NotFound desc = could not find container \"26672f014a71093c32ac12fe44dd270fda348732cc0f24860dac4d367440db31\": container with ID starting with 26672f014a71093c32ac12fe44dd270fda348732cc0f24860dac4d367440db31 not found: ID does not exist" Jan 30 18:26:46 crc kubenswrapper[4712]: I0130 18:26:46.870950 4712 scope.go:117] "RemoveContainer" containerID="7ad5be31a253b3773728edb6c6515a14151757d496f776aa9a384a629ff9f9d1" Jan 30 18:26:46 crc kubenswrapper[4712]: E0130 18:26:46.871359 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ad5be31a253b3773728edb6c6515a14151757d496f776aa9a384a629ff9f9d1\": container with ID starting with 7ad5be31a253b3773728edb6c6515a14151757d496f776aa9a384a629ff9f9d1 not found: ID does not exist" containerID="7ad5be31a253b3773728edb6c6515a14151757d496f776aa9a384a629ff9f9d1" Jan 30 18:26:46 crc kubenswrapper[4712]: I0130 18:26:46.871398 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ad5be31a253b3773728edb6c6515a14151757d496f776aa9a384a629ff9f9d1"} err="failed to get 
container status \"7ad5be31a253b3773728edb6c6515a14151757d496f776aa9a384a629ff9f9d1\": rpc error: code = NotFound desc = could not find container \"7ad5be31a253b3773728edb6c6515a14151757d496f776aa9a384a629ff9f9d1\": container with ID starting with 7ad5be31a253b3773728edb6c6515a14151757d496f776aa9a384a629ff9f9d1 not found: ID does not exist" Jan 30 18:26:46 crc kubenswrapper[4712]: I0130 18:26:46.871418 4712 scope.go:117] "RemoveContainer" containerID="caf422974e0f16b19d947fb5a4a0ff09d62b89427a026fd09001fa8c337225c4" Jan 30 18:26:46 crc kubenswrapper[4712]: E0130 18:26:46.871737 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caf422974e0f16b19d947fb5a4a0ff09d62b89427a026fd09001fa8c337225c4\": container with ID starting with caf422974e0f16b19d947fb5a4a0ff09d62b89427a026fd09001fa8c337225c4 not found: ID does not exist" containerID="caf422974e0f16b19d947fb5a4a0ff09d62b89427a026fd09001fa8c337225c4" Jan 30 18:26:46 crc kubenswrapper[4712]: I0130 18:26:46.871783 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caf422974e0f16b19d947fb5a4a0ff09d62b89427a026fd09001fa8c337225c4"} err="failed to get container status \"caf422974e0f16b19d947fb5a4a0ff09d62b89427a026fd09001fa8c337225c4\": rpc error: code = NotFound desc = could not find container \"caf422974e0f16b19d947fb5a4a0ff09d62b89427a026fd09001fa8c337225c4\": container with ID starting with caf422974e0f16b19d947fb5a4a0ff09d62b89427a026fd09001fa8c337225c4 not found: ID does not exist" Jan 30 18:26:46 crc kubenswrapper[4712]: I0130 18:26:46.913996 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1958e74d-075f-44d3-a2dd-4cce43c764c4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1958e74d-075f-44d3-a2dd-4cce43c764c4" (UID: "1958e74d-075f-44d3-a2dd-4cce43c764c4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:26:46 crc kubenswrapper[4712]: I0130 18:26:46.938494 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1958e74d-075f-44d3-a2dd-4cce43c764c4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 18:26:47 crc kubenswrapper[4712]: I0130 18:26:47.049140 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dwt29"] Jan 30 18:26:47 crc kubenswrapper[4712]: I0130 18:26:47.062143 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dwt29"] Jan 30 18:26:47 crc kubenswrapper[4712]: I0130 18:26:47.814378 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1958e74d-075f-44d3-a2dd-4cce43c764c4" path="/var/lib/kubelet/pods/1958e74d-075f-44d3-a2dd-4cce43c764c4/volumes" Jan 30 18:27:06 crc kubenswrapper[4712]: I0130 18:27:06.271242 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 18:27:06 crc kubenswrapper[4712]: I0130 18:27:06.273715 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 18:27:36 crc kubenswrapper[4712]: I0130 18:27:36.272168 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 18:27:36 crc kubenswrapper[4712]: I0130 18:27:36.277560 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 18:27:36 crc kubenswrapper[4712]: I0130 18:27:36.277921 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" Jan 30 18:27:36 crc kubenswrapper[4712]: I0130 18:27:36.281911 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"19cd334db0c07e9254330b34d193469bef11596397bc0aa84782ec7894bac5b8"} pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 18:27:36 crc kubenswrapper[4712]: I0130 18:27:36.282210 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" containerID="cri-o://19cd334db0c07e9254330b34d193469bef11596397bc0aa84782ec7894bac5b8" gracePeriod=600 Jan 30 18:27:37 crc kubenswrapper[4712]: I0130 18:27:37.370788 4712 generic.go:334] "Generic (PLEG): container 
finished" podID="75ff6334-72a0-4748-bba6-0efb493c8033" containerID="19cd334db0c07e9254330b34d193469bef11596397bc0aa84782ec7894bac5b8" exitCode=0 Jan 30 18:27:37 crc kubenswrapper[4712]: I0130 18:27:37.370876 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerDied","Data":"19cd334db0c07e9254330b34d193469bef11596397bc0aa84782ec7894bac5b8"} Jan 30 18:27:37 crc kubenswrapper[4712]: I0130 18:27:37.371391 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerStarted","Data":"6c298cf76a16e19be8ed7e8074e679a0563a2056c2bd7cd9261263da6571eb23"} Jan 30 18:27:37 crc kubenswrapper[4712]: I0130 18:27:37.371419 4712 scope.go:117] "RemoveContainer" containerID="76577cffe485f3a449b32da5d60588a8b4d1ef9c0eb69faaa1476746a138bdd4" Jan 30 18:29:36 crc kubenswrapper[4712]: I0130 18:29:36.271247 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 18:29:36 crc kubenswrapper[4712]: I0130 18:29:36.271947 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 18:30:00 crc kubenswrapper[4712]: I0130 18:30:00.189196 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496630-2xjc7"] Jan 30 18:30:00 crc kubenswrapper[4712]: E0130 18:30:00.190253 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1958e74d-075f-44d3-a2dd-4cce43c764c4" containerName="registry-server" Jan 30 18:30:00 crc kubenswrapper[4712]: I0130 18:30:00.190272 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="1958e74d-075f-44d3-a2dd-4cce43c764c4" containerName="registry-server" Jan 30 18:30:00 crc kubenswrapper[4712]: E0130 18:30:00.190305 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1958e74d-075f-44d3-a2dd-4cce43c764c4" containerName="extract-utilities" Jan 30 18:30:00 crc kubenswrapper[4712]: I0130 18:30:00.190313 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="1958e74d-075f-44d3-a2dd-4cce43c764c4" containerName="extract-utilities" Jan 30 18:30:00 crc kubenswrapper[4712]: E0130 18:30:00.190340 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1958e74d-075f-44d3-a2dd-4cce43c764c4" containerName="extract-content" Jan 30 18:30:00 crc kubenswrapper[4712]: I0130 18:30:00.190350 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="1958e74d-075f-44d3-a2dd-4cce43c764c4" containerName="extract-content" Jan 30 18:30:00 crc kubenswrapper[4712]: I0130 18:30:00.190562 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="1958e74d-075f-44d3-a2dd-4cce43c764c4" containerName="registry-server" Jan 30 18:30:00 crc kubenswrapper[4712]: I0130 18:30:00.192996 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496630-2xjc7" Jan 30 18:30:00 crc kubenswrapper[4712]: I0130 18:30:00.202518 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 18:30:00 crc kubenswrapper[4712]: I0130 18:30:00.202522 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 18:30:00 crc kubenswrapper[4712]: I0130 18:30:00.205005 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496630-2xjc7"] Jan 30 18:30:00 crc kubenswrapper[4712]: I0130 18:30:00.305851 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7016a028-3d59-4c19-af25-90d601a927fe-config-volume\") pod \"collect-profiles-29496630-2xjc7\" (UID: \"7016a028-3d59-4c19-af25-90d601a927fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496630-2xjc7" Jan 30 18:30:00 crc kubenswrapper[4712]: I0130 18:30:00.305995 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjl8j\" (UniqueName: \"kubernetes.io/projected/7016a028-3d59-4c19-af25-90d601a927fe-kube-api-access-fjl8j\") pod \"collect-profiles-29496630-2xjc7\" (UID: \"7016a028-3d59-4c19-af25-90d601a927fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496630-2xjc7" Jan 30 18:30:00 crc kubenswrapper[4712]: I0130 18:30:00.306031 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7016a028-3d59-4c19-af25-90d601a927fe-secret-volume\") pod \"collect-profiles-29496630-2xjc7\" (UID: \"7016a028-3d59-4c19-af25-90d601a927fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496630-2xjc7" Jan 30 18:30:00 crc kubenswrapper[4712]: I0130 18:30:00.407304 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7016a028-3d59-4c19-af25-90d601a927fe-config-volume\") pod \"collect-profiles-29496630-2xjc7\" (UID: \"7016a028-3d59-4c19-af25-90d601a927fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496630-2xjc7" Jan 30 18:30:00 crc kubenswrapper[4712]: I0130 18:30:00.407466 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fjl8j\" (UniqueName: \"kubernetes.io/projected/7016a028-3d59-4c19-af25-90d601a927fe-kube-api-access-fjl8j\") pod \"collect-profiles-29496630-2xjc7\" (UID: \"7016a028-3d59-4c19-af25-90d601a927fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496630-2xjc7" Jan 30 18:30:00 crc kubenswrapper[4712]: I0130 18:30:00.407507 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7016a028-3d59-4c19-af25-90d601a927fe-secret-volume\") pod \"collect-profiles-29496630-2xjc7\" (UID: \"7016a028-3d59-4c19-af25-90d601a927fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496630-2xjc7" Jan 30 18:30:00 crc kubenswrapper[4712]: I0130 18:30:00.408152 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7016a028-3d59-4c19-af25-90d601a927fe-config-volume\") pod 
\"collect-profiles-29496630-2xjc7\" (UID: \"7016a028-3d59-4c19-af25-90d601a927fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496630-2xjc7" Jan 30 18:30:00 crc kubenswrapper[4712]: I0130 18:30:00.416376 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7016a028-3d59-4c19-af25-90d601a927fe-secret-volume\") pod \"collect-profiles-29496630-2xjc7\" (UID: \"7016a028-3d59-4c19-af25-90d601a927fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496630-2xjc7" Jan 30 18:30:00 crc kubenswrapper[4712]: I0130 18:30:00.422162 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fjl8j\" (UniqueName: \"kubernetes.io/projected/7016a028-3d59-4c19-af25-90d601a927fe-kube-api-access-fjl8j\") pod \"collect-profiles-29496630-2xjc7\" (UID: \"7016a028-3d59-4c19-af25-90d601a927fe\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496630-2xjc7" Jan 30 18:30:00 crc kubenswrapper[4712]: I0130 18:30:00.526346 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496630-2xjc7" Jan 30 18:30:01 crc kubenswrapper[4712]: I0130 18:30:01.062567 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496630-2xjc7"] Jan 30 18:30:02 crc kubenswrapper[4712]: I0130 18:30:02.044307 4712 generic.go:334] "Generic (PLEG): container finished" podID="7016a028-3d59-4c19-af25-90d601a927fe" containerID="3141a92b7f63f2d4fdb2a8084d09bb950a9dc6f02f6dbe982010ac4cc721e7bf" exitCode=0 Jan 30 18:30:02 crc kubenswrapper[4712]: I0130 18:30:02.044409 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496630-2xjc7" event={"ID":"7016a028-3d59-4c19-af25-90d601a927fe","Type":"ContainerDied","Data":"3141a92b7f63f2d4fdb2a8084d09bb950a9dc6f02f6dbe982010ac4cc721e7bf"} Jan 30 18:30:02 crc kubenswrapper[4712]: I0130 18:30:02.044653 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496630-2xjc7" event={"ID":"7016a028-3d59-4c19-af25-90d601a927fe","Type":"ContainerStarted","Data":"87b60cfa60945b3eaa9abb91e97dfad93420191daa77b2acc825175fe76ab014"} Jan 30 18:30:03 crc kubenswrapper[4712]: I0130 18:30:03.399224 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496630-2xjc7" Jan 30 18:30:03 crc kubenswrapper[4712]: I0130 18:30:03.570965 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7016a028-3d59-4c19-af25-90d601a927fe-config-volume\") pod \"7016a028-3d59-4c19-af25-90d601a927fe\" (UID: \"7016a028-3d59-4c19-af25-90d601a927fe\") " Jan 30 18:30:03 crc kubenswrapper[4712]: I0130 18:30:03.571232 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fjl8j\" (UniqueName: \"kubernetes.io/projected/7016a028-3d59-4c19-af25-90d601a927fe-kube-api-access-fjl8j\") pod \"7016a028-3d59-4c19-af25-90d601a927fe\" (UID: \"7016a028-3d59-4c19-af25-90d601a927fe\") " Jan 30 18:30:03 crc kubenswrapper[4712]: I0130 18:30:03.571268 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7016a028-3d59-4c19-af25-90d601a927fe-secret-volume\") pod \"7016a028-3d59-4c19-af25-90d601a927fe\" (UID: \"7016a028-3d59-4c19-af25-90d601a927fe\") " Jan 30 18:30:03 crc kubenswrapper[4712]: I0130 18:30:03.571859 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7016a028-3d59-4c19-af25-90d601a927fe-config-volume" (OuterVolumeSpecName: "config-volume") pod "7016a028-3d59-4c19-af25-90d601a927fe" (UID: "7016a028-3d59-4c19-af25-90d601a927fe"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 18:30:03 crc kubenswrapper[4712]: I0130 18:30:03.578489 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7016a028-3d59-4c19-af25-90d601a927fe-kube-api-access-fjl8j" (OuterVolumeSpecName: "kube-api-access-fjl8j") pod "7016a028-3d59-4c19-af25-90d601a927fe" (UID: "7016a028-3d59-4c19-af25-90d601a927fe"). InnerVolumeSpecName "kube-api-access-fjl8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:30:03 crc kubenswrapper[4712]: I0130 18:30:03.578746 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7016a028-3d59-4c19-af25-90d601a927fe-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7016a028-3d59-4c19-af25-90d601a927fe" (UID: "7016a028-3d59-4c19-af25-90d601a927fe"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:30:03 crc kubenswrapper[4712]: I0130 18:30:03.673915 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fjl8j\" (UniqueName: \"kubernetes.io/projected/7016a028-3d59-4c19-af25-90d601a927fe-kube-api-access-fjl8j\") on node \"crc\" DevicePath \"\"" Jan 30 18:30:03 crc kubenswrapper[4712]: I0130 18:30:03.673945 4712 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7016a028-3d59-4c19-af25-90d601a927fe-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 18:30:03 crc kubenswrapper[4712]: I0130 18:30:03.673954 4712 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7016a028-3d59-4c19-af25-90d601a927fe-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 18:30:04 crc kubenswrapper[4712]: I0130 18:30:04.062020 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496630-2xjc7" event={"ID":"7016a028-3d59-4c19-af25-90d601a927fe","Type":"ContainerDied","Data":"87b60cfa60945b3eaa9abb91e97dfad93420191daa77b2acc825175fe76ab014"} Jan 30 18:30:04 crc kubenswrapper[4712]: I0130 18:30:04.062140 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496630-2xjc7" Jan 30 18:30:04 crc kubenswrapper[4712]: I0130 18:30:04.062255 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87b60cfa60945b3eaa9abb91e97dfad93420191daa77b2acc825175fe76ab014" Jan 30 18:30:04 crc kubenswrapper[4712]: I0130 18:30:04.523991 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496585-x2cfj"] Jan 30 18:30:04 crc kubenswrapper[4712]: I0130 18:30:04.543385 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496585-x2cfj"] Jan 30 18:30:05 crc kubenswrapper[4712]: I0130 18:30:05.823310 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdc7d161-1ea0-4608-857c-d4c466e90f97" path="/var/lib/kubelet/pods/bdc7d161-1ea0-4608-857c-d4c466e90f97/volumes" Jan 30 18:30:06 crc kubenswrapper[4712]: I0130 18:30:06.275222 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 18:30:06 crc kubenswrapper[4712]: I0130 18:30:06.275588 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 18:30:36 crc kubenswrapper[4712]: I0130 18:30:36.271042 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 18:30:36 crc kubenswrapper[4712]: I0130 18:30:36.273455 4712 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 18:30:36 crc kubenswrapper[4712]: I0130 18:30:36.273512 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" Jan 30 18:30:36 crc kubenswrapper[4712]: I0130 18:30:36.274968 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6c298cf76a16e19be8ed7e8074e679a0563a2056c2bd7cd9261263da6571eb23"} pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 18:30:36 crc kubenswrapper[4712]: I0130 18:30:36.275034 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" containerID="cri-o://6c298cf76a16e19be8ed7e8074e679a0563a2056c2bd7cd9261263da6571eb23" gracePeriod=600 Jan 30 18:30:36 crc kubenswrapper[4712]: E0130 18:30:36.413597 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:30:37 crc kubenswrapper[4712]: I0130 18:30:37.391696 4712 generic.go:334] "Generic (PLEG): container finished" podID="75ff6334-72a0-4748-bba6-0efb493c8033" containerID="6c298cf76a16e19be8ed7e8074e679a0563a2056c2bd7cd9261263da6571eb23" exitCode=0 Jan 30 18:30:37 crc kubenswrapper[4712]: I0130 18:30:37.391747 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerDied","Data":"6c298cf76a16e19be8ed7e8074e679a0563a2056c2bd7cd9261263da6571eb23"} Jan 30 18:30:37 crc kubenswrapper[4712]: I0130 18:30:37.391846 4712 scope.go:117] "RemoveContainer" containerID="19cd334db0c07e9254330b34d193469bef11596397bc0aa84782ec7894bac5b8" Jan 30 18:30:37 crc kubenswrapper[4712]: I0130 18:30:37.393104 4712 scope.go:117] "RemoveContainer" containerID="6c298cf76a16e19be8ed7e8074e679a0563a2056c2bd7cd9261263da6571eb23" Jan 30 18:30:37 crc kubenswrapper[4712]: E0130 18:30:37.393480 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:30:47 crc kubenswrapper[4712]: E0130 18:30:47.255552 4712 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Jan 30 18:30:51 crc kubenswrapper[4712]: I0130 
18:30:51.800453 4712 scope.go:117] "RemoveContainer" containerID="6c298cf76a16e19be8ed7e8074e679a0563a2056c2bd7cd9261263da6571eb23" Jan 30 18:30:51 crc kubenswrapper[4712]: E0130 18:30:51.801059 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:31:02 crc kubenswrapper[4712]: I0130 18:31:02.802901 4712 scope.go:117] "RemoveContainer" containerID="6c298cf76a16e19be8ed7e8074e679a0563a2056c2bd7cd9261263da6571eb23" Jan 30 18:31:02 crc kubenswrapper[4712]: E0130 18:31:02.803784 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:31:02 crc kubenswrapper[4712]: I0130 18:31:02.861675 4712 scope.go:117] "RemoveContainer" containerID="db61a762e5f3cfe2e14bdba4fde2c01d0ad75327e7cfe193de95fa0ca158fd53" Jan 30 18:31:13 crc kubenswrapper[4712]: I0130 18:31:13.820715 4712 scope.go:117] "RemoveContainer" containerID="6c298cf76a16e19be8ed7e8074e679a0563a2056c2bd7cd9261263da6571eb23" Jan 30 18:31:13 crc kubenswrapper[4712]: E0130 18:31:13.822271 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:31:26 crc kubenswrapper[4712]: I0130 18:31:26.800005 4712 scope.go:117] "RemoveContainer" containerID="6c298cf76a16e19be8ed7e8074e679a0563a2056c2bd7cd9261263da6571eb23" Jan 30 18:31:26 crc kubenswrapper[4712]: E0130 18:31:26.801019 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:31:39 crc kubenswrapper[4712]: I0130 18:31:39.800265 4712 scope.go:117] "RemoveContainer" containerID="6c298cf76a16e19be8ed7e8074e679a0563a2056c2bd7cd9261263da6571eb23" Jan 30 18:31:39 crc kubenswrapper[4712]: E0130 18:31:39.801083 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:31:51 crc kubenswrapper[4712]: I0130 18:31:51.803048 
4712 scope.go:117] "RemoveContainer" containerID="6c298cf76a16e19be8ed7e8074e679a0563a2056c2bd7cd9261263da6571eb23" Jan 30 18:31:51 crc kubenswrapper[4712]: E0130 18:31:51.803830 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:32:03 crc kubenswrapper[4712]: I0130 18:32:03.808191 4712 scope.go:117] "RemoveContainer" containerID="6c298cf76a16e19be8ed7e8074e679a0563a2056c2bd7cd9261263da6571eb23" Jan 30 18:32:03 crc kubenswrapper[4712]: E0130 18:32:03.808972 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:32:15 crc kubenswrapper[4712]: I0130 18:32:15.799764 4712 scope.go:117] "RemoveContainer" containerID="6c298cf76a16e19be8ed7e8074e679a0563a2056c2bd7cd9261263da6571eb23" Jan 30 18:32:15 crc kubenswrapper[4712]: E0130 18:32:15.801682 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:32:26 crc kubenswrapper[4712]: I0130 18:32:26.800046 4712 scope.go:117] "RemoveContainer" containerID="6c298cf76a16e19be8ed7e8074e679a0563a2056c2bd7cd9261263da6571eb23" Jan 30 18:32:26 crc kubenswrapper[4712]: E0130 18:32:26.801040 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:32:41 crc kubenswrapper[4712]: I0130 18:32:41.800039 4712 scope.go:117] "RemoveContainer" containerID="6c298cf76a16e19be8ed7e8074e679a0563a2056c2bd7cd9261263da6571eb23" Jan 30 18:32:41 crc kubenswrapper[4712]: E0130 18:32:41.801016 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:32:55 crc kubenswrapper[4712]: I0130 18:32:55.800816 4712 scope.go:117] "RemoveContainer" containerID="6c298cf76a16e19be8ed7e8074e679a0563a2056c2bd7cd9261263da6571eb23" Jan 30 18:32:55 crc kubenswrapper[4712]: E0130 18:32:55.801695 4712 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:33:10 crc kubenswrapper[4712]: I0130 18:33:10.800657 4712 scope.go:117] "RemoveContainer" containerID="6c298cf76a16e19be8ed7e8074e679a0563a2056c2bd7cd9261263da6571eb23" Jan 30 18:33:10 crc kubenswrapper[4712]: E0130 18:33:10.801572 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:33:25 crc kubenswrapper[4712]: I0130 18:33:25.800173 4712 scope.go:117] "RemoveContainer" containerID="6c298cf76a16e19be8ed7e8074e679a0563a2056c2bd7cd9261263da6571eb23" Jan 30 18:33:25 crc kubenswrapper[4712]: E0130 18:33:25.801077 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:33:34 crc kubenswrapper[4712]: I0130 18:33:34.519285 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tcr2n"] Jan 30 18:33:34 crc kubenswrapper[4712]: E0130 18:33:34.520498 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7016a028-3d59-4c19-af25-90d601a927fe" containerName="collect-profiles" Jan 30 18:33:34 crc kubenswrapper[4712]: I0130 18:33:34.520524 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="7016a028-3d59-4c19-af25-90d601a927fe" containerName="collect-profiles" Jan 30 18:33:34 crc kubenswrapper[4712]: I0130 18:33:34.520861 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="7016a028-3d59-4c19-af25-90d601a927fe" containerName="collect-profiles" Jan 30 18:33:34 crc kubenswrapper[4712]: I0130 18:33:34.523045 4712 util.go:30] "No sandbox for pod can be found. 
Jan 30 18:33:34 crc kubenswrapper[4712]: I0130 18:33:34.523045 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tcr2n"
Jan 30 18:33:34 crc kubenswrapper[4712]: I0130 18:33:34.545396 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tcr2n"]
Jan 30 18:33:34 crc kubenswrapper[4712]: I0130 18:33:34.702930 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cb8hm\" (UniqueName: \"kubernetes.io/projected/901d6339-5954-4ecd-8202-dba7b7e2873f-kube-api-access-cb8hm\") pod \"redhat-marketplace-tcr2n\" (UID: \"901d6339-5954-4ecd-8202-dba7b7e2873f\") " pod="openshift-marketplace/redhat-marketplace-tcr2n"
Jan 30 18:33:34 crc kubenswrapper[4712]: I0130 18:33:34.703280 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/901d6339-5954-4ecd-8202-dba7b7e2873f-catalog-content\") pod \"redhat-marketplace-tcr2n\" (UID: \"901d6339-5954-4ecd-8202-dba7b7e2873f\") " pod="openshift-marketplace/redhat-marketplace-tcr2n"
Jan 30 18:33:34 crc kubenswrapper[4712]: I0130 18:33:34.703307 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/901d6339-5954-4ecd-8202-dba7b7e2873f-utilities\") pod \"redhat-marketplace-tcr2n\" (UID: \"901d6339-5954-4ecd-8202-dba7b7e2873f\") " pod="openshift-marketplace/redhat-marketplace-tcr2n"
Jan 30 18:33:34 crc kubenswrapper[4712]: I0130 18:33:34.805031 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cb8hm\" (UniqueName: \"kubernetes.io/projected/901d6339-5954-4ecd-8202-dba7b7e2873f-kube-api-access-cb8hm\") pod \"redhat-marketplace-tcr2n\" (UID: \"901d6339-5954-4ecd-8202-dba7b7e2873f\") " pod="openshift-marketplace/redhat-marketplace-tcr2n"
Jan 30 18:33:34 crc kubenswrapper[4712]: I0130 18:33:34.805204 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/901d6339-5954-4ecd-8202-dba7b7e2873f-catalog-content\") pod \"redhat-marketplace-tcr2n\" (UID: \"901d6339-5954-4ecd-8202-dba7b7e2873f\") " pod="openshift-marketplace/redhat-marketplace-tcr2n"
Jan 30 18:33:34 crc kubenswrapper[4712]: I0130 18:33:34.805231 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/901d6339-5954-4ecd-8202-dba7b7e2873f-utilities\") pod \"redhat-marketplace-tcr2n\" (UID: \"901d6339-5954-4ecd-8202-dba7b7e2873f\") " pod="openshift-marketplace/redhat-marketplace-tcr2n"
Jan 30 18:33:34 crc kubenswrapper[4712]: I0130 18:33:34.805743 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/901d6339-5954-4ecd-8202-dba7b7e2873f-catalog-content\") pod \"redhat-marketplace-tcr2n\" (UID: \"901d6339-5954-4ecd-8202-dba7b7e2873f\") " pod="openshift-marketplace/redhat-marketplace-tcr2n"
Jan 30 18:33:34 crc kubenswrapper[4712]: I0130 18:33:34.805959 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/901d6339-5954-4ecd-8202-dba7b7e2873f-utilities\") pod \"redhat-marketplace-tcr2n\" (UID: \"901d6339-5954-4ecd-8202-dba7b7e2873f\") " pod="openshift-marketplace/redhat-marketplace-tcr2n"
Jan 30 18:33:34 crc kubenswrapper[4712]: I0130 18:33:34.825624 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cb8hm\" (UniqueName: \"kubernetes.io/projected/901d6339-5954-4ecd-8202-dba7b7e2873f-kube-api-access-cb8hm\") pod \"redhat-marketplace-tcr2n\" (UID: \"901d6339-5954-4ecd-8202-dba7b7e2873f\") " pod="openshift-marketplace/redhat-marketplace-tcr2n"
Jan 30 18:33:34 crc kubenswrapper[4712]: I0130 18:33:34.849551 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tcr2n"
Jan 30 18:33:35 crc kubenswrapper[4712]: I0130 18:33:35.362314 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tcr2n"]
Jan 30 18:33:36 crc kubenswrapper[4712]: I0130 18:33:36.158887 4712 generic.go:334] "Generic (PLEG): container finished" podID="901d6339-5954-4ecd-8202-dba7b7e2873f" containerID="5c0c3ac3d9ff9399c626292348bbefcd92b7ac35b30201036bd0682361344a51" exitCode=0
Jan 30 18:33:36 crc kubenswrapper[4712]: I0130 18:33:36.159242 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tcr2n" event={"ID":"901d6339-5954-4ecd-8202-dba7b7e2873f","Type":"ContainerDied","Data":"5c0c3ac3d9ff9399c626292348bbefcd92b7ac35b30201036bd0682361344a51"}
Jan 30 18:33:36 crc kubenswrapper[4712]: I0130 18:33:36.159303 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tcr2n" event={"ID":"901d6339-5954-4ecd-8202-dba7b7e2873f","Type":"ContainerStarted","Data":"fa6054ee2fecf83517f52fd4223865a94208bf3374ea519d87cc113335242a9f"}
Jan 30 18:33:36 crc kubenswrapper[4712]: I0130 18:33:36.167581 4712 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 18:33:38 crc kubenswrapper[4712]: I0130 18:33:38.178751 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tcr2n" event={"ID":"901d6339-5954-4ecd-8202-dba7b7e2873f","Type":"ContainerStarted","Data":"404224d240bf6780252e7f0a94c495d454ef592312f0f23a72e79bed8376b65f"}
Jan 30 18:33:39 crc kubenswrapper[4712]: I0130 18:33:39.188838 4712 generic.go:334] "Generic (PLEG): container finished" podID="901d6339-5954-4ecd-8202-dba7b7e2873f" containerID="404224d240bf6780252e7f0a94c495d454ef592312f0f23a72e79bed8376b65f" exitCode=0
Jan 30 18:33:39 crc kubenswrapper[4712]: I0130 18:33:39.188878 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tcr2n" event={"ID":"901d6339-5954-4ecd-8202-dba7b7e2873f","Type":"ContainerDied","Data":"404224d240bf6780252e7f0a94c495d454ef592312f0f23a72e79bed8376b65f"}
Jan 30 18:33:39 crc kubenswrapper[4712]: I0130 18:33:39.801065 4712 scope.go:117] "RemoveContainer" containerID="6c298cf76a16e19be8ed7e8074e679a0563a2056c2bd7cd9261263da6571eb23"
Jan 30 18:33:39 crc kubenswrapper[4712]: E0130 18:33:39.801640 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 18:33:40 crc kubenswrapper[4712]: I0130 18:33:40.200925 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tcr2n" event={"ID":"901d6339-5954-4ecd-8202-dba7b7e2873f","Type":"ContainerStarted","Data":"ab35cc34e4459b068c8b02aa59846922e5cf42321f1e39337eedd67b5d07417b"}
Jan 30 18:33:40 crc kubenswrapper[4712]: I0130 18:33:40.233383 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tcr2n" podStartSLOduration=2.7736740109999998 podStartE2EDuration="6.233356966s" podCreationTimestamp="2026-01-30 18:33:34 +0000 UTC" firstStartedPulling="2026-01-30 18:33:36.165784066 +0000 UTC m=+5953.072793575" lastFinishedPulling="2026-01-30 18:33:39.625467051 +0000 UTC m=+5956.532476530" observedRunningTime="2026-01-30 18:33:40.226169731 +0000 UTC m=+5957.133179200" watchObservedRunningTime="2026-01-30 18:33:40.233356966 +0000 UTC m=+5957.140366445"
Jan 30 18:33:44 crc kubenswrapper[4712]: I0130 18:33:44.447366 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dqs6f"]
Jan 30 18:33:44 crc kubenswrapper[4712]: I0130 18:33:44.450390 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dqs6f"
Jan 30 18:33:44 crc kubenswrapper[4712]: I0130 18:33:44.465171 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dqs6f"]
Jan 30 18:33:44 crc kubenswrapper[4712]: I0130 18:33:44.492039 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bea2e47-1aef-4872-9829-928e13c48c04-utilities\") pod \"community-operators-dqs6f\" (UID: \"2bea2e47-1aef-4872-9829-928e13c48c04\") " pod="openshift-marketplace/community-operators-dqs6f"
Jan 30 18:33:44 crc kubenswrapper[4712]: I0130 18:33:44.492729 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vw5x2\" (UniqueName: \"kubernetes.io/projected/2bea2e47-1aef-4872-9829-928e13c48c04-kube-api-access-vw5x2\") pod \"community-operators-dqs6f\" (UID: \"2bea2e47-1aef-4872-9829-928e13c48c04\") " pod="openshift-marketplace/community-operators-dqs6f"
Jan 30 18:33:44 crc kubenswrapper[4712]: I0130 18:33:44.492933 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bea2e47-1aef-4872-9829-928e13c48c04-catalog-content\") pod \"community-operators-dqs6f\" (UID: \"2bea2e47-1aef-4872-9829-928e13c48c04\") " pod="openshift-marketplace/community-operators-dqs6f"
Jan 30 18:33:44 crc kubenswrapper[4712]: I0130 18:33:44.595153 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vw5x2\" (UniqueName: \"kubernetes.io/projected/2bea2e47-1aef-4872-9829-928e13c48c04-kube-api-access-vw5x2\") pod \"community-operators-dqs6f\" (UID: \"2bea2e47-1aef-4872-9829-928e13c48c04\") " pod="openshift-marketplace/community-operators-dqs6f"
Jan 30 18:33:44 crc kubenswrapper[4712]: I0130 18:33:44.595233 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bea2e47-1aef-4872-9829-928e13c48c04-catalog-content\") pod \"community-operators-dqs6f\" (UID: \"2bea2e47-1aef-4872-9829-928e13c48c04\") " pod="openshift-marketplace/community-operators-dqs6f"
Jan 30 18:33:44 crc kubenswrapper[4712]: I0130 18:33:44.595298 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bea2e47-1aef-4872-9829-928e13c48c04-utilities\") pod \"community-operators-dqs6f\" (UID: \"2bea2e47-1aef-4872-9829-928e13c48c04\") " pod="openshift-marketplace/community-operators-dqs6f"
Jan 30 18:33:44 crc kubenswrapper[4712]: I0130 18:33:44.595988 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bea2e47-1aef-4872-9829-928e13c48c04-utilities\") pod \"community-operators-dqs6f\" (UID: \"2bea2e47-1aef-4872-9829-928e13c48c04\") " pod="openshift-marketplace/community-operators-dqs6f"
Jan 30 18:33:44 crc kubenswrapper[4712]: I0130 18:33:44.596113 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bea2e47-1aef-4872-9829-928e13c48c04-catalog-content\") pod \"community-operators-dqs6f\" (UID: \"2bea2e47-1aef-4872-9829-928e13c48c04\") " pod="openshift-marketplace/community-operators-dqs6f"
Jan 30 18:33:44 crc kubenswrapper[4712]: I0130 18:33:44.620813 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vw5x2\" (UniqueName: \"kubernetes.io/projected/2bea2e47-1aef-4872-9829-928e13c48c04-kube-api-access-vw5x2\") pod \"community-operators-dqs6f\" (UID: \"2bea2e47-1aef-4872-9829-928e13c48c04\") " pod="openshift-marketplace/community-operators-dqs6f"
Jan 30 18:33:44 crc kubenswrapper[4712]: I0130 18:33:44.771302 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dqs6f"
Jan 30 18:33:44 crc kubenswrapper[4712]: I0130 18:33:44.850881 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tcr2n"
Jan 30 18:33:44 crc kubenswrapper[4712]: I0130 18:33:44.854703 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tcr2n"
Jan 30 18:33:45 crc kubenswrapper[4712]: I0130 18:33:45.462727 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dqs6f"]
Jan 30 18:33:45 crc kubenswrapper[4712]: I0130 18:33:45.967521 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-tcr2n" podUID="901d6339-5954-4ecd-8202-dba7b7e2873f" containerName="registry-server" probeResult="failure" output=<
Jan 30 18:33:45 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 18:33:45 crc kubenswrapper[4712]: >
Jan 30 18:33:46 crc kubenswrapper[4712]: I0130 18:33:46.258287 4712 generic.go:334] "Generic (PLEG): container finished" podID="2bea2e47-1aef-4872-9829-928e13c48c04" containerID="87032e0586b61df16cf3e9a4b99741326b2fddeb4007e53f4e88f820c84af1d2" exitCode=0
Jan 30 18:33:46 crc kubenswrapper[4712]: I0130 18:33:46.258333 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dqs6f" event={"ID":"2bea2e47-1aef-4872-9829-928e13c48c04","Type":"ContainerDied","Data":"87032e0586b61df16cf3e9a4b99741326b2fddeb4007e53f4e88f820c84af1d2"}
Jan 30 18:33:46 crc kubenswrapper[4712]: I0130 18:33:46.258359 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dqs6f" event={"ID":"2bea2e47-1aef-4872-9829-928e13c48c04","Type":"ContainerStarted","Data":"411e8be383f34fb71a1ef644e7b089a73c958d5e5956f9361e51f50358eebb5f"}
Jan 30 18:33:47 crc kubenswrapper[4712]: I0130 18:33:47.268078 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dqs6f" event={"ID":"2bea2e47-1aef-4872-9829-928e13c48c04","Type":"ContainerStarted","Data":"66ba2c26c42822a0cd4f33c5f4b46c728ae0505bab7a53f91f33bf1315cfa371"}
Jan 30 18:33:49 crc kubenswrapper[4712]: I0130 18:33:49.285963 4712 generic.go:334] "Generic (PLEG): container finished" podID="2bea2e47-1aef-4872-9829-928e13c48c04" containerID="66ba2c26c42822a0cd4f33c5f4b46c728ae0505bab7a53f91f33bf1315cfa371" exitCode=0
Jan 30 18:33:49 crc kubenswrapper[4712]: I0130 18:33:49.286056 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dqs6f" event={"ID":"2bea2e47-1aef-4872-9829-928e13c48c04","Type":"ContainerDied","Data":"66ba2c26c42822a0cd4f33c5f4b46c728ae0505bab7a53f91f33bf1315cfa371"}
Jan 30 18:33:50 crc kubenswrapper[4712]: I0130 18:33:50.297923 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dqs6f" event={"ID":"2bea2e47-1aef-4872-9829-928e13c48c04","Type":"ContainerStarted","Data":"69a6d3fc51f4be74d40126fc64357720f89768e4af2d44ee9e552aff0e6448e5"}
Jan 30 18:33:50 crc kubenswrapper[4712]: I0130 18:33:50.324762 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dqs6f" podStartSLOduration=2.700942201 podStartE2EDuration="6.324739662s" podCreationTimestamp="2026-01-30 18:33:44 +0000 UTC" firstStartedPulling="2026-01-30 18:33:46.259861406 +0000 UTC m=+5963.166870875" lastFinishedPulling="2026-01-30 18:33:49.883658867 +0000 UTC m=+5966.790668336" observedRunningTime="2026-01-30 18:33:50.313149619 +0000 UTC m=+5967.220159088" watchObservedRunningTime="2026-01-30 18:33:50.324739662 +0000 UTC m=+5967.231749131"
Jan 30 18:33:53 crc kubenswrapper[4712]: I0130 18:33:53.806099 4712 scope.go:117] "RemoveContainer" containerID="6c298cf76a16e19be8ed7e8074e679a0563a2056c2bd7cd9261263da6571eb23"
Jan 30 18:33:53 crc kubenswrapper[4712]: E0130 18:33:53.806862 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 18:33:54 crc kubenswrapper[4712]: I0130 18:33:54.772125 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dqs6f"
Jan 30 18:33:54 crc kubenswrapper[4712]: I0130 18:33:54.772230 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dqs6f"
Jan 30 18:33:54 crc kubenswrapper[4712]: I0130 18:33:54.913358 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tcr2n"
Jan 30 18:33:54 crc kubenswrapper[4712]: I0130 18:33:54.978703 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tcr2n"
Jan 30 18:33:55 crc kubenswrapper[4712]: I0130 18:33:55.161653 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tcr2n"]
podUID="2bea2e47-1aef-4872-9829-928e13c48c04" containerName="registry-server" probeResult="failure" output=< Jan 30 18:33:55 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 18:33:55 crc kubenswrapper[4712]: > Jan 30 18:33:56 crc kubenswrapper[4712]: I0130 18:33:56.347054 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tcr2n" podUID="901d6339-5954-4ecd-8202-dba7b7e2873f" containerName="registry-server" containerID="cri-o://ab35cc34e4459b068c8b02aa59846922e5cf42321f1e39337eedd67b5d07417b" gracePeriod=2 Jan 30 18:33:56 crc kubenswrapper[4712]: I0130 18:33:56.958149 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tcr2n" Jan 30 18:33:57 crc kubenswrapper[4712]: I0130 18:33:57.037513 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/901d6339-5954-4ecd-8202-dba7b7e2873f-utilities\") pod \"901d6339-5954-4ecd-8202-dba7b7e2873f\" (UID: \"901d6339-5954-4ecd-8202-dba7b7e2873f\") " Jan 30 18:33:57 crc kubenswrapper[4712]: I0130 18:33:57.037824 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/901d6339-5954-4ecd-8202-dba7b7e2873f-catalog-content\") pod \"901d6339-5954-4ecd-8202-dba7b7e2873f\" (UID: \"901d6339-5954-4ecd-8202-dba7b7e2873f\") " Jan 30 18:33:57 crc kubenswrapper[4712]: I0130 18:33:57.038039 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cb8hm\" (UniqueName: \"kubernetes.io/projected/901d6339-5954-4ecd-8202-dba7b7e2873f-kube-api-access-cb8hm\") pod \"901d6339-5954-4ecd-8202-dba7b7e2873f\" (UID: \"901d6339-5954-4ecd-8202-dba7b7e2873f\") " Jan 30 18:33:57 crc kubenswrapper[4712]: I0130 18:33:57.038982 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/901d6339-5954-4ecd-8202-dba7b7e2873f-utilities" (OuterVolumeSpecName: "utilities") pod "901d6339-5954-4ecd-8202-dba7b7e2873f" (UID: "901d6339-5954-4ecd-8202-dba7b7e2873f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:33:57 crc kubenswrapper[4712]: I0130 18:33:57.055129 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/901d6339-5954-4ecd-8202-dba7b7e2873f-kube-api-access-cb8hm" (OuterVolumeSpecName: "kube-api-access-cb8hm") pod "901d6339-5954-4ecd-8202-dba7b7e2873f" (UID: "901d6339-5954-4ecd-8202-dba7b7e2873f"). InnerVolumeSpecName "kube-api-access-cb8hm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:33:57 crc kubenswrapper[4712]: I0130 18:33:57.066222 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/901d6339-5954-4ecd-8202-dba7b7e2873f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "901d6339-5954-4ecd-8202-dba7b7e2873f" (UID: "901d6339-5954-4ecd-8202-dba7b7e2873f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:33:57 crc kubenswrapper[4712]: I0130 18:33:57.140923 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/901d6339-5954-4ecd-8202-dba7b7e2873f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 18:33:57 crc kubenswrapper[4712]: I0130 18:33:57.141137 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/901d6339-5954-4ecd-8202-dba7b7e2873f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 18:33:57 crc kubenswrapper[4712]: I0130 18:33:57.141199 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cb8hm\" (UniqueName: \"kubernetes.io/projected/901d6339-5954-4ecd-8202-dba7b7e2873f-kube-api-access-cb8hm\") on node \"crc\" DevicePath \"\"" Jan 30 18:33:57 crc kubenswrapper[4712]: I0130 18:33:57.362555 4712 generic.go:334] "Generic (PLEG): container finished" podID="901d6339-5954-4ecd-8202-dba7b7e2873f" containerID="ab35cc34e4459b068c8b02aa59846922e5cf42321f1e39337eedd67b5d07417b" exitCode=0 Jan 30 18:33:57 crc kubenswrapper[4712]: I0130 18:33:57.362611 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tcr2n" event={"ID":"901d6339-5954-4ecd-8202-dba7b7e2873f","Type":"ContainerDied","Data":"ab35cc34e4459b068c8b02aa59846922e5cf42321f1e39337eedd67b5d07417b"} Jan 30 18:33:57 crc kubenswrapper[4712]: I0130 18:33:57.362646 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tcr2n" event={"ID":"901d6339-5954-4ecd-8202-dba7b7e2873f","Type":"ContainerDied","Data":"fa6054ee2fecf83517f52fd4223865a94208bf3374ea519d87cc113335242a9f"} Jan 30 18:33:57 crc kubenswrapper[4712]: I0130 18:33:57.362666 4712 scope.go:117] "RemoveContainer" containerID="ab35cc34e4459b068c8b02aa59846922e5cf42321f1e39337eedd67b5d07417b" Jan 30 18:33:57 crc kubenswrapper[4712]: I0130 18:33:57.362717 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tcr2n" Jan 30 18:33:57 crc kubenswrapper[4712]: I0130 18:33:57.393197 4712 scope.go:117] "RemoveContainer" containerID="404224d240bf6780252e7f0a94c495d454ef592312f0f23a72e79bed8376b65f" Jan 30 18:33:57 crc kubenswrapper[4712]: I0130 18:33:57.416619 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tcr2n"] Jan 30 18:33:57 crc kubenswrapper[4712]: I0130 18:33:57.426144 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tcr2n"] Jan 30 18:33:57 crc kubenswrapper[4712]: I0130 18:33:57.430225 4712 scope.go:117] "RemoveContainer" containerID="5c0c3ac3d9ff9399c626292348bbefcd92b7ac35b30201036bd0682361344a51" Jan 30 18:33:57 crc kubenswrapper[4712]: I0130 18:33:57.486119 4712 scope.go:117] "RemoveContainer" containerID="ab35cc34e4459b068c8b02aa59846922e5cf42321f1e39337eedd67b5d07417b" Jan 30 18:33:57 crc kubenswrapper[4712]: E0130 18:33:57.487242 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab35cc34e4459b068c8b02aa59846922e5cf42321f1e39337eedd67b5d07417b\": container with ID starting with ab35cc34e4459b068c8b02aa59846922e5cf42321f1e39337eedd67b5d07417b not found: ID does not exist" containerID="ab35cc34e4459b068c8b02aa59846922e5cf42321f1e39337eedd67b5d07417b" Jan 30 18:33:57 crc kubenswrapper[4712]: I0130 18:33:57.487331 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab35cc34e4459b068c8b02aa59846922e5cf42321f1e39337eedd67b5d07417b"} err="failed to get container status \"ab35cc34e4459b068c8b02aa59846922e5cf42321f1e39337eedd67b5d07417b\": rpc error: code = NotFound desc = could not find container \"ab35cc34e4459b068c8b02aa59846922e5cf42321f1e39337eedd67b5d07417b\": container with ID starting with ab35cc34e4459b068c8b02aa59846922e5cf42321f1e39337eedd67b5d07417b not found: ID does not exist" Jan 30 18:33:57 crc kubenswrapper[4712]: I0130 18:33:57.487358 4712 scope.go:117] "RemoveContainer" containerID="404224d240bf6780252e7f0a94c495d454ef592312f0f23a72e79bed8376b65f" Jan 30 18:33:57 crc kubenswrapper[4712]: E0130 18:33:57.487612 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"404224d240bf6780252e7f0a94c495d454ef592312f0f23a72e79bed8376b65f\": container with ID starting with 404224d240bf6780252e7f0a94c495d454ef592312f0f23a72e79bed8376b65f not found: ID does not exist" containerID="404224d240bf6780252e7f0a94c495d454ef592312f0f23a72e79bed8376b65f" Jan 30 18:33:57 crc kubenswrapper[4712]: I0130 18:33:57.487643 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"404224d240bf6780252e7f0a94c495d454ef592312f0f23a72e79bed8376b65f"} err="failed to get container status \"404224d240bf6780252e7f0a94c495d454ef592312f0f23a72e79bed8376b65f\": rpc error: code = NotFound desc = could not find container \"404224d240bf6780252e7f0a94c495d454ef592312f0f23a72e79bed8376b65f\": container with ID starting with 404224d240bf6780252e7f0a94c495d454ef592312f0f23a72e79bed8376b65f not found: ID does not exist" Jan 30 18:33:57 crc kubenswrapper[4712]: I0130 18:33:57.487663 4712 scope.go:117] "RemoveContainer" containerID="5c0c3ac3d9ff9399c626292348bbefcd92b7ac35b30201036bd0682361344a51" Jan 30 18:33:57 crc kubenswrapper[4712]: E0130 18:33:57.488128 4712 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5c0c3ac3d9ff9399c626292348bbefcd92b7ac35b30201036bd0682361344a51\": container with ID starting with 5c0c3ac3d9ff9399c626292348bbefcd92b7ac35b30201036bd0682361344a51 not found: ID does not exist" containerID="5c0c3ac3d9ff9399c626292348bbefcd92b7ac35b30201036bd0682361344a51" Jan 30 18:33:57 crc kubenswrapper[4712]: I0130 18:33:57.488158 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c0c3ac3d9ff9399c626292348bbefcd92b7ac35b30201036bd0682361344a51"} err="failed to get container status \"5c0c3ac3d9ff9399c626292348bbefcd92b7ac35b30201036bd0682361344a51\": rpc error: code = NotFound desc = could not find container \"5c0c3ac3d9ff9399c626292348bbefcd92b7ac35b30201036bd0682361344a51\": container with ID starting with 5c0c3ac3d9ff9399c626292348bbefcd92b7ac35b30201036bd0682361344a51 not found: ID does not exist" Jan 30 18:33:57 crc kubenswrapper[4712]: I0130 18:33:57.818565 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="901d6339-5954-4ecd-8202-dba7b7e2873f" path="/var/lib/kubelet/pods/901d6339-5954-4ecd-8202-dba7b7e2873f/volumes" Jan 30 18:34:04 crc kubenswrapper[4712]: I0130 18:34:04.861419 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dqs6f" Jan 30 18:34:04 crc kubenswrapper[4712]: I0130 18:34:04.916388 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dqs6f" Jan 30 18:34:05 crc kubenswrapper[4712]: I0130 18:34:05.710228 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dqs6f"] Jan 30 18:34:06 crc kubenswrapper[4712]: I0130 18:34:06.481231 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dqs6f" podUID="2bea2e47-1aef-4872-9829-928e13c48c04" containerName="registry-server" containerID="cri-o://69a6d3fc51f4be74d40126fc64357720f89768e4af2d44ee9e552aff0e6448e5" gracePeriod=2 Jan 30 18:34:06 crc kubenswrapper[4712]: I0130 18:34:06.978171 4712 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 18:34:06 crc kubenswrapper[4712]: I0130 18:34:06.978171 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dqs6f"
Jan 30 18:34:07 crc kubenswrapper[4712]: I0130 18:34:07.077439 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bea2e47-1aef-4872-9829-928e13c48c04-utilities\") pod \"2bea2e47-1aef-4872-9829-928e13c48c04\" (UID: \"2bea2e47-1aef-4872-9829-928e13c48c04\") "
Jan 30 18:34:07 crc kubenswrapper[4712]: I0130 18:34:07.077655 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vw5x2\" (UniqueName: \"kubernetes.io/projected/2bea2e47-1aef-4872-9829-928e13c48c04-kube-api-access-vw5x2\") pod \"2bea2e47-1aef-4872-9829-928e13c48c04\" (UID: \"2bea2e47-1aef-4872-9829-928e13c48c04\") "
Jan 30 18:34:07 crc kubenswrapper[4712]: I0130 18:34:07.077752 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bea2e47-1aef-4872-9829-928e13c48c04-catalog-content\") pod \"2bea2e47-1aef-4872-9829-928e13c48c04\" (UID: \"2bea2e47-1aef-4872-9829-928e13c48c04\") "
Jan 30 18:34:07 crc kubenswrapper[4712]: I0130 18:34:07.078791 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bea2e47-1aef-4872-9829-928e13c48c04-utilities" (OuterVolumeSpecName: "utilities") pod "2bea2e47-1aef-4872-9829-928e13c48c04" (UID: "2bea2e47-1aef-4872-9829-928e13c48c04"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 18:34:07 crc kubenswrapper[4712]: I0130 18:34:07.083698 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bea2e47-1aef-4872-9829-928e13c48c04-kube-api-access-vw5x2" (OuterVolumeSpecName: "kube-api-access-vw5x2") pod "2bea2e47-1aef-4872-9829-928e13c48c04" (UID: "2bea2e47-1aef-4872-9829-928e13c48c04"). InnerVolumeSpecName "kube-api-access-vw5x2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 18:34:07 crc kubenswrapper[4712]: I0130 18:34:07.127671 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2bea2e47-1aef-4872-9829-928e13c48c04-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2bea2e47-1aef-4872-9829-928e13c48c04" (UID: "2bea2e47-1aef-4872-9829-928e13c48c04"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 18:34:07 crc kubenswrapper[4712]: I0130 18:34:07.179861 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vw5x2\" (UniqueName: \"kubernetes.io/projected/2bea2e47-1aef-4872-9829-928e13c48c04-kube-api-access-vw5x2\") on node \"crc\" DevicePath \"\""
Jan 30 18:34:07 crc kubenswrapper[4712]: I0130 18:34:07.179912 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2bea2e47-1aef-4872-9829-928e13c48c04-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 18:34:07 crc kubenswrapper[4712]: I0130 18:34:07.179923 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2bea2e47-1aef-4872-9829-928e13c48c04-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 18:34:07 crc kubenswrapper[4712]: I0130 18:34:07.493927 4712 generic.go:334] "Generic (PLEG): container finished" podID="2bea2e47-1aef-4872-9829-928e13c48c04" containerID="69a6d3fc51f4be74d40126fc64357720f89768e4af2d44ee9e552aff0e6448e5" exitCode=0
Jan 30 18:34:07 crc kubenswrapper[4712]: I0130 18:34:07.493971 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dqs6f" event={"ID":"2bea2e47-1aef-4872-9829-928e13c48c04","Type":"ContainerDied","Data":"69a6d3fc51f4be74d40126fc64357720f89768e4af2d44ee9e552aff0e6448e5"}
Jan 30 18:34:07 crc kubenswrapper[4712]: I0130 18:34:07.493996 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dqs6f" event={"ID":"2bea2e47-1aef-4872-9829-928e13c48c04","Type":"ContainerDied","Data":"411e8be383f34fb71a1ef644e7b089a73c958d5e5956f9361e51f50358eebb5f"}
Jan 30 18:34:07 crc kubenswrapper[4712]: I0130 18:34:07.494013 4712 scope.go:117] "RemoveContainer" containerID="69a6d3fc51f4be74d40126fc64357720f89768e4af2d44ee9e552aff0e6448e5"
Jan 30 18:34:07 crc kubenswrapper[4712]: I0130 18:34:07.494132 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dqs6f"
Jan 30 18:34:07 crc kubenswrapper[4712]: I0130 18:34:07.530613 4712 scope.go:117] "RemoveContainer" containerID="66ba2c26c42822a0cd4f33c5f4b46c728ae0505bab7a53f91f33bf1315cfa371"
Jan 30 18:34:07 crc kubenswrapper[4712]: I0130 18:34:07.535748 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dqs6f"]
Jan 30 18:34:07 crc kubenswrapper[4712]: I0130 18:34:07.546634 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dqs6f"]
Jan 30 18:34:07 crc kubenswrapper[4712]: I0130 18:34:07.553738 4712 scope.go:117] "RemoveContainer" containerID="87032e0586b61df16cf3e9a4b99741326b2fddeb4007e53f4e88f820c84af1d2"
Jan 30 18:34:07 crc kubenswrapper[4712]: I0130 18:34:07.615347 4712 scope.go:117] "RemoveContainer" containerID="69a6d3fc51f4be74d40126fc64357720f89768e4af2d44ee9e552aff0e6448e5"
Jan 30 18:34:07 crc kubenswrapper[4712]: E0130 18:34:07.615968 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"69a6d3fc51f4be74d40126fc64357720f89768e4af2d44ee9e552aff0e6448e5\": container with ID starting with 69a6d3fc51f4be74d40126fc64357720f89768e4af2d44ee9e552aff0e6448e5 not found: ID does not exist" containerID="69a6d3fc51f4be74d40126fc64357720f89768e4af2d44ee9e552aff0e6448e5"
Jan 30 18:34:07 crc kubenswrapper[4712]: I0130 18:34:07.616062 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"69a6d3fc51f4be74d40126fc64357720f89768e4af2d44ee9e552aff0e6448e5"} err="failed to get container status \"69a6d3fc51f4be74d40126fc64357720f89768e4af2d44ee9e552aff0e6448e5\": rpc error: code = NotFound desc = could not find container \"69a6d3fc51f4be74d40126fc64357720f89768e4af2d44ee9e552aff0e6448e5\": container with ID starting with 69a6d3fc51f4be74d40126fc64357720f89768e4af2d44ee9e552aff0e6448e5 not found: ID does not exist"
Jan 30 18:34:07 crc kubenswrapper[4712]: I0130 18:34:07.616157 4712 scope.go:117] "RemoveContainer" containerID="66ba2c26c42822a0cd4f33c5f4b46c728ae0505bab7a53f91f33bf1315cfa371"
Jan 30 18:34:07 crc kubenswrapper[4712]: E0130 18:34:07.616838 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66ba2c26c42822a0cd4f33c5f4b46c728ae0505bab7a53f91f33bf1315cfa371\": container with ID starting with 66ba2c26c42822a0cd4f33c5f4b46c728ae0505bab7a53f91f33bf1315cfa371 not found: ID does not exist" containerID="66ba2c26c42822a0cd4f33c5f4b46c728ae0505bab7a53f91f33bf1315cfa371"
Jan 30 18:34:07 crc kubenswrapper[4712]: I0130 18:34:07.616951 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66ba2c26c42822a0cd4f33c5f4b46c728ae0505bab7a53f91f33bf1315cfa371"} err="failed to get container status \"66ba2c26c42822a0cd4f33c5f4b46c728ae0505bab7a53f91f33bf1315cfa371\": rpc error: code = NotFound desc = could not find container \"66ba2c26c42822a0cd4f33c5f4b46c728ae0505bab7a53f91f33bf1315cfa371\": container with ID starting with 66ba2c26c42822a0cd4f33c5f4b46c728ae0505bab7a53f91f33bf1315cfa371 not found: ID does not exist"
Jan 30 18:34:07 crc kubenswrapper[4712]: I0130 18:34:07.617027 4712 scope.go:117] "RemoveContainer" containerID="87032e0586b61df16cf3e9a4b99741326b2fddeb4007e53f4e88f820c84af1d2"
Jan 30 18:34:07 crc kubenswrapper[4712]: E0130 18:34:07.620042 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"87032e0586b61df16cf3e9a4b99741326b2fddeb4007e53f4e88f820c84af1d2\": container with ID starting with 87032e0586b61df16cf3e9a4b99741326b2fddeb4007e53f4e88f820c84af1d2 not found: ID does not exist" containerID="87032e0586b61df16cf3e9a4b99741326b2fddeb4007e53f4e88f820c84af1d2"
Jan 30 18:34:07 crc kubenswrapper[4712]: I0130 18:34:07.620395 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"87032e0586b61df16cf3e9a4b99741326b2fddeb4007e53f4e88f820c84af1d2"} err="failed to get container status \"87032e0586b61df16cf3e9a4b99741326b2fddeb4007e53f4e88f820c84af1d2\": rpc error: code = NotFound desc = could not find container \"87032e0586b61df16cf3e9a4b99741326b2fddeb4007e53f4e88f820c84af1d2\": container with ID starting with 87032e0586b61df16cf3e9a4b99741326b2fddeb4007e53f4e88f820c84af1d2 not found: ID does not exist"
Jan 30 18:34:07 crc kubenswrapper[4712]: I0130 18:34:07.801580 4712 scope.go:117] "RemoveContainer" containerID="6c298cf76a16e19be8ed7e8074e679a0563a2056c2bd7cd9261263da6571eb23"
Jan 30 18:34:07 crc kubenswrapper[4712]: E0130 18:34:07.801822 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 18:34:07 crc kubenswrapper[4712]: I0130 18:34:07.810654 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bea2e47-1aef-4872-9829-928e13c48c04" path="/var/lib/kubelet/pods/2bea2e47-1aef-4872-9829-928e13c48c04/volumes"
Jan 30 18:34:10 crc kubenswrapper[4712]: I0130 18:34:10.159632 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jtzjc"]
Jan 30 18:34:10 crc kubenswrapper[4712]: E0130 18:34:10.160559 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="901d6339-5954-4ecd-8202-dba7b7e2873f" containerName="extract-content"
Jan 30 18:34:10 crc kubenswrapper[4712]: I0130 18:34:10.160573 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="901d6339-5954-4ecd-8202-dba7b7e2873f" containerName="extract-content"
Jan 30 18:34:10 crc kubenswrapper[4712]: E0130 18:34:10.160589 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bea2e47-1aef-4872-9829-928e13c48c04" containerName="extract-utilities"
Jan 30 18:34:10 crc kubenswrapper[4712]: I0130 18:34:10.160597 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bea2e47-1aef-4872-9829-928e13c48c04" containerName="extract-utilities"
Jan 30 18:34:10 crc kubenswrapper[4712]: E0130 18:34:10.160615 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="901d6339-5954-4ecd-8202-dba7b7e2873f" containerName="registry-server"
Jan 30 18:34:10 crc kubenswrapper[4712]: I0130 18:34:10.160620 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="901d6339-5954-4ecd-8202-dba7b7e2873f" containerName="registry-server"
Jan 30 18:34:10 crc kubenswrapper[4712]: E0130 18:34:10.160631 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bea2e47-1aef-4872-9829-928e13c48c04" containerName="registry-server"
podUID="2bea2e47-1aef-4872-9829-928e13c48c04" containerName="registry-server" Jan 30 18:34:10 crc kubenswrapper[4712]: E0130 18:34:10.160645 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bea2e47-1aef-4872-9829-928e13c48c04" containerName="extract-content" Jan 30 18:34:10 crc kubenswrapper[4712]: I0130 18:34:10.160652 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bea2e47-1aef-4872-9829-928e13c48c04" containerName="extract-content" Jan 30 18:34:10 crc kubenswrapper[4712]: E0130 18:34:10.160664 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="901d6339-5954-4ecd-8202-dba7b7e2873f" containerName="extract-utilities" Jan 30 18:34:10 crc kubenswrapper[4712]: I0130 18:34:10.160670 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="901d6339-5954-4ecd-8202-dba7b7e2873f" containerName="extract-utilities" Jan 30 18:34:10 crc kubenswrapper[4712]: I0130 18:34:10.160863 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bea2e47-1aef-4872-9829-928e13c48c04" containerName="registry-server" Jan 30 18:34:10 crc kubenswrapper[4712]: I0130 18:34:10.160882 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="901d6339-5954-4ecd-8202-dba7b7e2873f" containerName="registry-server" Jan 30 18:34:10 crc kubenswrapper[4712]: I0130 18:34:10.162117 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jtzjc" Jan 30 18:34:10 crc kubenswrapper[4712]: I0130 18:34:10.183214 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jtzjc"] Jan 30 18:34:10 crc kubenswrapper[4712]: I0130 18:34:10.234369 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8117457-8b50-47e3-adab-2d674dae69e4-utilities\") pod \"certified-operators-jtzjc\" (UID: \"c8117457-8b50-47e3-adab-2d674dae69e4\") " pod="openshift-marketplace/certified-operators-jtzjc" Jan 30 18:34:10 crc kubenswrapper[4712]: I0130 18:34:10.234554 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7bfs\" (UniqueName: \"kubernetes.io/projected/c8117457-8b50-47e3-adab-2d674dae69e4-kube-api-access-k7bfs\") pod \"certified-operators-jtzjc\" (UID: \"c8117457-8b50-47e3-adab-2d674dae69e4\") " pod="openshift-marketplace/certified-operators-jtzjc" Jan 30 18:34:10 crc kubenswrapper[4712]: I0130 18:34:10.234599 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8117457-8b50-47e3-adab-2d674dae69e4-catalog-content\") pod \"certified-operators-jtzjc\" (UID: \"c8117457-8b50-47e3-adab-2d674dae69e4\") " pod="openshift-marketplace/certified-operators-jtzjc" Jan 30 18:34:10 crc kubenswrapper[4712]: I0130 18:34:10.336526 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8117457-8b50-47e3-adab-2d674dae69e4-utilities\") pod \"certified-operators-jtzjc\" (UID: \"c8117457-8b50-47e3-adab-2d674dae69e4\") " pod="openshift-marketplace/certified-operators-jtzjc" Jan 30 18:34:10 crc kubenswrapper[4712]: I0130 18:34:10.336894 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7bfs\" (UniqueName: \"kubernetes.io/projected/c8117457-8b50-47e3-adab-2d674dae69e4-kube-api-access-k7bfs\") 
pod \"certified-operators-jtzjc\" (UID: \"c8117457-8b50-47e3-adab-2d674dae69e4\") " pod="openshift-marketplace/certified-operators-jtzjc" Jan 30 18:34:10 crc kubenswrapper[4712]: I0130 18:34:10.337034 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8117457-8b50-47e3-adab-2d674dae69e4-catalog-content\") pod \"certified-operators-jtzjc\" (UID: \"c8117457-8b50-47e3-adab-2d674dae69e4\") " pod="openshift-marketplace/certified-operators-jtzjc" Jan 30 18:34:10 crc kubenswrapper[4712]: I0130 18:34:10.337066 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8117457-8b50-47e3-adab-2d674dae69e4-utilities\") pod \"certified-operators-jtzjc\" (UID: \"c8117457-8b50-47e3-adab-2d674dae69e4\") " pod="openshift-marketplace/certified-operators-jtzjc" Jan 30 18:34:10 crc kubenswrapper[4712]: I0130 18:34:10.337271 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8117457-8b50-47e3-adab-2d674dae69e4-catalog-content\") pod \"certified-operators-jtzjc\" (UID: \"c8117457-8b50-47e3-adab-2d674dae69e4\") " pod="openshift-marketplace/certified-operators-jtzjc" Jan 30 18:34:10 crc kubenswrapper[4712]: I0130 18:34:10.363142 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7bfs\" (UniqueName: \"kubernetes.io/projected/c8117457-8b50-47e3-adab-2d674dae69e4-kube-api-access-k7bfs\") pod \"certified-operators-jtzjc\" (UID: \"c8117457-8b50-47e3-adab-2d674dae69e4\") " pod="openshift-marketplace/certified-operators-jtzjc" Jan 30 18:34:10 crc kubenswrapper[4712]: I0130 18:34:10.483857 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jtzjc" Jan 30 18:34:11 crc kubenswrapper[4712]: I0130 18:34:11.143657 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jtzjc"] Jan 30 18:34:11 crc kubenswrapper[4712]: I0130 18:34:11.527548 4712 generic.go:334] "Generic (PLEG): container finished" podID="c8117457-8b50-47e3-adab-2d674dae69e4" containerID="58f3b4ecb7f930fe1468edc9a2c6a4ea5fee928d35a59cfabce983f0a83a8324" exitCode=0 Jan 30 18:34:11 crc kubenswrapper[4712]: I0130 18:34:11.527618 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jtzjc" event={"ID":"c8117457-8b50-47e3-adab-2d674dae69e4","Type":"ContainerDied","Data":"58f3b4ecb7f930fe1468edc9a2c6a4ea5fee928d35a59cfabce983f0a83a8324"} Jan 30 18:34:11 crc kubenswrapper[4712]: I0130 18:34:11.527908 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jtzjc" event={"ID":"c8117457-8b50-47e3-adab-2d674dae69e4","Type":"ContainerStarted","Data":"5fd5c52cbccaf30ae1429b812dcda77fc066f4cb45bea8ebff2ce8fd9459f9c4"} Jan 30 18:34:12 crc kubenswrapper[4712]: I0130 18:34:12.539041 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jtzjc" event={"ID":"c8117457-8b50-47e3-adab-2d674dae69e4","Type":"ContainerStarted","Data":"b8694f273fbd9d3c1913a4a69bd13ecbe2e66360e7fbbc98629a4610a77e52b9"} Jan 30 18:34:14 crc kubenswrapper[4712]: I0130 18:34:14.562136 4712 generic.go:334] "Generic (PLEG): container finished" podID="c8117457-8b50-47e3-adab-2d674dae69e4" containerID="b8694f273fbd9d3c1913a4a69bd13ecbe2e66360e7fbbc98629a4610a77e52b9" exitCode=0 Jan 30 18:34:14 crc kubenswrapper[4712]: I0130 18:34:14.562338 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jtzjc" event={"ID":"c8117457-8b50-47e3-adab-2d674dae69e4","Type":"ContainerDied","Data":"b8694f273fbd9d3c1913a4a69bd13ecbe2e66360e7fbbc98629a4610a77e52b9"} Jan 30 18:34:15 crc kubenswrapper[4712]: I0130 18:34:15.573540 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jtzjc" event={"ID":"c8117457-8b50-47e3-adab-2d674dae69e4","Type":"ContainerStarted","Data":"8b3c777f641df259f2b45bbde9d9f4b69a32727e35bdb733bb039c7728e56bcc"} Jan 30 18:34:15 crc kubenswrapper[4712]: I0130 18:34:15.599260 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jtzjc" podStartSLOduration=2.137512032 podStartE2EDuration="5.599243366s" podCreationTimestamp="2026-01-30 18:34:10 +0000 UTC" firstStartedPulling="2026-01-30 18:34:11.52975708 +0000 UTC m=+5988.436766549" lastFinishedPulling="2026-01-30 18:34:14.991488414 +0000 UTC m=+5991.898497883" observedRunningTime="2026-01-30 18:34:15.593461166 +0000 UTC m=+5992.500470635" watchObservedRunningTime="2026-01-30 18:34:15.599243366 +0000 UTC m=+5992.506252835" Jan 30 18:34:20 crc kubenswrapper[4712]: I0130 18:34:20.483984 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jtzjc" Jan 30 18:34:20 crc kubenswrapper[4712]: I0130 18:34:20.484538 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jtzjc" Jan 30 18:34:21 crc kubenswrapper[4712]: I0130 18:34:21.544131 4712 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/certified-operators-jtzjc" podUID="c8117457-8b50-47e3-adab-2d674dae69e4" containerName="registry-server" probeResult="failure" output=< Jan 30 18:34:21 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 18:34:21 crc kubenswrapper[4712]: > Jan 30 18:34:22 crc kubenswrapper[4712]: I0130 18:34:22.800462 4712 scope.go:117] "RemoveContainer" containerID="6c298cf76a16e19be8ed7e8074e679a0563a2056c2bd7cd9261263da6571eb23" Jan 30 18:34:22 crc kubenswrapper[4712]: E0130 18:34:22.801980 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:34:30 crc kubenswrapper[4712]: I0130 18:34:30.563884 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jtzjc" Jan 30 18:34:30 crc kubenswrapper[4712]: I0130 18:34:30.631603 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jtzjc" Jan 30 18:34:30 crc kubenswrapper[4712]: I0130 18:34:30.812493 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jtzjc"] Jan 30 18:34:32 crc kubenswrapper[4712]: I0130 18:34:32.008241 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jtzjc" podUID="c8117457-8b50-47e3-adab-2d674dae69e4" containerName="registry-server" containerID="cri-o://8b3c777f641df259f2b45bbde9d9f4b69a32727e35bdb733bb039c7728e56bcc" gracePeriod=2 Jan 30 18:34:32 crc kubenswrapper[4712]: I0130 18:34:32.509255 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jtzjc" Jan 30 18:34:32 crc kubenswrapper[4712]: I0130 18:34:32.683596 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8117457-8b50-47e3-adab-2d674dae69e4-utilities\") pod \"c8117457-8b50-47e3-adab-2d674dae69e4\" (UID: \"c8117457-8b50-47e3-adab-2d674dae69e4\") " Jan 30 18:34:32 crc kubenswrapper[4712]: I0130 18:34:32.683994 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8117457-8b50-47e3-adab-2d674dae69e4-catalog-content\") pod \"c8117457-8b50-47e3-adab-2d674dae69e4\" (UID: \"c8117457-8b50-47e3-adab-2d674dae69e4\") " Jan 30 18:34:32 crc kubenswrapper[4712]: I0130 18:34:32.684043 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7bfs\" (UniqueName: \"kubernetes.io/projected/c8117457-8b50-47e3-adab-2d674dae69e4-kube-api-access-k7bfs\") pod \"c8117457-8b50-47e3-adab-2d674dae69e4\" (UID: \"c8117457-8b50-47e3-adab-2d674dae69e4\") " Jan 30 18:34:32 crc kubenswrapper[4712]: I0130 18:34:32.684429 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8117457-8b50-47e3-adab-2d674dae69e4-utilities" (OuterVolumeSpecName: "utilities") pod "c8117457-8b50-47e3-adab-2d674dae69e4" (UID: "c8117457-8b50-47e3-adab-2d674dae69e4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:34:32 crc kubenswrapper[4712]: I0130 18:34:32.684672 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8117457-8b50-47e3-adab-2d674dae69e4-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 18:34:32 crc kubenswrapper[4712]: I0130 18:34:32.693058 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8117457-8b50-47e3-adab-2d674dae69e4-kube-api-access-k7bfs" (OuterVolumeSpecName: "kube-api-access-k7bfs") pod "c8117457-8b50-47e3-adab-2d674dae69e4" (UID: "c8117457-8b50-47e3-adab-2d674dae69e4"). InnerVolumeSpecName "kube-api-access-k7bfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:34:32 crc kubenswrapper[4712]: I0130 18:34:32.756643 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8117457-8b50-47e3-adab-2d674dae69e4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8117457-8b50-47e3-adab-2d674dae69e4" (UID: "c8117457-8b50-47e3-adab-2d674dae69e4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:34:32 crc kubenswrapper[4712]: I0130 18:34:32.786337 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8117457-8b50-47e3-adab-2d674dae69e4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 18:34:32 crc kubenswrapper[4712]: I0130 18:34:32.786377 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7bfs\" (UniqueName: \"kubernetes.io/projected/c8117457-8b50-47e3-adab-2d674dae69e4-kube-api-access-k7bfs\") on node \"crc\" DevicePath \"\"" Jan 30 18:34:33 crc kubenswrapper[4712]: I0130 18:34:33.021296 4712 generic.go:334] "Generic (PLEG): container finished" podID="c8117457-8b50-47e3-adab-2d674dae69e4" containerID="8b3c777f641df259f2b45bbde9d9f4b69a32727e35bdb733bb039c7728e56bcc" exitCode=0 Jan 30 18:34:33 crc kubenswrapper[4712]: I0130 18:34:33.021381 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-jtzjc" Jan 30 18:34:33 crc kubenswrapper[4712]: I0130 18:34:33.021385 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jtzjc" event={"ID":"c8117457-8b50-47e3-adab-2d674dae69e4","Type":"ContainerDied","Data":"8b3c777f641df259f2b45bbde9d9f4b69a32727e35bdb733bb039c7728e56bcc"} Jan 30 18:34:33 crc kubenswrapper[4712]: I0130 18:34:33.022627 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jtzjc" event={"ID":"c8117457-8b50-47e3-adab-2d674dae69e4","Type":"ContainerDied","Data":"5fd5c52cbccaf30ae1429b812dcda77fc066f4cb45bea8ebff2ce8fd9459f9c4"} Jan 30 18:34:33 crc kubenswrapper[4712]: I0130 18:34:33.022651 4712 scope.go:117] "RemoveContainer" containerID="8b3c777f641df259f2b45bbde9d9f4b69a32727e35bdb733bb039c7728e56bcc" Jan 30 18:34:33 crc kubenswrapper[4712]: I0130 18:34:33.057229 4712 scope.go:117] "RemoveContainer" containerID="b8694f273fbd9d3c1913a4a69bd13ecbe2e66360e7fbbc98629a4610a77e52b9" Jan 30 18:34:33 crc kubenswrapper[4712]: I0130 18:34:33.062023 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jtzjc"] Jan 30 18:34:33 crc kubenswrapper[4712]: I0130 18:34:33.071124 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jtzjc"] Jan 30 18:34:33 crc kubenswrapper[4712]: I0130 18:34:33.085221 4712 scope.go:117] "RemoveContainer" containerID="58f3b4ecb7f930fe1468edc9a2c6a4ea5fee928d35a59cfabce983f0a83a8324" Jan 30 18:34:33 crc kubenswrapper[4712]: I0130 18:34:33.129960 4712 scope.go:117] "RemoveContainer" containerID="8b3c777f641df259f2b45bbde9d9f4b69a32727e35bdb733bb039c7728e56bcc" Jan 30 18:34:33 crc kubenswrapper[4712]: E0130 18:34:33.130443 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b3c777f641df259f2b45bbde9d9f4b69a32727e35bdb733bb039c7728e56bcc\": container with ID starting with 8b3c777f641df259f2b45bbde9d9f4b69a32727e35bdb733bb039c7728e56bcc not found: ID does not exist" containerID="8b3c777f641df259f2b45bbde9d9f4b69a32727e35bdb733bb039c7728e56bcc" Jan 30 18:34:33 crc kubenswrapper[4712]: I0130 18:34:33.130559 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b3c777f641df259f2b45bbde9d9f4b69a32727e35bdb733bb039c7728e56bcc"} err="failed to get container status \"8b3c777f641df259f2b45bbde9d9f4b69a32727e35bdb733bb039c7728e56bcc\": rpc error: code = NotFound desc = could not find container \"8b3c777f641df259f2b45bbde9d9f4b69a32727e35bdb733bb039c7728e56bcc\": container with ID starting with 8b3c777f641df259f2b45bbde9d9f4b69a32727e35bdb733bb039c7728e56bcc not found: ID does not exist" Jan 30 18:34:33 crc kubenswrapper[4712]: I0130 18:34:33.130683 4712 scope.go:117] "RemoveContainer" containerID="b8694f273fbd9d3c1913a4a69bd13ecbe2e66360e7fbbc98629a4610a77e52b9" Jan 30 18:34:33 crc kubenswrapper[4712]: E0130 18:34:33.131107 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8694f273fbd9d3c1913a4a69bd13ecbe2e66360e7fbbc98629a4610a77e52b9\": container with ID starting with b8694f273fbd9d3c1913a4a69bd13ecbe2e66360e7fbbc98629a4610a77e52b9 not found: ID does not exist" containerID="b8694f273fbd9d3c1913a4a69bd13ecbe2e66360e7fbbc98629a4610a77e52b9" Jan 30 18:34:33 crc kubenswrapper[4712]: I0130 18:34:33.131149 4712 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8694f273fbd9d3c1913a4a69bd13ecbe2e66360e7fbbc98629a4610a77e52b9"} err="failed to get container status \"b8694f273fbd9d3c1913a4a69bd13ecbe2e66360e7fbbc98629a4610a77e52b9\": rpc error: code = NotFound desc = could not find container \"b8694f273fbd9d3c1913a4a69bd13ecbe2e66360e7fbbc98629a4610a77e52b9\": container with ID starting with b8694f273fbd9d3c1913a4a69bd13ecbe2e66360e7fbbc98629a4610a77e52b9 not found: ID does not exist" Jan 30 18:34:33 crc kubenswrapper[4712]: I0130 18:34:33.131176 4712 scope.go:117] "RemoveContainer" containerID="58f3b4ecb7f930fe1468edc9a2c6a4ea5fee928d35a59cfabce983f0a83a8324" Jan 30 18:34:33 crc kubenswrapper[4712]: E0130 18:34:33.131477 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58f3b4ecb7f930fe1468edc9a2c6a4ea5fee928d35a59cfabce983f0a83a8324\": container with ID starting with 58f3b4ecb7f930fe1468edc9a2c6a4ea5fee928d35a59cfabce983f0a83a8324 not found: ID does not exist" containerID="58f3b4ecb7f930fe1468edc9a2c6a4ea5fee928d35a59cfabce983f0a83a8324" Jan 30 18:34:33 crc kubenswrapper[4712]: I0130 18:34:33.131505 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58f3b4ecb7f930fe1468edc9a2c6a4ea5fee928d35a59cfabce983f0a83a8324"} err="failed to get container status \"58f3b4ecb7f930fe1468edc9a2c6a4ea5fee928d35a59cfabce983f0a83a8324\": rpc error: code = NotFound desc = could not find container \"58f3b4ecb7f930fe1468edc9a2c6a4ea5fee928d35a59cfabce983f0a83a8324\": container with ID starting with 58f3b4ecb7f930fe1468edc9a2c6a4ea5fee928d35a59cfabce983f0a83a8324 not found: ID does not exist" Jan 30 18:34:33 crc kubenswrapper[4712]: I0130 18:34:33.820837 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8117457-8b50-47e3-adab-2d674dae69e4" path="/var/lib/kubelet/pods/c8117457-8b50-47e3-adab-2d674dae69e4/volumes" Jan 30 18:34:35 crc kubenswrapper[4712]: I0130 18:34:35.800190 4712 scope.go:117] "RemoveContainer" containerID="6c298cf76a16e19be8ed7e8074e679a0563a2056c2bd7cd9261263da6571eb23" Jan 30 18:34:35 crc kubenswrapper[4712]: E0130 18:34:35.801350 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:34:49 crc kubenswrapper[4712]: I0130 18:34:49.801685 4712 scope.go:117] "RemoveContainer" containerID="6c298cf76a16e19be8ed7e8074e679a0563a2056c2bd7cd9261263da6571eb23" Jan 30 18:34:49 crc kubenswrapper[4712]: E0130 18:34:49.802486 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:35:04 crc kubenswrapper[4712]: I0130 18:35:04.800240 4712 scope.go:117] "RemoveContainer" containerID="6c298cf76a16e19be8ed7e8074e679a0563a2056c2bd7cd9261263da6571eb23" Jan 30 18:35:04 crc 
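[Editor's note] The RemoveContainer / NotFound sequence above is a benign race: the container is deleted, then a follow-up ContainerStatus call to the CRI runtime finds it already gone and returns gRPC NotFound, which the kubelet logs and moves past. A sketch of how such an error can be recognized with the standard gRPC status package; isBenignNotFound is a hypothetical helper, not kubelet code:

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// isBenignNotFound reports whether err is a gRPC NotFound, i.e. the
// container was already removed when the runtime was asked about it.
func isBenignNotFound(err error) bool {
	return status.Code(err) == codes.NotFound
}

func main() {
	err := status.Error(codes.NotFound, `could not find container "8b3c77..."`)
	fmt.Println(isBenignNotFound(err)) // true: worth logging, but not a cleanup failure
}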
Jan 30 18:35:18 crc kubenswrapper[4712]: I0130 18:35:18.800141 4712 scope.go:117] "RemoveContainer" containerID="6c298cf76a16e19be8ed7e8074e679a0563a2056c2bd7cd9261263da6571eb23"
Jan 30 18:35:18 crc kubenswrapper[4712]: E0130 18:35:18.801051 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 18:35:33 crc kubenswrapper[4712]: I0130 18:35:33.808309 4712 scope.go:117] "RemoveContainer" containerID="6c298cf76a16e19be8ed7e8074e679a0563a2056c2bd7cd9261263da6571eb23"
Jan 30 18:35:33 crc kubenswrapper[4712]: E0130 18:35:33.809310 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 18:35:48 crc kubenswrapper[4712]: I0130 18:35:48.800642 4712 scope.go:117] "RemoveContainer" containerID="6c298cf76a16e19be8ed7e8074e679a0563a2056c2bd7cd9261263da6571eb23"
Jan 30 18:35:49 crc kubenswrapper[4712]: I0130 18:35:49.810155 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerStarted","Data":"9b2819ffe3e76ed4f9abb055155e9b9d939b960a1201c768141fbdb99412790f"}
Jan 30 18:37:23 crc kubenswrapper[4712]: I0130 18:37:23.009585 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hctp7"]
Jan 30 18:37:23 crc kubenswrapper[4712]: E0130 18:37:23.010780 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8117457-8b50-47e3-adab-2d674dae69e4" containerName="extract-content"
Jan 30 18:37:23 crc kubenswrapper[4712]: I0130 18:37:23.010832 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8117457-8b50-47e3-adab-2d674dae69e4" containerName="extract-content"
Jan 30 18:37:23 crc kubenswrapper[4712]: E0130 18:37:23.010891 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8117457-8b50-47e3-adab-2d674dae69e4" containerName="registry-server"
Jan 30 18:37:23 crc kubenswrapper[4712]: I0130 18:37:23.010905 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8117457-8b50-47e3-adab-2d674dae69e4" containerName="registry-server"
Jan 30 18:37:23 crc kubenswrapper[4712]: E0130 18:37:23.010951 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8117457-8b50-47e3-adab-2d674dae69e4" containerName="extract-utilities"
Jan 30 18:37:23 crc kubenswrapper[4712]: I0130 18:37:23.010965 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8117457-8b50-47e3-adab-2d674dae69e4" containerName="extract-utilities"
Jan 30 18:37:23 crc kubenswrapper[4712]: I0130 18:37:23.011310 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8117457-8b50-47e3-adab-2d674dae69e4" containerName="registry-server"
Jan 30 18:37:23 crc kubenswrapper[4712]: I0130 18:37:23.013613 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hctp7"
Jan 30 18:37:23 crc kubenswrapper[4712]: I0130 18:37:23.043415 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hctp7"]
Jan 30 18:37:23 crc kubenswrapper[4712]: I0130 18:37:23.055809 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbw29\" (UniqueName: \"kubernetes.io/projected/eda2e9d2-b147-4c9e-9cb4-86512a071cba-kube-api-access-sbw29\") pod \"redhat-operators-hctp7\" (UID: \"eda2e9d2-b147-4c9e-9cb4-86512a071cba\") " pod="openshift-marketplace/redhat-operators-hctp7"
Jan 30 18:37:23 crc kubenswrapper[4712]: I0130 18:37:23.055862 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eda2e9d2-b147-4c9e-9cb4-86512a071cba-utilities\") pod \"redhat-operators-hctp7\" (UID: \"eda2e9d2-b147-4c9e-9cb4-86512a071cba\") " pod="openshift-marketplace/redhat-operators-hctp7"
Jan 30 18:37:23 crc kubenswrapper[4712]: I0130 18:37:23.055898 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eda2e9d2-b147-4c9e-9cb4-86512a071cba-catalog-content\") pod \"redhat-operators-hctp7\" (UID: \"eda2e9d2-b147-4c9e-9cb4-86512a071cba\") " pod="openshift-marketplace/redhat-operators-hctp7"
Jan 30 18:37:23 crc kubenswrapper[4712]: I0130 18:37:23.191966 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eda2e9d2-b147-4c9e-9cb4-86512a071cba-catalog-content\") pod \"redhat-operators-hctp7\" (UID: \"eda2e9d2-b147-4c9e-9cb4-86512a071cba\") " pod="openshift-marketplace/redhat-operators-hctp7"
Jan 30 18:37:23 crc kubenswrapper[4712]: I0130 18:37:23.192847 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbw29\" (UniqueName: \"kubernetes.io/projected/eda2e9d2-b147-4c9e-9cb4-86512a071cba-kube-api-access-sbw29\") pod \"redhat-operators-hctp7\" (UID: \"eda2e9d2-b147-4c9e-9cb4-86512a071cba\") " pod="openshift-marketplace/redhat-operators-hctp7"
Jan 30 18:37:23 crc kubenswrapper[4712]: I0130 18:37:23.192952 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eda2e9d2-b147-4c9e-9cb4-86512a071cba-utilities\") pod \"redhat-operators-hctp7\" (UID: \"eda2e9d2-b147-4c9e-9cb4-86512a071cba\") " pod="openshift-marketplace/redhat-operators-hctp7"
Jan 30 18:37:23 crc kubenswrapper[4712]: I0130 18:37:23.193344 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eda2e9d2-b147-4c9e-9cb4-86512a071cba-catalog-content\") pod \"redhat-operators-hctp7\" (UID: \"eda2e9d2-b147-4c9e-9cb4-86512a071cba\") " pod="openshift-marketplace/redhat-operators-hctp7"
Jan 30 18:37:23 crc kubenswrapper[4712]: I0130 18:37:23.193522 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eda2e9d2-b147-4c9e-9cb4-86512a071cba-utilities\") pod \"redhat-operators-hctp7\" (UID: \"eda2e9d2-b147-4c9e-9cb4-86512a071cba\") " pod="openshift-marketplace/redhat-operators-hctp7"
Jan 30 18:37:23 crc kubenswrapper[4712]: I0130 18:37:23.218572 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbw29\" (UniqueName: \"kubernetes.io/projected/eda2e9d2-b147-4c9e-9cb4-86512a071cba-kube-api-access-sbw29\") pod \"redhat-operators-hctp7\" (UID: \"eda2e9d2-b147-4c9e-9cb4-86512a071cba\") " pod="openshift-marketplace/redhat-operators-hctp7"
Jan 30 18:37:23 crc kubenswrapper[4712]: I0130 18:37:23.343291 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hctp7"
Jan 30 18:37:23 crc kubenswrapper[4712]: I0130 18:37:23.834048 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hctp7"]
Jan 30 18:37:24 crc kubenswrapper[4712]: I0130 18:37:24.768167 4712 generic.go:334] "Generic (PLEG): container finished" podID="eda2e9d2-b147-4c9e-9cb4-86512a071cba" containerID="438badd86286e5fd866ac1932dbc18308fffd5059459dcfe589b2be21175bdd9" exitCode=0
Jan 30 18:37:24 crc kubenswrapper[4712]: I0130 18:37:24.768380 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hctp7" event={"ID":"eda2e9d2-b147-4c9e-9cb4-86512a071cba","Type":"ContainerDied","Data":"438badd86286e5fd866ac1932dbc18308fffd5059459dcfe589b2be21175bdd9"}
Jan 30 18:37:24 crc kubenswrapper[4712]: I0130 18:37:24.768491 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hctp7" event={"ID":"eda2e9d2-b147-4c9e-9cb4-86512a071cba","Type":"ContainerStarted","Data":"61f2fc702096c6fd1396a90edb98e377a927bd28d49cbfcbf654a8a5f87a9d03"}
Jan 30 18:37:25 crc kubenswrapper[4712]: I0130 18:37:25.778535 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hctp7" event={"ID":"eda2e9d2-b147-4c9e-9cb4-86512a071cba","Type":"ContainerStarted","Data":"8d41ad79ca4a3b2a03ecb833d583d748d31a6645e677e1f8749120bbfc855a69"}
Jan 30 18:37:32 crc kubenswrapper[4712]: I0130 18:37:32.855789 4712 generic.go:334] "Generic (PLEG): container finished" podID="eda2e9d2-b147-4c9e-9cb4-86512a071cba" containerID="8d41ad79ca4a3b2a03ecb833d583d748d31a6645e677e1f8749120bbfc855a69" exitCode=0
Jan 30 18:37:32 crc kubenswrapper[4712]: I0130 18:37:32.855938 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hctp7" event={"ID":"eda2e9d2-b147-4c9e-9cb4-86512a071cba","Type":"ContainerDied","Data":"8d41ad79ca4a3b2a03ecb833d583d748d31a6645e677e1f8749120bbfc855a69"}
Jan 30 18:37:33 crc kubenswrapper[4712]: I0130 18:37:33.868577 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hctp7" event={"ID":"eda2e9d2-b147-4c9e-9cb4-86512a071cba","Type":"ContainerStarted","Data":"96b1607d230ce0b6c656af870ceed2bc834fa75e512a7ba41b04d1b81115981f"}
Jan 30 18:37:33 crc kubenswrapper[4712]: I0130 18:37:33.897119 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hctp7" podStartSLOduration=3.339588683 podStartE2EDuration="11.897096234s" podCreationTimestamp="2026-01-30 18:37:22 +0000 UTC" firstStartedPulling="2026-01-30 18:37:24.772550742 +0000 UTC m=+6181.679560211" lastFinishedPulling="2026-01-30 18:37:33.330058293 +0000 UTC m=+6190.237067762" observedRunningTime="2026-01-30 18:37:33.893865866 +0000 UTC m=+6190.800875335" watchObservedRunningTime="2026-01-30 18:37:33.897096234 +0000 UTC m=+6190.804105723"
Jan 30 18:37:43 crc kubenswrapper[4712]: I0130 18:37:43.343973 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hctp7"
Jan 30 18:37:43 crc kubenswrapper[4712]: I0130 18:37:43.344642 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hctp7"
Jan 30 18:37:44 crc kubenswrapper[4712]: I0130 18:37:44.401236 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hctp7" podUID="eda2e9d2-b147-4c9e-9cb4-86512a071cba" containerName="registry-server" probeResult="failure" output=<
Jan 30 18:37:44 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 18:37:44 crc kubenswrapper[4712]: >
Jan 30 18:37:54 crc kubenswrapper[4712]: I0130 18:37:54.391229 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hctp7" podUID="eda2e9d2-b147-4c9e-9cb4-86512a071cba" containerName="registry-server" probeResult="failure" output=<
Jan 30 18:37:54 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 18:37:54 crc kubenswrapper[4712]: >
Jan 30 18:38:04 crc kubenswrapper[4712]: I0130 18:38:04.392956 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hctp7" podUID="eda2e9d2-b147-4c9e-9cb4-86512a071cba" containerName="registry-server" probeResult="failure" output=<
Jan 30 18:38:04 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 18:38:04 crc kubenswrapper[4712]: >
Jan 30 18:38:06 crc kubenswrapper[4712]: I0130 18:38:06.271404 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 18:38:06 crc kubenswrapper[4712]: I0130 18:38:06.271488 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 18:38:14 crc kubenswrapper[4712]: I0130 18:38:14.397146 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hctp7" podUID="eda2e9d2-b147-4c9e-9cb4-86512a071cba" containerName="registry-server" probeResult="failure" output=<
Jan 30 18:38:14 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 18:38:14 crc kubenswrapper[4712]: >
Jan 30 18:38:23 crc kubenswrapper[4712]: I0130 18:38:23.394860 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hctp7"
Jan 30 18:38:23 crc kubenswrapper[4712]: I0130 18:38:23.449381 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hctp7"
Jan 30 18:38:24 crc kubenswrapper[4712]: I0130 18:38:24.379942 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hctp7"]
pods=["openshift-marketplace/redhat-operators-hctp7"] Jan 30 18:38:25 crc kubenswrapper[4712]: I0130 18:38:25.393815 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hctp7" podUID="eda2e9d2-b147-4c9e-9cb4-86512a071cba" containerName="registry-server" containerID="cri-o://96b1607d230ce0b6c656af870ceed2bc834fa75e512a7ba41b04d1b81115981f" gracePeriod=2 Jan 30 18:38:26 crc kubenswrapper[4712]: I0130 18:38:26.275907 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hctp7" Jan 30 18:38:26 crc kubenswrapper[4712]: I0130 18:38:26.380026 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbw29\" (UniqueName: \"kubernetes.io/projected/eda2e9d2-b147-4c9e-9cb4-86512a071cba-kube-api-access-sbw29\") pod \"eda2e9d2-b147-4c9e-9cb4-86512a071cba\" (UID: \"eda2e9d2-b147-4c9e-9cb4-86512a071cba\") " Jan 30 18:38:26 crc kubenswrapper[4712]: I0130 18:38:26.380288 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eda2e9d2-b147-4c9e-9cb4-86512a071cba-catalog-content\") pod \"eda2e9d2-b147-4c9e-9cb4-86512a071cba\" (UID: \"eda2e9d2-b147-4c9e-9cb4-86512a071cba\") " Jan 30 18:38:26 crc kubenswrapper[4712]: I0130 18:38:26.380327 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eda2e9d2-b147-4c9e-9cb4-86512a071cba-utilities\") pod \"eda2e9d2-b147-4c9e-9cb4-86512a071cba\" (UID: \"eda2e9d2-b147-4c9e-9cb4-86512a071cba\") " Jan 30 18:38:26 crc kubenswrapper[4712]: I0130 18:38:26.380921 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eda2e9d2-b147-4c9e-9cb4-86512a071cba-utilities" (OuterVolumeSpecName: "utilities") pod "eda2e9d2-b147-4c9e-9cb4-86512a071cba" (UID: "eda2e9d2-b147-4c9e-9cb4-86512a071cba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:38:26 crc kubenswrapper[4712]: I0130 18:38:26.387592 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eda2e9d2-b147-4c9e-9cb4-86512a071cba-kube-api-access-sbw29" (OuterVolumeSpecName: "kube-api-access-sbw29") pod "eda2e9d2-b147-4c9e-9cb4-86512a071cba" (UID: "eda2e9d2-b147-4c9e-9cb4-86512a071cba"). InnerVolumeSpecName "kube-api-access-sbw29". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:38:26 crc kubenswrapper[4712]: I0130 18:38:26.405095 4712 generic.go:334] "Generic (PLEG): container finished" podID="eda2e9d2-b147-4c9e-9cb4-86512a071cba" containerID="96b1607d230ce0b6c656af870ceed2bc834fa75e512a7ba41b04d1b81115981f" exitCode=0 Jan 30 18:38:26 crc kubenswrapper[4712]: I0130 18:38:26.405169 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hctp7" event={"ID":"eda2e9d2-b147-4c9e-9cb4-86512a071cba","Type":"ContainerDied","Data":"96b1607d230ce0b6c656af870ceed2bc834fa75e512a7ba41b04d1b81115981f"} Jan 30 18:38:26 crc kubenswrapper[4712]: I0130 18:38:26.405201 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hctp7" event={"ID":"eda2e9d2-b147-4c9e-9cb4-86512a071cba","Type":"ContainerDied","Data":"61f2fc702096c6fd1396a90edb98e377a927bd28d49cbfcbf654a8a5f87a9d03"} Jan 30 18:38:26 crc kubenswrapper[4712]: I0130 18:38:26.405223 4712 scope.go:117] "RemoveContainer" containerID="96b1607d230ce0b6c656af870ceed2bc834fa75e512a7ba41b04d1b81115981f" Jan 30 18:38:26 crc kubenswrapper[4712]: I0130 18:38:26.405240 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hctp7" Jan 30 18:38:26 crc kubenswrapper[4712]: I0130 18:38:26.467918 4712 scope.go:117] "RemoveContainer" containerID="8d41ad79ca4a3b2a03ecb833d583d748d31a6645e677e1f8749120bbfc855a69" Jan 30 18:38:26 crc kubenswrapper[4712]: I0130 18:38:26.484365 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eda2e9d2-b147-4c9e-9cb4-86512a071cba-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 18:38:26 crc kubenswrapper[4712]: I0130 18:38:26.484396 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sbw29\" (UniqueName: \"kubernetes.io/projected/eda2e9d2-b147-4c9e-9cb4-86512a071cba-kube-api-access-sbw29\") on node \"crc\" DevicePath \"\"" Jan 30 18:38:26 crc kubenswrapper[4712]: I0130 18:38:26.488992 4712 scope.go:117] "RemoveContainer" containerID="438badd86286e5fd866ac1932dbc18308fffd5059459dcfe589b2be21175bdd9" Jan 30 18:38:26 crc kubenswrapper[4712]: I0130 18:38:26.543938 4712 scope.go:117] "RemoveContainer" containerID="96b1607d230ce0b6c656af870ceed2bc834fa75e512a7ba41b04d1b81115981f" Jan 30 18:38:26 crc kubenswrapper[4712]: E0130 18:38:26.544490 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96b1607d230ce0b6c656af870ceed2bc834fa75e512a7ba41b04d1b81115981f\": container with ID starting with 96b1607d230ce0b6c656af870ceed2bc834fa75e512a7ba41b04d1b81115981f not found: ID does not exist" containerID="96b1607d230ce0b6c656af870ceed2bc834fa75e512a7ba41b04d1b81115981f" Jan 30 18:38:26 crc kubenswrapper[4712]: I0130 18:38:26.544530 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96b1607d230ce0b6c656af870ceed2bc834fa75e512a7ba41b04d1b81115981f"} err="failed to get container status \"96b1607d230ce0b6c656af870ceed2bc834fa75e512a7ba41b04d1b81115981f\": rpc error: code = NotFound desc = could not find container \"96b1607d230ce0b6c656af870ceed2bc834fa75e512a7ba41b04d1b81115981f\": container with ID starting with 96b1607d230ce0b6c656af870ceed2bc834fa75e512a7ba41b04d1b81115981f not found: ID does not exist" Jan 30 18:38:26 crc kubenswrapper[4712]: I0130 18:38:26.544571 4712 scope.go:117] 
"RemoveContainer" containerID="8d41ad79ca4a3b2a03ecb833d583d748d31a6645e677e1f8749120bbfc855a69" Jan 30 18:38:26 crc kubenswrapper[4712]: E0130 18:38:26.544863 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d41ad79ca4a3b2a03ecb833d583d748d31a6645e677e1f8749120bbfc855a69\": container with ID starting with 8d41ad79ca4a3b2a03ecb833d583d748d31a6645e677e1f8749120bbfc855a69 not found: ID does not exist" containerID="8d41ad79ca4a3b2a03ecb833d583d748d31a6645e677e1f8749120bbfc855a69" Jan 30 18:38:26 crc kubenswrapper[4712]: I0130 18:38:26.544882 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d41ad79ca4a3b2a03ecb833d583d748d31a6645e677e1f8749120bbfc855a69"} err="failed to get container status \"8d41ad79ca4a3b2a03ecb833d583d748d31a6645e677e1f8749120bbfc855a69\": rpc error: code = NotFound desc = could not find container \"8d41ad79ca4a3b2a03ecb833d583d748d31a6645e677e1f8749120bbfc855a69\": container with ID starting with 8d41ad79ca4a3b2a03ecb833d583d748d31a6645e677e1f8749120bbfc855a69 not found: ID does not exist" Jan 30 18:38:26 crc kubenswrapper[4712]: I0130 18:38:26.544911 4712 scope.go:117] "RemoveContainer" containerID="438badd86286e5fd866ac1932dbc18308fffd5059459dcfe589b2be21175bdd9" Jan 30 18:38:26 crc kubenswrapper[4712]: E0130 18:38:26.545252 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"438badd86286e5fd866ac1932dbc18308fffd5059459dcfe589b2be21175bdd9\": container with ID starting with 438badd86286e5fd866ac1932dbc18308fffd5059459dcfe589b2be21175bdd9 not found: ID does not exist" containerID="438badd86286e5fd866ac1932dbc18308fffd5059459dcfe589b2be21175bdd9" Jan 30 18:38:26 crc kubenswrapper[4712]: I0130 18:38:26.545301 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"438badd86286e5fd866ac1932dbc18308fffd5059459dcfe589b2be21175bdd9"} err="failed to get container status \"438badd86286e5fd866ac1932dbc18308fffd5059459dcfe589b2be21175bdd9\": rpc error: code = NotFound desc = could not find container \"438badd86286e5fd866ac1932dbc18308fffd5059459dcfe589b2be21175bdd9\": container with ID starting with 438badd86286e5fd866ac1932dbc18308fffd5059459dcfe589b2be21175bdd9 not found: ID does not exist" Jan 30 18:38:26 crc kubenswrapper[4712]: I0130 18:38:26.545513 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eda2e9d2-b147-4c9e-9cb4-86512a071cba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eda2e9d2-b147-4c9e-9cb4-86512a071cba" (UID: "eda2e9d2-b147-4c9e-9cb4-86512a071cba"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:38:26 crc kubenswrapper[4712]: I0130 18:38:26.585965 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eda2e9d2-b147-4c9e-9cb4-86512a071cba-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 18:38:26 crc kubenswrapper[4712]: I0130 18:38:26.743230 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hctp7"] Jan 30 18:38:26 crc kubenswrapper[4712]: I0130 18:38:26.750633 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hctp7"] Jan 30 18:38:27 crc kubenswrapper[4712]: I0130 18:38:27.814444 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eda2e9d2-b147-4c9e-9cb4-86512a071cba" path="/var/lib/kubelet/pods/eda2e9d2-b147-4c9e-9cb4-86512a071cba/volumes" Jan 30 18:38:36 crc kubenswrapper[4712]: I0130 18:38:36.271238 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 18:38:36 crc kubenswrapper[4712]: I0130 18:38:36.271705 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 18:39:06 crc kubenswrapper[4712]: I0130 18:39:06.270861 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 18:39:06 crc kubenswrapper[4712]: I0130 18:39:06.271596 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 18:39:06 crc kubenswrapper[4712]: I0130 18:39:06.271663 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" Jan 30 18:39:06 crc kubenswrapper[4712]: I0130 18:39:06.272510 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9b2819ffe3e76ed4f9abb055155e9b9d939b960a1201c768141fbdb99412790f"} pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 18:39:06 crc kubenswrapper[4712]: I0130 18:39:06.272579 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" containerID="cri-o://9b2819ffe3e76ed4f9abb055155e9b9d939b960a1201c768141fbdb99412790f" gracePeriod=600 Jan 30 18:39:06 crc kubenswrapper[4712]: I0130 18:39:06.811494 4712 generic.go:334] "Generic (PLEG): container 
finished" podID="75ff6334-72a0-4748-bba6-0efb493c8033" containerID="9b2819ffe3e76ed4f9abb055155e9b9d939b960a1201c768141fbdb99412790f" exitCode=0 Jan 30 18:39:06 crc kubenswrapper[4712]: I0130 18:39:06.811556 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerDied","Data":"9b2819ffe3e76ed4f9abb055155e9b9d939b960a1201c768141fbdb99412790f"} Jan 30 18:39:06 crc kubenswrapper[4712]: I0130 18:39:06.811964 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerStarted","Data":"ffefb42f903798bc1d834430f92372f13a61068f2aecfe189f32e02747461feb"} Jan 30 18:39:06 crc kubenswrapper[4712]: I0130 18:39:06.811989 4712 scope.go:117] "RemoveContainer" containerID="6c298cf76a16e19be8ed7e8074e679a0563a2056c2bd7cd9261263da6571eb23" Jan 30 18:41:06 crc kubenswrapper[4712]: I0130 18:41:06.271957 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 18:41:06 crc kubenswrapper[4712]: I0130 18:41:06.272641 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 18:41:36 crc kubenswrapper[4712]: I0130 18:41:36.270930 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 18:41:36 crc kubenswrapper[4712]: I0130 18:41:36.273079 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 18:42:06 crc kubenswrapper[4712]: I0130 18:42:06.271179 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 18:42:06 crc kubenswrapper[4712]: I0130 18:42:06.271994 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 18:42:06 crc kubenswrapper[4712]: I0130 18:42:06.272044 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" Jan 30 18:42:06 crc kubenswrapper[4712]: I0130 18:42:06.272942 4712 kuberuntime_manager.go:1027] "Message 
for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ffefb42f903798bc1d834430f92372f13a61068f2aecfe189f32e02747461feb"} pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 18:42:06 crc kubenswrapper[4712]: I0130 18:42:06.273008 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" containerID="cri-o://ffefb42f903798bc1d834430f92372f13a61068f2aecfe189f32e02747461feb" gracePeriod=600 Jan 30 18:42:06 crc kubenswrapper[4712]: E0130 18:42:06.404398 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:42:06 crc kubenswrapper[4712]: I0130 18:42:06.596452 4712 generic.go:334] "Generic (PLEG): container finished" podID="75ff6334-72a0-4748-bba6-0efb493c8033" containerID="ffefb42f903798bc1d834430f92372f13a61068f2aecfe189f32e02747461feb" exitCode=0 Jan 30 18:42:06 crc kubenswrapper[4712]: I0130 18:42:06.596505 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerDied","Data":"ffefb42f903798bc1d834430f92372f13a61068f2aecfe189f32e02747461feb"} Jan 30 18:42:06 crc kubenswrapper[4712]: I0130 18:42:06.596546 4712 scope.go:117] "RemoveContainer" containerID="9b2819ffe3e76ed4f9abb055155e9b9d939b960a1201c768141fbdb99412790f" Jan 30 18:42:06 crc kubenswrapper[4712]: I0130 18:42:06.597824 4712 scope.go:117] "RemoveContainer" containerID="ffefb42f903798bc1d834430f92372f13a61068f2aecfe189f32e02747461feb" Jan 30 18:42:06 crc kubenswrapper[4712]: E0130 18:42:06.599094 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:42:16 crc kubenswrapper[4712]: I0130 18:42:16.799833 4712 scope.go:117] "RemoveContainer" containerID="ffefb42f903798bc1d834430f92372f13a61068f2aecfe189f32e02747461feb" Jan 30 18:42:16 crc kubenswrapper[4712]: E0130 18:42:16.800426 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:42:29 crc kubenswrapper[4712]: I0130 18:42:29.799600 4712 scope.go:117] "RemoveContainer" containerID="ffefb42f903798bc1d834430f92372f13a61068f2aecfe189f32e02747461feb" Jan 30 18:42:29 crc kubenswrapper[4712]: E0130 
18:42:29.800412 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:42:41 crc kubenswrapper[4712]: I0130 18:42:41.800584 4712 scope.go:117] "RemoveContainer" containerID="ffefb42f903798bc1d834430f92372f13a61068f2aecfe189f32e02747461feb" Jan 30 18:42:41 crc kubenswrapper[4712]: E0130 18:42:41.801263 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:42:53 crc kubenswrapper[4712]: I0130 18:42:53.806387 4712 scope.go:117] "RemoveContainer" containerID="ffefb42f903798bc1d834430f92372f13a61068f2aecfe189f32e02747461feb" Jan 30 18:42:53 crc kubenswrapper[4712]: E0130 18:42:53.807170 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:43:08 crc kubenswrapper[4712]: I0130 18:43:08.799812 4712 scope.go:117] "RemoveContainer" containerID="ffefb42f903798bc1d834430f92372f13a61068f2aecfe189f32e02747461feb" Jan 30 18:43:08 crc kubenswrapper[4712]: E0130 18:43:08.800537 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:43:20 crc kubenswrapper[4712]: I0130 18:43:20.800328 4712 scope.go:117] "RemoveContainer" containerID="ffefb42f903798bc1d834430f92372f13a61068f2aecfe189f32e02747461feb" Jan 30 18:43:20 crc kubenswrapper[4712]: E0130 18:43:20.801698 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:43:32 crc kubenswrapper[4712]: I0130 18:43:32.799727 4712 scope.go:117] "RemoveContainer" containerID="ffefb42f903798bc1d834430f92372f13a61068f2aecfe189f32e02747461feb" Jan 30 18:43:32 crc kubenswrapper[4712]: E0130 18:43:32.800898 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:43:45 crc kubenswrapper[4712]: I0130 18:43:45.800831 4712 scope.go:117] "RemoveContainer" containerID="ffefb42f903798bc1d834430f92372f13a61068f2aecfe189f32e02747461feb" Jan 30 18:43:45 crc kubenswrapper[4712]: E0130 18:43:45.801617 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:43:49 crc kubenswrapper[4712]: I0130 18:43:49.992043 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zgftd"] Jan 30 18:43:49 crc kubenswrapper[4712]: E0130 18:43:49.993981 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda2e9d2-b147-4c9e-9cb4-86512a071cba" containerName="extract-utilities" Jan 30 18:43:49 crc kubenswrapper[4712]: I0130 18:43:49.994028 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda2e9d2-b147-4c9e-9cb4-86512a071cba" containerName="extract-utilities" Jan 30 18:43:49 crc kubenswrapper[4712]: E0130 18:43:49.994090 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda2e9d2-b147-4c9e-9cb4-86512a071cba" containerName="extract-content" Jan 30 18:43:49 crc kubenswrapper[4712]: I0130 18:43:49.994108 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda2e9d2-b147-4c9e-9cb4-86512a071cba" containerName="extract-content" Jan 30 18:43:49 crc kubenswrapper[4712]: E0130 18:43:49.994347 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eda2e9d2-b147-4c9e-9cb4-86512a071cba" containerName="registry-server" Jan 30 18:43:49 crc kubenswrapper[4712]: I0130 18:43:49.994368 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="eda2e9d2-b147-4c9e-9cb4-86512a071cba" containerName="registry-server" Jan 30 18:43:49 crc kubenswrapper[4712]: I0130 18:43:49.995762 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="eda2e9d2-b147-4c9e-9cb4-86512a071cba" containerName="registry-server" Jan 30 18:43:49 crc kubenswrapper[4712]: I0130 18:43:49.997587 4712 util.go:30] "No sandbox for pod can be found. 
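[Editor's note] "back-off 5m0s" in the entries above is the kubelet's container restart back-off at its cap: the daemon keeps failing its liveness probe, and each restart attempt waits longer before the kubelet tries again. A sketch of the commonly documented schedule (10s base, doubling to a 5m cap; the base and doubling factor are assumptions here, only the 5m cap appears in this log):

package main

import (
	"fmt"
	"time"
)

func main() {
	backoff := 10 * time.Second
	const maxBackoff = 5 * time.Minute
	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("restart attempt %d: wait %v\n", attempt, backoff)
		backoff *= 2
		if backoff > maxBackoff {
			backoff = maxBackoff // the "back-off 5m0s" plateau seen above
		}
	}
	// Prints: 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s.
}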
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zgftd" Jan 30 18:43:50 crc kubenswrapper[4712]: I0130 18:43:50.008768 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgftd"] Jan 30 18:43:50 crc kubenswrapper[4712]: I0130 18:43:50.080169 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/282f00b7-2777-44ec-85fc-9751a6148a99-catalog-content\") pod \"redhat-marketplace-zgftd\" (UID: \"282f00b7-2777-44ec-85fc-9751a6148a99\") " pod="openshift-marketplace/redhat-marketplace-zgftd" Jan 30 18:43:50 crc kubenswrapper[4712]: I0130 18:43:50.080256 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkvh7\" (UniqueName: \"kubernetes.io/projected/282f00b7-2777-44ec-85fc-9751a6148a99-kube-api-access-nkvh7\") pod \"redhat-marketplace-zgftd\" (UID: \"282f00b7-2777-44ec-85fc-9751a6148a99\") " pod="openshift-marketplace/redhat-marketplace-zgftd" Jan 30 18:43:50 crc kubenswrapper[4712]: I0130 18:43:50.080291 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/282f00b7-2777-44ec-85fc-9751a6148a99-utilities\") pod \"redhat-marketplace-zgftd\" (UID: \"282f00b7-2777-44ec-85fc-9751a6148a99\") " pod="openshift-marketplace/redhat-marketplace-zgftd" Jan 30 18:43:50 crc kubenswrapper[4712]: I0130 18:43:50.181915 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/282f00b7-2777-44ec-85fc-9751a6148a99-utilities\") pod \"redhat-marketplace-zgftd\" (UID: \"282f00b7-2777-44ec-85fc-9751a6148a99\") " pod="openshift-marketplace/redhat-marketplace-zgftd" Jan 30 18:43:50 crc kubenswrapper[4712]: I0130 18:43:50.182516 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/282f00b7-2777-44ec-85fc-9751a6148a99-catalog-content\") pod \"redhat-marketplace-zgftd\" (UID: \"282f00b7-2777-44ec-85fc-9751a6148a99\") " pod="openshift-marketplace/redhat-marketplace-zgftd" Jan 30 18:43:50 crc kubenswrapper[4712]: I0130 18:43:50.182589 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkvh7\" (UniqueName: \"kubernetes.io/projected/282f00b7-2777-44ec-85fc-9751a6148a99-kube-api-access-nkvh7\") pod \"redhat-marketplace-zgftd\" (UID: \"282f00b7-2777-44ec-85fc-9751a6148a99\") " pod="openshift-marketplace/redhat-marketplace-zgftd" Jan 30 18:43:50 crc kubenswrapper[4712]: I0130 18:43:50.182511 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/282f00b7-2777-44ec-85fc-9751a6148a99-utilities\") pod \"redhat-marketplace-zgftd\" (UID: \"282f00b7-2777-44ec-85fc-9751a6148a99\") " pod="openshift-marketplace/redhat-marketplace-zgftd" Jan 30 18:43:50 crc kubenswrapper[4712]: I0130 18:43:50.182730 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/282f00b7-2777-44ec-85fc-9751a6148a99-catalog-content\") pod \"redhat-marketplace-zgftd\" (UID: \"282f00b7-2777-44ec-85fc-9751a6148a99\") " pod="openshift-marketplace/redhat-marketplace-zgftd" Jan 30 18:43:50 crc kubenswrapper[4712]: I0130 18:43:50.225777 4712 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-nkvh7\" (UniqueName: \"kubernetes.io/projected/282f00b7-2777-44ec-85fc-9751a6148a99-kube-api-access-nkvh7\") pod \"redhat-marketplace-zgftd\" (UID: \"282f00b7-2777-44ec-85fc-9751a6148a99\") " pod="openshift-marketplace/redhat-marketplace-zgftd" Jan 30 18:43:50 crc kubenswrapper[4712]: I0130 18:43:50.315313 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zgftd" Jan 30 18:43:50 crc kubenswrapper[4712]: I0130 18:43:50.799545 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgftd"] Jan 30 18:43:51 crc kubenswrapper[4712]: I0130 18:43:51.711550 4712 generic.go:334] "Generic (PLEG): container finished" podID="282f00b7-2777-44ec-85fc-9751a6148a99" containerID="85a2656b299924ba8a3f5a6474daef1793629720b501ef03687b4a4dca811927" exitCode=0 Jan 30 18:43:51 crc kubenswrapper[4712]: I0130 18:43:51.711712 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgftd" event={"ID":"282f00b7-2777-44ec-85fc-9751a6148a99","Type":"ContainerDied","Data":"85a2656b299924ba8a3f5a6474daef1793629720b501ef03687b4a4dca811927"} Jan 30 18:43:51 crc kubenswrapper[4712]: I0130 18:43:51.711909 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgftd" event={"ID":"282f00b7-2777-44ec-85fc-9751a6148a99","Type":"ContainerStarted","Data":"fff11244eee1f9503c2be393a4a2ece9385e42eea665878b862f71c7d11fe024"} Jan 30 18:43:51 crc kubenswrapper[4712]: I0130 18:43:51.714204 4712 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 18:43:53 crc kubenswrapper[4712]: I0130 18:43:53.730127 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgftd" event={"ID":"282f00b7-2777-44ec-85fc-9751a6148a99","Type":"ContainerStarted","Data":"d915e1e51dcc1b6e73154778ed59701e3b9f431df64fec50a0f30da026e95dc4"} Jan 30 18:43:55 crc kubenswrapper[4712]: I0130 18:43:55.759926 4712 generic.go:334] "Generic (PLEG): container finished" podID="282f00b7-2777-44ec-85fc-9751a6148a99" containerID="d915e1e51dcc1b6e73154778ed59701e3b9f431df64fec50a0f30da026e95dc4" exitCode=0 Jan 30 18:43:55 crc kubenswrapper[4712]: I0130 18:43:55.760007 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgftd" event={"ID":"282f00b7-2777-44ec-85fc-9751a6148a99","Type":"ContainerDied","Data":"d915e1e51dcc1b6e73154778ed59701e3b9f431df64fec50a0f30da026e95dc4"} Jan 30 18:43:57 crc kubenswrapper[4712]: I0130 18:43:57.795325 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgftd" event={"ID":"282f00b7-2777-44ec-85fc-9751a6148a99","Type":"ContainerStarted","Data":"74ae7217c63a301220bc4dcaf5a5d710d9f79a5d3daceaa6e206f1e85ec2e9c4"} Jan 30 18:43:57 crc kubenswrapper[4712]: I0130 18:43:57.799984 4712 scope.go:117] "RemoveContainer" containerID="ffefb42f903798bc1d834430f92372f13a61068f2aecfe189f32e02747461feb" Jan 30 18:43:57 crc kubenswrapper[4712]: E0130 18:43:57.800266 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:43:57 crc kubenswrapper[4712]: I0130 18:43:57.819930 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zgftd" podStartSLOduration=3.579818505 podStartE2EDuration="8.819909393s" podCreationTimestamp="2026-01-30 18:43:49 +0000 UTC" firstStartedPulling="2026-01-30 18:43:51.71390252 +0000 UTC m=+6568.620911989" lastFinishedPulling="2026-01-30 18:43:56.953993398 +0000 UTC m=+6573.861002877" observedRunningTime="2026-01-30 18:43:57.815585778 +0000 UTC m=+6574.722595247" watchObservedRunningTime="2026-01-30 18:43:57.819909393 +0000 UTC m=+6574.726918852" Jan 30 18:43:58 crc kubenswrapper[4712]: I0130 18:43:58.865442 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-v7pck"] Jan 30 18:43:58 crc kubenswrapper[4712]: I0130 18:43:58.867491 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v7pck" Jan 30 18:43:58 crc kubenswrapper[4712]: I0130 18:43:58.887740 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v7pck"] Jan 30 18:43:58 crc kubenswrapper[4712]: I0130 18:43:58.957264 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sg9j\" (UniqueName: \"kubernetes.io/projected/99347680-bc5f-4a15-973c-c6474d4fba98-kube-api-access-2sg9j\") pod \"community-operators-v7pck\" (UID: \"99347680-bc5f-4a15-973c-c6474d4fba98\") " pod="openshift-marketplace/community-operators-v7pck" Jan 30 18:43:58 crc kubenswrapper[4712]: I0130 18:43:58.957382 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99347680-bc5f-4a15-973c-c6474d4fba98-utilities\") pod \"community-operators-v7pck\" (UID: \"99347680-bc5f-4a15-973c-c6474d4fba98\") " pod="openshift-marketplace/community-operators-v7pck" Jan 30 18:43:58 crc kubenswrapper[4712]: I0130 18:43:58.957428 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99347680-bc5f-4a15-973c-c6474d4fba98-catalog-content\") pod \"community-operators-v7pck\" (UID: \"99347680-bc5f-4a15-973c-c6474d4fba98\") " pod="openshift-marketplace/community-operators-v7pck" Jan 30 18:43:59 crc kubenswrapper[4712]: I0130 18:43:59.059563 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99347680-bc5f-4a15-973c-c6474d4fba98-utilities\") pod \"community-operators-v7pck\" (UID: \"99347680-bc5f-4a15-973c-c6474d4fba98\") " pod="openshift-marketplace/community-operators-v7pck" Jan 30 18:43:59 crc kubenswrapper[4712]: I0130 18:43:59.059740 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99347680-bc5f-4a15-973c-c6474d4fba98-catalog-content\") pod \"community-operators-v7pck\" (UID: \"99347680-bc5f-4a15-973c-c6474d4fba98\") " pod="openshift-marketplace/community-operators-v7pck" Jan 30 18:43:59 crc kubenswrapper[4712]: I0130 18:43:59.059910 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2sg9j\" (UniqueName: 
\"kubernetes.io/projected/99347680-bc5f-4a15-973c-c6474d4fba98-kube-api-access-2sg9j\") pod \"community-operators-v7pck\" (UID: \"99347680-bc5f-4a15-973c-c6474d4fba98\") " pod="openshift-marketplace/community-operators-v7pck" Jan 30 18:43:59 crc kubenswrapper[4712]: I0130 18:43:59.060096 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99347680-bc5f-4a15-973c-c6474d4fba98-utilities\") pod \"community-operators-v7pck\" (UID: \"99347680-bc5f-4a15-973c-c6474d4fba98\") " pod="openshift-marketplace/community-operators-v7pck" Jan 30 18:43:59 crc kubenswrapper[4712]: I0130 18:43:59.060242 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99347680-bc5f-4a15-973c-c6474d4fba98-catalog-content\") pod \"community-operators-v7pck\" (UID: \"99347680-bc5f-4a15-973c-c6474d4fba98\") " pod="openshift-marketplace/community-operators-v7pck" Jan 30 18:43:59 crc kubenswrapper[4712]: I0130 18:43:59.081306 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sg9j\" (UniqueName: \"kubernetes.io/projected/99347680-bc5f-4a15-973c-c6474d4fba98-kube-api-access-2sg9j\") pod \"community-operators-v7pck\" (UID: \"99347680-bc5f-4a15-973c-c6474d4fba98\") " pod="openshift-marketplace/community-operators-v7pck" Jan 30 18:43:59 crc kubenswrapper[4712]: I0130 18:43:59.185384 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-v7pck" Jan 30 18:44:00 crc kubenswrapper[4712]: I0130 18:44:00.315789 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zgftd" Jan 30 18:44:00 crc kubenswrapper[4712]: I0130 18:44:00.316372 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zgftd" Jan 30 18:44:00 crc kubenswrapper[4712]: I0130 18:44:00.317095 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-v7pck"] Jan 30 18:44:00 crc kubenswrapper[4712]: W0130 18:44:00.328031 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99347680_bc5f_4a15_973c_c6474d4fba98.slice/crio-2f53bf39b61a49b7b872a78ed7446b47066638a2806b37df016f5398983edd97 WatchSource:0}: Error finding container 2f53bf39b61a49b7b872a78ed7446b47066638a2806b37df016f5398983edd97: Status 404 returned error can't find the container with id 2f53bf39b61a49b7b872a78ed7446b47066638a2806b37df016f5398983edd97 Jan 30 18:44:00 crc kubenswrapper[4712]: I0130 18:44:00.388644 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zgftd" Jan 30 18:44:00 crc kubenswrapper[4712]: I0130 18:44:00.833687 4712 generic.go:334] "Generic (PLEG): container finished" podID="99347680-bc5f-4a15-973c-c6474d4fba98" containerID="8ddbfe0db7454e56226d573019d9506ad94a9e5a5dfc1074c471fff1acdecc8e" exitCode=0 Jan 30 18:44:00 crc kubenswrapper[4712]: I0130 18:44:00.834041 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7pck" event={"ID":"99347680-bc5f-4a15-973c-c6474d4fba98","Type":"ContainerDied","Data":"8ddbfe0db7454e56226d573019d9506ad94a9e5a5dfc1074c471fff1acdecc8e"} Jan 30 18:44:00 crc kubenswrapper[4712]: I0130 18:44:00.834115 4712 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/community-operators-v7pck" event={"ID":"99347680-bc5f-4a15-973c-c6474d4fba98","Type":"ContainerStarted","Data":"2f53bf39b61a49b7b872a78ed7446b47066638a2806b37df016f5398983edd97"} Jan 30 18:44:02 crc kubenswrapper[4712]: I0130 18:44:02.853727 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7pck" event={"ID":"99347680-bc5f-4a15-973c-c6474d4fba98","Type":"ContainerStarted","Data":"ddc2f9db15711ce199431f0659ca3045178b44d73a76c97ad9f8b465c94927e7"} Jan 30 18:44:07 crc kubenswrapper[4712]: I0130 18:44:07.901369 4712 generic.go:334] "Generic (PLEG): container finished" podID="99347680-bc5f-4a15-973c-c6474d4fba98" containerID="ddc2f9db15711ce199431f0659ca3045178b44d73a76c97ad9f8b465c94927e7" exitCode=0 Jan 30 18:44:07 crc kubenswrapper[4712]: I0130 18:44:07.901414 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7pck" event={"ID":"99347680-bc5f-4a15-973c-c6474d4fba98","Type":"ContainerDied","Data":"ddc2f9db15711ce199431f0659ca3045178b44d73a76c97ad9f8b465c94927e7"} Jan 30 18:44:07 crc kubenswrapper[4712]: E0130 18:44:07.908579 4712 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99347680_bc5f_4a15_973c_c6474d4fba98.slice/crio-ddc2f9db15711ce199431f0659ca3045178b44d73a76c97ad9f8b465c94927e7.scope\": RecentStats: unable to find data in memory cache]" Jan 30 18:44:08 crc kubenswrapper[4712]: I0130 18:44:08.912857 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7pck" event={"ID":"99347680-bc5f-4a15-973c-c6474d4fba98","Type":"ContainerStarted","Data":"e64255e1453de4701a6febf0564956817cc07bc1b5808f355cb3aa6218d61649"} Jan 30 18:44:09 crc kubenswrapper[4712]: I0130 18:44:09.185657 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-v7pck" Jan 30 18:44:09 crc kubenswrapper[4712]: I0130 18:44:09.186047 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-v7pck" Jan 30 18:44:10 crc kubenswrapper[4712]: I0130 18:44:10.242975 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-v7pck" podUID="99347680-bc5f-4a15-973c-c6474d4fba98" containerName="registry-server" probeResult="failure" output=< Jan 30 18:44:10 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 18:44:10 crc kubenswrapper[4712]: > Jan 30 18:44:10 crc kubenswrapper[4712]: I0130 18:44:10.364648 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zgftd" Jan 30 18:44:10 crc kubenswrapper[4712]: I0130 18:44:10.385578 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-v7pck" podStartSLOduration=4.810877548 podStartE2EDuration="12.385560288s" podCreationTimestamp="2026-01-30 18:43:58 +0000 UTC" firstStartedPulling="2026-01-30 18:44:00.836063503 +0000 UTC m=+6577.743072972" lastFinishedPulling="2026-01-30 18:44:08.410746243 +0000 UTC m=+6585.317755712" observedRunningTime="2026-01-30 18:44:08.941312976 +0000 UTC m=+6585.848322445" watchObservedRunningTime="2026-01-30 18:44:10.385560288 +0000 UTC m=+6587.292569757" Jan 30 18:44:10 crc kubenswrapper[4712]: I0130 18:44:10.429939 4712 
Jan 30 18:44:10 crc kubenswrapper[4712]: I0130 18:44:10.930463 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zgftd" podUID="282f00b7-2777-44ec-85fc-9751a6148a99" containerName="registry-server" containerID="cri-o://74ae7217c63a301220bc4dcaf5a5d710d9f79a5d3daceaa6e206f1e85ec2e9c4" gracePeriod=2
Jan 30 18:44:11 crc kubenswrapper[4712]: I0130 18:44:11.800773 4712 scope.go:117] "RemoveContainer" containerID="ffefb42f903798bc1d834430f92372f13a61068f2aecfe189f32e02747461feb"
Jan 30 18:44:11 crc kubenswrapper[4712]: E0130 18:44:11.801853 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 18:44:11 crc kubenswrapper[4712]: I0130 18:44:11.941193 4712 generic.go:334] "Generic (PLEG): container finished" podID="282f00b7-2777-44ec-85fc-9751a6148a99" containerID="74ae7217c63a301220bc4dcaf5a5d710d9f79a5d3daceaa6e206f1e85ec2e9c4" exitCode=0
Jan 30 18:44:11 crc kubenswrapper[4712]: I0130 18:44:11.941231 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgftd" event={"ID":"282f00b7-2777-44ec-85fc-9751a6148a99","Type":"ContainerDied","Data":"74ae7217c63a301220bc4dcaf5a5d710d9f79a5d3daceaa6e206f1e85ec2e9c4"}
Jan 30 18:44:12 crc kubenswrapper[4712]: I0130 18:44:12.074514 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zgftd"
Jan 30 18:44:12 crc kubenswrapper[4712]: I0130 18:44:12.223335 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/282f00b7-2777-44ec-85fc-9751a6148a99-utilities\") pod \"282f00b7-2777-44ec-85fc-9751a6148a99\" (UID: \"282f00b7-2777-44ec-85fc-9751a6148a99\") "
Jan 30 18:44:12 crc kubenswrapper[4712]: I0130 18:44:12.223556 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkvh7\" (UniqueName: \"kubernetes.io/projected/282f00b7-2777-44ec-85fc-9751a6148a99-kube-api-access-nkvh7\") pod \"282f00b7-2777-44ec-85fc-9751a6148a99\" (UID: \"282f00b7-2777-44ec-85fc-9751a6148a99\") "
Jan 30 18:44:12 crc kubenswrapper[4712]: I0130 18:44:12.223611 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/282f00b7-2777-44ec-85fc-9751a6148a99-catalog-content\") pod \"282f00b7-2777-44ec-85fc-9751a6148a99\" (UID: \"282f00b7-2777-44ec-85fc-9751a6148a99\") "
Jan 30 18:44:12 crc kubenswrapper[4712]: I0130 18:44:12.227768 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/282f00b7-2777-44ec-85fc-9751a6148a99-utilities" (OuterVolumeSpecName: "utilities") pod "282f00b7-2777-44ec-85fc-9751a6148a99" (UID: "282f00b7-2777-44ec-85fc-9751a6148a99"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 18:44:12 crc kubenswrapper[4712]: I0130 18:44:12.262001 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/282f00b7-2777-44ec-85fc-9751a6148a99-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "282f00b7-2777-44ec-85fc-9751a6148a99" (UID: "282f00b7-2777-44ec-85fc-9751a6148a99"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 18:44:12 crc kubenswrapper[4712]: I0130 18:44:12.276030 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/282f00b7-2777-44ec-85fc-9751a6148a99-kube-api-access-nkvh7" (OuterVolumeSpecName: "kube-api-access-nkvh7") pod "282f00b7-2777-44ec-85fc-9751a6148a99" (UID: "282f00b7-2777-44ec-85fc-9751a6148a99"). InnerVolumeSpecName "kube-api-access-nkvh7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 18:44:12 crc kubenswrapper[4712]: I0130 18:44:12.326185 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkvh7\" (UniqueName: \"kubernetes.io/projected/282f00b7-2777-44ec-85fc-9751a6148a99-kube-api-access-nkvh7\") on node \"crc\" DevicePath \"\""
Jan 30 18:44:12 crc kubenswrapper[4712]: I0130 18:44:12.326213 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/282f00b7-2777-44ec-85fc-9751a6148a99-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 18:44:12 crc kubenswrapper[4712]: I0130 18:44:12.326222 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/282f00b7-2777-44ec-85fc-9751a6148a99-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 18:44:12 crc kubenswrapper[4712]: I0130 18:44:12.952214 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zgftd" event={"ID":"282f00b7-2777-44ec-85fc-9751a6148a99","Type":"ContainerDied","Data":"fff11244eee1f9503c2be393a4a2ece9385e42eea665878b862f71c7d11fe024"}
Jan 30 18:44:12 crc kubenswrapper[4712]: I0130 18:44:12.952253 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zgftd"
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zgftd" Jan 30 18:44:12 crc kubenswrapper[4712]: I0130 18:44:12.952580 4712 scope.go:117] "RemoveContainer" containerID="74ae7217c63a301220bc4dcaf5a5d710d9f79a5d3daceaa6e206f1e85ec2e9c4" Jan 30 18:44:13 crc kubenswrapper[4712]: I0130 18:44:13.112719 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgftd"] Jan 30 18:44:13 crc kubenswrapper[4712]: I0130 18:44:13.114428 4712 scope.go:117] "RemoveContainer" containerID="d915e1e51dcc1b6e73154778ed59701e3b9f431df64fec50a0f30da026e95dc4" Jan 30 18:44:13 crc kubenswrapper[4712]: I0130 18:44:13.123214 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zgftd"] Jan 30 18:44:13 crc kubenswrapper[4712]: I0130 18:44:13.171838 4712 scope.go:117] "RemoveContainer" containerID="85a2656b299924ba8a3f5a6474daef1793629720b501ef03687b4a4dca811927" Jan 30 18:44:13 crc kubenswrapper[4712]: I0130 18:44:13.815291 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="282f00b7-2777-44ec-85fc-9751a6148a99" path="/var/lib/kubelet/pods/282f00b7-2777-44ec-85fc-9751a6148a99/volumes" Jan 30 18:44:19 crc kubenswrapper[4712]: I0130 18:44:19.265164 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-v7pck" Jan 30 18:44:19 crc kubenswrapper[4712]: I0130 18:44:19.326663 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-v7pck" Jan 30 18:44:19 crc kubenswrapper[4712]: I0130 18:44:19.510525 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v7pck"] Jan 30 18:44:21 crc kubenswrapper[4712]: I0130 18:44:21.055176 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-v7pck" podUID="99347680-bc5f-4a15-973c-c6474d4fba98" containerName="registry-server" containerID="cri-o://e64255e1453de4701a6febf0564956817cc07bc1b5808f355cb3aa6218d61649" gracePeriod=2 Jan 30 18:44:21 crc kubenswrapper[4712]: I0130 18:44:21.592386 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v7pck" Jan 30 18:44:21 crc kubenswrapper[4712]: I0130 18:44:21.727255 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99347680-bc5f-4a15-973c-c6474d4fba98-utilities\") pod \"99347680-bc5f-4a15-973c-c6474d4fba98\" (UID: \"99347680-bc5f-4a15-973c-c6474d4fba98\") " Jan 30 18:44:21 crc kubenswrapper[4712]: I0130 18:44:21.727739 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2sg9j\" (UniqueName: \"kubernetes.io/projected/99347680-bc5f-4a15-973c-c6474d4fba98-kube-api-access-2sg9j\") pod \"99347680-bc5f-4a15-973c-c6474d4fba98\" (UID: \"99347680-bc5f-4a15-973c-c6474d4fba98\") " Jan 30 18:44:21 crc kubenswrapper[4712]: I0130 18:44:21.727834 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99347680-bc5f-4a15-973c-c6474d4fba98-catalog-content\") pod \"99347680-bc5f-4a15-973c-c6474d4fba98\" (UID: \"99347680-bc5f-4a15-973c-c6474d4fba98\") " Jan 30 18:44:21 crc kubenswrapper[4712]: I0130 18:44:21.728288 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99347680-bc5f-4a15-973c-c6474d4fba98-utilities" (OuterVolumeSpecName: "utilities") pod "99347680-bc5f-4a15-973c-c6474d4fba98" (UID: "99347680-bc5f-4a15-973c-c6474d4fba98"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:44:21 crc kubenswrapper[4712]: I0130 18:44:21.728650 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/99347680-bc5f-4a15-973c-c6474d4fba98-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 18:44:21 crc kubenswrapper[4712]: I0130 18:44:21.740472 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99347680-bc5f-4a15-973c-c6474d4fba98-kube-api-access-2sg9j" (OuterVolumeSpecName: "kube-api-access-2sg9j") pod "99347680-bc5f-4a15-973c-c6474d4fba98" (UID: "99347680-bc5f-4a15-973c-c6474d4fba98"). InnerVolumeSpecName "kube-api-access-2sg9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:44:21 crc kubenswrapper[4712]: I0130 18:44:21.778348 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99347680-bc5f-4a15-973c-c6474d4fba98-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "99347680-bc5f-4a15-973c-c6474d4fba98" (UID: "99347680-bc5f-4a15-973c-c6474d4fba98"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:44:22 crc kubenswrapper[4712]: I0130 18:44:22.067508 4712 generic.go:334] "Generic (PLEG): container finished" podID="99347680-bc5f-4a15-973c-c6474d4fba98" containerID="e64255e1453de4701a6febf0564956817cc07bc1b5808f355cb3aa6218d61649" exitCode=0 Jan 30 18:44:22 crc kubenswrapper[4712]: I0130 18:44:22.067615 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-v7pck" Jan 30 18:44:22 crc kubenswrapper[4712]: I0130 18:44:22.133649 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2sg9j\" (UniqueName: \"kubernetes.io/projected/99347680-bc5f-4a15-973c-c6474d4fba98-kube-api-access-2sg9j\") on node \"crc\" DevicePath \"\"" Jan 30 18:44:22 crc kubenswrapper[4712]: I0130 18:44:22.133874 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7pck" event={"ID":"99347680-bc5f-4a15-973c-c6474d4fba98","Type":"ContainerDied","Data":"e64255e1453de4701a6febf0564956817cc07bc1b5808f355cb3aa6218d61649"} Jan 30 18:44:22 crc kubenswrapper[4712]: I0130 18:44:22.133948 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-v7pck" event={"ID":"99347680-bc5f-4a15-973c-c6474d4fba98","Type":"ContainerDied","Data":"2f53bf39b61a49b7b872a78ed7446b47066638a2806b37df016f5398983edd97"} Jan 30 18:44:22 crc kubenswrapper[4712]: I0130 18:44:22.133987 4712 scope.go:117] "RemoveContainer" containerID="e64255e1453de4701a6febf0564956817cc07bc1b5808f355cb3aa6218d61649" Jan 30 18:44:22 crc kubenswrapper[4712]: I0130 18:44:22.135316 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/99347680-bc5f-4a15-973c-c6474d4fba98-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 18:44:22 crc kubenswrapper[4712]: I0130 18:44:22.194444 4712 scope.go:117] "RemoveContainer" containerID="ddc2f9db15711ce199431f0659ca3045178b44d73a76c97ad9f8b465c94927e7" Jan 30 18:44:22 crc kubenswrapper[4712]: I0130 18:44:22.204364 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-v7pck"] Jan 30 18:44:22 crc kubenswrapper[4712]: I0130 18:44:22.212273 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-v7pck"] Jan 30 18:44:22 crc kubenswrapper[4712]: I0130 18:44:22.219853 4712 scope.go:117] "RemoveContainer" containerID="8ddbfe0db7454e56226d573019d9506ad94a9e5a5dfc1074c471fff1acdecc8e" Jan 30 18:44:22 crc kubenswrapper[4712]: I0130 18:44:22.275466 4712 scope.go:117] "RemoveContainer" containerID="e64255e1453de4701a6febf0564956817cc07bc1b5808f355cb3aa6218d61649" Jan 30 18:44:22 crc kubenswrapper[4712]: E0130 18:44:22.275825 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e64255e1453de4701a6febf0564956817cc07bc1b5808f355cb3aa6218d61649\": container with ID starting with e64255e1453de4701a6febf0564956817cc07bc1b5808f355cb3aa6218d61649 not found: ID does not exist" containerID="e64255e1453de4701a6febf0564956817cc07bc1b5808f355cb3aa6218d61649" Jan 30 18:44:22 crc kubenswrapper[4712]: I0130 18:44:22.275858 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e64255e1453de4701a6febf0564956817cc07bc1b5808f355cb3aa6218d61649"} err="failed to get container status \"e64255e1453de4701a6febf0564956817cc07bc1b5808f355cb3aa6218d61649\": rpc error: code = NotFound desc = could not find container \"e64255e1453de4701a6febf0564956817cc07bc1b5808f355cb3aa6218d61649\": container with ID starting with e64255e1453de4701a6febf0564956817cc07bc1b5808f355cb3aa6218d61649 not found: ID does not exist" Jan 30 18:44:22 crc kubenswrapper[4712]: I0130 18:44:22.275878 4712 scope.go:117] "RemoveContainer" 
containerID="ddc2f9db15711ce199431f0659ca3045178b44d73a76c97ad9f8b465c94927e7" Jan 30 18:44:22 crc kubenswrapper[4712]: E0130 18:44:22.276361 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddc2f9db15711ce199431f0659ca3045178b44d73a76c97ad9f8b465c94927e7\": container with ID starting with ddc2f9db15711ce199431f0659ca3045178b44d73a76c97ad9f8b465c94927e7 not found: ID does not exist" containerID="ddc2f9db15711ce199431f0659ca3045178b44d73a76c97ad9f8b465c94927e7" Jan 30 18:44:22 crc kubenswrapper[4712]: I0130 18:44:22.276381 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddc2f9db15711ce199431f0659ca3045178b44d73a76c97ad9f8b465c94927e7"} err="failed to get container status \"ddc2f9db15711ce199431f0659ca3045178b44d73a76c97ad9f8b465c94927e7\": rpc error: code = NotFound desc = could not find container \"ddc2f9db15711ce199431f0659ca3045178b44d73a76c97ad9f8b465c94927e7\": container with ID starting with ddc2f9db15711ce199431f0659ca3045178b44d73a76c97ad9f8b465c94927e7 not found: ID does not exist" Jan 30 18:44:22 crc kubenswrapper[4712]: I0130 18:44:22.276395 4712 scope.go:117] "RemoveContainer" containerID="8ddbfe0db7454e56226d573019d9506ad94a9e5a5dfc1074c471fff1acdecc8e" Jan 30 18:44:22 crc kubenswrapper[4712]: E0130 18:44:22.276580 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ddbfe0db7454e56226d573019d9506ad94a9e5a5dfc1074c471fff1acdecc8e\": container with ID starting with 8ddbfe0db7454e56226d573019d9506ad94a9e5a5dfc1074c471fff1acdecc8e not found: ID does not exist" containerID="8ddbfe0db7454e56226d573019d9506ad94a9e5a5dfc1074c471fff1acdecc8e" Jan 30 18:44:22 crc kubenswrapper[4712]: I0130 18:44:22.276601 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ddbfe0db7454e56226d573019d9506ad94a9e5a5dfc1074c471fff1acdecc8e"} err="failed to get container status \"8ddbfe0db7454e56226d573019d9506ad94a9e5a5dfc1074c471fff1acdecc8e\": rpc error: code = NotFound desc = could not find container \"8ddbfe0db7454e56226d573019d9506ad94a9e5a5dfc1074c471fff1acdecc8e\": container with ID starting with 8ddbfe0db7454e56226d573019d9506ad94a9e5a5dfc1074c471fff1acdecc8e not found: ID does not exist" Jan 30 18:44:23 crc kubenswrapper[4712]: I0130 18:44:23.820547 4712 scope.go:117] "RemoveContainer" containerID="ffefb42f903798bc1d834430f92372f13a61068f2aecfe189f32e02747461feb" Jan 30 18:44:23 crc kubenswrapper[4712]: E0130 18:44:23.821230 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:44:23 crc kubenswrapper[4712]: I0130 18:44:23.822465 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99347680-bc5f-4a15-973c-c6474d4fba98" path="/var/lib/kubelet/pods/99347680-bc5f-4a15-973c-c6474d4fba98/volumes" Jan 30 18:44:36 crc kubenswrapper[4712]: I0130 18:44:36.800288 4712 scope.go:117] "RemoveContainer" containerID="ffefb42f903798bc1d834430f92372f13a61068f2aecfe189f32e02747461feb" Jan 30 18:44:36 crc kubenswrapper[4712]: E0130 18:44:36.801250 4712 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:44:50 crc kubenswrapper[4712]: I0130 18:44:50.799672 4712 scope.go:117] "RemoveContainer" containerID="ffefb42f903798bc1d834430f92372f13a61068f2aecfe189f32e02747461feb" Jan 30 18:44:50 crc kubenswrapper[4712]: E0130 18:44:50.800432 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:45:00 crc kubenswrapper[4712]: I0130 18:45:00.171465 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496645-qkjs5"] Jan 30 18:45:00 crc kubenswrapper[4712]: E0130 18:45:00.172451 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="282f00b7-2777-44ec-85fc-9751a6148a99" containerName="extract-content" Jan 30 18:45:00 crc kubenswrapper[4712]: I0130 18:45:00.172487 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="282f00b7-2777-44ec-85fc-9751a6148a99" containerName="extract-content" Jan 30 18:45:00 crc kubenswrapper[4712]: E0130 18:45:00.172500 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99347680-bc5f-4a15-973c-c6474d4fba98" containerName="extract-utilities" Jan 30 18:45:00 crc kubenswrapper[4712]: I0130 18:45:00.172510 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="99347680-bc5f-4a15-973c-c6474d4fba98" containerName="extract-utilities" Jan 30 18:45:00 crc kubenswrapper[4712]: E0130 18:45:00.172524 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="282f00b7-2777-44ec-85fc-9751a6148a99" containerName="registry-server" Jan 30 18:45:00 crc kubenswrapper[4712]: I0130 18:45:00.172532 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="282f00b7-2777-44ec-85fc-9751a6148a99" containerName="registry-server" Jan 30 18:45:00 crc kubenswrapper[4712]: E0130 18:45:00.172557 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99347680-bc5f-4a15-973c-c6474d4fba98" containerName="extract-content" Jan 30 18:45:00 crc kubenswrapper[4712]: I0130 18:45:00.172562 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="99347680-bc5f-4a15-973c-c6474d4fba98" containerName="extract-content" Jan 30 18:45:00 crc kubenswrapper[4712]: E0130 18:45:00.172583 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99347680-bc5f-4a15-973c-c6474d4fba98" containerName="registry-server" Jan 30 18:45:00 crc kubenswrapper[4712]: I0130 18:45:00.172589 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="99347680-bc5f-4a15-973c-c6474d4fba98" containerName="registry-server" Jan 30 18:45:00 crc kubenswrapper[4712]: E0130 18:45:00.172605 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="282f00b7-2777-44ec-85fc-9751a6148a99" containerName="extract-utilities" Jan 30 18:45:00 crc kubenswrapper[4712]: I0130 18:45:00.172611 4712 
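
The cpu_manager and state_mem "RemoveStaleState" records show the resource managers garbage-collecting per-container assignments for pods that no longer exist, triggered when a new pod (here the collect-profiles job) is admitted. A toy sketch of that pruning; the map layout is invented for illustration:

    package main

    import "fmt"

    // pruneStale drops per-container resource assignments whose pod is no
    // longer active, the same cleanup the "RemoveStaleState" records above
    // perform on pod admission.
    func pruneStale(assignments map[string][]string, activePods map[string]bool) {
        for podUID, containers := range assignments {
            if !activePods[podUID] {
                for _, c := range containers {
                    fmt.Printf("RemoveStaleState: removing container %s/%s\n", podUID, c)
                }
                delete(assignments, podUID)
            }
        }
    }

    func main() {
        assignments := map[string][]string{
            "282f00b7": {"extract-utilities", "extract-content", "registry-server"},
            "65cb453f": {"collect-profiles"},
        }
        pruneStale(assignments, map[string]bool{"65cb453f": true})
    }
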
Jan 30 18:45:00 crc kubenswrapper[4712]: I0130 18:45:00.172964 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="99347680-bc5f-4a15-973c-c6474d4fba98" containerName="registry-server"
Jan 30 18:45:00 crc kubenswrapper[4712]: I0130 18:45:00.173002 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="282f00b7-2777-44ec-85fc-9751a6148a99" containerName="registry-server"
Jan 30 18:45:00 crc kubenswrapper[4712]: I0130 18:45:00.175060 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496645-qkjs5"
Jan 30 18:45:00 crc kubenswrapper[4712]: I0130 18:45:00.180845 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496645-qkjs5"]
Jan 30 18:45:00 crc kubenswrapper[4712]: I0130 18:45:00.186826 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 30 18:45:00 crc kubenswrapper[4712]: I0130 18:45:00.186883 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 30 18:45:00 crc kubenswrapper[4712]: I0130 18:45:00.365923 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65cb453f-1797-4e40-9cb8-612b3beaa871-config-volume\") pod \"collect-profiles-29496645-qkjs5\" (UID: \"65cb453f-1797-4e40-9cb8-612b3beaa871\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496645-qkjs5"
Jan 30 18:45:00 crc kubenswrapper[4712]: I0130 18:45:00.365976 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-254b9\" (UniqueName: \"kubernetes.io/projected/65cb453f-1797-4e40-9cb8-612b3beaa871-kube-api-access-254b9\") pod \"collect-profiles-29496645-qkjs5\" (UID: \"65cb453f-1797-4e40-9cb8-612b3beaa871\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496645-qkjs5"
Jan 30 18:45:00 crc kubenswrapper[4712]: I0130 18:45:00.366125 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/65cb453f-1797-4e40-9cb8-612b3beaa871-secret-volume\") pod \"collect-profiles-29496645-qkjs5\" (UID: \"65cb453f-1797-4e40-9cb8-612b3beaa871\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496645-qkjs5"
Jan 30 18:45:00 crc kubenswrapper[4712]: I0130 18:45:00.467900 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/65cb453f-1797-4e40-9cb8-612b3beaa871-secret-volume\") pod \"collect-profiles-29496645-qkjs5\" (UID: \"65cb453f-1797-4e40-9cb8-612b3beaa871\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496645-qkjs5"
Jan 30 18:45:00 crc kubenswrapper[4712]: I0130 18:45:00.468035 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65cb453f-1797-4e40-9cb8-612b3beaa871-config-volume\") pod \"collect-profiles-29496645-qkjs5\" (UID: \"65cb453f-1797-4e40-9cb8-612b3beaa871\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496645-qkjs5"
Jan 30 18:45:00 crc kubenswrapper[4712]: I0130 18:45:00.468077 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-254b9\" (UniqueName: \"kubernetes.io/projected/65cb453f-1797-4e40-9cb8-612b3beaa871-kube-api-access-254b9\") pod \"collect-profiles-29496645-qkjs5\" (UID: \"65cb453f-1797-4e40-9cb8-612b3beaa871\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496645-qkjs5"
Jan 30 18:45:00 crc kubenswrapper[4712]: I0130 18:45:00.468787 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65cb453f-1797-4e40-9cb8-612b3beaa871-config-volume\") pod \"collect-profiles-29496645-qkjs5\" (UID: \"65cb453f-1797-4e40-9cb8-612b3beaa871\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496645-qkjs5"
Jan 30 18:45:00 crc kubenswrapper[4712]: I0130 18:45:00.475189 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/65cb453f-1797-4e40-9cb8-612b3beaa871-secret-volume\") pod \"collect-profiles-29496645-qkjs5\" (UID: \"65cb453f-1797-4e40-9cb8-612b3beaa871\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496645-qkjs5"
Jan 30 18:45:00 crc kubenswrapper[4712]: I0130 18:45:00.492663 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-254b9\" (UniqueName: \"kubernetes.io/projected/65cb453f-1797-4e40-9cb8-612b3beaa871-kube-api-access-254b9\") pod \"collect-profiles-29496645-qkjs5\" (UID: \"65cb453f-1797-4e40-9cb8-612b3beaa871\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496645-qkjs5"
Jan 30 18:45:00 crc kubenswrapper[4712]: I0130 18:45:00.506983 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496645-qkjs5"
Jan 30 18:45:01 crc kubenswrapper[4712]: I0130 18:45:01.000298 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496645-qkjs5"]
Jan 30 18:45:01 crc kubenswrapper[4712]: I0130 18:45:01.496121 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496645-qkjs5" event={"ID":"65cb453f-1797-4e40-9cb8-612b3beaa871","Type":"ContainerStarted","Data":"25eda162b96d3d5a73e884be7a1e55e70e6cfb9a9a2e94ce712f34e4eeda991b"}
Jan 30 18:45:01 crc kubenswrapper[4712]: I0130 18:45:01.496518 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496645-qkjs5" event={"ID":"65cb453f-1797-4e40-9cb8-612b3beaa871","Type":"ContainerStarted","Data":"75a9fd0675dde6c41404d3769ddac94593e16c74e42ca3434617366148b84061"}
Jan 30 18:45:01 crc kubenswrapper[4712]: I0130 18:45:01.522481 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496645-qkjs5" podStartSLOduration=1.5224615460000002 podStartE2EDuration="1.522461546s" podCreationTimestamp="2026-01-30 18:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 18:45:01.512158846 +0000 UTC m=+6638.419168325" watchObservedRunningTime="2026-01-30 18:45:01.522461546 +0000 UTC m=+6638.429471025"
Jan 30 18:45:01 crc kubenswrapper[4712]: I0130 18:45:01.799530 4712 scope.go:117] "RemoveContainer" containerID="ffefb42f903798bc1d834430f92372f13a61068f2aecfe189f32e02747461feb"
Jan 30 18:45:01 crc kubenswrapper[4712]: E0130 18:45:01.799758 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 18:45:02 crc kubenswrapper[4712]: I0130 18:45:02.505605 4712 generic.go:334] "Generic (PLEG): container finished" podID="65cb453f-1797-4e40-9cb8-612b3beaa871" containerID="25eda162b96d3d5a73e884be7a1e55e70e6cfb9a9a2e94ce712f34e4eeda991b" exitCode=0
Jan 30 18:45:02 crc kubenswrapper[4712]: I0130 18:45:02.505649 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496645-qkjs5" event={"ID":"65cb453f-1797-4e40-9cb8-612b3beaa871","Type":"ContainerDied","Data":"25eda162b96d3d5a73e884be7a1e55e70e6cfb9a9a2e94ce712f34e4eeda991b"}
Jan 30 18:45:03 crc kubenswrapper[4712]: I0130 18:45:03.913768 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496645-qkjs5"
Jan 30 18:45:04 crc kubenswrapper[4712]: I0130 18:45:04.037415 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-254b9\" (UniqueName: \"kubernetes.io/projected/65cb453f-1797-4e40-9cb8-612b3beaa871-kube-api-access-254b9\") pod \"65cb453f-1797-4e40-9cb8-612b3beaa871\" (UID: \"65cb453f-1797-4e40-9cb8-612b3beaa871\") "
Jan 30 18:45:04 crc kubenswrapper[4712]: I0130 18:45:04.037497 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/65cb453f-1797-4e40-9cb8-612b3beaa871-secret-volume\") pod \"65cb453f-1797-4e40-9cb8-612b3beaa871\" (UID: \"65cb453f-1797-4e40-9cb8-612b3beaa871\") "
Jan 30 18:45:04 crc kubenswrapper[4712]: I0130 18:45:04.037671 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65cb453f-1797-4e40-9cb8-612b3beaa871-config-volume\") pod \"65cb453f-1797-4e40-9cb8-612b3beaa871\" (UID: \"65cb453f-1797-4e40-9cb8-612b3beaa871\") "
Jan 30 18:45:04 crc kubenswrapper[4712]: I0130 18:45:04.039760 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/65cb453f-1797-4e40-9cb8-612b3beaa871-config-volume" (OuterVolumeSpecName: "config-volume") pod "65cb453f-1797-4e40-9cb8-612b3beaa871" (UID: "65cb453f-1797-4e40-9cb8-612b3beaa871"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 18:45:04 crc kubenswrapper[4712]: I0130 18:45:04.046563 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/65cb453f-1797-4e40-9cb8-612b3beaa871-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "65cb453f-1797-4e40-9cb8-612b3beaa871" (UID: "65cb453f-1797-4e40-9cb8-612b3beaa871"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 18:45:04 crc kubenswrapper[4712]: I0130 18:45:04.046653 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65cb453f-1797-4e40-9cb8-612b3beaa871-kube-api-access-254b9" (OuterVolumeSpecName: "kube-api-access-254b9") pod "65cb453f-1797-4e40-9cb8-612b3beaa871" (UID: "65cb453f-1797-4e40-9cb8-612b3beaa871"). InnerVolumeSpecName "kube-api-access-254b9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 18:45:04 crc kubenswrapper[4712]: I0130 18:45:04.139611 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-254b9\" (UniqueName: \"kubernetes.io/projected/65cb453f-1797-4e40-9cb8-612b3beaa871-kube-api-access-254b9\") on node \"crc\" DevicePath \"\""
Jan 30 18:45:04 crc kubenswrapper[4712]: I0130 18:45:04.139653 4712 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/65cb453f-1797-4e40-9cb8-612b3beaa871-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 30 18:45:04 crc kubenswrapper[4712]: I0130 18:45:04.139668 4712 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/65cb453f-1797-4e40-9cb8-612b3beaa871-config-volume\") on node \"crc\" DevicePath \"\""
Jan 30 18:45:04 crc kubenswrapper[4712]: I0130 18:45:04.533046 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496645-qkjs5" event={"ID":"65cb453f-1797-4e40-9cb8-612b3beaa871","Type":"ContainerDied","Data":"75a9fd0675dde6c41404d3769ddac94593e16c74e42ca3434617366148b84061"}
Jan 30 18:45:04 crc kubenswrapper[4712]: I0130 18:45:04.533536 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75a9fd0675dde6c41404d3769ddac94593e16c74e42ca3434617366148b84061"
Jan 30 18:45:04 crc kubenswrapper[4712]: I0130 18:45:04.533283 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496645-qkjs5"
Jan 30 18:45:04 crc kubenswrapper[4712]: I0130 18:45:04.602617 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496600-2z9cv"]
Jan 30 18:45:04 crc kubenswrapper[4712]: I0130 18:45:04.617272 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496600-2z9cv"]
Jan 30 18:45:05 crc kubenswrapper[4712]: I0130 18:45:05.829049 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad51586a-58c7-4e2e-8098-9e58e9559c5c" path="/var/lib/kubelet/pods/ad51586a-58c7-4e2e-8098-9e58e9559c5c/volumes"
Jan 30 18:45:14 crc kubenswrapper[4712]: I0130 18:45:14.800002 4712 scope.go:117] "RemoveContainer" containerID="ffefb42f903798bc1d834430f92372f13a61068f2aecfe189f32e02747461feb"
Jan 30 18:45:14 crc kubenswrapper[4712]: E0130 18:45:14.801159 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 18:45:26 crc kubenswrapper[4712]: I0130 18:45:26.799721 4712 scope.go:117] "RemoveContainer" containerID="ffefb42f903798bc1d834430f92372f13a61068f2aecfe189f32e02747461feb"
Jan 30 18:45:26 crc kubenswrapper[4712]: E0130 18:45:26.800519 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 18:45:38 crc kubenswrapper[4712]: I0130 18:45:38.800049 4712 scope.go:117] "RemoveContainer" containerID="ffefb42f903798bc1d834430f92372f13a61068f2aecfe189f32e02747461feb"
Jan 30 18:45:38 crc kubenswrapper[4712]: E0130 18:45:38.800973 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 18:45:52 crc kubenswrapper[4712]: I0130 18:45:52.800192 4712 scope.go:117] "RemoveContainer" containerID="ffefb42f903798bc1d834430f92372f13a61068f2aecfe189f32e02747461feb"
Jan 30 18:45:52 crc kubenswrapper[4712]: E0130 18:45:52.801411 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 18:46:03 crc kubenswrapper[4712]: I0130 18:46:03.363506 4712 scope.go:117] "RemoveContainer" containerID="340e116b884767f98ef42952e9088e368ff9023cf652523a9cf66aa46a832c2f"
containerID="340e116b884767f98ef42952e9088e368ff9023cf652523a9cf66aa46a832c2f" Jan 30 18:46:07 crc kubenswrapper[4712]: I0130 18:46:07.799348 4712 scope.go:117] "RemoveContainer" containerID="ffefb42f903798bc1d834430f92372f13a61068f2aecfe189f32e02747461feb" Jan 30 18:46:07 crc kubenswrapper[4712]: E0130 18:46:07.800077 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:46:20 crc kubenswrapper[4712]: I0130 18:46:20.800083 4712 scope.go:117] "RemoveContainer" containerID="ffefb42f903798bc1d834430f92372f13a61068f2aecfe189f32e02747461feb" Jan 30 18:46:20 crc kubenswrapper[4712]: E0130 18:46:20.801773 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:46:26 crc kubenswrapper[4712]: I0130 18:46:26.034835 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-twdwf"] Jan 30 18:46:26 crc kubenswrapper[4712]: E0130 18:46:26.035830 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65cb453f-1797-4e40-9cb8-612b3beaa871" containerName="collect-profiles" Jan 30 18:46:26 crc kubenswrapper[4712]: I0130 18:46:26.035845 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="65cb453f-1797-4e40-9cb8-612b3beaa871" containerName="collect-profiles" Jan 30 18:46:26 crc kubenswrapper[4712]: I0130 18:46:26.036086 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="65cb453f-1797-4e40-9cb8-612b3beaa871" containerName="collect-profiles" Jan 30 18:46:26 crc kubenswrapper[4712]: I0130 18:46:26.041776 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-twdwf" Jan 30 18:46:26 crc kubenswrapper[4712]: I0130 18:46:26.052939 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-twdwf"] Jan 30 18:46:26 crc kubenswrapper[4712]: I0130 18:46:26.110306 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9zr5\" (UniqueName: \"kubernetes.io/projected/58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5-kube-api-access-q9zr5\") pod \"certified-operators-twdwf\" (UID: \"58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5\") " pod="openshift-marketplace/certified-operators-twdwf" Jan 30 18:46:26 crc kubenswrapper[4712]: I0130 18:46:26.110368 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5-utilities\") pod \"certified-operators-twdwf\" (UID: \"58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5\") " pod="openshift-marketplace/certified-operators-twdwf" Jan 30 18:46:26 crc kubenswrapper[4712]: I0130 18:46:26.110396 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5-catalog-content\") pod \"certified-operators-twdwf\" (UID: \"58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5\") " pod="openshift-marketplace/certified-operators-twdwf" Jan 30 18:46:26 crc kubenswrapper[4712]: I0130 18:46:26.211908 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9zr5\" (UniqueName: \"kubernetes.io/projected/58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5-kube-api-access-q9zr5\") pod \"certified-operators-twdwf\" (UID: \"58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5\") " pod="openshift-marketplace/certified-operators-twdwf" Jan 30 18:46:26 crc kubenswrapper[4712]: I0130 18:46:26.211973 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5-utilities\") pod \"certified-operators-twdwf\" (UID: \"58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5\") " pod="openshift-marketplace/certified-operators-twdwf" Jan 30 18:46:26 crc kubenswrapper[4712]: I0130 18:46:26.211999 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5-catalog-content\") pod \"certified-operators-twdwf\" (UID: \"58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5\") " pod="openshift-marketplace/certified-operators-twdwf" Jan 30 18:46:26 crc kubenswrapper[4712]: I0130 18:46:26.212513 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5-catalog-content\") pod \"certified-operators-twdwf\" (UID: \"58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5\") " pod="openshift-marketplace/certified-operators-twdwf" Jan 30 18:46:26 crc kubenswrapper[4712]: I0130 18:46:26.212611 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5-utilities\") pod \"certified-operators-twdwf\" (UID: \"58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5\") " pod="openshift-marketplace/certified-operators-twdwf" Jan 30 18:46:26 crc kubenswrapper[4712]: I0130 18:46:26.237824 4712 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-q9zr5\" (UniqueName: \"kubernetes.io/projected/58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5-kube-api-access-q9zr5\") pod \"certified-operators-twdwf\" (UID: \"58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5\") " pod="openshift-marketplace/certified-operators-twdwf" Jan 30 18:46:26 crc kubenswrapper[4712]: I0130 18:46:26.373403 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-twdwf" Jan 30 18:46:26 crc kubenswrapper[4712]: I0130 18:46:26.894554 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-twdwf"] Jan 30 18:46:27 crc kubenswrapper[4712]: I0130 18:46:27.399090 4712 generic.go:334] "Generic (PLEG): container finished" podID="58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5" containerID="2114821a222ef9ff56c2a774e46467aee18912efaf1a96459a0fe443ef9ed9d6" exitCode=0 Jan 30 18:46:27 crc kubenswrapper[4712]: I0130 18:46:27.399142 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-twdwf" event={"ID":"58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5","Type":"ContainerDied","Data":"2114821a222ef9ff56c2a774e46467aee18912efaf1a96459a0fe443ef9ed9d6"} Jan 30 18:46:27 crc kubenswrapper[4712]: I0130 18:46:27.399396 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-twdwf" event={"ID":"58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5","Type":"ContainerStarted","Data":"f5b430dace10414e720e64d181cf82c88b9dbb9ce9f4db52c35f64db18dff393"} Jan 30 18:46:28 crc kubenswrapper[4712]: I0130 18:46:28.408003 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-twdwf" event={"ID":"58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5","Type":"ContainerStarted","Data":"a0fe03b25a74dedb16d53f34f725b3537a1c39dddf708687bd5ab82972bce32b"} Jan 30 18:46:30 crc kubenswrapper[4712]: I0130 18:46:30.439785 4712 generic.go:334] "Generic (PLEG): container finished" podID="58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5" containerID="a0fe03b25a74dedb16d53f34f725b3537a1c39dddf708687bd5ab82972bce32b" exitCode=0 Jan 30 18:46:30 crc kubenswrapper[4712]: I0130 18:46:30.439885 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-twdwf" event={"ID":"58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5","Type":"ContainerDied","Data":"a0fe03b25a74dedb16d53f34f725b3537a1c39dddf708687bd5ab82972bce32b"} Jan 30 18:46:31 crc kubenswrapper[4712]: I0130 18:46:31.450745 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-twdwf" event={"ID":"58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5","Type":"ContainerStarted","Data":"79428a4682c3cdb524290bdefeaf2e923922dbb0247ee33efe0ed23478642559"} Jan 30 18:46:31 crc kubenswrapper[4712]: I0130 18:46:31.472929 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-twdwf" podStartSLOduration=2.040774592 podStartE2EDuration="5.472911607s" podCreationTimestamp="2026-01-30 18:46:26 +0000 UTC" firstStartedPulling="2026-01-30 18:46:27.402086368 +0000 UTC m=+6724.309095837" lastFinishedPulling="2026-01-30 18:46:30.834223383 +0000 UTC m=+6727.741232852" observedRunningTime="2026-01-30 18:46:31.467538466 +0000 UTC m=+6728.374547935" watchObservedRunningTime="2026-01-30 18:46:31.472911607 +0000 UTC m=+6728.379921066" Jan 30 18:46:32 crc kubenswrapper[4712]: I0130 18:46:32.804507 4712 scope.go:117] "RemoveContainer" 
containerID="ffefb42f903798bc1d834430f92372f13a61068f2aecfe189f32e02747461feb" Jan 30 18:46:32 crc kubenswrapper[4712]: E0130 18:46:32.805832 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:46:36 crc kubenswrapper[4712]: I0130 18:46:36.373837 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-twdwf" Jan 30 18:46:36 crc kubenswrapper[4712]: I0130 18:46:36.374370 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-twdwf" Jan 30 18:46:36 crc kubenswrapper[4712]: I0130 18:46:36.436210 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-twdwf" Jan 30 18:46:36 crc kubenswrapper[4712]: I0130 18:46:36.536469 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-twdwf" Jan 30 18:46:38 crc kubenswrapper[4712]: I0130 18:46:38.224352 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-twdwf"] Jan 30 18:46:38 crc kubenswrapper[4712]: I0130 18:46:38.509949 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-twdwf" podUID="58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5" containerName="registry-server" containerID="cri-o://79428a4682c3cdb524290bdefeaf2e923922dbb0247ee33efe0ed23478642559" gracePeriod=2 Jan 30 18:46:39 crc kubenswrapper[4712]: I0130 18:46:39.005251 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-twdwf" Jan 30 18:46:39 crc kubenswrapper[4712]: I0130 18:46:39.054890 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5-utilities\") pod \"58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5\" (UID: \"58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5\") " Jan 30 18:46:39 crc kubenswrapper[4712]: I0130 18:46:39.054983 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5-catalog-content\") pod \"58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5\" (UID: \"58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5\") " Jan 30 18:46:39 crc kubenswrapper[4712]: I0130 18:46:39.055016 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9zr5\" (UniqueName: \"kubernetes.io/projected/58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5-kube-api-access-q9zr5\") pod \"58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5\" (UID: \"58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5\") " Jan 30 18:46:39 crc kubenswrapper[4712]: I0130 18:46:39.055586 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5-utilities" (OuterVolumeSpecName: "utilities") pod "58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5" (UID: "58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:46:39 crc kubenswrapper[4712]: I0130 18:46:39.060963 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5-kube-api-access-q9zr5" (OuterVolumeSpecName: "kube-api-access-q9zr5") pod "58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5" (UID: "58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5"). InnerVolumeSpecName "kube-api-access-q9zr5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:46:39 crc kubenswrapper[4712]: I0130 18:46:39.106075 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5" (UID: "58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:46:39 crc kubenswrapper[4712]: I0130 18:46:39.157159 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 18:46:39 crc kubenswrapper[4712]: I0130 18:46:39.157201 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9zr5\" (UniqueName: \"kubernetes.io/projected/58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5-kube-api-access-q9zr5\") on node \"crc\" DevicePath \"\"" Jan 30 18:46:39 crc kubenswrapper[4712]: I0130 18:46:39.157244 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 18:46:39 crc kubenswrapper[4712]: I0130 18:46:39.528207 4712 generic.go:334] "Generic (PLEG): container finished" podID="58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5" containerID="79428a4682c3cdb524290bdefeaf2e923922dbb0247ee33efe0ed23478642559" exitCode=0 Jan 30 18:46:39 crc kubenswrapper[4712]: I0130 18:46:39.528247 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-twdwf" event={"ID":"58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5","Type":"ContainerDied","Data":"79428a4682c3cdb524290bdefeaf2e923922dbb0247ee33efe0ed23478642559"} Jan 30 18:46:39 crc kubenswrapper[4712]: I0130 18:46:39.528305 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-twdwf" Jan 30 18:46:39 crc kubenswrapper[4712]: I0130 18:46:39.528334 4712 scope.go:117] "RemoveContainer" containerID="79428a4682c3cdb524290bdefeaf2e923922dbb0247ee33efe0ed23478642559" Jan 30 18:46:39 crc kubenswrapper[4712]: I0130 18:46:39.528315 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-twdwf" event={"ID":"58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5","Type":"ContainerDied","Data":"f5b430dace10414e720e64d181cf82c88b9dbb9ce9f4db52c35f64db18dff393"} Jan 30 18:46:39 crc kubenswrapper[4712]: I0130 18:46:39.567509 4712 scope.go:117] "RemoveContainer" containerID="a0fe03b25a74dedb16d53f34f725b3537a1c39dddf708687bd5ab82972bce32b" Jan 30 18:46:39 crc kubenswrapper[4712]: I0130 18:46:39.588245 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-twdwf"] Jan 30 18:46:39 crc kubenswrapper[4712]: I0130 18:46:39.600317 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-twdwf"] Jan 30 18:46:39 crc kubenswrapper[4712]: I0130 18:46:39.605524 4712 scope.go:117] "RemoveContainer" containerID="2114821a222ef9ff56c2a774e46467aee18912efaf1a96459a0fe443ef9ed9d6" Jan 30 18:46:39 crc kubenswrapper[4712]: I0130 18:46:39.658116 4712 scope.go:117] "RemoveContainer" containerID="79428a4682c3cdb524290bdefeaf2e923922dbb0247ee33efe0ed23478642559" Jan 30 18:46:39 crc kubenswrapper[4712]: E0130 18:46:39.658491 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79428a4682c3cdb524290bdefeaf2e923922dbb0247ee33efe0ed23478642559\": container with ID starting with 79428a4682c3cdb524290bdefeaf2e923922dbb0247ee33efe0ed23478642559 not found: ID does not exist" containerID="79428a4682c3cdb524290bdefeaf2e923922dbb0247ee33efe0ed23478642559" Jan 30 18:46:39 crc kubenswrapper[4712]: I0130 18:46:39.658541 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79428a4682c3cdb524290bdefeaf2e923922dbb0247ee33efe0ed23478642559"} err="failed to get container status \"79428a4682c3cdb524290bdefeaf2e923922dbb0247ee33efe0ed23478642559\": rpc error: code = NotFound desc = could not find container \"79428a4682c3cdb524290bdefeaf2e923922dbb0247ee33efe0ed23478642559\": container with ID starting with 79428a4682c3cdb524290bdefeaf2e923922dbb0247ee33efe0ed23478642559 not found: ID does not exist" Jan 30 18:46:39 crc kubenswrapper[4712]: I0130 18:46:39.658574 4712 scope.go:117] "RemoveContainer" containerID="a0fe03b25a74dedb16d53f34f725b3537a1c39dddf708687bd5ab82972bce32b" Jan 30 18:46:39 crc kubenswrapper[4712]: E0130 18:46:39.658890 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0fe03b25a74dedb16d53f34f725b3537a1c39dddf708687bd5ab82972bce32b\": container with ID starting with a0fe03b25a74dedb16d53f34f725b3537a1c39dddf708687bd5ab82972bce32b not found: ID does not exist" containerID="a0fe03b25a74dedb16d53f34f725b3537a1c39dddf708687bd5ab82972bce32b" Jan 30 18:46:39 crc kubenswrapper[4712]: I0130 18:46:39.658951 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0fe03b25a74dedb16d53f34f725b3537a1c39dddf708687bd5ab82972bce32b"} err="failed to get container status \"a0fe03b25a74dedb16d53f34f725b3537a1c39dddf708687bd5ab82972bce32b\": rpc error: code = NotFound desc = could not find 
container \"a0fe03b25a74dedb16d53f34f725b3537a1c39dddf708687bd5ab82972bce32b\": container with ID starting with a0fe03b25a74dedb16d53f34f725b3537a1c39dddf708687bd5ab82972bce32b not found: ID does not exist" Jan 30 18:46:39 crc kubenswrapper[4712]: I0130 18:46:39.658973 4712 scope.go:117] "RemoveContainer" containerID="2114821a222ef9ff56c2a774e46467aee18912efaf1a96459a0fe443ef9ed9d6" Jan 30 18:46:39 crc kubenswrapper[4712]: E0130 18:46:39.659190 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2114821a222ef9ff56c2a774e46467aee18912efaf1a96459a0fe443ef9ed9d6\": container with ID starting with 2114821a222ef9ff56c2a774e46467aee18912efaf1a96459a0fe443ef9ed9d6 not found: ID does not exist" containerID="2114821a222ef9ff56c2a774e46467aee18912efaf1a96459a0fe443ef9ed9d6" Jan 30 18:46:39 crc kubenswrapper[4712]: I0130 18:46:39.659214 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2114821a222ef9ff56c2a774e46467aee18912efaf1a96459a0fe443ef9ed9d6"} err="failed to get container status \"2114821a222ef9ff56c2a774e46467aee18912efaf1a96459a0fe443ef9ed9d6\": rpc error: code = NotFound desc = could not find container \"2114821a222ef9ff56c2a774e46467aee18912efaf1a96459a0fe443ef9ed9d6\": container with ID starting with 2114821a222ef9ff56c2a774e46467aee18912efaf1a96459a0fe443ef9ed9d6 not found: ID does not exist" Jan 30 18:46:39 crc kubenswrapper[4712]: I0130 18:46:39.814028 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5" path="/var/lib/kubelet/pods/58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5/volumes" Jan 30 18:46:43 crc kubenswrapper[4712]: I0130 18:46:43.811054 4712 scope.go:117] "RemoveContainer" containerID="ffefb42f903798bc1d834430f92372f13a61068f2aecfe189f32e02747461feb" Jan 30 18:46:43 crc kubenswrapper[4712]: E0130 18:46:43.813589 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:46:54 crc kubenswrapper[4712]: I0130 18:46:54.801294 4712 scope.go:117] "RemoveContainer" containerID="ffefb42f903798bc1d834430f92372f13a61068f2aecfe189f32e02747461feb" Jan 30 18:46:54 crc kubenswrapper[4712]: E0130 18:46:54.802114 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:47:08 crc kubenswrapper[4712]: I0130 18:47:08.799725 4712 scope.go:117] "RemoveContainer" containerID="ffefb42f903798bc1d834430f92372f13a61068f2aecfe189f32e02747461feb" Jan 30 18:47:09 crc kubenswrapper[4712]: I0130 18:47:09.899051 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerStarted","Data":"d6452fcf102ee8c40a2bde4f469ce25f7f64d382adae3c1cca7d159ec30290eb"} 
Jan 30 18:48:47 crc kubenswrapper[4712]: I0130 18:48:47.273698 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nkns2"]
Jan 30 18:48:47 crc kubenswrapper[4712]: E0130 18:48:47.274728 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5" containerName="extract-utilities"
Jan 30 18:48:47 crc kubenswrapper[4712]: I0130 18:48:47.274743 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5" containerName="extract-utilities"
Jan 30 18:48:47 crc kubenswrapper[4712]: E0130 18:48:47.274772 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5" containerName="extract-content"
Jan 30 18:48:47 crc kubenswrapper[4712]: I0130 18:48:47.274780 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5" containerName="extract-content"
Jan 30 18:48:47 crc kubenswrapper[4712]: E0130 18:48:47.276272 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5" containerName="registry-server"
Jan 30 18:48:47 crc kubenswrapper[4712]: I0130 18:48:47.276296 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5" containerName="registry-server"
Jan 30 18:48:47 crc kubenswrapper[4712]: I0130 18:48:47.276558 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="58d10d2b-1a6f-4952-a4b7-db1df0e7bcc5" containerName="registry-server"
Jan 30 18:48:47 crc kubenswrapper[4712]: I0130 18:48:47.278257 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nkns2"
Jan 30 18:48:47 crc kubenswrapper[4712]: I0130 18:48:47.296026 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nkns2"]
Jan 30 18:48:47 crc kubenswrapper[4712]: I0130 18:48:47.324893 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjb9s\" (UniqueName: \"kubernetes.io/projected/b789c4f1-95cf-4f60-8636-d340a98758bf-kube-api-access-pjb9s\") pod \"redhat-operators-nkns2\" (UID: \"b789c4f1-95cf-4f60-8636-d340a98758bf\") " pod="openshift-marketplace/redhat-operators-nkns2"
Jan 30 18:48:47 crc kubenswrapper[4712]: I0130 18:48:47.324969 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b789c4f1-95cf-4f60-8636-d340a98758bf-utilities\") pod \"redhat-operators-nkns2\" (UID: \"b789c4f1-95cf-4f60-8636-d340a98758bf\") " pod="openshift-marketplace/redhat-operators-nkns2"
Jan 30 18:48:47 crc kubenswrapper[4712]: I0130 18:48:47.325016 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b789c4f1-95cf-4f60-8636-d340a98758bf-catalog-content\") pod \"redhat-operators-nkns2\" (UID: \"b789c4f1-95cf-4f60-8636-d340a98758bf\") " pod="openshift-marketplace/redhat-operators-nkns2"
Jan 30 18:48:47 crc kubenswrapper[4712]: I0130 18:48:47.426678 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjb9s\" (UniqueName: \"kubernetes.io/projected/b789c4f1-95cf-4f60-8636-d340a98758bf-kube-api-access-pjb9s\") pod \"redhat-operators-nkns2\" (UID: \"b789c4f1-95cf-4f60-8636-d340a98758bf\") " pod="openshift-marketplace/redhat-operators-nkns2"
Jan 30 18:48:47 crc kubenswrapper[4712]: I0130 18:48:47.426738 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b789c4f1-95cf-4f60-8636-d340a98758bf-utilities\") pod \"redhat-operators-nkns2\" (UID: \"b789c4f1-95cf-4f60-8636-d340a98758bf\") " pod="openshift-marketplace/redhat-operators-nkns2"
Jan 30 18:48:47 crc kubenswrapper[4712]: I0130 18:48:47.426775 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b789c4f1-95cf-4f60-8636-d340a98758bf-catalog-content\") pod \"redhat-operators-nkns2\" (UID: \"b789c4f1-95cf-4f60-8636-d340a98758bf\") " pod="openshift-marketplace/redhat-operators-nkns2"
Jan 30 18:48:47 crc kubenswrapper[4712]: I0130 18:48:47.427337 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b789c4f1-95cf-4f60-8636-d340a98758bf-catalog-content\") pod \"redhat-operators-nkns2\" (UID: \"b789c4f1-95cf-4f60-8636-d340a98758bf\") " pod="openshift-marketplace/redhat-operators-nkns2"
Jan 30 18:48:47 crc kubenswrapper[4712]: I0130 18:48:47.427511 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b789c4f1-95cf-4f60-8636-d340a98758bf-utilities\") pod \"redhat-operators-nkns2\" (UID: \"b789c4f1-95cf-4f60-8636-d340a98758bf\") " pod="openshift-marketplace/redhat-operators-nkns2"
Jan 30 18:48:47 crc kubenswrapper[4712]: I0130 18:48:47.454035 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjb9s\" (UniqueName: \"kubernetes.io/projected/b789c4f1-95cf-4f60-8636-d340a98758bf-kube-api-access-pjb9s\") pod \"redhat-operators-nkns2\" (UID: \"b789c4f1-95cf-4f60-8636-d340a98758bf\") " pod="openshift-marketplace/redhat-operators-nkns2"
Jan 30 18:48:47 crc kubenswrapper[4712]: I0130 18:48:47.652943 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nkns2"
Jan 30 18:48:48 crc kubenswrapper[4712]: I0130 18:48:48.144137 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nkns2"]
Jan 30 18:48:48 crc kubenswrapper[4712]: I0130 18:48:48.958552 4712 generic.go:334] "Generic (PLEG): container finished" podID="b789c4f1-95cf-4f60-8636-d340a98758bf" containerID="07d06f83e3e8eb1f1433daeb838eb99109c6e0e7fa47ac228c0cb1fce116d3d6" exitCode=0
Jan 30 18:48:48 crc kubenswrapper[4712]: I0130 18:48:48.958782 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkns2" event={"ID":"b789c4f1-95cf-4f60-8636-d340a98758bf","Type":"ContainerDied","Data":"07d06f83e3e8eb1f1433daeb838eb99109c6e0e7fa47ac228c0cb1fce116d3d6"}
Jan 30 18:48:48 crc kubenswrapper[4712]: I0130 18:48:48.958824 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkns2" event={"ID":"b789c4f1-95cf-4f60-8636-d340a98758bf","Type":"ContainerStarted","Data":"161506b18bc6897339624a881f7dbfe6740f8d28417c65d1e35e5aad2703dc34"}
Jan 30 18:48:49 crc kubenswrapper[4712]: I0130 18:48:49.974198 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkns2" event={"ID":"b789c4f1-95cf-4f60-8636-d340a98758bf","Type":"ContainerStarted","Data":"65a96279e6728b0c0394f08218c3a9a6198b27b42645cd2595c03c3b41d00a8f"}
Jan 30 18:48:55 crc kubenswrapper[4712]: I0130 18:48:55.049268 4712 generic.go:334] "Generic (PLEG): container finished" podID="b789c4f1-95cf-4f60-8636-d340a98758bf" containerID="65a96279e6728b0c0394f08218c3a9a6198b27b42645cd2595c03c3b41d00a8f" exitCode=0
Jan 30 18:48:55 crc kubenswrapper[4712]: I0130 18:48:55.049392 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkns2" event={"ID":"b789c4f1-95cf-4f60-8636-d340a98758bf","Type":"ContainerDied","Data":"65a96279e6728b0c0394f08218c3a9a6198b27b42645cd2595c03c3b41d00a8f"}
Jan 30 18:48:55 crc kubenswrapper[4712]: I0130 18:48:55.057423 4712 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 18:48:56 crc kubenswrapper[4712]: I0130 18:48:56.066917 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkns2" event={"ID":"b789c4f1-95cf-4f60-8636-d340a98758bf","Type":"ContainerStarted","Data":"dc0103cdf757d0e804ca895b9900082b77847c6b22347c55363f7fb79ebc8e8e"}
Jan 30 18:48:56 crc kubenswrapper[4712]: I0130 18:48:56.106953 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nkns2" podStartSLOduration=2.577897065 podStartE2EDuration="9.106917349s" podCreationTimestamp="2026-01-30 18:48:47 +0000 UTC" firstStartedPulling="2026-01-30 18:48:48.962002833 +0000 UTC m=+6865.869012342" lastFinishedPulling="2026-01-30 18:48:55.491023157 +0000 UTC m=+6872.398032626" observedRunningTime="2026-01-30 18:48:56.09949647 +0000 UTC m=+6873.006505979" watchObservedRunningTime="2026-01-30 18:48:56.106917349 +0000 UTC m=+6873.013926858"
Jan 30 18:48:57 crc kubenswrapper[4712]: I0130 18:48:57.653939 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nkns2"
Jan 30 18:48:57 crc kubenswrapper[4712]: I0130 18:48:57.654259 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nkns2"
Jan 30 18:48:58 crc kubenswrapper[4712]: I0130 18:48:58.722481 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nkns2" podUID="b789c4f1-95cf-4f60-8636-d340a98758bf" containerName="registry-server" probeResult="failure" output=<
Jan 30 18:48:58 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 18:48:58 crc kubenswrapper[4712]: >
Jan 30 18:49:08 crc kubenswrapper[4712]: I0130 18:49:08.723840 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nkns2" podUID="b789c4f1-95cf-4f60-8636-d340a98758bf" containerName="registry-server" probeResult="failure" output=<
Jan 30 18:49:08 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 18:49:08 crc kubenswrapper[4712]: >
Jan 30 18:49:18 crc kubenswrapper[4712]: I0130 18:49:18.708232 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nkns2" podUID="b789c4f1-95cf-4f60-8636-d340a98758bf" containerName="registry-server" probeResult="failure" output=<
Jan 30 18:49:18 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 18:49:18 crc kubenswrapper[4712]: >
Jan 30 18:49:27 crc kubenswrapper[4712]: I0130 18:49:27.738102 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nkns2"
Jan 30 18:49:27 crc kubenswrapper[4712]: I0130 18:49:27.813462 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nkns2"
Jan 30 18:49:27 crc kubenswrapper[4712]: I0130 18:49:27.975627 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nkns2"]
Jan 30 18:49:29 crc kubenswrapper[4712]: I0130 18:49:29.398004 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nkns2" podUID="b789c4f1-95cf-4f60-8636-d340a98758bf" containerName="registry-server" containerID="cri-o://dc0103cdf757d0e804ca895b9900082b77847c6b22347c55363f7fb79ebc8e8e" gracePeriod=2
Jan 30 18:49:29 crc kubenswrapper[4712]: I0130 18:49:29.930251 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nkns2"
Jan 30 18:49:30 crc kubenswrapper[4712]: I0130 18:49:30.046248 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjb9s\" (UniqueName: \"kubernetes.io/projected/b789c4f1-95cf-4f60-8636-d340a98758bf-kube-api-access-pjb9s\") pod \"b789c4f1-95cf-4f60-8636-d340a98758bf\" (UID: \"b789c4f1-95cf-4f60-8636-d340a98758bf\") "
Jan 30 18:49:30 crc kubenswrapper[4712]: I0130 18:49:30.046664 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b789c4f1-95cf-4f60-8636-d340a98758bf-utilities\") pod \"b789c4f1-95cf-4f60-8636-d340a98758bf\" (UID: \"b789c4f1-95cf-4f60-8636-d340a98758bf\") "
Jan 30 18:49:30 crc kubenswrapper[4712]: I0130 18:49:30.046729 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b789c4f1-95cf-4f60-8636-d340a98758bf-catalog-content\") pod \"b789c4f1-95cf-4f60-8636-d340a98758bf\" (UID: \"b789c4f1-95cf-4f60-8636-d340a98758bf\") "
Jan 30 18:49:30 crc kubenswrapper[4712]: I0130 18:49:30.047331 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b789c4f1-95cf-4f60-8636-d340a98758bf-utilities" (OuterVolumeSpecName: "utilities") pod "b789c4f1-95cf-4f60-8636-d340a98758bf" (UID: "b789c4f1-95cf-4f60-8636-d340a98758bf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 18:49:30 crc kubenswrapper[4712]: I0130 18:49:30.051352 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b789c4f1-95cf-4f60-8636-d340a98758bf-kube-api-access-pjb9s" (OuterVolumeSpecName: "kube-api-access-pjb9s") pod "b789c4f1-95cf-4f60-8636-d340a98758bf" (UID: "b789c4f1-95cf-4f60-8636-d340a98758bf"). InnerVolumeSpecName "kube-api-access-pjb9s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 18:49:30 crc kubenswrapper[4712]: I0130 18:49:30.144458 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b789c4f1-95cf-4f60-8636-d340a98758bf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b789c4f1-95cf-4f60-8636-d340a98758bf" (UID: "b789c4f1-95cf-4f60-8636-d340a98758bf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 18:49:30 crc kubenswrapper[4712]: I0130 18:49:30.149416 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjb9s\" (UniqueName: \"kubernetes.io/projected/b789c4f1-95cf-4f60-8636-d340a98758bf-kube-api-access-pjb9s\") on node \"crc\" DevicePath \"\""
Jan 30 18:49:30 crc kubenswrapper[4712]: I0130 18:49:30.149443 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b789c4f1-95cf-4f60-8636-d340a98758bf-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 18:49:30 crc kubenswrapper[4712]: I0130 18:49:30.149453 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b789c4f1-95cf-4f60-8636-d340a98758bf-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 18:49:30 crc kubenswrapper[4712]: I0130 18:49:30.406664 4712 generic.go:334] "Generic (PLEG): container finished" podID="b789c4f1-95cf-4f60-8636-d340a98758bf" containerID="dc0103cdf757d0e804ca895b9900082b77847c6b22347c55363f7fb79ebc8e8e" exitCode=0
Jan 30 18:49:30 crc kubenswrapper[4712]: I0130 18:49:30.406704 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkns2" event={"ID":"b789c4f1-95cf-4f60-8636-d340a98758bf","Type":"ContainerDied","Data":"dc0103cdf757d0e804ca895b9900082b77847c6b22347c55363f7fb79ebc8e8e"}
Jan 30 18:49:30 crc kubenswrapper[4712]: I0130 18:49:30.406725 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nkns2"
Jan 30 18:49:30 crc kubenswrapper[4712]: I0130 18:49:30.406737 4712 scope.go:117] "RemoveContainer" containerID="dc0103cdf757d0e804ca895b9900082b77847c6b22347c55363f7fb79ebc8e8e"
Jan 30 18:49:30 crc kubenswrapper[4712]: I0130 18:49:30.406727 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nkns2" event={"ID":"b789c4f1-95cf-4f60-8636-d340a98758bf","Type":"ContainerDied","Data":"161506b18bc6897339624a881f7dbfe6740f8d28417c65d1e35e5aad2703dc34"}
Jan 30 18:49:30 crc kubenswrapper[4712]: I0130 18:49:30.431611 4712 scope.go:117] "RemoveContainer" containerID="65a96279e6728b0c0394f08218c3a9a6198b27b42645cd2595c03c3b41d00a8f"
Jan 30 18:49:30 crc kubenswrapper[4712]: I0130 18:49:30.454409 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nkns2"]
Jan 30 18:49:30 crc kubenswrapper[4712]: I0130 18:49:30.462079 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nkns2"]
Jan 30 18:49:30 crc kubenswrapper[4712]: I0130 18:49:30.468530 4712 scope.go:117] "RemoveContainer" containerID="07d06f83e3e8eb1f1433daeb838eb99109c6e0e7fa47ac228c0cb1fce116d3d6"
Jan 30 18:49:30 crc kubenswrapper[4712]: I0130 18:49:30.496128 4712 scope.go:117] "RemoveContainer" containerID="dc0103cdf757d0e804ca895b9900082b77847c6b22347c55363f7fb79ebc8e8e"
Jan 30 18:49:30 crc kubenswrapper[4712]: E0130 18:49:30.496533 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc0103cdf757d0e804ca895b9900082b77847c6b22347c55363f7fb79ebc8e8e\": container with ID starting with dc0103cdf757d0e804ca895b9900082b77847c6b22347c55363f7fb79ebc8e8e not found: ID does not exist" containerID="dc0103cdf757d0e804ca895b9900082b77847c6b22347c55363f7fb79ebc8e8e"
Jan 30 18:49:30 crc kubenswrapper[4712]: I0130 18:49:30.496562 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc0103cdf757d0e804ca895b9900082b77847c6b22347c55363f7fb79ebc8e8e"} err="failed to get container status \"dc0103cdf757d0e804ca895b9900082b77847c6b22347c55363f7fb79ebc8e8e\": rpc error: code = NotFound desc = could not find container \"dc0103cdf757d0e804ca895b9900082b77847c6b22347c55363f7fb79ebc8e8e\": container with ID starting with dc0103cdf757d0e804ca895b9900082b77847c6b22347c55363f7fb79ebc8e8e not found: ID does not exist"
Jan 30 18:49:30 crc kubenswrapper[4712]: I0130 18:49:30.496583 4712 scope.go:117] "RemoveContainer" containerID="65a96279e6728b0c0394f08218c3a9a6198b27b42645cd2595c03c3b41d00a8f"
Jan 30 18:49:30 crc kubenswrapper[4712]: E0130 18:49:30.497094 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65a96279e6728b0c0394f08218c3a9a6198b27b42645cd2595c03c3b41d00a8f\": container with ID starting with 65a96279e6728b0c0394f08218c3a9a6198b27b42645cd2595c03c3b41d00a8f not found: ID does not exist" containerID="65a96279e6728b0c0394f08218c3a9a6198b27b42645cd2595c03c3b41d00a8f"
Jan 30 18:49:30 crc kubenswrapper[4712]: I0130 18:49:30.497114 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65a96279e6728b0c0394f08218c3a9a6198b27b42645cd2595c03c3b41d00a8f"} err="failed to get container status \"65a96279e6728b0c0394f08218c3a9a6198b27b42645cd2595c03c3b41d00a8f\": rpc error: code = NotFound desc = could not find container \"65a96279e6728b0c0394f08218c3a9a6198b27b42645cd2595c03c3b41d00a8f\": container with ID starting with 65a96279e6728b0c0394f08218c3a9a6198b27b42645cd2595c03c3b41d00a8f not found: ID does not exist"
Jan 30 18:49:30 crc kubenswrapper[4712]: I0130 18:49:30.497127 4712 scope.go:117] "RemoveContainer" containerID="07d06f83e3e8eb1f1433daeb838eb99109c6e0e7fa47ac228c0cb1fce116d3d6"
Jan 30 18:49:30 crc kubenswrapper[4712]: E0130 18:49:30.497416 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07d06f83e3e8eb1f1433daeb838eb99109c6e0e7fa47ac228c0cb1fce116d3d6\": container with ID starting with 07d06f83e3e8eb1f1433daeb838eb99109c6e0e7fa47ac228c0cb1fce116d3d6 not found: ID does not exist" containerID="07d06f83e3e8eb1f1433daeb838eb99109c6e0e7fa47ac228c0cb1fce116d3d6"
Jan 30 18:49:30 crc kubenswrapper[4712]: I0130 18:49:30.497437 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07d06f83e3e8eb1f1433daeb838eb99109c6e0e7fa47ac228c0cb1fce116d3d6"} err="failed to get container status \"07d06f83e3e8eb1f1433daeb838eb99109c6e0e7fa47ac228c0cb1fce116d3d6\": rpc error: code = NotFound desc = could not find container \"07d06f83e3e8eb1f1433daeb838eb99109c6e0e7fa47ac228c0cb1fce116d3d6\": container with ID starting with 07d06f83e3e8eb1f1433daeb838eb99109c6e0e7fa47ac228c0cb1fce116d3d6 not found: ID does not exist"
Jan 30 18:49:31 crc kubenswrapper[4712]: I0130 18:49:31.813904 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b789c4f1-95cf-4f60-8636-d340a98758bf" path="/var/lib/kubelet/pods/b789c4f1-95cf-4f60-8636-d340a98758bf/volumes"
Jan 30 18:49:36 crc kubenswrapper[4712]: I0130 18:49:36.271129 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 18:49:36 crc kubenswrapper[4712]: I0130 18:49:36.271734 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 18:50:06 crc kubenswrapper[4712]: I0130 18:50:06.270890 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 18:50:06 crc kubenswrapper[4712]: I0130 18:50:06.271517 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 18:50:36 crc kubenswrapper[4712]: I0130 18:50:36.271742 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 18:50:36 crc kubenswrapper[4712]: I0130 18:50:36.272249 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 18:50:36 crc kubenswrapper[4712]: I0130 18:50:36.272285 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7"
Jan 30 18:50:36 crc kubenswrapper[4712]: I0130 18:50:36.272929 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d6452fcf102ee8c40a2bde4f469ce25f7f64d382adae3c1cca7d159ec30290eb"} pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 18:50:36 crc kubenswrapper[4712]: I0130 18:50:36.272974 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" containerID="cri-o://d6452fcf102ee8c40a2bde4f469ce25f7f64d382adae3c1cca7d159ec30290eb" gracePeriod=600
Jan 30 18:50:37 crc kubenswrapper[4712]: I0130 18:50:37.060830 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerDied","Data":"d6452fcf102ee8c40a2bde4f469ce25f7f64d382adae3c1cca7d159ec30290eb"}
Jan 30 18:50:37 crc kubenswrapper[4712]: I0130 18:50:37.061194 4712 scope.go:117] "RemoveContainer" containerID="ffefb42f903798bc1d834430f92372f13a61068f2aecfe189f32e02747461feb"
Jan 30 18:50:37 crc kubenswrapper[4712]: I0130 18:50:37.060829 4712 generic.go:334] "Generic (PLEG): container finished" podID="75ff6334-72a0-4748-bba6-0efb493c8033" containerID="d6452fcf102ee8c40a2bde4f469ce25f7f64d382adae3c1cca7d159ec30290eb" exitCode=0
Jan 30 18:50:37 crc kubenswrapper[4712]: I0130 18:50:37.061570 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerStarted","Data":"46b29d6e1cd074134b6233a7b1c1425865a73376deb7780b7bd6be5ba0507c52"}
Jan 30 18:52:36 crc kubenswrapper[4712]: I0130 18:52:36.271275 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 18:52:36 crc kubenswrapper[4712]: I0130 18:52:36.271859 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 18:53:06 crc kubenswrapper[4712]: I0130 18:53:06.270970 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 18:53:06 crc kubenswrapper[4712]: I0130 18:53:06.271828 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 18:53:36 crc kubenswrapper[4712]: I0130 18:53:36.271481 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 18:53:36 crc kubenswrapper[4712]: I0130 18:53:36.272183 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 18:53:36 crc kubenswrapper[4712]: I0130 18:53:36.272247 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7"
Jan 30 18:53:36 crc kubenswrapper[4712]: I0130 18:53:36.273218 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"46b29d6e1cd074134b6233a7b1c1425865a73376deb7780b7bd6be5ba0507c52"} pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 18:53:36 crc kubenswrapper[4712]: I0130 18:53:36.273316 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" containerID="cri-o://46b29d6e1cd074134b6233a7b1c1425865a73376deb7780b7bd6be5ba0507c52" gracePeriod=600
Jan 30 18:53:36 crc kubenswrapper[4712]: E0130 18:53:36.396906 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 18:53:36 crc kubenswrapper[4712]: I0130 18:53:36.977913 4712 generic.go:334] "Generic (PLEG): container finished" podID="75ff6334-72a0-4748-bba6-0efb493c8033" containerID="46b29d6e1cd074134b6233a7b1c1425865a73376deb7780b7bd6be5ba0507c52" exitCode=0
Jan 30 18:53:36 crc kubenswrapper[4712]: I0130 18:53:36.977978 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerDied","Data":"46b29d6e1cd074134b6233a7b1c1425865a73376deb7780b7bd6be5ba0507c52"}
Jan 30 18:53:36 crc kubenswrapper[4712]: I0130 18:53:36.978297 4712 scope.go:117] "RemoveContainer" containerID="d6452fcf102ee8c40a2bde4f469ce25f7f64d382adae3c1cca7d159ec30290eb"
Jan 30 18:53:36 crc kubenswrapper[4712]: I0130 18:53:36.979048 4712 scope.go:117] "RemoveContainer" containerID="46b29d6e1cd074134b6233a7b1c1425865a73376deb7780b7bd6be5ba0507c52"
Jan 30 18:53:36 crc kubenswrapper[4712]: E0130 18:53:36.979367 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 18:53:49 crc kubenswrapper[4712]: I0130 18:53:49.531908 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4n969"]
Jan 30 18:53:49 crc kubenswrapper[4712]: E0130 18:53:49.532770 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b789c4f1-95cf-4f60-8636-d340a98758bf" containerName="extract-content"
Jan 30 18:53:49 crc kubenswrapper[4712]: I0130 18:53:49.532782 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="b789c4f1-95cf-4f60-8636-d340a98758bf" containerName="extract-content"
Jan 30 18:53:49 crc kubenswrapper[4712]: E0130 18:53:49.532794 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b789c4f1-95cf-4f60-8636-d340a98758bf" containerName="registry-server"
Jan 30 18:53:49 crc kubenswrapper[4712]: I0130 18:53:49.532817 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="b789c4f1-95cf-4f60-8636-d340a98758bf" containerName="registry-server"
Jan 30 18:53:49 crc kubenswrapper[4712]: E0130 18:53:49.532841 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b789c4f1-95cf-4f60-8636-d340a98758bf" containerName="extract-utilities"
Jan 30 18:53:49 crc kubenswrapper[4712]: I0130 18:53:49.532847 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="b789c4f1-95cf-4f60-8636-d340a98758bf" containerName="extract-utilities"
Jan 30 18:53:49 crc kubenswrapper[4712]: I0130 18:53:49.533017 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="b789c4f1-95cf-4f60-8636-d340a98758bf" containerName="registry-server"
Jan 30 18:53:49 crc kubenswrapper[4712]: I0130 18:53:49.534514 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4n969"
Jan 30 18:53:49 crc kubenswrapper[4712]: I0130 18:53:49.544608 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4n969"]
Jan 30 18:53:49 crc kubenswrapper[4712]: I0130 18:53:49.645563 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9c85da5-ba58-44cc-bc29-075a80b1e8ca-catalog-content\") pod \"redhat-marketplace-4n969\" (UID: \"e9c85da5-ba58-44cc-bc29-075a80b1e8ca\") " pod="openshift-marketplace/redhat-marketplace-4n969"
Jan 30 18:53:49 crc kubenswrapper[4712]: I0130 18:53:49.645717 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn65m\" (UniqueName: \"kubernetes.io/projected/e9c85da5-ba58-44cc-bc29-075a80b1e8ca-kube-api-access-xn65m\") pod \"redhat-marketplace-4n969\" (UID: \"e9c85da5-ba58-44cc-bc29-075a80b1e8ca\") " pod="openshift-marketplace/redhat-marketplace-4n969"
Jan 30 18:53:49 crc kubenswrapper[4712]: I0130 18:53:49.645767 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9c85da5-ba58-44cc-bc29-075a80b1e8ca-utilities\") pod \"redhat-marketplace-4n969\" (UID: \"e9c85da5-ba58-44cc-bc29-075a80b1e8ca\") " pod="openshift-marketplace/redhat-marketplace-4n969"
Jan 30 18:53:49 crc kubenswrapper[4712]: I0130 18:53:49.748002 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9c85da5-ba58-44cc-bc29-075a80b1e8ca-utilities\") pod \"redhat-marketplace-4n969\" (UID: \"e9c85da5-ba58-44cc-bc29-075a80b1e8ca\") " pod="openshift-marketplace/redhat-marketplace-4n969"
Jan 30 18:53:49 crc kubenswrapper[4712]: I0130 18:53:49.748092 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9c85da5-ba58-44cc-bc29-075a80b1e8ca-catalog-content\") pod \"redhat-marketplace-4n969\" (UID: \"e9c85da5-ba58-44cc-bc29-075a80b1e8ca\") " pod="openshift-marketplace/redhat-marketplace-4n969"
Jan 30 18:53:49 crc kubenswrapper[4712]: I0130 18:53:49.748236 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xn65m\" (UniqueName: \"kubernetes.io/projected/e9c85da5-ba58-44cc-bc29-075a80b1e8ca-kube-api-access-xn65m\") pod \"redhat-marketplace-4n969\" (UID: \"e9c85da5-ba58-44cc-bc29-075a80b1e8ca\") " pod="openshift-marketplace/redhat-marketplace-4n969"
Jan 30 18:53:49 crc kubenswrapper[4712]: I0130 18:53:49.749082 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9c85da5-ba58-44cc-bc29-075a80b1e8ca-utilities\") pod \"redhat-marketplace-4n969\" (UID: \"e9c85da5-ba58-44cc-bc29-075a80b1e8ca\") " pod="openshift-marketplace/redhat-marketplace-4n969"
Jan 30 18:53:49 crc kubenswrapper[4712]: I0130 18:53:49.749293 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9c85da5-ba58-44cc-bc29-075a80b1e8ca-catalog-content\") pod \"redhat-marketplace-4n969\" (UID: \"e9c85da5-ba58-44cc-bc29-075a80b1e8ca\") " pod="openshift-marketplace/redhat-marketplace-4n969"
Jan 30 18:53:49 crc kubenswrapper[4712]: I0130 18:53:49.769639 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn65m\" (UniqueName: \"kubernetes.io/projected/e9c85da5-ba58-44cc-bc29-075a80b1e8ca-kube-api-access-xn65m\") pod \"redhat-marketplace-4n969\" (UID: \"e9c85da5-ba58-44cc-bc29-075a80b1e8ca\") " pod="openshift-marketplace/redhat-marketplace-4n969"
Jan 30 18:53:49 crc kubenswrapper[4712]: I0130 18:53:49.856654 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4n969"
Jan 30 18:53:50 crc kubenswrapper[4712]: I0130 18:53:50.338588 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4n969"]
Jan 30 18:53:50 crc kubenswrapper[4712]: I0130 18:53:50.800093 4712 scope.go:117] "RemoveContainer" containerID="46b29d6e1cd074134b6233a7b1c1425865a73376deb7780b7bd6be5ba0507c52"
Jan 30 18:53:50 crc kubenswrapper[4712]: E0130 18:53:50.800769 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 18:53:51 crc kubenswrapper[4712]: I0130 18:53:51.116754 4712 generic.go:334] "Generic (PLEG): container finished" podID="e9c85da5-ba58-44cc-bc29-075a80b1e8ca" containerID="df4af23095f18759fe941c3b811a2be28693ba9d205e326c05d2ff9e305a5bbf" exitCode=0
Jan 30 18:53:51 crc kubenswrapper[4712]: I0130 18:53:51.116923 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4n969" event={"ID":"e9c85da5-ba58-44cc-bc29-075a80b1e8ca","Type":"ContainerDied","Data":"df4af23095f18759fe941c3b811a2be28693ba9d205e326c05d2ff9e305a5bbf"}
Jan 30 18:53:51 crc kubenswrapper[4712]: I0130 18:53:51.116974 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4n969" event={"ID":"e9c85da5-ba58-44cc-bc29-075a80b1e8ca","Type":"ContainerStarted","Data":"f16682edb5a364eb80b8debe4511328d0061974d81feb26ca63ec7b4e3283698"}
Jan 30 18:53:52 crc kubenswrapper[4712]: I0130 18:53:52.131198 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4n969" event={"ID":"e9c85da5-ba58-44cc-bc29-075a80b1e8ca","Type":"ContainerStarted","Data":"f0203afe6f29084364c66dbb63e59119ebecb1df74fb5e82a160660d4502089e"}
Jan 30 18:53:53 crc kubenswrapper[4712]: I0130 18:53:53.142010 4712 generic.go:334] "Generic (PLEG): container finished" podID="e9c85da5-ba58-44cc-bc29-075a80b1e8ca" containerID="f0203afe6f29084364c66dbb63e59119ebecb1df74fb5e82a160660d4502089e" exitCode=0
Jan 30 18:53:53 crc kubenswrapper[4712]: I0130 18:53:53.142057 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4n969" event={"ID":"e9c85da5-ba58-44cc-bc29-075a80b1e8ca","Type":"ContainerDied","Data":"f0203afe6f29084364c66dbb63e59119ebecb1df74fb5e82a160660d4502089e"}
Jan 30 18:53:54 crc kubenswrapper[4712]: I0130 18:53:54.156411 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4n969" event={"ID":"e9c85da5-ba58-44cc-bc29-075a80b1e8ca","Type":"ContainerStarted","Data":"e5b128751bedbd4dbb363590d5d2d48c5d4678104622469de37c015bdd3c147d"}
Jan 30 18:53:54 crc kubenswrapper[4712]: I0130 18:53:54.197639 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4n969" podStartSLOduration=2.549290483 podStartE2EDuration="5.1976089s" podCreationTimestamp="2026-01-30 18:53:49 +0000 UTC" firstStartedPulling="2026-01-30 18:53:51.119072314 +0000 UTC m=+7168.026081813" lastFinishedPulling="2026-01-30 18:53:53.767390761 +0000 UTC m=+7170.674400230" observedRunningTime="2026-01-30 18:53:54.183187832 +0000 UTC m=+7171.090197331" watchObservedRunningTime="2026-01-30 18:53:54.1976089 +0000 UTC m=+7171.104618379"
Jan 30 18:53:59 crc kubenswrapper[4712]: I0130 18:53:59.857579 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4n969"
Jan 30 18:53:59 crc kubenswrapper[4712]: I0130 18:53:59.858178 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4n969"
Jan 30 18:53:59 crc kubenswrapper[4712]: I0130 18:53:59.909735 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4n969"
Jan 30 18:54:00 crc kubenswrapper[4712]: I0130 18:54:00.294394 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4n969"
Jan 30 18:54:00 crc kubenswrapper[4712]: I0130 18:54:00.368961 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4n969"]
Jan 30 18:54:02 crc kubenswrapper[4712]: I0130 18:54:02.247125 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4n969" podUID="e9c85da5-ba58-44cc-bc29-075a80b1e8ca" containerName="registry-server" containerID="cri-o://e5b128751bedbd4dbb363590d5d2d48c5d4678104622469de37c015bdd3c147d" gracePeriod=2
Jan 30 18:54:02 crc kubenswrapper[4712]: I0130 18:54:02.807837 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4n969"
Jan 30 18:54:02 crc kubenswrapper[4712]: I0130 18:54:02.971126 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9c85da5-ba58-44cc-bc29-075a80b1e8ca-catalog-content\") pod \"e9c85da5-ba58-44cc-bc29-075a80b1e8ca\" (UID: \"e9c85da5-ba58-44cc-bc29-075a80b1e8ca\") "
Jan 30 18:54:02 crc kubenswrapper[4712]: I0130 18:54:02.971557 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xn65m\" (UniqueName: \"kubernetes.io/projected/e9c85da5-ba58-44cc-bc29-075a80b1e8ca-kube-api-access-xn65m\") pod \"e9c85da5-ba58-44cc-bc29-075a80b1e8ca\" (UID: \"e9c85da5-ba58-44cc-bc29-075a80b1e8ca\") "
Jan 30 18:54:02 crc kubenswrapper[4712]: I0130 18:54:02.971588 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9c85da5-ba58-44cc-bc29-075a80b1e8ca-utilities\") pod \"e9c85da5-ba58-44cc-bc29-075a80b1e8ca\" (UID: \"e9c85da5-ba58-44cc-bc29-075a80b1e8ca\") "
Jan 30 18:54:02 crc kubenswrapper[4712]: I0130 18:54:02.974308 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9c85da5-ba58-44cc-bc29-075a80b1e8ca-utilities" (OuterVolumeSpecName: "utilities") pod "e9c85da5-ba58-44cc-bc29-075a80b1e8ca" (UID: "e9c85da5-ba58-44cc-bc29-075a80b1e8ca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 18:54:02 crc kubenswrapper[4712]: I0130 18:54:02.985139 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9c85da5-ba58-44cc-bc29-075a80b1e8ca-kube-api-access-xn65m" (OuterVolumeSpecName: "kube-api-access-xn65m") pod "e9c85da5-ba58-44cc-bc29-075a80b1e8ca" (UID: "e9c85da5-ba58-44cc-bc29-075a80b1e8ca"). InnerVolumeSpecName "kube-api-access-xn65m". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 18:54:02 crc kubenswrapper[4712]: I0130 18:54:02.997415 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9c85da5-ba58-44cc-bc29-075a80b1e8ca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9c85da5-ba58-44cc-bc29-075a80b1e8ca" (UID: "e9c85da5-ba58-44cc-bc29-075a80b1e8ca"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 18:54:03 crc kubenswrapper[4712]: I0130 18:54:03.074041 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9c85da5-ba58-44cc-bc29-075a80b1e8ca-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 18:54:03 crc kubenswrapper[4712]: I0130 18:54:03.074071 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xn65m\" (UniqueName: \"kubernetes.io/projected/e9c85da5-ba58-44cc-bc29-075a80b1e8ca-kube-api-access-xn65m\") on node \"crc\" DevicePath \"\""
Jan 30 18:54:03 crc kubenswrapper[4712]: I0130 18:54:03.074080 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9c85da5-ba58-44cc-bc29-075a80b1e8ca-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 18:54:03 crc kubenswrapper[4712]: I0130 18:54:03.258714 4712 generic.go:334] "Generic (PLEG): container finished" podID="e9c85da5-ba58-44cc-bc29-075a80b1e8ca" containerID="e5b128751bedbd4dbb363590d5d2d48c5d4678104622469de37c015bdd3c147d" exitCode=0
Jan 30 18:54:03 crc kubenswrapper[4712]: I0130 18:54:03.258755 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4n969" event={"ID":"e9c85da5-ba58-44cc-bc29-075a80b1e8ca","Type":"ContainerDied","Data":"e5b128751bedbd4dbb363590d5d2d48c5d4678104622469de37c015bdd3c147d"}
Jan 30 18:54:03 crc kubenswrapper[4712]: I0130 18:54:03.258781 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4n969" event={"ID":"e9c85da5-ba58-44cc-bc29-075a80b1e8ca","Type":"ContainerDied","Data":"f16682edb5a364eb80b8debe4511328d0061974d81feb26ca63ec7b4e3283698"}
Jan 30 18:54:03 crc kubenswrapper[4712]: I0130 18:54:03.258784 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4n969"
Jan 30 18:54:03 crc kubenswrapper[4712]: I0130 18:54:03.258811 4712 scope.go:117] "RemoveContainer" containerID="e5b128751bedbd4dbb363590d5d2d48c5d4678104622469de37c015bdd3c147d"
Jan 30 18:54:03 crc kubenswrapper[4712]: I0130 18:54:03.290278 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4n969"]
Jan 30 18:54:03 crc kubenswrapper[4712]: I0130 18:54:03.307547 4712 scope.go:117] "RemoveContainer" containerID="f0203afe6f29084364c66dbb63e59119ebecb1df74fb5e82a160660d4502089e"
Jan 30 18:54:03 crc kubenswrapper[4712]: I0130 18:54:03.311146 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4n969"]
Jan 30 18:54:03 crc kubenswrapper[4712]: I0130 18:54:03.331324 4712 scope.go:117] "RemoveContainer" containerID="df4af23095f18759fe941c3b811a2be28693ba9d205e326c05d2ff9e305a5bbf"
Jan 30 18:54:03 crc kubenswrapper[4712]: I0130 18:54:03.390393 4712 scope.go:117] "RemoveContainer" containerID="e5b128751bedbd4dbb363590d5d2d48c5d4678104622469de37c015bdd3c147d"
Jan 30 18:54:03 crc kubenswrapper[4712]: E0130 18:54:03.391098 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5b128751bedbd4dbb363590d5d2d48c5d4678104622469de37c015bdd3c147d\": container with ID starting with e5b128751bedbd4dbb363590d5d2d48c5d4678104622469de37c015bdd3c147d not found: ID does not exist" containerID="e5b128751bedbd4dbb363590d5d2d48c5d4678104622469de37c015bdd3c147d"
Jan 30 18:54:03 crc kubenswrapper[4712]: I0130 18:54:03.391277 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5b128751bedbd4dbb363590d5d2d48c5d4678104622469de37c015bdd3c147d"} err="failed to get container status \"e5b128751bedbd4dbb363590d5d2d48c5d4678104622469de37c015bdd3c147d\": rpc error: code = NotFound desc = could not find container \"e5b128751bedbd4dbb363590d5d2d48c5d4678104622469de37c015bdd3c147d\": container with ID starting with e5b128751bedbd4dbb363590d5d2d48c5d4678104622469de37c015bdd3c147d not found: ID does not exist"
Jan 30 18:54:03 crc kubenswrapper[4712]: I0130 18:54:03.391341 4712 scope.go:117] "RemoveContainer" containerID="f0203afe6f29084364c66dbb63e59119ebecb1df74fb5e82a160660d4502089e"
Jan 30 18:54:03 crc kubenswrapper[4712]: E0130 18:54:03.392072 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f0203afe6f29084364c66dbb63e59119ebecb1df74fb5e82a160660d4502089e\": container with ID starting with f0203afe6f29084364c66dbb63e59119ebecb1df74fb5e82a160660d4502089e not found: ID does not exist" containerID="f0203afe6f29084364c66dbb63e59119ebecb1df74fb5e82a160660d4502089e"
Jan 30 18:54:03 crc kubenswrapper[4712]: I0130 18:54:03.392095 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f0203afe6f29084364c66dbb63e59119ebecb1df74fb5e82a160660d4502089e"} err="failed to get container status \"f0203afe6f29084364c66dbb63e59119ebecb1df74fb5e82a160660d4502089e\": rpc error: code = NotFound desc = could not find container \"f0203afe6f29084364c66dbb63e59119ebecb1df74fb5e82a160660d4502089e\": container with ID starting with f0203afe6f29084364c66dbb63e59119ebecb1df74fb5e82a160660d4502089e not found: ID does not exist"
Jan 30 18:54:03 crc kubenswrapper[4712]: I0130 18:54:03.392110 4712 scope.go:117] "RemoveContainer" containerID="df4af23095f18759fe941c3b811a2be28693ba9d205e326c05d2ff9e305a5bbf"
Jan 30 18:54:03 crc kubenswrapper[4712]: E0130 18:54:03.392613 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df4af23095f18759fe941c3b811a2be28693ba9d205e326c05d2ff9e305a5bbf\": container with ID starting with df4af23095f18759fe941c3b811a2be28693ba9d205e326c05d2ff9e305a5bbf not found: ID does not exist" containerID="df4af23095f18759fe941c3b811a2be28693ba9d205e326c05d2ff9e305a5bbf"
Jan 30 18:54:03 crc kubenswrapper[4712]: I0130 18:54:03.392634 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df4af23095f18759fe941c3b811a2be28693ba9d205e326c05d2ff9e305a5bbf"} err="failed to get container status \"df4af23095f18759fe941c3b811a2be28693ba9d205e326c05d2ff9e305a5bbf\": rpc error: code = NotFound desc = could not find container \"df4af23095f18759fe941c3b811a2be28693ba9d205e326c05d2ff9e305a5bbf\": container with ID starting with df4af23095f18759fe941c3b811a2be28693ba9d205e326c05d2ff9e305a5bbf not found: ID does not exist"
Jan 30 18:54:03 crc kubenswrapper[4712]: I0130 18:54:03.820154 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9c85da5-ba58-44cc-bc29-075a80b1e8ca" path="/var/lib/kubelet/pods/e9c85da5-ba58-44cc-bc29-075a80b1e8ca/volumes"
Jan 30 18:54:04 crc kubenswrapper[4712]: I0130 18:54:04.799983 4712 scope.go:117] "RemoveContainer" containerID="46b29d6e1cd074134b6233a7b1c1425865a73376deb7780b7bd6be5ba0507c52"
Jan 30 18:54:04 crc kubenswrapper[4712]: E0130 18:54:04.802189 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 18:54:09 crc kubenswrapper[4712]: I0130 18:54:09.672362 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4nmgj"]
Jan 30 18:54:09 crc kubenswrapper[4712]: E0130 18:54:09.673486 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9c85da5-ba58-44cc-bc29-075a80b1e8ca" containerName="extract-utilities"
Jan 30 18:54:09 crc kubenswrapper[4712]: I0130 18:54:09.673505 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9c85da5-ba58-44cc-bc29-075a80b1e8ca" containerName="extract-utilities"
Jan 30 18:54:09 crc kubenswrapper[4712]: E0130 18:54:09.673524 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9c85da5-ba58-44cc-bc29-075a80b1e8ca" containerName="extract-content"
Jan 30 18:54:09 crc kubenswrapper[4712]: I0130 18:54:09.673531 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9c85da5-ba58-44cc-bc29-075a80b1e8ca" containerName="extract-content"
Jan 30 18:54:09 crc kubenswrapper[4712]: E0130 18:54:09.673550 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9c85da5-ba58-44cc-bc29-075a80b1e8ca" containerName="registry-server"
Jan 30 18:54:09 crc kubenswrapper[4712]: I0130 18:54:09.673558 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9c85da5-ba58-44cc-bc29-075a80b1e8ca" containerName="registry-server"
Jan 30 18:54:09 crc kubenswrapper[4712]: I0130 18:54:09.673811 4712 memory_manager.go:354] "RemoveStaleState removing state"
podUID="e9c85da5-ba58-44cc-bc29-075a80b1e8ca" containerName="registry-server" Jan 30 18:54:09 crc kubenswrapper[4712]: I0130 18:54:09.675733 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4nmgj" Jan 30 18:54:09 crc kubenswrapper[4712]: I0130 18:54:09.693728 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4nmgj"] Jan 30 18:54:09 crc kubenswrapper[4712]: I0130 18:54:09.737454 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ksbl\" (UniqueName: \"kubernetes.io/projected/f19793b0-28f8-4ee8-be35-416ec5fa353f-kube-api-access-5ksbl\") pod \"community-operators-4nmgj\" (UID: \"f19793b0-28f8-4ee8-be35-416ec5fa353f\") " pod="openshift-marketplace/community-operators-4nmgj" Jan 30 18:54:09 crc kubenswrapper[4712]: I0130 18:54:09.737645 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f19793b0-28f8-4ee8-be35-416ec5fa353f-utilities\") pod \"community-operators-4nmgj\" (UID: \"f19793b0-28f8-4ee8-be35-416ec5fa353f\") " pod="openshift-marketplace/community-operators-4nmgj" Jan 30 18:54:09 crc kubenswrapper[4712]: I0130 18:54:09.737714 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f19793b0-28f8-4ee8-be35-416ec5fa353f-catalog-content\") pod \"community-operators-4nmgj\" (UID: \"f19793b0-28f8-4ee8-be35-416ec5fa353f\") " pod="openshift-marketplace/community-operators-4nmgj" Jan 30 18:54:09 crc kubenswrapper[4712]: I0130 18:54:09.839296 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f19793b0-28f8-4ee8-be35-416ec5fa353f-utilities\") pod \"community-operators-4nmgj\" (UID: \"f19793b0-28f8-4ee8-be35-416ec5fa353f\") " pod="openshift-marketplace/community-operators-4nmgj" Jan 30 18:54:09 crc kubenswrapper[4712]: I0130 18:54:09.839375 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f19793b0-28f8-4ee8-be35-416ec5fa353f-catalog-content\") pod \"community-operators-4nmgj\" (UID: \"f19793b0-28f8-4ee8-be35-416ec5fa353f\") " pod="openshift-marketplace/community-operators-4nmgj" Jan 30 18:54:09 crc kubenswrapper[4712]: I0130 18:54:09.839625 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ksbl\" (UniqueName: \"kubernetes.io/projected/f19793b0-28f8-4ee8-be35-416ec5fa353f-kube-api-access-5ksbl\") pod \"community-operators-4nmgj\" (UID: \"f19793b0-28f8-4ee8-be35-416ec5fa353f\") " pod="openshift-marketplace/community-operators-4nmgj" Jan 30 18:54:09 crc kubenswrapper[4712]: I0130 18:54:09.839879 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f19793b0-28f8-4ee8-be35-416ec5fa353f-catalog-content\") pod \"community-operators-4nmgj\" (UID: \"f19793b0-28f8-4ee8-be35-416ec5fa353f\") " pod="openshift-marketplace/community-operators-4nmgj" Jan 30 18:54:09 crc kubenswrapper[4712]: I0130 18:54:09.840214 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f19793b0-28f8-4ee8-be35-416ec5fa353f-utilities\") pod \"community-operators-4nmgj\" (UID: 
\"f19793b0-28f8-4ee8-be35-416ec5fa353f\") " pod="openshift-marketplace/community-operators-4nmgj" Jan 30 18:54:09 crc kubenswrapper[4712]: I0130 18:54:09.863990 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ksbl\" (UniqueName: \"kubernetes.io/projected/f19793b0-28f8-4ee8-be35-416ec5fa353f-kube-api-access-5ksbl\") pod \"community-operators-4nmgj\" (UID: \"f19793b0-28f8-4ee8-be35-416ec5fa353f\") " pod="openshift-marketplace/community-operators-4nmgj" Jan 30 18:54:09 crc kubenswrapper[4712]: I0130 18:54:09.994987 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4nmgj" Jan 30 18:54:10 crc kubenswrapper[4712]: I0130 18:54:10.501236 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4nmgj"] Jan 30 18:54:10 crc kubenswrapper[4712]: W0130 18:54:10.503413 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf19793b0_28f8_4ee8_be35_416ec5fa353f.slice/crio-fa7b06d185fed20d8716c7d51826718687ab3fe42c193763401934ac1dd15a1d WatchSource:0}: Error finding container fa7b06d185fed20d8716c7d51826718687ab3fe42c193763401934ac1dd15a1d: Status 404 returned error can't find the container with id fa7b06d185fed20d8716c7d51826718687ab3fe42c193763401934ac1dd15a1d Jan 30 18:54:11 crc kubenswrapper[4712]: I0130 18:54:11.334625 4712 generic.go:334] "Generic (PLEG): container finished" podID="f19793b0-28f8-4ee8-be35-416ec5fa353f" containerID="bf525457111050a9557489e8d8777d4340edca8c3cec48fcbf8d9a8a412e96d9" exitCode=0 Jan 30 18:54:11 crc kubenswrapper[4712]: I0130 18:54:11.334909 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nmgj" event={"ID":"f19793b0-28f8-4ee8-be35-416ec5fa353f","Type":"ContainerDied","Data":"bf525457111050a9557489e8d8777d4340edca8c3cec48fcbf8d9a8a412e96d9"} Jan 30 18:54:11 crc kubenswrapper[4712]: I0130 18:54:11.334935 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nmgj" event={"ID":"f19793b0-28f8-4ee8-be35-416ec5fa353f","Type":"ContainerStarted","Data":"fa7b06d185fed20d8716c7d51826718687ab3fe42c193763401934ac1dd15a1d"} Jan 30 18:54:11 crc kubenswrapper[4712]: I0130 18:54:11.336846 4712 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 18:54:12 crc kubenswrapper[4712]: I0130 18:54:12.350118 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nmgj" event={"ID":"f19793b0-28f8-4ee8-be35-416ec5fa353f","Type":"ContainerStarted","Data":"25a0c44a4d958e1d515cea45ccc500546567dc034bb94e3c65decd20ef63b77f"} Jan 30 18:54:14 crc kubenswrapper[4712]: I0130 18:54:14.378923 4712 generic.go:334] "Generic (PLEG): container finished" podID="f19793b0-28f8-4ee8-be35-416ec5fa353f" containerID="25a0c44a4d958e1d515cea45ccc500546567dc034bb94e3c65decd20ef63b77f" exitCode=0 Jan 30 18:54:14 crc kubenswrapper[4712]: I0130 18:54:14.379052 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nmgj" event={"ID":"f19793b0-28f8-4ee8-be35-416ec5fa353f","Type":"ContainerDied","Data":"25a0c44a4d958e1d515cea45ccc500546567dc034bb94e3c65decd20ef63b77f"} Jan 30 18:54:15 crc kubenswrapper[4712]: I0130 18:54:15.389991 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nmgj" 
event={"ID":"f19793b0-28f8-4ee8-be35-416ec5fa353f","Type":"ContainerStarted","Data":"fcd9cbec2a677d9dc9d707898ba339c4c06c7dce32fcc3c906e4ca4225fd429f"} Jan 30 18:54:15 crc kubenswrapper[4712]: I0130 18:54:15.414675 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4nmgj" podStartSLOduration=2.957218537 podStartE2EDuration="6.414658563s" podCreationTimestamp="2026-01-30 18:54:09 +0000 UTC" firstStartedPulling="2026-01-30 18:54:11.33656944 +0000 UTC m=+7188.243578909" lastFinishedPulling="2026-01-30 18:54:14.794009466 +0000 UTC m=+7191.701018935" observedRunningTime="2026-01-30 18:54:15.409151969 +0000 UTC m=+7192.316161448" watchObservedRunningTime="2026-01-30 18:54:15.414658563 +0000 UTC m=+7192.321668032" Jan 30 18:54:15 crc kubenswrapper[4712]: I0130 18:54:15.799936 4712 scope.go:117] "RemoveContainer" containerID="46b29d6e1cd074134b6233a7b1c1425865a73376deb7780b7bd6be5ba0507c52" Jan 30 18:54:15 crc kubenswrapper[4712]: E0130 18:54:15.800487 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:54:19 crc kubenswrapper[4712]: I0130 18:54:19.995512 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4nmgj" Jan 30 18:54:19 crc kubenswrapper[4712]: I0130 18:54:19.995911 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4nmgj" Jan 30 18:54:20 crc kubenswrapper[4712]: I0130 18:54:20.060822 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4nmgj" Jan 30 18:54:20 crc kubenswrapper[4712]: I0130 18:54:20.509943 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4nmgj" Jan 30 18:54:20 crc kubenswrapper[4712]: I0130 18:54:20.710374 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4nmgj"] Jan 30 18:54:22 crc kubenswrapper[4712]: I0130 18:54:22.461892 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4nmgj" podUID="f19793b0-28f8-4ee8-be35-416ec5fa353f" containerName="registry-server" containerID="cri-o://fcd9cbec2a677d9dc9d707898ba339c4c06c7dce32fcc3c906e4ca4225fd429f" gracePeriod=2 Jan 30 18:54:23 crc kubenswrapper[4712]: I0130 18:54:23.015474 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4nmgj" Jan 30 18:54:23 crc kubenswrapper[4712]: I0130 18:54:23.128421 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f19793b0-28f8-4ee8-be35-416ec5fa353f-utilities\") pod \"f19793b0-28f8-4ee8-be35-416ec5fa353f\" (UID: \"f19793b0-28f8-4ee8-be35-416ec5fa353f\") " Jan 30 18:54:23 crc kubenswrapper[4712]: I0130 18:54:23.128680 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ksbl\" (UniqueName: \"kubernetes.io/projected/f19793b0-28f8-4ee8-be35-416ec5fa353f-kube-api-access-5ksbl\") pod \"f19793b0-28f8-4ee8-be35-416ec5fa353f\" (UID: \"f19793b0-28f8-4ee8-be35-416ec5fa353f\") " Jan 30 18:54:23 crc kubenswrapper[4712]: I0130 18:54:23.128718 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f19793b0-28f8-4ee8-be35-416ec5fa353f-catalog-content\") pod \"f19793b0-28f8-4ee8-be35-416ec5fa353f\" (UID: \"f19793b0-28f8-4ee8-be35-416ec5fa353f\") " Jan 30 18:54:23 crc kubenswrapper[4712]: I0130 18:54:23.136515 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f19793b0-28f8-4ee8-be35-416ec5fa353f-utilities" (OuterVolumeSpecName: "utilities") pod "f19793b0-28f8-4ee8-be35-416ec5fa353f" (UID: "f19793b0-28f8-4ee8-be35-416ec5fa353f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:54:23 crc kubenswrapper[4712]: I0130 18:54:23.144249 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f19793b0-28f8-4ee8-be35-416ec5fa353f-kube-api-access-5ksbl" (OuterVolumeSpecName: "kube-api-access-5ksbl") pod "f19793b0-28f8-4ee8-be35-416ec5fa353f" (UID: "f19793b0-28f8-4ee8-be35-416ec5fa353f"). InnerVolumeSpecName "kube-api-access-5ksbl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:54:23 crc kubenswrapper[4712]: I0130 18:54:23.240468 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f19793b0-28f8-4ee8-be35-416ec5fa353f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 18:54:23 crc kubenswrapper[4712]: I0130 18:54:23.240498 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ksbl\" (UniqueName: \"kubernetes.io/projected/f19793b0-28f8-4ee8-be35-416ec5fa353f-kube-api-access-5ksbl\") on node \"crc\" DevicePath \"\"" Jan 30 18:54:23 crc kubenswrapper[4712]: I0130 18:54:23.256707 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f19793b0-28f8-4ee8-be35-416ec5fa353f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f19793b0-28f8-4ee8-be35-416ec5fa353f" (UID: "f19793b0-28f8-4ee8-be35-416ec5fa353f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:54:23 crc kubenswrapper[4712]: I0130 18:54:23.342595 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f19793b0-28f8-4ee8-be35-416ec5fa353f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 18:54:23 crc kubenswrapper[4712]: I0130 18:54:23.472087 4712 generic.go:334] "Generic (PLEG): container finished" podID="f19793b0-28f8-4ee8-be35-416ec5fa353f" containerID="fcd9cbec2a677d9dc9d707898ba339c4c06c7dce32fcc3c906e4ca4225fd429f" exitCode=0 Jan 30 18:54:23 crc kubenswrapper[4712]: I0130 18:54:23.472306 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nmgj" event={"ID":"f19793b0-28f8-4ee8-be35-416ec5fa353f","Type":"ContainerDied","Data":"fcd9cbec2a677d9dc9d707898ba339c4c06c7dce32fcc3c906e4ca4225fd429f"} Jan 30 18:54:23 crc kubenswrapper[4712]: I0130 18:54:23.472344 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4nmgj" event={"ID":"f19793b0-28f8-4ee8-be35-416ec5fa353f","Type":"ContainerDied","Data":"fa7b06d185fed20d8716c7d51826718687ab3fe42c193763401934ac1dd15a1d"} Jan 30 18:54:23 crc kubenswrapper[4712]: I0130 18:54:23.472365 4712 scope.go:117] "RemoveContainer" containerID="fcd9cbec2a677d9dc9d707898ba339c4c06c7dce32fcc3c906e4ca4225fd429f" Jan 30 18:54:23 crc kubenswrapper[4712]: I0130 18:54:23.472495 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4nmgj" Jan 30 18:54:23 crc kubenswrapper[4712]: I0130 18:54:23.505071 4712 scope.go:117] "RemoveContainer" containerID="25a0c44a4d958e1d515cea45ccc500546567dc034bb94e3c65decd20ef63b77f" Jan 30 18:54:23 crc kubenswrapper[4712]: I0130 18:54:23.510574 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4nmgj"] Jan 30 18:54:23 crc kubenswrapper[4712]: I0130 18:54:23.522515 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4nmgj"] Jan 30 18:54:23 crc kubenswrapper[4712]: I0130 18:54:23.525433 4712 scope.go:117] "RemoveContainer" containerID="bf525457111050a9557489e8d8777d4340edca8c3cec48fcbf8d9a8a412e96d9" Jan 30 18:54:23 crc kubenswrapper[4712]: I0130 18:54:23.561367 4712 scope.go:117] "RemoveContainer" containerID="fcd9cbec2a677d9dc9d707898ba339c4c06c7dce32fcc3c906e4ca4225fd429f" Jan 30 18:54:23 crc kubenswrapper[4712]: E0130 18:54:23.561920 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fcd9cbec2a677d9dc9d707898ba339c4c06c7dce32fcc3c906e4ca4225fd429f\": container with ID starting with fcd9cbec2a677d9dc9d707898ba339c4c06c7dce32fcc3c906e4ca4225fd429f not found: ID does not exist" containerID="fcd9cbec2a677d9dc9d707898ba339c4c06c7dce32fcc3c906e4ca4225fd429f" Jan 30 18:54:23 crc kubenswrapper[4712]: I0130 18:54:23.561962 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcd9cbec2a677d9dc9d707898ba339c4c06c7dce32fcc3c906e4ca4225fd429f"} err="failed to get container status \"fcd9cbec2a677d9dc9d707898ba339c4c06c7dce32fcc3c906e4ca4225fd429f\": rpc error: code = NotFound desc = could not find container \"fcd9cbec2a677d9dc9d707898ba339c4c06c7dce32fcc3c906e4ca4225fd429f\": container with ID starting with fcd9cbec2a677d9dc9d707898ba339c4c06c7dce32fcc3c906e4ca4225fd429f not found: ID does not exist" Jan 30 
18:54:23 crc kubenswrapper[4712]: I0130 18:54:23.561989 4712 scope.go:117] "RemoveContainer" containerID="25a0c44a4d958e1d515cea45ccc500546567dc034bb94e3c65decd20ef63b77f" Jan 30 18:54:23 crc kubenswrapper[4712]: E0130 18:54:23.562286 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25a0c44a4d958e1d515cea45ccc500546567dc034bb94e3c65decd20ef63b77f\": container with ID starting with 25a0c44a4d958e1d515cea45ccc500546567dc034bb94e3c65decd20ef63b77f not found: ID does not exist" containerID="25a0c44a4d958e1d515cea45ccc500546567dc034bb94e3c65decd20ef63b77f" Jan 30 18:54:23 crc kubenswrapper[4712]: I0130 18:54:23.562319 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25a0c44a4d958e1d515cea45ccc500546567dc034bb94e3c65decd20ef63b77f"} err="failed to get container status \"25a0c44a4d958e1d515cea45ccc500546567dc034bb94e3c65decd20ef63b77f\": rpc error: code = NotFound desc = could not find container \"25a0c44a4d958e1d515cea45ccc500546567dc034bb94e3c65decd20ef63b77f\": container with ID starting with 25a0c44a4d958e1d515cea45ccc500546567dc034bb94e3c65decd20ef63b77f not found: ID does not exist" Jan 30 18:54:23 crc kubenswrapper[4712]: I0130 18:54:23.562341 4712 scope.go:117] "RemoveContainer" containerID="bf525457111050a9557489e8d8777d4340edca8c3cec48fcbf8d9a8a412e96d9" Jan 30 18:54:23 crc kubenswrapper[4712]: E0130 18:54:23.562579 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf525457111050a9557489e8d8777d4340edca8c3cec48fcbf8d9a8a412e96d9\": container with ID starting with bf525457111050a9557489e8d8777d4340edca8c3cec48fcbf8d9a8a412e96d9 not found: ID does not exist" containerID="bf525457111050a9557489e8d8777d4340edca8c3cec48fcbf8d9a8a412e96d9" Jan 30 18:54:23 crc kubenswrapper[4712]: I0130 18:54:23.562604 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf525457111050a9557489e8d8777d4340edca8c3cec48fcbf8d9a8a412e96d9"} err="failed to get container status \"bf525457111050a9557489e8d8777d4340edca8c3cec48fcbf8d9a8a412e96d9\": rpc error: code = NotFound desc = could not find container \"bf525457111050a9557489e8d8777d4340edca8c3cec48fcbf8d9a8a412e96d9\": container with ID starting with bf525457111050a9557489e8d8777d4340edca8c3cec48fcbf8d9a8a412e96d9 not found: ID does not exist" Jan 30 18:54:23 crc kubenswrapper[4712]: I0130 18:54:23.810187 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f19793b0-28f8-4ee8-be35-416ec5fa353f" path="/var/lib/kubelet/pods/f19793b0-28f8-4ee8-be35-416ec5fa353f/volumes" Jan 30 18:54:27 crc kubenswrapper[4712]: I0130 18:54:27.801172 4712 scope.go:117] "RemoveContainer" containerID="46b29d6e1cd074134b6233a7b1c1425865a73376deb7780b7bd6be5ba0507c52" Jan 30 18:54:27 crc kubenswrapper[4712]: E0130 18:54:27.802219 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:54:42 crc kubenswrapper[4712]: I0130 18:54:42.800089 4712 scope.go:117] "RemoveContainer" 
containerID="46b29d6e1cd074134b6233a7b1c1425865a73376deb7780b7bd6be5ba0507c52" Jan 30 18:54:42 crc kubenswrapper[4712]: E0130 18:54:42.800640 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:54:55 crc kubenswrapper[4712]: I0130 18:54:55.799529 4712 scope.go:117] "RemoveContainer" containerID="46b29d6e1cd074134b6233a7b1c1425865a73376deb7780b7bd6be5ba0507c52" Jan 30 18:54:55 crc kubenswrapper[4712]: E0130 18:54:55.800202 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:55:09 crc kubenswrapper[4712]: I0130 18:55:09.800418 4712 scope.go:117] "RemoveContainer" containerID="46b29d6e1cd074134b6233a7b1c1425865a73376deb7780b7bd6be5ba0507c52" Jan 30 18:55:09 crc kubenswrapper[4712]: E0130 18:55:09.801289 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:55:19 crc kubenswrapper[4712]: I0130 18:55:19.130846 4712 generic.go:334] "Generic (PLEG): container finished" podID="eb9570ef-5465-43b3-8747-1d546402c98a" containerID="80c7e0d069af7f6959273f8c62cb40ca5256edc75ad08919c53e699d752fc44d" exitCode=1 Jan 30 18:55:19 crc kubenswrapper[4712]: I0130 18:55:19.131366 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"eb9570ef-5465-43b3-8747-1d546402c98a","Type":"ContainerDied","Data":"80c7e0d069af7f6959273f8c62cb40ca5256edc75ad08919c53e699d752fc44d"} Jan 30 18:55:20 crc kubenswrapper[4712]: I0130 18:55:20.977154 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.031769 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/eb9570ef-5465-43b3-8747-1d546402c98a-openstack-config-secret\") pod \"eb9570ef-5465-43b3-8747-1d546402c98a\" (UID: \"eb9570ef-5465-43b3-8747-1d546402c98a\") " Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.031834 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"eb9570ef-5465-43b3-8747-1d546402c98a\" (UID: \"eb9570ef-5465-43b3-8747-1d546402c98a\") " Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.031883 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/eb9570ef-5465-43b3-8747-1d546402c98a-test-operator-ephemeral-workdir\") pod \"eb9570ef-5465-43b3-8747-1d546402c98a\" (UID: \"eb9570ef-5465-43b3-8747-1d546402c98a\") " Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.031908 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb9570ef-5465-43b3-8747-1d546402c98a-ssh-key\") pod \"eb9570ef-5465-43b3-8747-1d546402c98a\" (UID: \"eb9570ef-5465-43b3-8747-1d546402c98a\") " Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.031972 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjtfz\" (UniqueName: \"kubernetes.io/projected/eb9570ef-5465-43b3-8747-1d546402c98a-kube-api-access-qjtfz\") pod \"eb9570ef-5465-43b3-8747-1d546402c98a\" (UID: \"eb9570ef-5465-43b3-8747-1d546402c98a\") " Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.032029 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/eb9570ef-5465-43b3-8747-1d546402c98a-openstack-config\") pod \"eb9570ef-5465-43b3-8747-1d546402c98a\" (UID: \"eb9570ef-5465-43b3-8747-1d546402c98a\") " Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.032076 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/eb9570ef-5465-43b3-8747-1d546402c98a-ca-certs\") pod \"eb9570ef-5465-43b3-8747-1d546402c98a\" (UID: \"eb9570ef-5465-43b3-8747-1d546402c98a\") " Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.032121 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/eb9570ef-5465-43b3-8747-1d546402c98a-test-operator-ephemeral-temporary\") pod \"eb9570ef-5465-43b3-8747-1d546402c98a\" (UID: \"eb9570ef-5465-43b3-8747-1d546402c98a\") " Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.032219 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eb9570ef-5465-43b3-8747-1d546402c98a-config-data\") pod \"eb9570ef-5465-43b3-8747-1d546402c98a\" (UID: \"eb9570ef-5465-43b3-8747-1d546402c98a\") " Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.034924 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb9570ef-5465-43b3-8747-1d546402c98a-config-data" (OuterVolumeSpecName: 
"config-data") pod "eb9570ef-5465-43b3-8747-1d546402c98a" (UID: "eb9570ef-5465-43b3-8747-1d546402c98a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.073430 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb9570ef-5465-43b3-8747-1d546402c98a-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "eb9570ef-5465-43b3-8747-1d546402c98a" (UID: "eb9570ef-5465-43b3-8747-1d546402c98a"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.092419 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb9570ef-5465-43b3-8747-1d546402c98a-kube-api-access-qjtfz" (OuterVolumeSpecName: "kube-api-access-qjtfz") pod "eb9570ef-5465-43b3-8747-1d546402c98a" (UID: "eb9570ef-5465-43b3-8747-1d546402c98a"). InnerVolumeSpecName "kube-api-access-qjtfz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.100662 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "test-operator-logs") pod "eb9570ef-5465-43b3-8747-1d546402c98a" (UID: "eb9570ef-5465-43b3-8747-1d546402c98a"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.114839 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb9570ef-5465-43b3-8747-1d546402c98a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "eb9570ef-5465-43b3-8747-1d546402c98a" (UID: "eb9570ef-5465-43b3-8747-1d546402c98a"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.134621 4712 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/eb9570ef-5465-43b3-8747-1d546402c98a-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.134664 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eb9570ef-5465-43b3-8747-1d546402c98a-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.136152 4712 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.136182 4712 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eb9570ef-5465-43b3-8747-1d546402c98a-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.136195 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjtfz\" (UniqueName: \"kubernetes.io/projected/eb9570ef-5465-43b3-8747-1d546402c98a-kube-api-access-qjtfz\") on node \"crc\" DevicePath \"\"" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.193081 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" event={"ID":"eb9570ef-5465-43b3-8747-1d546402c98a","Type":"ContainerDied","Data":"c568450c8bb696bff7f1ba8c8acf95ff450465198a2d6c19be769ae52f3959c0"} Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.193116 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c568450c8bb696bff7f1ba8c8acf95ff450465198a2d6c19be769ae52f3959c0" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.193182 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s00-multi-thread-testing" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.212963 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb9570ef-5465-43b3-8747-1d546402c98a-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "eb9570ef-5465-43b3-8747-1d546402c98a" (UID: "eb9570ef-5465-43b3-8747-1d546402c98a"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.237975 4712 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.283766 4712 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/eb9570ef-5465-43b3-8747-1d546402c98a-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.311022 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb9570ef-5465-43b3-8747-1d546402c98a-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "eb9570ef-5465-43b3-8747-1d546402c98a" (UID: "eb9570ef-5465-43b3-8747-1d546402c98a"). 
InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.397612 4712 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.397655 4712 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/eb9570ef-5465-43b3-8747-1d546402c98a-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.404612 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb9570ef-5465-43b3-8747-1d546402c98a-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "eb9570ef-5465-43b3-8747-1d546402c98a" (UID: "eb9570ef-5465-43b3-8747-1d546402c98a"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.408744 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb9570ef-5465-43b3-8747-1d546402c98a-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "eb9570ef-5465-43b3-8747-1d546402c98a" (UID: "eb9570ef-5465-43b3-8747-1d546402c98a"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.458394 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest-s01-single-thread-testing"] Jan 30 18:55:21 crc kubenswrapper[4712]: E0130 18:55:21.458846 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f19793b0-28f8-4ee8-be35-416ec5fa353f" containerName="extract-content" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.458867 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="f19793b0-28f8-4ee8-be35-416ec5fa353f" containerName="extract-content" Jan 30 18:55:21 crc kubenswrapper[4712]: E0130 18:55:21.458895 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f19793b0-28f8-4ee8-be35-416ec5fa353f" containerName="extract-utilities" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.458903 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="f19793b0-28f8-4ee8-be35-416ec5fa353f" containerName="extract-utilities" Jan 30 18:55:21 crc kubenswrapper[4712]: E0130 18:55:21.458929 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb9570ef-5465-43b3-8747-1d546402c98a" containerName="tempest-tests-tempest-tests-runner" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.458936 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb9570ef-5465-43b3-8747-1d546402c98a" containerName="tempest-tests-tempest-tests-runner" Jan 30 18:55:21 crc kubenswrapper[4712]: E0130 18:55:21.458954 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f19793b0-28f8-4ee8-be35-416ec5fa353f" containerName="registry-server" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.458961 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="f19793b0-28f8-4ee8-be35-416ec5fa353f" containerName="registry-server" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.459205 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="f19793b0-28f8-4ee8-be35-416ec5fa353f" containerName="registry-server" Jan 30 18:55:21 
crc kubenswrapper[4712]: I0130 18:55:21.459235 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb9570ef-5465-43b3-8747-1d546402c98a" containerName="tempest-tests-tempest-tests-runner" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.460038 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.464929 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s1" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.465090 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s1" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.472519 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-thread-testing"] Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.499945 4712 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/eb9570ef-5465-43b3-8747-1d546402c98a-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.500003 4712 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/eb9570ef-5465-43b3-8747-1d546402c98a-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.601620 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e00d35e2-6792-49c6-b55d-7d7ef6c7611e-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e00d35e2-6792-49c6-b55d-7d7ef6c7611e\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.601690 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e00d35e2-6792-49c6-b55d-7d7ef6c7611e-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e00d35e2-6792-49c6-b55d-7d7ef6c7611e\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.601809 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e00d35e2-6792-49c6-b55d-7d7ef6c7611e\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.601854 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e00d35e2-6792-49c6-b55d-7d7ef6c7611e-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e00d35e2-6792-49c6-b55d-7d7ef6c7611e\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.601894 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/e00d35e2-6792-49c6-b55d-7d7ef6c7611e-ssh-key\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e00d35e2-6792-49c6-b55d-7d7ef6c7611e\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.601957 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e00d35e2-6792-49c6-b55d-7d7ef6c7611e-ca-certs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e00d35e2-6792-49c6-b55d-7d7ef6c7611e\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.601988 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e00d35e2-6792-49c6-b55d-7d7ef6c7611e-config-data\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e00d35e2-6792-49c6-b55d-7d7ef6c7611e\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.602060 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95qgd\" (UniqueName: \"kubernetes.io/projected/e00d35e2-6792-49c6-b55d-7d7ef6c7611e-kube-api-access-95qgd\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e00d35e2-6792-49c6-b55d-7d7ef6c7611e\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.602085 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e00d35e2-6792-49c6-b55d-7d7ef6c7611e-openstack-config\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e00d35e2-6792-49c6-b55d-7d7ef6c7611e\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.704148 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e00d35e2-6792-49c6-b55d-7d7ef6c7611e-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e00d35e2-6792-49c6-b55d-7d7ef6c7611e\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.704244 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e00d35e2-6792-49c6-b55d-7d7ef6c7611e\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.704280 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e00d35e2-6792-49c6-b55d-7d7ef6c7611e-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e00d35e2-6792-49c6-b55d-7d7ef6c7611e\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.704321 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/e00d35e2-6792-49c6-b55d-7d7ef6c7611e-ssh-key\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e00d35e2-6792-49c6-b55d-7d7ef6c7611e\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.704379 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e00d35e2-6792-49c6-b55d-7d7ef6c7611e-ca-certs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e00d35e2-6792-49c6-b55d-7d7ef6c7611e\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.704405 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/e00d35e2-6792-49c6-b55d-7d7ef6c7611e-config-data\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e00d35e2-6792-49c6-b55d-7d7ef6c7611e\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.704463 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95qgd\" (UniqueName: \"kubernetes.io/projected/e00d35e2-6792-49c6-b55d-7d7ef6c7611e-kube-api-access-95qgd\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e00d35e2-6792-49c6-b55d-7d7ef6c7611e\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.704489 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e00d35e2-6792-49c6-b55d-7d7ef6c7611e-openstack-config\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e00d35e2-6792-49c6-b55d-7d7ef6c7611e\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.704537 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e00d35e2-6792-49c6-b55d-7d7ef6c7611e-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e00d35e2-6792-49c6-b55d-7d7ef6c7611e\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.705212 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/e00d35e2-6792-49c6-b55d-7d7ef6c7611e-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e00d35e2-6792-49c6-b55d-7d7ef6c7611e\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.705807 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/e00d35e2-6792-49c6-b55d-7d7ef6c7611e-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e00d35e2-6792-49c6-b55d-7d7ef6c7611e\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.706566 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/e00d35e2-6792-49c6-b55d-7d7ef6c7611e-config-data\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e00d35e2-6792-49c6-b55d-7d7ef6c7611e\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.706739 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/e00d35e2-6792-49c6-b55d-7d7ef6c7611e-openstack-config\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e00d35e2-6792-49c6-b55d-7d7ef6c7611e\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.707833 4712 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e00d35e2-6792-49c6-b55d-7d7ef6c7611e\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.709455 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/e00d35e2-6792-49c6-b55d-7d7ef6c7611e-ca-certs\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e00d35e2-6792-49c6-b55d-7d7ef6c7611e\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.710107 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/e00d35e2-6792-49c6-b55d-7d7ef6c7611e-openstack-config-secret\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e00d35e2-6792-49c6-b55d-7d7ef6c7611e\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.712448 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/e00d35e2-6792-49c6-b55d-7d7ef6c7611e-ssh-key\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e00d35e2-6792-49c6-b55d-7d7ef6c7611e\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.725298 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95qgd\" (UniqueName: \"kubernetes.io/projected/e00d35e2-6792-49c6-b55d-7d7ef6c7611e-kube-api-access-95qgd\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e00d35e2-6792-49c6-b55d-7d7ef6c7611e\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.745927 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest-s01-single-thread-testing\" (UID: \"e00d35e2-6792-49c6-b55d-7d7ef6c7611e\") " pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 18:55:21 crc kubenswrapper[4712]: I0130 18:55:21.775628 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" Jan 30 18:55:22 crc kubenswrapper[4712]: I0130 18:55:22.383154 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest-s01-single-thread-testing"] Jan 30 18:55:23 crc kubenswrapper[4712]: I0130 18:55:23.215849 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"e00d35e2-6792-49c6-b55d-7d7ef6c7611e","Type":"ContainerStarted","Data":"4a9024c5187b98766ca73739bd3c243100041ccefb6c93e8108f85c52ed3b54c"} Jan 30 18:55:24 crc kubenswrapper[4712]: I0130 18:55:24.800100 4712 scope.go:117] "RemoveContainer" containerID="46b29d6e1cd074134b6233a7b1c1425865a73376deb7780b7bd6be5ba0507c52" Jan 30 18:55:24 crc kubenswrapper[4712]: E0130 18:55:24.800414 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:55:26 crc kubenswrapper[4712]: I0130 18:55:26.246832 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" event={"ID":"e00d35e2-6792-49c6-b55d-7d7ef6c7611e","Type":"ContainerStarted","Data":"99ba38e76a4a92c677dc2b518afea019ea297bc388514c99b71deb550f520f74"} Jan 30 18:55:26 crc kubenswrapper[4712]: I0130 18:55:26.269630 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest-s01-single-thread-testing" podStartSLOduration=5.26959883 podStartE2EDuration="5.26959883s" podCreationTimestamp="2026-01-30 18:55:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 18:55:26.261564656 +0000 UTC m=+7263.168574125" watchObservedRunningTime="2026-01-30 18:55:26.26959883 +0000 UTC m=+7263.176608339" Jan 30 18:55:39 crc kubenswrapper[4712]: I0130 18:55:39.802584 4712 scope.go:117] "RemoveContainer" containerID="46b29d6e1cd074134b6233a7b1c1425865a73376deb7780b7bd6be5ba0507c52" Jan 30 18:55:39 crc kubenswrapper[4712]: E0130 18:55:39.803454 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:55:50 crc kubenswrapper[4712]: I0130 18:55:50.800022 4712 scope.go:117] "RemoveContainer" containerID="46b29d6e1cd074134b6233a7b1c1425865a73376deb7780b7bd6be5ba0507c52" Jan 30 18:55:50 crc kubenswrapper[4712]: E0130 18:55:50.801185 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 
18:56:03 crc kubenswrapper[4712]: I0130 18:56:03.809301 4712 scope.go:117] "RemoveContainer" containerID="46b29d6e1cd074134b6233a7b1c1425865a73376deb7780b7bd6be5ba0507c52" Jan 30 18:56:03 crc kubenswrapper[4712]: E0130 18:56:03.811172 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:56:16 crc kubenswrapper[4712]: I0130 18:56:16.801547 4712 scope.go:117] "RemoveContainer" containerID="46b29d6e1cd074134b6233a7b1c1425865a73376deb7780b7bd6be5ba0507c52" Jan 30 18:56:16 crc kubenswrapper[4712]: E0130 18:56:16.802321 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:56:24 crc kubenswrapper[4712]: I0130 18:56:24.400229 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-699f8d5569-8nzql"] Jan 30 18:56:24 crc kubenswrapper[4712]: I0130 18:56:24.403407 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-699f8d5569-8nzql" Jan 30 18:56:24 crc kubenswrapper[4712]: I0130 18:56:24.435953 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-699f8d5569-8nzql"] Jan 30 18:56:24 crc kubenswrapper[4712]: I0130 18:56:24.489758 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0f499430-9aa9-4145-a241-1d02ee2b2d72-config\") pod \"neutron-699f8d5569-8nzql\" (UID: \"0f499430-9aa9-4145-a241-1d02ee2b2d72\") " pod="openstack/neutron-699f8d5569-8nzql" Jan 30 18:56:24 crc kubenswrapper[4712]: I0130 18:56:24.489921 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f499430-9aa9-4145-a241-1d02ee2b2d72-internal-tls-certs\") pod \"neutron-699f8d5569-8nzql\" (UID: \"0f499430-9aa9-4145-a241-1d02ee2b2d72\") " pod="openstack/neutron-699f8d5569-8nzql" Jan 30 18:56:24 crc kubenswrapper[4712]: I0130 18:56:24.490003 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f499430-9aa9-4145-a241-1d02ee2b2d72-combined-ca-bundle\") pod \"neutron-699f8d5569-8nzql\" (UID: \"0f499430-9aa9-4145-a241-1d02ee2b2d72\") " pod="openstack/neutron-699f8d5569-8nzql" Jan 30 18:56:24 crc kubenswrapper[4712]: I0130 18:56:24.490071 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0f499430-9aa9-4145-a241-1d02ee2b2d72-httpd-config\") pod \"neutron-699f8d5569-8nzql\" (UID: \"0f499430-9aa9-4145-a241-1d02ee2b2d72\") " pod="openstack/neutron-699f8d5569-8nzql" Jan 30 18:56:24 crc kubenswrapper[4712]: I0130 18:56:24.490100 4712 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f499430-9aa9-4145-a241-1d02ee2b2d72-ovndb-tls-certs\") pod \"neutron-699f8d5569-8nzql\" (UID: \"0f499430-9aa9-4145-a241-1d02ee2b2d72\") " pod="openstack/neutron-699f8d5569-8nzql" Jan 30 18:56:24 crc kubenswrapper[4712]: I0130 18:56:24.490170 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqhz9\" (UniqueName: \"kubernetes.io/projected/0f499430-9aa9-4145-a241-1d02ee2b2d72-kube-api-access-mqhz9\") pod \"neutron-699f8d5569-8nzql\" (UID: \"0f499430-9aa9-4145-a241-1d02ee2b2d72\") " pod="openstack/neutron-699f8d5569-8nzql" Jan 30 18:56:24 crc kubenswrapper[4712]: I0130 18:56:24.490193 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f499430-9aa9-4145-a241-1d02ee2b2d72-public-tls-certs\") pod \"neutron-699f8d5569-8nzql\" (UID: \"0f499430-9aa9-4145-a241-1d02ee2b2d72\") " pod="openstack/neutron-699f8d5569-8nzql" Jan 30 18:56:24 crc kubenswrapper[4712]: I0130 18:56:24.591788 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqhz9\" (UniqueName: \"kubernetes.io/projected/0f499430-9aa9-4145-a241-1d02ee2b2d72-kube-api-access-mqhz9\") pod \"neutron-699f8d5569-8nzql\" (UID: \"0f499430-9aa9-4145-a241-1d02ee2b2d72\") " pod="openstack/neutron-699f8d5569-8nzql" Jan 30 18:56:24 crc kubenswrapper[4712]: I0130 18:56:24.591852 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f499430-9aa9-4145-a241-1d02ee2b2d72-public-tls-certs\") pod \"neutron-699f8d5569-8nzql\" (UID: \"0f499430-9aa9-4145-a241-1d02ee2b2d72\") " pod="openstack/neutron-699f8d5569-8nzql" Jan 30 18:56:24 crc kubenswrapper[4712]: I0130 18:56:24.591924 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0f499430-9aa9-4145-a241-1d02ee2b2d72-config\") pod \"neutron-699f8d5569-8nzql\" (UID: \"0f499430-9aa9-4145-a241-1d02ee2b2d72\") " pod="openstack/neutron-699f8d5569-8nzql" Jan 30 18:56:24 crc kubenswrapper[4712]: I0130 18:56:24.591945 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f499430-9aa9-4145-a241-1d02ee2b2d72-internal-tls-certs\") pod \"neutron-699f8d5569-8nzql\" (UID: \"0f499430-9aa9-4145-a241-1d02ee2b2d72\") " pod="openstack/neutron-699f8d5569-8nzql" Jan 30 18:56:24 crc kubenswrapper[4712]: I0130 18:56:24.591993 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f499430-9aa9-4145-a241-1d02ee2b2d72-combined-ca-bundle\") pod \"neutron-699f8d5569-8nzql\" (UID: \"0f499430-9aa9-4145-a241-1d02ee2b2d72\") " pod="openstack/neutron-699f8d5569-8nzql" Jan 30 18:56:24 crc kubenswrapper[4712]: I0130 18:56:24.592038 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f499430-9aa9-4145-a241-1d02ee2b2d72-ovndb-tls-certs\") pod \"neutron-699f8d5569-8nzql\" (UID: \"0f499430-9aa9-4145-a241-1d02ee2b2d72\") " pod="openstack/neutron-699f8d5569-8nzql" Jan 30 18:56:24 crc kubenswrapper[4712]: I0130 18:56:24.592057 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"httpd-config\" (UniqueName: \"kubernetes.io/secret/0f499430-9aa9-4145-a241-1d02ee2b2d72-httpd-config\") pod \"neutron-699f8d5569-8nzql\" (UID: \"0f499430-9aa9-4145-a241-1d02ee2b2d72\") " pod="openstack/neutron-699f8d5569-8nzql" Jan 30 18:56:24 crc kubenswrapper[4712]: I0130 18:56:24.600426 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f499430-9aa9-4145-a241-1d02ee2b2d72-internal-tls-certs\") pod \"neutron-699f8d5569-8nzql\" (UID: \"0f499430-9aa9-4145-a241-1d02ee2b2d72\") " pod="openstack/neutron-699f8d5569-8nzql" Jan 30 18:56:24 crc kubenswrapper[4712]: I0130 18:56:24.603161 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f499430-9aa9-4145-a241-1d02ee2b2d72-ovndb-tls-certs\") pod \"neutron-699f8d5569-8nzql\" (UID: \"0f499430-9aa9-4145-a241-1d02ee2b2d72\") " pod="openstack/neutron-699f8d5569-8nzql" Jan 30 18:56:24 crc kubenswrapper[4712]: I0130 18:56:24.605483 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0f499430-9aa9-4145-a241-1d02ee2b2d72-config\") pod \"neutron-699f8d5569-8nzql\" (UID: \"0f499430-9aa9-4145-a241-1d02ee2b2d72\") " pod="openstack/neutron-699f8d5569-8nzql" Jan 30 18:56:24 crc kubenswrapper[4712]: I0130 18:56:24.606395 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f499430-9aa9-4145-a241-1d02ee2b2d72-public-tls-certs\") pod \"neutron-699f8d5569-8nzql\" (UID: \"0f499430-9aa9-4145-a241-1d02ee2b2d72\") " pod="openstack/neutron-699f8d5569-8nzql" Jan 30 18:56:24 crc kubenswrapper[4712]: I0130 18:56:24.610520 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0f499430-9aa9-4145-a241-1d02ee2b2d72-httpd-config\") pod \"neutron-699f8d5569-8nzql\" (UID: \"0f499430-9aa9-4145-a241-1d02ee2b2d72\") " pod="openstack/neutron-699f8d5569-8nzql" Jan 30 18:56:24 crc kubenswrapper[4712]: I0130 18:56:24.611011 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqhz9\" (UniqueName: \"kubernetes.io/projected/0f499430-9aa9-4145-a241-1d02ee2b2d72-kube-api-access-mqhz9\") pod \"neutron-699f8d5569-8nzql\" (UID: \"0f499430-9aa9-4145-a241-1d02ee2b2d72\") " pod="openstack/neutron-699f8d5569-8nzql" Jan 30 18:56:24 crc kubenswrapper[4712]: I0130 18:56:24.623235 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f499430-9aa9-4145-a241-1d02ee2b2d72-combined-ca-bundle\") pod \"neutron-699f8d5569-8nzql\" (UID: \"0f499430-9aa9-4145-a241-1d02ee2b2d72\") " pod="openstack/neutron-699f8d5569-8nzql" Jan 30 18:56:24 crc kubenswrapper[4712]: I0130 18:56:24.730888 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-699f8d5569-8nzql" Jan 30 18:56:25 crc kubenswrapper[4712]: I0130 18:56:25.278360 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-699f8d5569-8nzql"] Jan 30 18:56:25 crc kubenswrapper[4712]: I0130 18:56:25.880240 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-699f8d5569-8nzql" event={"ID":"0f499430-9aa9-4145-a241-1d02ee2b2d72","Type":"ContainerStarted","Data":"8ea516fd61a78dbd402b121b04ed7cb098ef6bb1f8d1128f32572e10c2bd59be"} Jan 30 18:56:25 crc kubenswrapper[4712]: I0130 18:56:25.880480 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-699f8d5569-8nzql" event={"ID":"0f499430-9aa9-4145-a241-1d02ee2b2d72","Type":"ContainerStarted","Data":"d0b99af23d3ed8e3fa063c1c21bdf1ac906fe9e03e2e22f7654685605cd8d065"} Jan 30 18:56:25 crc kubenswrapper[4712]: I0130 18:56:25.880492 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-699f8d5569-8nzql" event={"ID":"0f499430-9aa9-4145-a241-1d02ee2b2d72","Type":"ContainerStarted","Data":"2921c67b532ff412653802560f04f1eb0e2bf379cf080fe7e6baa06ade1145e6"} Jan 30 18:56:25 crc kubenswrapper[4712]: I0130 18:56:25.880931 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-699f8d5569-8nzql" Jan 30 18:56:25 crc kubenswrapper[4712]: I0130 18:56:25.913253 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-699f8d5569-8nzql" podStartSLOduration=1.913234161 podStartE2EDuration="1.913234161s" podCreationTimestamp="2026-01-30 18:56:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 18:56:25.902045051 +0000 UTC m=+7322.809054540" watchObservedRunningTime="2026-01-30 18:56:25.913234161 +0000 UTC m=+7322.820243640" Jan 30 18:56:27 crc kubenswrapper[4712]: I0130 18:56:27.802178 4712 scope.go:117] "RemoveContainer" containerID="46b29d6e1cd074134b6233a7b1c1425865a73376deb7780b7bd6be5ba0507c52" Jan 30 18:56:27 crc kubenswrapper[4712]: E0130 18:56:27.803083 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:56:38 crc kubenswrapper[4712]: I0130 18:56:38.800621 4712 scope.go:117] "RemoveContainer" containerID="46b29d6e1cd074134b6233a7b1c1425865a73376deb7780b7bd6be5ba0507c52" Jan 30 18:56:38 crc kubenswrapper[4712]: E0130 18:56:38.801775 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:56:52 crc kubenswrapper[4712]: I0130 18:56:52.799847 4712 scope.go:117] "RemoveContainer" containerID="46b29d6e1cd074134b6233a7b1c1425865a73376deb7780b7bd6be5ba0507c52" Jan 30 18:56:52 crc kubenswrapper[4712]: E0130 18:56:52.801181 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:56:54 crc kubenswrapper[4712]: I0130 18:56:54.748142 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-699f8d5569-8nzql" Jan 30 18:56:54 crc kubenswrapper[4712]: I0130 18:56:54.820952 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7f6ddf59f7-2n5p6"] Jan 30 18:56:54 crc kubenswrapper[4712]: I0130 18:56:54.821247 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7f6ddf59f7-2n5p6" podUID="1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e" containerName="neutron-api" containerID="cri-o://993107844dafc7c19c2354f3296a6ce66c54d9072cb6422cce7daf9efbd86e90" gracePeriod=30 Jan 30 18:56:54 crc kubenswrapper[4712]: I0130 18:56:54.821424 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7f6ddf59f7-2n5p6" podUID="1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e" containerName="neutron-httpd" containerID="cri-o://02bf2aea54a017ee1cf4837d85762bd9d20a73eec94a402b4f7134bb8f244146" gracePeriod=30 Jan 30 18:56:55 crc kubenswrapper[4712]: I0130 18:56:55.134749 4712 generic.go:334] "Generic (PLEG): container finished" podID="1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e" containerID="02bf2aea54a017ee1cf4837d85762bd9d20a73eec94a402b4f7134bb8f244146" exitCode=0 Jan 30 18:56:55 crc kubenswrapper[4712]: I0130 18:56:55.134960 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f6ddf59f7-2n5p6" event={"ID":"1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e","Type":"ContainerDied","Data":"02bf2aea54a017ee1cf4837d85762bd9d20a73eec94a402b4f7134bb8f244146"} Jan 30 18:57:03 crc kubenswrapper[4712]: I0130 18:57:03.812364 4712 scope.go:117] "RemoveContainer" containerID="46b29d6e1cd074134b6233a7b1c1425865a73376deb7780b7bd6be5ba0507c52" Jan 30 18:57:03 crc kubenswrapper[4712]: E0130 18:57:03.813125 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:57:12 crc kubenswrapper[4712]: I0130 18:57:12.030196 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-7f6ddf59f7-2n5p6" Jan 30 18:57:12 crc kubenswrapper[4712]: I0130 18:57:12.113513 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e-httpd-config\") pod \"1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e\" (UID: \"1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e\") " Jan 30 18:57:12 crc kubenswrapper[4712]: I0130 18:57:12.113585 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e-combined-ca-bundle\") pod \"1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e\" (UID: \"1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e\") " Jan 30 18:57:12 crc kubenswrapper[4712]: I0130 18:57:12.113616 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e-internal-tls-certs\") pod \"1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e\" (UID: \"1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e\") " Jan 30 18:57:12 crc kubenswrapper[4712]: I0130 18:57:12.113711 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e-config\") pod \"1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e\" (UID: \"1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e\") " Jan 30 18:57:12 crc kubenswrapper[4712]: I0130 18:57:12.113754 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e-public-tls-certs\") pod \"1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e\" (UID: \"1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e\") " Jan 30 18:57:12 crc kubenswrapper[4712]: I0130 18:57:12.113840 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e-ovndb-tls-certs\") pod \"1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e\" (UID: \"1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e\") " Jan 30 18:57:12 crc kubenswrapper[4712]: I0130 18:57:12.113887 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnnpc\" (UniqueName: \"kubernetes.io/projected/1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e-kube-api-access-pnnpc\") pod \"1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e\" (UID: \"1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e\") " Jan 30 18:57:12 crc kubenswrapper[4712]: I0130 18:57:12.129716 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e-kube-api-access-pnnpc" (OuterVolumeSpecName: "kube-api-access-pnnpc") pod "1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e" (UID: "1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e"). InnerVolumeSpecName "kube-api-access-pnnpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:57:12 crc kubenswrapper[4712]: I0130 18:57:12.130708 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e" (UID: "1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e"). InnerVolumeSpecName "httpd-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:57:12 crc kubenswrapper[4712]: I0130 18:57:12.185557 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e" (UID: "1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:57:12 crc kubenswrapper[4712]: I0130 18:57:12.205872 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e" (UID: "1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:57:12 crc kubenswrapper[4712]: I0130 18:57:12.213969 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e" (UID: "1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:57:12 crc kubenswrapper[4712]: I0130 18:57:12.215971 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnnpc\" (UniqueName: \"kubernetes.io/projected/1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e-kube-api-access-pnnpc\") on node \"crc\" DevicePath \"\"" Jan 30 18:57:12 crc kubenswrapper[4712]: I0130 18:57:12.216009 4712 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 30 18:57:12 crc kubenswrapper[4712]: I0130 18:57:12.216023 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 18:57:12 crc kubenswrapper[4712]: I0130 18:57:12.216035 4712 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 18:57:12 crc kubenswrapper[4712]: I0130 18:57:12.216046 4712 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 18:57:12 crc kubenswrapper[4712]: I0130 18:57:12.216915 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e-config" (OuterVolumeSpecName: "config") pod "1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e" (UID: "1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:57:12 crc kubenswrapper[4712]: I0130 18:57:12.239172 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e" (UID: "1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 18:57:12 crc kubenswrapper[4712]: I0130 18:57:12.327037 4712 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 18:57:12 crc kubenswrapper[4712]: I0130 18:57:12.327068 4712 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e-config\") on node \"crc\" DevicePath \"\"" Jan 30 18:57:12 crc kubenswrapper[4712]: I0130 18:57:12.338600 4712 generic.go:334] "Generic (PLEG): container finished" podID="1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e" containerID="993107844dafc7c19c2354f3296a6ce66c54d9072cb6422cce7daf9efbd86e90" exitCode=0 Jan 30 18:57:12 crc kubenswrapper[4712]: I0130 18:57:12.338656 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f6ddf59f7-2n5p6" event={"ID":"1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e","Type":"ContainerDied","Data":"993107844dafc7c19c2354f3296a6ce66c54d9072cb6422cce7daf9efbd86e90"} Jan 30 18:57:12 crc kubenswrapper[4712]: I0130 18:57:12.338673 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7f6ddf59f7-2n5p6" Jan 30 18:57:12 crc kubenswrapper[4712]: I0130 18:57:12.338688 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7f6ddf59f7-2n5p6" event={"ID":"1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e","Type":"ContainerDied","Data":"d8b36e20b091b5275f37954d946a46ed8f02362382a84cdd36391046b34f3c41"} Jan 30 18:57:12 crc kubenswrapper[4712]: I0130 18:57:12.338708 4712 scope.go:117] "RemoveContainer" containerID="02bf2aea54a017ee1cf4837d85762bd9d20a73eec94a402b4f7134bb8f244146" Jan 30 18:57:12 crc kubenswrapper[4712]: I0130 18:57:12.375071 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7f6ddf59f7-2n5p6"] Jan 30 18:57:12 crc kubenswrapper[4712]: I0130 18:57:12.380136 4712 scope.go:117] "RemoveContainer" containerID="993107844dafc7c19c2354f3296a6ce66c54d9072cb6422cce7daf9efbd86e90" Jan 30 18:57:12 crc kubenswrapper[4712]: I0130 18:57:12.383573 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7f6ddf59f7-2n5p6"] Jan 30 18:57:12 crc kubenswrapper[4712]: I0130 18:57:12.411591 4712 scope.go:117] "RemoveContainer" containerID="02bf2aea54a017ee1cf4837d85762bd9d20a73eec94a402b4f7134bb8f244146" Jan 30 18:57:12 crc kubenswrapper[4712]: E0130 18:57:12.411993 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02bf2aea54a017ee1cf4837d85762bd9d20a73eec94a402b4f7134bb8f244146\": container with ID starting with 02bf2aea54a017ee1cf4837d85762bd9d20a73eec94a402b4f7134bb8f244146 not found: ID does not exist" containerID="02bf2aea54a017ee1cf4837d85762bd9d20a73eec94a402b4f7134bb8f244146" Jan 30 18:57:12 crc kubenswrapper[4712]: I0130 18:57:12.412047 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02bf2aea54a017ee1cf4837d85762bd9d20a73eec94a402b4f7134bb8f244146"} err="failed to get container status \"02bf2aea54a017ee1cf4837d85762bd9d20a73eec94a402b4f7134bb8f244146\": rpc error: code = NotFound desc = could not find container \"02bf2aea54a017ee1cf4837d85762bd9d20a73eec94a402b4f7134bb8f244146\": container with ID starting with 02bf2aea54a017ee1cf4837d85762bd9d20a73eec94a402b4f7134bb8f244146 not found: ID does not exist" Jan 30 
18:57:12 crc kubenswrapper[4712]: I0130 18:57:12.412077 4712 scope.go:117] "RemoveContainer" containerID="993107844dafc7c19c2354f3296a6ce66c54d9072cb6422cce7daf9efbd86e90" Jan 30 18:57:12 crc kubenswrapper[4712]: E0130 18:57:12.412342 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"993107844dafc7c19c2354f3296a6ce66c54d9072cb6422cce7daf9efbd86e90\": container with ID starting with 993107844dafc7c19c2354f3296a6ce66c54d9072cb6422cce7daf9efbd86e90 not found: ID does not exist" containerID="993107844dafc7c19c2354f3296a6ce66c54d9072cb6422cce7daf9efbd86e90" Jan 30 18:57:12 crc kubenswrapper[4712]: I0130 18:57:12.412375 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"993107844dafc7c19c2354f3296a6ce66c54d9072cb6422cce7daf9efbd86e90"} err="failed to get container status \"993107844dafc7c19c2354f3296a6ce66c54d9072cb6422cce7daf9efbd86e90\": rpc error: code = NotFound desc = could not find container \"993107844dafc7c19c2354f3296a6ce66c54d9072cb6422cce7daf9efbd86e90\": container with ID starting with 993107844dafc7c19c2354f3296a6ce66c54d9072cb6422cce7daf9efbd86e90 not found: ID does not exist" Jan 30 18:57:13 crc kubenswrapper[4712]: I0130 18:57:13.812820 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e" path="/var/lib/kubelet/pods/1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e/volumes" Jan 30 18:57:18 crc kubenswrapper[4712]: I0130 18:57:18.800091 4712 scope.go:117] "RemoveContainer" containerID="46b29d6e1cd074134b6233a7b1c1425865a73376deb7780b7bd6be5ba0507c52" Jan 30 18:57:18 crc kubenswrapper[4712]: E0130 18:57:18.800855 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:57:29 crc kubenswrapper[4712]: I0130 18:57:29.800066 4712 scope.go:117] "RemoveContainer" containerID="46b29d6e1cd074134b6233a7b1c1425865a73376deb7780b7bd6be5ba0507c52" Jan 30 18:57:29 crc kubenswrapper[4712]: E0130 18:57:29.800750 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:57:30 crc kubenswrapper[4712]: E0130 18:57:30.964782 4712 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.246:56432->38.102.83.246:35825: write tcp 38.102.83.246:56432->38.102.83.246:35825: write: broken pipe Jan 30 18:57:40 crc kubenswrapper[4712]: I0130 18:57:40.799560 4712 scope.go:117] "RemoveContainer" containerID="46b29d6e1cd074134b6233a7b1c1425865a73376deb7780b7bd6be5ba0507c52" Jan 30 18:57:40 crc kubenswrapper[4712]: E0130 18:57:40.800432 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:57:53 crc kubenswrapper[4712]: I0130 18:57:53.807664 4712 scope.go:117] "RemoveContainer" containerID="46b29d6e1cd074134b6233a7b1c1425865a73376deb7780b7bd6be5ba0507c52" Jan 30 18:57:53 crc kubenswrapper[4712]: E0130 18:57:53.808342 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:58:05 crc kubenswrapper[4712]: I0130 18:58:05.799609 4712 scope.go:117] "RemoveContainer" containerID="46b29d6e1cd074134b6233a7b1c1425865a73376deb7780b7bd6be5ba0507c52" Jan 30 18:58:05 crc kubenswrapper[4712]: E0130 18:58:05.800527 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:58:20 crc kubenswrapper[4712]: I0130 18:58:20.800639 4712 scope.go:117] "RemoveContainer" containerID="46b29d6e1cd074134b6233a7b1c1425865a73376deb7780b7bd6be5ba0507c52" Jan 30 18:58:20 crc kubenswrapper[4712]: E0130 18:58:20.801966 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:58:31 crc kubenswrapper[4712]: I0130 18:58:31.799846 4712 scope.go:117] "RemoveContainer" containerID="46b29d6e1cd074134b6233a7b1c1425865a73376deb7780b7bd6be5ba0507c52" Jan 30 18:58:31 crc kubenswrapper[4712]: E0130 18:58:31.801062 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 18:58:44 crc kubenswrapper[4712]: I0130 18:58:44.799643 4712 scope.go:117] "RemoveContainer" containerID="46b29d6e1cd074134b6233a7b1c1425865a73376deb7780b7bd6be5ba0507c52" Jan 30 18:58:45 crc kubenswrapper[4712]: I0130 18:58:45.480316 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerStarted","Data":"88ab052542e3ac365696907ac55426c6d26f0a571987ca4ee98769d028e6b8e7"} Jan 30 18:59:05 crc kubenswrapper[4712]: I0130 18:59:05.839221 4712 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-xtbp5"] Jan 30 18:59:05 crc kubenswrapper[4712]: E0130 18:59:05.840344 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e" containerName="neutron-api" Jan 30 18:59:05 crc kubenswrapper[4712]: I0130 18:59:05.840377 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e" containerName="neutron-api" Jan 30 18:59:05 crc kubenswrapper[4712]: E0130 18:59:05.840409 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e" containerName="neutron-httpd" Jan 30 18:59:05 crc kubenswrapper[4712]: I0130 18:59:05.840415 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e" containerName="neutron-httpd" Jan 30 18:59:05 crc kubenswrapper[4712]: I0130 18:59:05.840922 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e" containerName="neutron-api" Jan 30 18:59:05 crc kubenswrapper[4712]: I0130 18:59:05.840940 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e7f646d-1be3-4503-8fd7-4c3d1bd13c2e" containerName="neutron-httpd" Jan 30 18:59:05 crc kubenswrapper[4712]: I0130 18:59:05.843067 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xtbp5" Jan 30 18:59:05 crc kubenswrapper[4712]: I0130 18:59:05.853311 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xtbp5"] Jan 30 18:59:05 crc kubenswrapper[4712]: I0130 18:59:05.935570 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp2wr\" (UniqueName: \"kubernetes.io/projected/735cc6d1-13aa-4a34-bb49-92a373c29c04-kube-api-access-rp2wr\") pod \"redhat-operators-xtbp5\" (UID: \"735cc6d1-13aa-4a34-bb49-92a373c29c04\") " pod="openshift-marketplace/redhat-operators-xtbp5" Jan 30 18:59:05 crc kubenswrapper[4712]: I0130 18:59:05.935704 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/735cc6d1-13aa-4a34-bb49-92a373c29c04-utilities\") pod \"redhat-operators-xtbp5\" (UID: \"735cc6d1-13aa-4a34-bb49-92a373c29c04\") " pod="openshift-marketplace/redhat-operators-xtbp5" Jan 30 18:59:05 crc kubenswrapper[4712]: I0130 18:59:05.935739 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/735cc6d1-13aa-4a34-bb49-92a373c29c04-catalog-content\") pod \"redhat-operators-xtbp5\" (UID: \"735cc6d1-13aa-4a34-bb49-92a373c29c04\") " pod="openshift-marketplace/redhat-operators-xtbp5" Jan 30 18:59:06 crc kubenswrapper[4712]: I0130 18:59:06.037495 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/735cc6d1-13aa-4a34-bb49-92a373c29c04-catalog-content\") pod \"redhat-operators-xtbp5\" (UID: \"735cc6d1-13aa-4a34-bb49-92a373c29c04\") " pod="openshift-marketplace/redhat-operators-xtbp5" Jan 30 18:59:06 crc kubenswrapper[4712]: I0130 18:59:06.037618 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rp2wr\" (UniqueName: \"kubernetes.io/projected/735cc6d1-13aa-4a34-bb49-92a373c29c04-kube-api-access-rp2wr\") pod \"redhat-operators-xtbp5\" (UID: 
\"735cc6d1-13aa-4a34-bb49-92a373c29c04\") " pod="openshift-marketplace/redhat-operators-xtbp5" Jan 30 18:59:06 crc kubenswrapper[4712]: I0130 18:59:06.037710 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/735cc6d1-13aa-4a34-bb49-92a373c29c04-utilities\") pod \"redhat-operators-xtbp5\" (UID: \"735cc6d1-13aa-4a34-bb49-92a373c29c04\") " pod="openshift-marketplace/redhat-operators-xtbp5" Jan 30 18:59:06 crc kubenswrapper[4712]: I0130 18:59:06.038184 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/735cc6d1-13aa-4a34-bb49-92a373c29c04-utilities\") pod \"redhat-operators-xtbp5\" (UID: \"735cc6d1-13aa-4a34-bb49-92a373c29c04\") " pod="openshift-marketplace/redhat-operators-xtbp5" Jan 30 18:59:06 crc kubenswrapper[4712]: I0130 18:59:06.038555 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/735cc6d1-13aa-4a34-bb49-92a373c29c04-catalog-content\") pod \"redhat-operators-xtbp5\" (UID: \"735cc6d1-13aa-4a34-bb49-92a373c29c04\") " pod="openshift-marketplace/redhat-operators-xtbp5" Jan 30 18:59:06 crc kubenswrapper[4712]: I0130 18:59:06.058747 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rp2wr\" (UniqueName: \"kubernetes.io/projected/735cc6d1-13aa-4a34-bb49-92a373c29c04-kube-api-access-rp2wr\") pod \"redhat-operators-xtbp5\" (UID: \"735cc6d1-13aa-4a34-bb49-92a373c29c04\") " pod="openshift-marketplace/redhat-operators-xtbp5" Jan 30 18:59:06 crc kubenswrapper[4712]: I0130 18:59:06.170948 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xtbp5" Jan 30 18:59:06 crc kubenswrapper[4712]: I0130 18:59:06.680294 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-xtbp5"] Jan 30 18:59:06 crc kubenswrapper[4712]: I0130 18:59:06.762170 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtbp5" event={"ID":"735cc6d1-13aa-4a34-bb49-92a373c29c04","Type":"ContainerStarted","Data":"d66eb34f1fdb3457dcbfd7a771e7e3e2988f6b4165a529c26aa929116bc8043a"} Jan 30 18:59:07 crc kubenswrapper[4712]: I0130 18:59:07.775136 4712 generic.go:334] "Generic (PLEG): container finished" podID="735cc6d1-13aa-4a34-bb49-92a373c29c04" containerID="6702fb868c5a5705b49fcfd6e30648bb102a7ec2d54ea0fa0d97ad7bf30c8044" exitCode=0 Jan 30 18:59:07 crc kubenswrapper[4712]: I0130 18:59:07.775415 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtbp5" event={"ID":"735cc6d1-13aa-4a34-bb49-92a373c29c04","Type":"ContainerDied","Data":"6702fb868c5a5705b49fcfd6e30648bb102a7ec2d54ea0fa0d97ad7bf30c8044"} Jan 30 18:59:09 crc kubenswrapper[4712]: I0130 18:59:09.808653 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtbp5" event={"ID":"735cc6d1-13aa-4a34-bb49-92a373c29c04","Type":"ContainerStarted","Data":"b5b7d696fc04c1c2672e56f2601b306020bd3db56b9d7884d0e64641e7ab909f"} Jan 30 18:59:16 crc kubenswrapper[4712]: I0130 18:59:16.867511 4712 generic.go:334] "Generic (PLEG): container finished" podID="735cc6d1-13aa-4a34-bb49-92a373c29c04" containerID="b5b7d696fc04c1c2672e56f2601b306020bd3db56b9d7884d0e64641e7ab909f" exitCode=0 Jan 30 18:59:16 crc kubenswrapper[4712]: I0130 18:59:16.867616 4712 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtbp5" event={"ID":"735cc6d1-13aa-4a34-bb49-92a373c29c04","Type":"ContainerDied","Data":"b5b7d696fc04c1c2672e56f2601b306020bd3db56b9d7884d0e64641e7ab909f"} Jan 30 18:59:16 crc kubenswrapper[4712]: I0130 18:59:16.873255 4712 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 18:59:17 crc kubenswrapper[4712]: I0130 18:59:17.883437 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtbp5" event={"ID":"735cc6d1-13aa-4a34-bb49-92a373c29c04","Type":"ContainerStarted","Data":"889311dbada71840dd874c13f64cb80dafc7425f6bcef9ba855a59a8a445d736"} Jan 30 18:59:17 crc kubenswrapper[4712]: I0130 18:59:17.915594 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-xtbp5" podStartSLOduration=3.379447869 podStartE2EDuration="12.915564922s" podCreationTimestamp="2026-01-30 18:59:05 +0000 UTC" firstStartedPulling="2026-01-30 18:59:07.778039998 +0000 UTC m=+7484.685049467" lastFinishedPulling="2026-01-30 18:59:17.314157041 +0000 UTC m=+7494.221166520" observedRunningTime="2026-01-30 18:59:17.903943561 +0000 UTC m=+7494.810953090" watchObservedRunningTime="2026-01-30 18:59:17.915564922 +0000 UTC m=+7494.822574431" Jan 30 18:59:26 crc kubenswrapper[4712]: I0130 18:59:26.171439 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-xtbp5" Jan 30 18:59:26 crc kubenswrapper[4712]: I0130 18:59:26.171995 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-xtbp5" Jan 30 18:59:27 crc kubenswrapper[4712]: I0130 18:59:27.244292 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xtbp5" podUID="735cc6d1-13aa-4a34-bb49-92a373c29c04" containerName="registry-server" probeResult="failure" output=< Jan 30 18:59:27 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 18:59:27 crc kubenswrapper[4712]: > Jan 30 18:59:37 crc kubenswrapper[4712]: I0130 18:59:37.241188 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xtbp5" podUID="735cc6d1-13aa-4a34-bb49-92a373c29c04" containerName="registry-server" probeResult="failure" output=< Jan 30 18:59:37 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 18:59:37 crc kubenswrapper[4712]: > Jan 30 18:59:47 crc kubenswrapper[4712]: I0130 18:59:47.232690 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-xtbp5" podUID="735cc6d1-13aa-4a34-bb49-92a373c29c04" containerName="registry-server" probeResult="failure" output=< Jan 30 18:59:47 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 18:59:47 crc kubenswrapper[4712]: > Jan 30 18:59:56 crc kubenswrapper[4712]: I0130 18:59:56.228382 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-xtbp5" Jan 30 18:59:56 crc kubenswrapper[4712]: I0130 18:59:56.302362 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-xtbp5" Jan 30 18:59:56 crc kubenswrapper[4712]: I0130 18:59:56.479615 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xtbp5"] Jan 30 18:59:57 crc kubenswrapper[4712]: I0130 
18:59:57.274336 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-xtbp5" podUID="735cc6d1-13aa-4a34-bb49-92a373c29c04" containerName="registry-server" containerID="cri-o://889311dbada71840dd874c13f64cb80dafc7425f6bcef9ba855a59a8a445d736" gracePeriod=2 Jan 30 18:59:58 crc kubenswrapper[4712]: I0130 18:59:58.196258 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xtbp5" Jan 30 18:59:58 crc kubenswrapper[4712]: I0130 18:59:58.248286 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/735cc6d1-13aa-4a34-bb49-92a373c29c04-utilities\") pod \"735cc6d1-13aa-4a34-bb49-92a373c29c04\" (UID: \"735cc6d1-13aa-4a34-bb49-92a373c29c04\") " Jan 30 18:59:58 crc kubenswrapper[4712]: I0130 18:59:58.248351 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/735cc6d1-13aa-4a34-bb49-92a373c29c04-catalog-content\") pod \"735cc6d1-13aa-4a34-bb49-92a373c29c04\" (UID: \"735cc6d1-13aa-4a34-bb49-92a373c29c04\") " Jan 30 18:59:58 crc kubenswrapper[4712]: I0130 18:59:58.248606 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rp2wr\" (UniqueName: \"kubernetes.io/projected/735cc6d1-13aa-4a34-bb49-92a373c29c04-kube-api-access-rp2wr\") pod \"735cc6d1-13aa-4a34-bb49-92a373c29c04\" (UID: \"735cc6d1-13aa-4a34-bb49-92a373c29c04\") " Jan 30 18:59:58 crc kubenswrapper[4712]: I0130 18:59:58.249648 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/735cc6d1-13aa-4a34-bb49-92a373c29c04-utilities" (OuterVolumeSpecName: "utilities") pod "735cc6d1-13aa-4a34-bb49-92a373c29c04" (UID: "735cc6d1-13aa-4a34-bb49-92a373c29c04"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:59:58 crc kubenswrapper[4712]: I0130 18:59:58.284053 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/735cc6d1-13aa-4a34-bb49-92a373c29c04-kube-api-access-rp2wr" (OuterVolumeSpecName: "kube-api-access-rp2wr") pod "735cc6d1-13aa-4a34-bb49-92a373c29c04" (UID: "735cc6d1-13aa-4a34-bb49-92a373c29c04"). InnerVolumeSpecName "kube-api-access-rp2wr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 18:59:58 crc kubenswrapper[4712]: I0130 18:59:58.322663 4712 generic.go:334] "Generic (PLEG): container finished" podID="735cc6d1-13aa-4a34-bb49-92a373c29c04" containerID="889311dbada71840dd874c13f64cb80dafc7425f6bcef9ba855a59a8a445d736" exitCode=0 Jan 30 18:59:58 crc kubenswrapper[4712]: I0130 18:59:58.322705 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtbp5" event={"ID":"735cc6d1-13aa-4a34-bb49-92a373c29c04","Type":"ContainerDied","Data":"889311dbada71840dd874c13f64cb80dafc7425f6bcef9ba855a59a8a445d736"} Jan 30 18:59:58 crc kubenswrapper[4712]: I0130 18:59:58.322731 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-xtbp5" event={"ID":"735cc6d1-13aa-4a34-bb49-92a373c29c04","Type":"ContainerDied","Data":"d66eb34f1fdb3457dcbfd7a771e7e3e2988f6b4165a529c26aa929116bc8043a"} Jan 30 18:59:58 crc kubenswrapper[4712]: I0130 18:59:58.322747 4712 scope.go:117] "RemoveContainer" containerID="889311dbada71840dd874c13f64cb80dafc7425f6bcef9ba855a59a8a445d736" Jan 30 18:59:58 crc kubenswrapper[4712]: I0130 18:59:58.322917 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-xtbp5" Jan 30 18:59:58 crc kubenswrapper[4712]: I0130 18:59:58.364026 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rp2wr\" (UniqueName: \"kubernetes.io/projected/735cc6d1-13aa-4a34-bb49-92a373c29c04-kube-api-access-rp2wr\") on node \"crc\" DevicePath \"\"" Jan 30 18:59:58 crc kubenswrapper[4712]: I0130 18:59:58.364059 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/735cc6d1-13aa-4a34-bb49-92a373c29c04-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 18:59:58 crc kubenswrapper[4712]: I0130 18:59:58.417949 4712 scope.go:117] "RemoveContainer" containerID="b5b7d696fc04c1c2672e56f2601b306020bd3db56b9d7884d0e64641e7ab909f" Jan 30 18:59:58 crc kubenswrapper[4712]: I0130 18:59:58.461106 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/735cc6d1-13aa-4a34-bb49-92a373c29c04-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "735cc6d1-13aa-4a34-bb49-92a373c29c04" (UID: "735cc6d1-13aa-4a34-bb49-92a373c29c04"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 18:59:58 crc kubenswrapper[4712]: I0130 18:59:58.466121 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/735cc6d1-13aa-4a34-bb49-92a373c29c04-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 18:59:58 crc kubenswrapper[4712]: I0130 18:59:58.482591 4712 scope.go:117] "RemoveContainer" containerID="6702fb868c5a5705b49fcfd6e30648bb102a7ec2d54ea0fa0d97ad7bf30c8044" Jan 30 18:59:58 crc kubenswrapper[4712]: I0130 18:59:58.514474 4712 scope.go:117] "RemoveContainer" containerID="889311dbada71840dd874c13f64cb80dafc7425f6bcef9ba855a59a8a445d736" Jan 30 18:59:58 crc kubenswrapper[4712]: E0130 18:59:58.515873 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"889311dbada71840dd874c13f64cb80dafc7425f6bcef9ba855a59a8a445d736\": container with ID starting with 889311dbada71840dd874c13f64cb80dafc7425f6bcef9ba855a59a8a445d736 not found: ID does not exist" containerID="889311dbada71840dd874c13f64cb80dafc7425f6bcef9ba855a59a8a445d736" Jan 30 18:59:58 crc kubenswrapper[4712]: I0130 18:59:58.515922 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"889311dbada71840dd874c13f64cb80dafc7425f6bcef9ba855a59a8a445d736"} err="failed to get container status \"889311dbada71840dd874c13f64cb80dafc7425f6bcef9ba855a59a8a445d736\": rpc error: code = NotFound desc = could not find container \"889311dbada71840dd874c13f64cb80dafc7425f6bcef9ba855a59a8a445d736\": container with ID starting with 889311dbada71840dd874c13f64cb80dafc7425f6bcef9ba855a59a8a445d736 not found: ID does not exist" Jan 30 18:59:58 crc kubenswrapper[4712]: I0130 18:59:58.515968 4712 scope.go:117] "RemoveContainer" containerID="b5b7d696fc04c1c2672e56f2601b306020bd3db56b9d7884d0e64641e7ab909f" Jan 30 18:59:58 crc kubenswrapper[4712]: E0130 18:59:58.516496 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5b7d696fc04c1c2672e56f2601b306020bd3db56b9d7884d0e64641e7ab909f\": container with ID starting with b5b7d696fc04c1c2672e56f2601b306020bd3db56b9d7884d0e64641e7ab909f not found: ID does not exist" containerID="b5b7d696fc04c1c2672e56f2601b306020bd3db56b9d7884d0e64641e7ab909f" Jan 30 18:59:58 crc kubenswrapper[4712]: I0130 18:59:58.516529 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5b7d696fc04c1c2672e56f2601b306020bd3db56b9d7884d0e64641e7ab909f"} err="failed to get container status \"b5b7d696fc04c1c2672e56f2601b306020bd3db56b9d7884d0e64641e7ab909f\": rpc error: code = NotFound desc = could not find container \"b5b7d696fc04c1c2672e56f2601b306020bd3db56b9d7884d0e64641e7ab909f\": container with ID starting with b5b7d696fc04c1c2672e56f2601b306020bd3db56b9d7884d0e64641e7ab909f not found: ID does not exist" Jan 30 18:59:58 crc kubenswrapper[4712]: I0130 18:59:58.516549 4712 scope.go:117] "RemoveContainer" containerID="6702fb868c5a5705b49fcfd6e30648bb102a7ec2d54ea0fa0d97ad7bf30c8044" Jan 30 18:59:58 crc kubenswrapper[4712]: E0130 18:59:58.516774 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6702fb868c5a5705b49fcfd6e30648bb102a7ec2d54ea0fa0d97ad7bf30c8044\": container with ID starting with 6702fb868c5a5705b49fcfd6e30648bb102a7ec2d54ea0fa0d97ad7bf30c8044 not found: ID does not exist" 
containerID="6702fb868c5a5705b49fcfd6e30648bb102a7ec2d54ea0fa0d97ad7bf30c8044" Jan 30 18:59:58 crc kubenswrapper[4712]: I0130 18:59:58.516818 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6702fb868c5a5705b49fcfd6e30648bb102a7ec2d54ea0fa0d97ad7bf30c8044"} err="failed to get container status \"6702fb868c5a5705b49fcfd6e30648bb102a7ec2d54ea0fa0d97ad7bf30c8044\": rpc error: code = NotFound desc = could not find container \"6702fb868c5a5705b49fcfd6e30648bb102a7ec2d54ea0fa0d97ad7bf30c8044\": container with ID starting with 6702fb868c5a5705b49fcfd6e30648bb102a7ec2d54ea0fa0d97ad7bf30c8044 not found: ID does not exist" Jan 30 18:59:58 crc kubenswrapper[4712]: I0130 18:59:58.666154 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-xtbp5"] Jan 30 18:59:58 crc kubenswrapper[4712]: I0130 18:59:58.675052 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-xtbp5"] Jan 30 18:59:59 crc kubenswrapper[4712]: I0130 18:59:59.812434 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="735cc6d1-13aa-4a34-bb49-92a373c29c04" path="/var/lib/kubelet/pods/735cc6d1-13aa-4a34-bb49-92a373c29c04/volumes" Jan 30 19:00:00 crc kubenswrapper[4712]: I0130 19:00:00.223269 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496660-q2gtw"] Jan 30 19:00:00 crc kubenswrapper[4712]: E0130 19:00:00.223690 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="735cc6d1-13aa-4a34-bb49-92a373c29c04" containerName="registry-server" Jan 30 19:00:00 crc kubenswrapper[4712]: I0130 19:00:00.223712 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="735cc6d1-13aa-4a34-bb49-92a373c29c04" containerName="registry-server" Jan 30 19:00:00 crc kubenswrapper[4712]: E0130 19:00:00.223743 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="735cc6d1-13aa-4a34-bb49-92a373c29c04" containerName="extract-content" Jan 30 19:00:00 crc kubenswrapper[4712]: I0130 19:00:00.223750 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="735cc6d1-13aa-4a34-bb49-92a373c29c04" containerName="extract-content" Jan 30 19:00:00 crc kubenswrapper[4712]: E0130 19:00:00.223775 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="735cc6d1-13aa-4a34-bb49-92a373c29c04" containerName="extract-utilities" Jan 30 19:00:00 crc kubenswrapper[4712]: I0130 19:00:00.223782 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="735cc6d1-13aa-4a34-bb49-92a373c29c04" containerName="extract-utilities" Jan 30 19:00:00 crc kubenswrapper[4712]: I0130 19:00:00.223999 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="735cc6d1-13aa-4a34-bb49-92a373c29c04" containerName="registry-server" Jan 30 19:00:00 crc kubenswrapper[4712]: I0130 19:00:00.224778 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496660-q2gtw" Jan 30 19:00:00 crc kubenswrapper[4712]: I0130 19:00:00.227813 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 19:00:00 crc kubenswrapper[4712]: I0130 19:00:00.229533 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 19:00:00 crc kubenswrapper[4712]: I0130 19:00:00.243338 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496660-q2gtw"] Jan 30 19:00:00 crc kubenswrapper[4712]: I0130 19:00:00.307357 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8156dff9-7413-4d68-b0d4-8ce3ad14d768-config-volume\") pod \"collect-profiles-29496660-q2gtw\" (UID: \"8156dff9-7413-4d68-b0d4-8ce3ad14d768\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496660-q2gtw" Jan 30 19:00:00 crc kubenswrapper[4712]: I0130 19:00:00.307414 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8156dff9-7413-4d68-b0d4-8ce3ad14d768-secret-volume\") pod \"collect-profiles-29496660-q2gtw\" (UID: \"8156dff9-7413-4d68-b0d4-8ce3ad14d768\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496660-q2gtw" Jan 30 19:00:00 crc kubenswrapper[4712]: I0130 19:00:00.307696 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsqp7\" (UniqueName: \"kubernetes.io/projected/8156dff9-7413-4d68-b0d4-8ce3ad14d768-kube-api-access-tsqp7\") pod \"collect-profiles-29496660-q2gtw\" (UID: \"8156dff9-7413-4d68-b0d4-8ce3ad14d768\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496660-q2gtw" Jan 30 19:00:00 crc kubenswrapper[4712]: I0130 19:00:00.409678 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8156dff9-7413-4d68-b0d4-8ce3ad14d768-config-volume\") pod \"collect-profiles-29496660-q2gtw\" (UID: \"8156dff9-7413-4d68-b0d4-8ce3ad14d768\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496660-q2gtw" Jan 30 19:00:00 crc kubenswrapper[4712]: I0130 19:00:00.409757 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8156dff9-7413-4d68-b0d4-8ce3ad14d768-secret-volume\") pod \"collect-profiles-29496660-q2gtw\" (UID: \"8156dff9-7413-4d68-b0d4-8ce3ad14d768\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496660-q2gtw" Jan 30 19:00:00 crc kubenswrapper[4712]: I0130 19:00:00.409914 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsqp7\" (UniqueName: \"kubernetes.io/projected/8156dff9-7413-4d68-b0d4-8ce3ad14d768-kube-api-access-tsqp7\") pod \"collect-profiles-29496660-q2gtw\" (UID: \"8156dff9-7413-4d68-b0d4-8ce3ad14d768\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496660-q2gtw" Jan 30 19:00:00 crc kubenswrapper[4712]: I0130 19:00:00.410668 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8156dff9-7413-4d68-b0d4-8ce3ad14d768-config-volume\") pod 
\"collect-profiles-29496660-q2gtw\" (UID: \"8156dff9-7413-4d68-b0d4-8ce3ad14d768\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496660-q2gtw" Jan 30 19:00:00 crc kubenswrapper[4712]: I0130 19:00:00.418683 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8156dff9-7413-4d68-b0d4-8ce3ad14d768-secret-volume\") pod \"collect-profiles-29496660-q2gtw\" (UID: \"8156dff9-7413-4d68-b0d4-8ce3ad14d768\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496660-q2gtw" Jan 30 19:00:00 crc kubenswrapper[4712]: I0130 19:00:00.432590 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsqp7\" (UniqueName: \"kubernetes.io/projected/8156dff9-7413-4d68-b0d4-8ce3ad14d768-kube-api-access-tsqp7\") pod \"collect-profiles-29496660-q2gtw\" (UID: \"8156dff9-7413-4d68-b0d4-8ce3ad14d768\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496660-q2gtw" Jan 30 19:00:00 crc kubenswrapper[4712]: I0130 19:00:00.553332 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496660-q2gtw" Jan 30 19:00:01 crc kubenswrapper[4712]: I0130 19:00:01.072928 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496660-q2gtw"] Jan 30 19:00:01 crc kubenswrapper[4712]: I0130 19:00:01.350884 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496660-q2gtw" event={"ID":"8156dff9-7413-4d68-b0d4-8ce3ad14d768","Type":"ContainerStarted","Data":"adeddce85d8795cccc57d650c6e18c15dda08423afea21ef37c4e0ee45270764"} Jan 30 19:00:01 crc kubenswrapper[4712]: I0130 19:00:01.350932 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496660-q2gtw" event={"ID":"8156dff9-7413-4d68-b0d4-8ce3ad14d768","Type":"ContainerStarted","Data":"65439617d535c45b69aecb5a2b6344b4c1ac0964d65972f8ced2a0a76889ee8f"} Jan 30 19:00:01 crc kubenswrapper[4712]: I0130 19:00:01.372638 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496660-q2gtw" podStartSLOduration=1.372620108 podStartE2EDuration="1.372620108s" podCreationTimestamp="2026-01-30 19:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 19:00:01.365908135 +0000 UTC m=+7538.272917624" watchObservedRunningTime="2026-01-30 19:00:01.372620108 +0000 UTC m=+7538.279629577" Jan 30 19:00:02 crc kubenswrapper[4712]: I0130 19:00:02.372525 4712 generic.go:334] "Generic (PLEG): container finished" podID="8156dff9-7413-4d68-b0d4-8ce3ad14d768" containerID="adeddce85d8795cccc57d650c6e18c15dda08423afea21ef37c4e0ee45270764" exitCode=0 Jan 30 19:00:02 crc kubenswrapper[4712]: I0130 19:00:02.372756 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496660-q2gtw" event={"ID":"8156dff9-7413-4d68-b0d4-8ce3ad14d768","Type":"ContainerDied","Data":"adeddce85d8795cccc57d650c6e18c15dda08423afea21ef37c4e0ee45270764"} Jan 30 19:00:03 crc kubenswrapper[4712]: I0130 19:00:03.863007 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496660-q2gtw" Jan 30 19:00:03 crc kubenswrapper[4712]: I0130 19:00:03.878979 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8156dff9-7413-4d68-b0d4-8ce3ad14d768-secret-volume\") pod \"8156dff9-7413-4d68-b0d4-8ce3ad14d768\" (UID: \"8156dff9-7413-4d68-b0d4-8ce3ad14d768\") " Jan 30 19:00:03 crc kubenswrapper[4712]: I0130 19:00:03.879121 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8156dff9-7413-4d68-b0d4-8ce3ad14d768-config-volume\") pod \"8156dff9-7413-4d68-b0d4-8ce3ad14d768\" (UID: \"8156dff9-7413-4d68-b0d4-8ce3ad14d768\") " Jan 30 19:00:03 crc kubenswrapper[4712]: I0130 19:00:03.879392 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tsqp7\" (UniqueName: \"kubernetes.io/projected/8156dff9-7413-4d68-b0d4-8ce3ad14d768-kube-api-access-tsqp7\") pod \"8156dff9-7413-4d68-b0d4-8ce3ad14d768\" (UID: \"8156dff9-7413-4d68-b0d4-8ce3ad14d768\") " Jan 30 19:00:03 crc kubenswrapper[4712]: I0130 19:00:03.882287 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8156dff9-7413-4d68-b0d4-8ce3ad14d768-config-volume" (OuterVolumeSpecName: "config-volume") pod "8156dff9-7413-4d68-b0d4-8ce3ad14d768" (UID: "8156dff9-7413-4d68-b0d4-8ce3ad14d768"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 19:00:03 crc kubenswrapper[4712]: I0130 19:00:03.888785 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8156dff9-7413-4d68-b0d4-8ce3ad14d768-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8156dff9-7413-4d68-b0d4-8ce3ad14d768" (UID: "8156dff9-7413-4d68-b0d4-8ce3ad14d768"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 19:00:03 crc kubenswrapper[4712]: I0130 19:00:03.909607 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8156dff9-7413-4d68-b0d4-8ce3ad14d768-kube-api-access-tsqp7" (OuterVolumeSpecName: "kube-api-access-tsqp7") pod "8156dff9-7413-4d68-b0d4-8ce3ad14d768" (UID: "8156dff9-7413-4d68-b0d4-8ce3ad14d768"). InnerVolumeSpecName "kube-api-access-tsqp7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 19:00:03 crc kubenswrapper[4712]: I0130 19:00:03.981585 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tsqp7\" (UniqueName: \"kubernetes.io/projected/8156dff9-7413-4d68-b0d4-8ce3ad14d768-kube-api-access-tsqp7\") on node \"crc\" DevicePath \"\"" Jan 30 19:00:03 crc kubenswrapper[4712]: I0130 19:00:03.981620 4712 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8156dff9-7413-4d68-b0d4-8ce3ad14d768-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 19:00:03 crc kubenswrapper[4712]: I0130 19:00:03.981632 4712 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8156dff9-7413-4d68-b0d4-8ce3ad14d768-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 19:00:04 crc kubenswrapper[4712]: I0130 19:00:04.396185 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496660-q2gtw" event={"ID":"8156dff9-7413-4d68-b0d4-8ce3ad14d768","Type":"ContainerDied","Data":"65439617d535c45b69aecb5a2b6344b4c1ac0964d65972f8ced2a0a76889ee8f"} Jan 30 19:00:04 crc kubenswrapper[4712]: I0130 19:00:04.396497 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65439617d535c45b69aecb5a2b6344b4c1ac0964d65972f8ced2a0a76889ee8f" Jan 30 19:00:04 crc kubenswrapper[4712]: I0130 19:00:04.396567 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496660-q2gtw" Jan 30 19:00:04 crc kubenswrapper[4712]: I0130 19:00:04.468773 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496615-94qjb"] Jan 30 19:00:04 crc kubenswrapper[4712]: I0130 19:00:04.475114 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496615-94qjb"] Jan 30 19:00:05 crc kubenswrapper[4712]: I0130 19:00:05.819737 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bd24b2c-f1ed-45ed-a37f-d1c813be0529" path="/var/lib/kubelet/pods/2bd24b2c-f1ed-45ed-a37f-d1c813be0529/volumes" Jan 30 19:01:00 crc kubenswrapper[4712]: I0130 19:01:00.178527 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29496661-zrl9j"] Jan 30 19:01:00 crc kubenswrapper[4712]: E0130 19:01:00.182532 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8156dff9-7413-4d68-b0d4-8ce3ad14d768" containerName="collect-profiles" Jan 30 19:01:00 crc kubenswrapper[4712]: I0130 19:01:00.182587 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="8156dff9-7413-4d68-b0d4-8ce3ad14d768" containerName="collect-profiles" Jan 30 19:01:00 crc kubenswrapper[4712]: I0130 19:01:00.182870 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="8156dff9-7413-4d68-b0d4-8ce3ad14d768" containerName="collect-profiles" Jan 30 19:01:00 crc kubenswrapper[4712]: I0130 19:01:00.183705 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29496661-zrl9j" Jan 30 19:01:00 crc kubenswrapper[4712]: I0130 19:01:00.187528 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29496661-zrl9j"] Jan 30 19:01:00 crc kubenswrapper[4712]: I0130 19:01:00.249951 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d73a275-c758-43d4-903a-fa746707b66c-combined-ca-bundle\") pod \"keystone-cron-29496661-zrl9j\" (UID: \"7d73a275-c758-43d4-903a-fa746707b66c\") " pod="openstack/keystone-cron-29496661-zrl9j" Jan 30 19:01:00 crc kubenswrapper[4712]: I0130 19:01:00.250038 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7d73a275-c758-43d4-903a-fa746707b66c-fernet-keys\") pod \"keystone-cron-29496661-zrl9j\" (UID: \"7d73a275-c758-43d4-903a-fa746707b66c\") " pod="openstack/keystone-cron-29496661-zrl9j" Jan 30 19:01:00 crc kubenswrapper[4712]: I0130 19:01:00.250115 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5x2n\" (UniqueName: \"kubernetes.io/projected/7d73a275-c758-43d4-903a-fa746707b66c-kube-api-access-x5x2n\") pod \"keystone-cron-29496661-zrl9j\" (UID: \"7d73a275-c758-43d4-903a-fa746707b66c\") " pod="openstack/keystone-cron-29496661-zrl9j" Jan 30 19:01:00 crc kubenswrapper[4712]: I0130 19:01:00.250151 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d73a275-c758-43d4-903a-fa746707b66c-config-data\") pod \"keystone-cron-29496661-zrl9j\" (UID: \"7d73a275-c758-43d4-903a-fa746707b66c\") " pod="openstack/keystone-cron-29496661-zrl9j" Jan 30 19:01:00 crc kubenswrapper[4712]: I0130 19:01:00.352050 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5x2n\" (UniqueName: \"kubernetes.io/projected/7d73a275-c758-43d4-903a-fa746707b66c-kube-api-access-x5x2n\") pod \"keystone-cron-29496661-zrl9j\" (UID: \"7d73a275-c758-43d4-903a-fa746707b66c\") " pod="openstack/keystone-cron-29496661-zrl9j" Jan 30 19:01:00 crc kubenswrapper[4712]: I0130 19:01:00.352115 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d73a275-c758-43d4-903a-fa746707b66c-config-data\") pod \"keystone-cron-29496661-zrl9j\" (UID: \"7d73a275-c758-43d4-903a-fa746707b66c\") " pod="openstack/keystone-cron-29496661-zrl9j" Jan 30 19:01:00 crc kubenswrapper[4712]: I0130 19:01:00.352276 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d73a275-c758-43d4-903a-fa746707b66c-combined-ca-bundle\") pod \"keystone-cron-29496661-zrl9j\" (UID: \"7d73a275-c758-43d4-903a-fa746707b66c\") " pod="openstack/keystone-cron-29496661-zrl9j" Jan 30 19:01:00 crc kubenswrapper[4712]: I0130 19:01:00.352310 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7d73a275-c758-43d4-903a-fa746707b66c-fernet-keys\") pod \"keystone-cron-29496661-zrl9j\" (UID: \"7d73a275-c758-43d4-903a-fa746707b66c\") " pod="openstack/keystone-cron-29496661-zrl9j" Jan 30 19:01:00 crc kubenswrapper[4712]: I0130 19:01:00.360858 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d73a275-c758-43d4-903a-fa746707b66c-config-data\") pod \"keystone-cron-29496661-zrl9j\" (UID: \"7d73a275-c758-43d4-903a-fa746707b66c\") " pod="openstack/keystone-cron-29496661-zrl9j" Jan 30 19:01:00 crc kubenswrapper[4712]: I0130 19:01:00.361786 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7d73a275-c758-43d4-903a-fa746707b66c-fernet-keys\") pod \"keystone-cron-29496661-zrl9j\" (UID: \"7d73a275-c758-43d4-903a-fa746707b66c\") " pod="openstack/keystone-cron-29496661-zrl9j" Jan 30 19:01:00 crc kubenswrapper[4712]: I0130 19:01:00.366868 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d73a275-c758-43d4-903a-fa746707b66c-combined-ca-bundle\") pod \"keystone-cron-29496661-zrl9j\" (UID: \"7d73a275-c758-43d4-903a-fa746707b66c\") " pod="openstack/keystone-cron-29496661-zrl9j" Jan 30 19:01:00 crc kubenswrapper[4712]: I0130 19:01:00.373326 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5x2n\" (UniqueName: \"kubernetes.io/projected/7d73a275-c758-43d4-903a-fa746707b66c-kube-api-access-x5x2n\") pod \"keystone-cron-29496661-zrl9j\" (UID: \"7d73a275-c758-43d4-903a-fa746707b66c\") " pod="openstack/keystone-cron-29496661-zrl9j" Jan 30 19:01:00 crc kubenswrapper[4712]: I0130 19:01:00.522048 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496661-zrl9j" Jan 30 19:01:01 crc kubenswrapper[4712]: I0130 19:01:01.054708 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29496661-zrl9j"] Jan 30 19:01:01 crc kubenswrapper[4712]: I0130 19:01:01.172652 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496661-zrl9j" event={"ID":"7d73a275-c758-43d4-903a-fa746707b66c","Type":"ContainerStarted","Data":"d2d8822a9c4da054c2ab980e42c6db83e0113e7c297951c4d4495ce55c14ebf0"} Jan 30 19:01:02 crc kubenswrapper[4712]: I0130 19:01:02.184158 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496661-zrl9j" event={"ID":"7d73a275-c758-43d4-903a-fa746707b66c","Type":"ContainerStarted","Data":"f3aa3760489c3f562c354ddfa0554fa1ce687c266ccc8b75acf38b370f1ce7f0"} Jan 30 19:01:02 crc kubenswrapper[4712]: I0130 19:01:02.208989 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29496661-zrl9j" podStartSLOduration=2.208966917 podStartE2EDuration="2.208966917s" podCreationTimestamp="2026-01-30 19:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 19:01:02.20371055 +0000 UTC m=+7599.110720029" watchObservedRunningTime="2026-01-30 19:01:02.208966917 +0000 UTC m=+7599.115976386" Jan 30 19:01:03 crc kubenswrapper[4712]: I0130 19:01:03.888822 4712 scope.go:117] "RemoveContainer" containerID="813ba8517838b1246180816cafded484d13b6970804e12f70189d1bc488b7d8a" Jan 30 19:01:05 crc kubenswrapper[4712]: I0130 19:01:05.216261 4712 generic.go:334] "Generic (PLEG): container finished" podID="7d73a275-c758-43d4-903a-fa746707b66c" containerID="f3aa3760489c3f562c354ddfa0554fa1ce687c266ccc8b75acf38b370f1ce7f0" exitCode=0 Jan 30 19:01:05 crc kubenswrapper[4712]: I0130 19:01:05.216359 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496661-zrl9j" 
event={"ID":"7d73a275-c758-43d4-903a-fa746707b66c","Type":"ContainerDied","Data":"f3aa3760489c3f562c354ddfa0554fa1ce687c266ccc8b75acf38b370f1ce7f0"} Jan 30 19:01:06 crc kubenswrapper[4712]: I0130 19:01:06.271718 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 19:01:06 crc kubenswrapper[4712]: I0130 19:01:06.272192 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 19:01:06 crc kubenswrapper[4712]: I0130 19:01:06.736677 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496661-zrl9j" Jan 30 19:01:06 crc kubenswrapper[4712]: I0130 19:01:06.787851 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d73a275-c758-43d4-903a-fa746707b66c-config-data\") pod \"7d73a275-c758-43d4-903a-fa746707b66c\" (UID: \"7d73a275-c758-43d4-903a-fa746707b66c\") " Jan 30 19:01:06 crc kubenswrapper[4712]: I0130 19:01:06.787993 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5x2n\" (UniqueName: \"kubernetes.io/projected/7d73a275-c758-43d4-903a-fa746707b66c-kube-api-access-x5x2n\") pod \"7d73a275-c758-43d4-903a-fa746707b66c\" (UID: \"7d73a275-c758-43d4-903a-fa746707b66c\") " Jan 30 19:01:06 crc kubenswrapper[4712]: I0130 19:01:06.788057 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d73a275-c758-43d4-903a-fa746707b66c-combined-ca-bundle\") pod \"7d73a275-c758-43d4-903a-fa746707b66c\" (UID: \"7d73a275-c758-43d4-903a-fa746707b66c\") " Jan 30 19:01:06 crc kubenswrapper[4712]: I0130 19:01:06.788256 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7d73a275-c758-43d4-903a-fa746707b66c-fernet-keys\") pod \"7d73a275-c758-43d4-903a-fa746707b66c\" (UID: \"7d73a275-c758-43d4-903a-fa746707b66c\") " Jan 30 19:01:06 crc kubenswrapper[4712]: I0130 19:01:06.793987 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d73a275-c758-43d4-903a-fa746707b66c-kube-api-access-x5x2n" (OuterVolumeSpecName: "kube-api-access-x5x2n") pod "7d73a275-c758-43d4-903a-fa746707b66c" (UID: "7d73a275-c758-43d4-903a-fa746707b66c"). InnerVolumeSpecName "kube-api-access-x5x2n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 19:01:06 crc kubenswrapper[4712]: I0130 19:01:06.794922 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d73a275-c758-43d4-903a-fa746707b66c-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7d73a275-c758-43d4-903a-fa746707b66c" (UID: "7d73a275-c758-43d4-903a-fa746707b66c"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 19:01:06 crc kubenswrapper[4712]: I0130 19:01:06.828139 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d73a275-c758-43d4-903a-fa746707b66c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7d73a275-c758-43d4-903a-fa746707b66c" (UID: "7d73a275-c758-43d4-903a-fa746707b66c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 19:01:06 crc kubenswrapper[4712]: I0130 19:01:06.865599 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d73a275-c758-43d4-903a-fa746707b66c-config-data" (OuterVolumeSpecName: "config-data") pod "7d73a275-c758-43d4-903a-fa746707b66c" (UID: "7d73a275-c758-43d4-903a-fa746707b66c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 19:01:06 crc kubenswrapper[4712]: I0130 19:01:06.890988 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d73a275-c758-43d4-903a-fa746707b66c-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 19:01:06 crc kubenswrapper[4712]: I0130 19:01:06.891025 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5x2n\" (UniqueName: \"kubernetes.io/projected/7d73a275-c758-43d4-903a-fa746707b66c-kube-api-access-x5x2n\") on node \"crc\" DevicePath \"\"" Jan 30 19:01:06 crc kubenswrapper[4712]: I0130 19:01:06.891049 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d73a275-c758-43d4-903a-fa746707b66c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 19:01:06 crc kubenswrapper[4712]: I0130 19:01:06.891061 4712 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7d73a275-c758-43d4-903a-fa746707b66c-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 19:01:07 crc kubenswrapper[4712]: I0130 19:01:07.235874 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29496661-zrl9j" Jan 30 19:01:07 crc kubenswrapper[4712]: I0130 19:01:07.235874 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496661-zrl9j" event={"ID":"7d73a275-c758-43d4-903a-fa746707b66c","Type":"ContainerDied","Data":"d2d8822a9c4da054c2ab980e42c6db83e0113e7c297951c4d4495ce55c14ebf0"} Jan 30 19:01:07 crc kubenswrapper[4712]: I0130 19:01:07.235929 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2d8822a9c4da054c2ab980e42c6db83e0113e7c297951c4d4495ce55c14ebf0" Jan 30 19:01:36 crc kubenswrapper[4712]: I0130 19:01:36.271159 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 19:01:36 crc kubenswrapper[4712]: I0130 19:01:36.271827 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 19:01:43 crc kubenswrapper[4712]: I0130 19:01:43.712178 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sqrws"] Jan 30 19:01:43 crc kubenswrapper[4712]: E0130 19:01:43.713261 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d73a275-c758-43d4-903a-fa746707b66c" containerName="keystone-cron" Jan 30 19:01:43 crc kubenswrapper[4712]: I0130 19:01:43.713278 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d73a275-c758-43d4-903a-fa746707b66c" containerName="keystone-cron" Jan 30 19:01:43 crc kubenswrapper[4712]: I0130 19:01:43.713498 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d73a275-c758-43d4-903a-fa746707b66c" containerName="keystone-cron" Jan 30 19:01:43 crc kubenswrapper[4712]: I0130 19:01:43.715147 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sqrws" Jan 30 19:01:43 crc kubenswrapper[4712]: I0130 19:01:43.738350 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sqrws"] Jan 30 19:01:43 crc kubenswrapper[4712]: I0130 19:01:43.880595 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53699239-d4be-42ec-a7d3-611c30b622a8-utilities\") pod \"certified-operators-sqrws\" (UID: \"53699239-d4be-42ec-a7d3-611c30b622a8\") " pod="openshift-marketplace/certified-operators-sqrws" Jan 30 19:01:43 crc kubenswrapper[4712]: I0130 19:01:43.880674 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2nvx\" (UniqueName: \"kubernetes.io/projected/53699239-d4be-42ec-a7d3-611c30b622a8-kube-api-access-s2nvx\") pod \"certified-operators-sqrws\" (UID: \"53699239-d4be-42ec-a7d3-611c30b622a8\") " pod="openshift-marketplace/certified-operators-sqrws" Jan 30 19:01:43 crc kubenswrapper[4712]: I0130 19:01:43.880712 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53699239-d4be-42ec-a7d3-611c30b622a8-catalog-content\") pod \"certified-operators-sqrws\" (UID: \"53699239-d4be-42ec-a7d3-611c30b622a8\") " pod="openshift-marketplace/certified-operators-sqrws" Jan 30 19:01:43 crc kubenswrapper[4712]: I0130 19:01:43.982835 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53699239-d4be-42ec-a7d3-611c30b622a8-utilities\") pod \"certified-operators-sqrws\" (UID: \"53699239-d4be-42ec-a7d3-611c30b622a8\") " pod="openshift-marketplace/certified-operators-sqrws" Jan 30 19:01:43 crc kubenswrapper[4712]: I0130 19:01:43.982942 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2nvx\" (UniqueName: \"kubernetes.io/projected/53699239-d4be-42ec-a7d3-611c30b622a8-kube-api-access-s2nvx\") pod \"certified-operators-sqrws\" (UID: \"53699239-d4be-42ec-a7d3-611c30b622a8\") " pod="openshift-marketplace/certified-operators-sqrws" Jan 30 19:01:43 crc kubenswrapper[4712]: I0130 19:01:43.982978 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53699239-d4be-42ec-a7d3-611c30b622a8-catalog-content\") pod \"certified-operators-sqrws\" (UID: \"53699239-d4be-42ec-a7d3-611c30b622a8\") " pod="openshift-marketplace/certified-operators-sqrws" Jan 30 19:01:43 crc kubenswrapper[4712]: I0130 19:01:43.983515 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53699239-d4be-42ec-a7d3-611c30b622a8-utilities\") pod \"certified-operators-sqrws\" (UID: \"53699239-d4be-42ec-a7d3-611c30b622a8\") " pod="openshift-marketplace/certified-operators-sqrws" Jan 30 19:01:43 crc kubenswrapper[4712]: I0130 19:01:43.983581 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53699239-d4be-42ec-a7d3-611c30b622a8-catalog-content\") pod \"certified-operators-sqrws\" (UID: \"53699239-d4be-42ec-a7d3-611c30b622a8\") " pod="openshift-marketplace/certified-operators-sqrws" Jan 30 19:01:44 crc kubenswrapper[4712]: I0130 19:01:44.006507 4712 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-s2nvx\" (UniqueName: \"kubernetes.io/projected/53699239-d4be-42ec-a7d3-611c30b622a8-kube-api-access-s2nvx\") pod \"certified-operators-sqrws\" (UID: \"53699239-d4be-42ec-a7d3-611c30b622a8\") " pod="openshift-marketplace/certified-operators-sqrws" Jan 30 19:01:44 crc kubenswrapper[4712]: I0130 19:01:44.036484 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sqrws" Jan 30 19:01:44 crc kubenswrapper[4712]: I0130 19:01:44.764245 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sqrws"] Jan 30 19:01:45 crc kubenswrapper[4712]: I0130 19:01:45.625771 4712 generic.go:334] "Generic (PLEG): container finished" podID="53699239-d4be-42ec-a7d3-611c30b622a8" containerID="98a73dd438ff83566d9ec00efd5196aaa5d1bf120ce8939bf70876554b2c797e" exitCode=0 Jan 30 19:01:45 crc kubenswrapper[4712]: I0130 19:01:45.625854 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sqrws" event={"ID":"53699239-d4be-42ec-a7d3-611c30b622a8","Type":"ContainerDied","Data":"98a73dd438ff83566d9ec00efd5196aaa5d1bf120ce8939bf70876554b2c797e"} Jan 30 19:01:45 crc kubenswrapper[4712]: I0130 19:01:45.626142 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sqrws" event={"ID":"53699239-d4be-42ec-a7d3-611c30b622a8","Type":"ContainerStarted","Data":"f9326f00694b983a3e4254d6168835e7d9a2d9edbf4c5eded4b9abc346587edb"} Jan 30 19:01:46 crc kubenswrapper[4712]: I0130 19:01:46.638307 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sqrws" event={"ID":"53699239-d4be-42ec-a7d3-611c30b622a8","Type":"ContainerStarted","Data":"5b804ddc2c9421db5077c2bcab22cfe37f9cdcec5c9b895e3810fc9e221f8066"} Jan 30 19:01:48 crc kubenswrapper[4712]: I0130 19:01:48.657846 4712 generic.go:334] "Generic (PLEG): container finished" podID="53699239-d4be-42ec-a7d3-611c30b622a8" containerID="5b804ddc2c9421db5077c2bcab22cfe37f9cdcec5c9b895e3810fc9e221f8066" exitCode=0 Jan 30 19:01:48 crc kubenswrapper[4712]: I0130 19:01:48.657926 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sqrws" event={"ID":"53699239-d4be-42ec-a7d3-611c30b622a8","Type":"ContainerDied","Data":"5b804ddc2c9421db5077c2bcab22cfe37f9cdcec5c9b895e3810fc9e221f8066"} Jan 30 19:01:49 crc kubenswrapper[4712]: I0130 19:01:49.673043 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sqrws" event={"ID":"53699239-d4be-42ec-a7d3-611c30b622a8","Type":"ContainerStarted","Data":"b855f072d1187fadcb5890079aee2061a6b1a715a4d8dbaf44b954910ba6e75f"} Jan 30 19:01:49 crc kubenswrapper[4712]: I0130 19:01:49.702667 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sqrws" podStartSLOduration=3.198599317 podStartE2EDuration="6.702650441s" podCreationTimestamp="2026-01-30 19:01:43 +0000 UTC" firstStartedPulling="2026-01-30 19:01:45.628456983 +0000 UTC m=+7642.535466492" lastFinishedPulling="2026-01-30 19:01:49.132508147 +0000 UTC m=+7646.039517616" observedRunningTime="2026-01-30 19:01:49.696501792 +0000 UTC m=+7646.603511281" watchObservedRunningTime="2026-01-30 19:01:49.702650441 +0000 UTC m=+7646.609659910" Jan 30 19:01:54 crc kubenswrapper[4712]: I0130 19:01:54.037048 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-sqrws" Jan 30 19:01:54 crc kubenswrapper[4712]: I0130 19:01:54.038366 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sqrws" Jan 30 19:01:55 crc kubenswrapper[4712]: I0130 19:01:55.084654 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-sqrws" podUID="53699239-d4be-42ec-a7d3-611c30b622a8" containerName="registry-server" probeResult="failure" output=< Jan 30 19:01:55 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 19:01:55 crc kubenswrapper[4712]: > Jan 30 19:02:04 crc kubenswrapper[4712]: I0130 19:02:04.097999 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sqrws" Jan 30 19:02:04 crc kubenswrapper[4712]: I0130 19:02:04.169402 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sqrws" Jan 30 19:02:04 crc kubenswrapper[4712]: I0130 19:02:04.344960 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sqrws"] Jan 30 19:02:05 crc kubenswrapper[4712]: I0130 19:02:05.830979 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sqrws" podUID="53699239-d4be-42ec-a7d3-611c30b622a8" containerName="registry-server" containerID="cri-o://b855f072d1187fadcb5890079aee2061a6b1a715a4d8dbaf44b954910ba6e75f" gracePeriod=2 Jan 30 19:02:06 crc kubenswrapper[4712]: I0130 19:02:06.271539 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 19:02:06 crc kubenswrapper[4712]: I0130 19:02:06.271993 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 19:02:06 crc kubenswrapper[4712]: I0130 19:02:06.272062 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" Jan 30 19:02:06 crc kubenswrapper[4712]: I0130 19:02:06.273335 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"88ab052542e3ac365696907ac55426c6d26f0a571987ca4ee98769d028e6b8e7"} pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 19:02:06 crc kubenswrapper[4712]: I0130 19:02:06.273423 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" containerID="cri-o://88ab052542e3ac365696907ac55426c6d26f0a571987ca4ee98769d028e6b8e7" gracePeriod=600 Jan 30 19:02:06 crc kubenswrapper[4712]: I0130 19:02:06.432654 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sqrws" Jan 30 19:02:06 crc kubenswrapper[4712]: I0130 19:02:06.562930 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2nvx\" (UniqueName: \"kubernetes.io/projected/53699239-d4be-42ec-a7d3-611c30b622a8-kube-api-access-s2nvx\") pod \"53699239-d4be-42ec-a7d3-611c30b622a8\" (UID: \"53699239-d4be-42ec-a7d3-611c30b622a8\") " Jan 30 19:02:06 crc kubenswrapper[4712]: I0130 19:02:06.563040 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53699239-d4be-42ec-a7d3-611c30b622a8-utilities\") pod \"53699239-d4be-42ec-a7d3-611c30b622a8\" (UID: \"53699239-d4be-42ec-a7d3-611c30b622a8\") " Jan 30 19:02:06 crc kubenswrapper[4712]: I0130 19:02:06.563137 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53699239-d4be-42ec-a7d3-611c30b622a8-catalog-content\") pod \"53699239-d4be-42ec-a7d3-611c30b622a8\" (UID: \"53699239-d4be-42ec-a7d3-611c30b622a8\") " Jan 30 19:02:06 crc kubenswrapper[4712]: I0130 19:02:06.564472 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53699239-d4be-42ec-a7d3-611c30b622a8-utilities" (OuterVolumeSpecName: "utilities") pod "53699239-d4be-42ec-a7d3-611c30b622a8" (UID: "53699239-d4be-42ec-a7d3-611c30b622a8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 19:02:06 crc kubenswrapper[4712]: I0130 19:02:06.571000 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53699239-d4be-42ec-a7d3-611c30b622a8-kube-api-access-s2nvx" (OuterVolumeSpecName: "kube-api-access-s2nvx") pod "53699239-d4be-42ec-a7d3-611c30b622a8" (UID: "53699239-d4be-42ec-a7d3-611c30b622a8"). InnerVolumeSpecName "kube-api-access-s2nvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 19:02:06 crc kubenswrapper[4712]: I0130 19:02:06.613714 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53699239-d4be-42ec-a7d3-611c30b622a8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "53699239-d4be-42ec-a7d3-611c30b622a8" (UID: "53699239-d4be-42ec-a7d3-611c30b622a8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 19:02:06 crc kubenswrapper[4712]: I0130 19:02:06.665843 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s2nvx\" (UniqueName: \"kubernetes.io/projected/53699239-d4be-42ec-a7d3-611c30b622a8-kube-api-access-s2nvx\") on node \"crc\" DevicePath \"\"" Jan 30 19:02:06 crc kubenswrapper[4712]: I0130 19:02:06.665876 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/53699239-d4be-42ec-a7d3-611c30b622a8-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 19:02:06 crc kubenswrapper[4712]: I0130 19:02:06.665885 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/53699239-d4be-42ec-a7d3-611c30b622a8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 19:02:06 crc kubenswrapper[4712]: I0130 19:02:06.843167 4712 generic.go:334] "Generic (PLEG): container finished" podID="53699239-d4be-42ec-a7d3-611c30b622a8" containerID="b855f072d1187fadcb5890079aee2061a6b1a715a4d8dbaf44b954910ba6e75f" exitCode=0 Jan 30 19:02:06 crc kubenswrapper[4712]: I0130 19:02:06.843237 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sqrws" event={"ID":"53699239-d4be-42ec-a7d3-611c30b622a8","Type":"ContainerDied","Data":"b855f072d1187fadcb5890079aee2061a6b1a715a4d8dbaf44b954910ba6e75f"} Jan 30 19:02:06 crc kubenswrapper[4712]: I0130 19:02:06.843264 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sqrws" Jan 30 19:02:06 crc kubenswrapper[4712]: I0130 19:02:06.844473 4712 scope.go:117] "RemoveContainer" containerID="b855f072d1187fadcb5890079aee2061a6b1a715a4d8dbaf44b954910ba6e75f" Jan 30 19:02:06 crc kubenswrapper[4712]: I0130 19:02:06.844323 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sqrws" event={"ID":"53699239-d4be-42ec-a7d3-611c30b622a8","Type":"ContainerDied","Data":"f9326f00694b983a3e4254d6168835e7d9a2d9edbf4c5eded4b9abc346587edb"} Jan 30 19:02:06 crc kubenswrapper[4712]: I0130 19:02:06.866113 4712 generic.go:334] "Generic (PLEG): container finished" podID="75ff6334-72a0-4748-bba6-0efb493c8033" containerID="88ab052542e3ac365696907ac55426c6d26f0a571987ca4ee98769d028e6b8e7" exitCode=0 Jan 30 19:02:06 crc kubenswrapper[4712]: I0130 19:02:06.866159 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerDied","Data":"88ab052542e3ac365696907ac55426c6d26f0a571987ca4ee98769d028e6b8e7"} Jan 30 19:02:06 crc kubenswrapper[4712]: I0130 19:02:06.866190 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerStarted","Data":"34acba06beb62928f94a8bb7527f7617f2f6979eb8c058280c48ac4a45c2a2fd"} Jan 30 19:02:06 crc kubenswrapper[4712]: I0130 19:02:06.945875 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sqrws"] Jan 30 19:02:06 crc kubenswrapper[4712]: I0130 19:02:06.946932 4712 scope.go:117] "RemoveContainer" containerID="5b804ddc2c9421db5077c2bcab22cfe37f9cdcec5c9b895e3810fc9e221f8066" Jan 30 19:02:06 crc kubenswrapper[4712]: I0130 19:02:06.957862 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/certified-operators-sqrws"] Jan 30 19:02:07 crc kubenswrapper[4712]: I0130 19:02:07.024061 4712 scope.go:117] "RemoveContainer" containerID="98a73dd438ff83566d9ec00efd5196aaa5d1bf120ce8939bf70876554b2c797e" Jan 30 19:02:07 crc kubenswrapper[4712]: I0130 19:02:07.073780 4712 scope.go:117] "RemoveContainer" containerID="b855f072d1187fadcb5890079aee2061a6b1a715a4d8dbaf44b954910ba6e75f" Jan 30 19:02:07 crc kubenswrapper[4712]: E0130 19:02:07.096838 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b855f072d1187fadcb5890079aee2061a6b1a715a4d8dbaf44b954910ba6e75f\": container with ID starting with b855f072d1187fadcb5890079aee2061a6b1a715a4d8dbaf44b954910ba6e75f not found: ID does not exist" containerID="b855f072d1187fadcb5890079aee2061a6b1a715a4d8dbaf44b954910ba6e75f" Jan 30 19:02:07 crc kubenswrapper[4712]: I0130 19:02:07.096942 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b855f072d1187fadcb5890079aee2061a6b1a715a4d8dbaf44b954910ba6e75f"} err="failed to get container status \"b855f072d1187fadcb5890079aee2061a6b1a715a4d8dbaf44b954910ba6e75f\": rpc error: code = NotFound desc = could not find container \"b855f072d1187fadcb5890079aee2061a6b1a715a4d8dbaf44b954910ba6e75f\": container with ID starting with b855f072d1187fadcb5890079aee2061a6b1a715a4d8dbaf44b954910ba6e75f not found: ID does not exist" Jan 30 19:02:07 crc kubenswrapper[4712]: I0130 19:02:07.096975 4712 scope.go:117] "RemoveContainer" containerID="5b804ddc2c9421db5077c2bcab22cfe37f9cdcec5c9b895e3810fc9e221f8066" Jan 30 19:02:07 crc kubenswrapper[4712]: E0130 19:02:07.106451 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b804ddc2c9421db5077c2bcab22cfe37f9cdcec5c9b895e3810fc9e221f8066\": container with ID starting with 5b804ddc2c9421db5077c2bcab22cfe37f9cdcec5c9b895e3810fc9e221f8066 not found: ID does not exist" containerID="5b804ddc2c9421db5077c2bcab22cfe37f9cdcec5c9b895e3810fc9e221f8066" Jan 30 19:02:07 crc kubenswrapper[4712]: I0130 19:02:07.106519 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b804ddc2c9421db5077c2bcab22cfe37f9cdcec5c9b895e3810fc9e221f8066"} err="failed to get container status \"5b804ddc2c9421db5077c2bcab22cfe37f9cdcec5c9b895e3810fc9e221f8066\": rpc error: code = NotFound desc = could not find container \"5b804ddc2c9421db5077c2bcab22cfe37f9cdcec5c9b895e3810fc9e221f8066\": container with ID starting with 5b804ddc2c9421db5077c2bcab22cfe37f9cdcec5c9b895e3810fc9e221f8066 not found: ID does not exist" Jan 30 19:02:07 crc kubenswrapper[4712]: I0130 19:02:07.106545 4712 scope.go:117] "RemoveContainer" containerID="98a73dd438ff83566d9ec00efd5196aaa5d1bf120ce8939bf70876554b2c797e" Jan 30 19:02:07 crc kubenswrapper[4712]: E0130 19:02:07.107352 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98a73dd438ff83566d9ec00efd5196aaa5d1bf120ce8939bf70876554b2c797e\": container with ID starting with 98a73dd438ff83566d9ec00efd5196aaa5d1bf120ce8939bf70876554b2c797e not found: ID does not exist" containerID="98a73dd438ff83566d9ec00efd5196aaa5d1bf120ce8939bf70876554b2c797e" Jan 30 19:02:07 crc kubenswrapper[4712]: I0130 19:02:07.107369 4712 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"98a73dd438ff83566d9ec00efd5196aaa5d1bf120ce8939bf70876554b2c797e"} err="failed to get container status \"98a73dd438ff83566d9ec00efd5196aaa5d1bf120ce8939bf70876554b2c797e\": rpc error: code = NotFound desc = could not find container \"98a73dd438ff83566d9ec00efd5196aaa5d1bf120ce8939bf70876554b2c797e\": container with ID starting with 98a73dd438ff83566d9ec00efd5196aaa5d1bf120ce8939bf70876554b2c797e not found: ID does not exist" Jan 30 19:02:07 crc kubenswrapper[4712]: I0130 19:02:07.107385 4712 scope.go:117] "RemoveContainer" containerID="46b29d6e1cd074134b6233a7b1c1425865a73376deb7780b7bd6be5ba0507c52" Jan 30 19:02:07 crc kubenswrapper[4712]: I0130 19:02:07.822429 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53699239-d4be-42ec-a7d3-611c30b622a8" path="/var/lib/kubelet/pods/53699239-d4be-42ec-a7d3-611c30b622a8/volumes" Jan 30 19:04:06 crc kubenswrapper[4712]: I0130 19:04:06.271235 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 19:04:06 crc kubenswrapper[4712]: I0130 19:04:06.271839 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 19:04:36 crc kubenswrapper[4712]: I0130 19:04:36.271277 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 19:04:36 crc kubenswrapper[4712]: I0130 19:04:36.272391 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 19:05:06 crc kubenswrapper[4712]: I0130 19:05:06.271684 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 19:05:06 crc kubenswrapper[4712]: I0130 19:05:06.273907 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 19:05:06 crc kubenswrapper[4712]: I0130 19:05:06.273974 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" Jan 30 19:05:06 crc kubenswrapper[4712]: I0130 19:05:06.274927 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"34acba06beb62928f94a8bb7527f7617f2f6979eb8c058280c48ac4a45c2a2fd"} pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 19:05:06 crc kubenswrapper[4712]: I0130 19:05:06.274990 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" containerID="cri-o://34acba06beb62928f94a8bb7527f7617f2f6979eb8c058280c48ac4a45c2a2fd" gracePeriod=600 Jan 30 19:05:06 crc kubenswrapper[4712]: E0130 19:05:06.435695 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:05:07 crc kubenswrapper[4712]: I0130 19:05:07.025938 4712 generic.go:334] "Generic (PLEG): container finished" podID="75ff6334-72a0-4748-bba6-0efb493c8033" containerID="34acba06beb62928f94a8bb7527f7617f2f6979eb8c058280c48ac4a45c2a2fd" exitCode=0 Jan 30 19:05:07 crc kubenswrapper[4712]: I0130 19:05:07.025989 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerDied","Data":"34acba06beb62928f94a8bb7527f7617f2f6979eb8c058280c48ac4a45c2a2fd"} Jan 30 19:05:07 crc kubenswrapper[4712]: I0130 19:05:07.026026 4712 scope.go:117] "RemoveContainer" containerID="88ab052542e3ac365696907ac55426c6d26f0a571987ca4ee98769d028e6b8e7" Jan 30 19:05:07 crc kubenswrapper[4712]: I0130 19:05:07.027444 4712 scope.go:117] "RemoveContainer" containerID="34acba06beb62928f94a8bb7527f7617f2f6979eb8c058280c48ac4a45c2a2fd" Jan 30 19:05:07 crc kubenswrapper[4712]: E0130 19:05:07.028331 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:05:18 crc kubenswrapper[4712]: I0130 19:05:18.031336 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g9dqs"] Jan 30 19:05:18 crc kubenswrapper[4712]: E0130 19:05:18.034238 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53699239-d4be-42ec-a7d3-611c30b622a8" containerName="extract-utilities" Jan 30 19:05:18 crc kubenswrapper[4712]: I0130 19:05:18.034422 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="53699239-d4be-42ec-a7d3-611c30b622a8" containerName="extract-utilities" Jan 30 19:05:18 crc kubenswrapper[4712]: E0130 19:05:18.034601 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53699239-d4be-42ec-a7d3-611c30b622a8" containerName="extract-content" Jan 30 19:05:18 crc kubenswrapper[4712]: I0130 19:05:18.034688 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="53699239-d4be-42ec-a7d3-611c30b622a8" 
containerName="extract-content" Jan 30 19:05:18 crc kubenswrapper[4712]: E0130 19:05:18.034792 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53699239-d4be-42ec-a7d3-611c30b622a8" containerName="registry-server" Jan 30 19:05:18 crc kubenswrapper[4712]: I0130 19:05:18.034917 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="53699239-d4be-42ec-a7d3-611c30b622a8" containerName="registry-server" Jan 30 19:05:18 crc kubenswrapper[4712]: I0130 19:05:18.035349 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="53699239-d4be-42ec-a7d3-611c30b622a8" containerName="registry-server" Jan 30 19:05:18 crc kubenswrapper[4712]: I0130 19:05:18.040318 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g9dqs" Jan 30 19:05:18 crc kubenswrapper[4712]: I0130 19:05:18.045402 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g9dqs"] Jan 30 19:05:18 crc kubenswrapper[4712]: I0130 19:05:18.190100 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01-utilities\") pod \"community-operators-g9dqs\" (UID: \"8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01\") " pod="openshift-marketplace/community-operators-g9dqs" Jan 30 19:05:18 crc kubenswrapper[4712]: I0130 19:05:18.190166 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01-catalog-content\") pod \"community-operators-g9dqs\" (UID: \"8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01\") " pod="openshift-marketplace/community-operators-g9dqs" Jan 30 19:05:18 crc kubenswrapper[4712]: I0130 19:05:18.190206 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snjgs\" (UniqueName: \"kubernetes.io/projected/8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01-kube-api-access-snjgs\") pod \"community-operators-g9dqs\" (UID: \"8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01\") " pod="openshift-marketplace/community-operators-g9dqs" Jan 30 19:05:18 crc kubenswrapper[4712]: I0130 19:05:18.291461 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snjgs\" (UniqueName: \"kubernetes.io/projected/8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01-kube-api-access-snjgs\") pod \"community-operators-g9dqs\" (UID: \"8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01\") " pod="openshift-marketplace/community-operators-g9dqs" Jan 30 19:05:18 crc kubenswrapper[4712]: I0130 19:05:18.291626 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01-utilities\") pod \"community-operators-g9dqs\" (UID: \"8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01\") " pod="openshift-marketplace/community-operators-g9dqs" Jan 30 19:05:18 crc kubenswrapper[4712]: I0130 19:05:18.291665 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01-catalog-content\") pod \"community-operators-g9dqs\" (UID: \"8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01\") " pod="openshift-marketplace/community-operators-g9dqs" Jan 30 19:05:18 crc kubenswrapper[4712]: I0130 19:05:18.292126 4712 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01-utilities\") pod \"community-operators-g9dqs\" (UID: \"8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01\") " pod="openshift-marketplace/community-operators-g9dqs" Jan 30 19:05:18 crc kubenswrapper[4712]: I0130 19:05:18.292161 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01-catalog-content\") pod \"community-operators-g9dqs\" (UID: \"8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01\") " pod="openshift-marketplace/community-operators-g9dqs" Jan 30 19:05:18 crc kubenswrapper[4712]: I0130 19:05:18.336167 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snjgs\" (UniqueName: \"kubernetes.io/projected/8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01-kube-api-access-snjgs\") pod \"community-operators-g9dqs\" (UID: \"8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01\") " pod="openshift-marketplace/community-operators-g9dqs" Jan 30 19:05:18 crc kubenswrapper[4712]: I0130 19:05:18.410830 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g9dqs" Jan 30 19:05:19 crc kubenswrapper[4712]: I0130 19:05:19.048658 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g9dqs"] Jan 30 19:05:19 crc kubenswrapper[4712]: I0130 19:05:19.155520 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g9dqs" event={"ID":"8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01","Type":"ContainerStarted","Data":"ce84a9878241cca4833be19bab322af1f0aa2ffa9f4bc4657b835504634120a9"} Jan 30 19:05:20 crc kubenswrapper[4712]: I0130 19:05:20.175930 4712 generic.go:334] "Generic (PLEG): container finished" podID="8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01" containerID="450155c805c52e2462792db1e2918e5abcd2879b1ac74c9564ba4ae96a5115b8" exitCode=0 Jan 30 19:05:20 crc kubenswrapper[4712]: I0130 19:05:20.176262 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g9dqs" event={"ID":"8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01","Type":"ContainerDied","Data":"450155c805c52e2462792db1e2918e5abcd2879b1ac74c9564ba4ae96a5115b8"} Jan 30 19:05:20 crc kubenswrapper[4712]: I0130 19:05:20.178705 4712 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 19:05:20 crc kubenswrapper[4712]: I0130 19:05:20.801362 4712 scope.go:117] "RemoveContainer" containerID="34acba06beb62928f94a8bb7527f7617f2f6979eb8c058280c48ac4a45c2a2fd" Jan 30 19:05:20 crc kubenswrapper[4712]: E0130 19:05:20.802554 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:05:21 crc kubenswrapper[4712]: I0130 19:05:21.186717 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g9dqs" event={"ID":"8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01","Type":"ContainerStarted","Data":"8d541181cac4ad955f88fc3635ca2a65851e2772687df48af5e310d3046e2b59"} Jan 30 19:05:23 crc kubenswrapper[4712]: I0130 19:05:23.210984 4712 
generic.go:334] "Generic (PLEG): container finished" podID="8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01" containerID="8d541181cac4ad955f88fc3635ca2a65851e2772687df48af5e310d3046e2b59" exitCode=0 Jan 30 19:05:23 crc kubenswrapper[4712]: I0130 19:05:23.211039 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g9dqs" event={"ID":"8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01","Type":"ContainerDied","Data":"8d541181cac4ad955f88fc3635ca2a65851e2772687df48af5e310d3046e2b59"} Jan 30 19:05:24 crc kubenswrapper[4712]: I0130 19:05:24.228743 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g9dqs" event={"ID":"8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01","Type":"ContainerStarted","Data":"de23146f9a16c608b124cc2c8fbd3f349485829a9e012d69e0be45d5e90c5881"} Jan 30 19:05:24 crc kubenswrapper[4712]: I0130 19:05:24.256281 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g9dqs" podStartSLOduration=2.7912990559999997 podStartE2EDuration="6.256262957s" podCreationTimestamp="2026-01-30 19:05:18 +0000 UTC" firstStartedPulling="2026-01-30 19:05:20.178454534 +0000 UTC m=+7857.085464013" lastFinishedPulling="2026-01-30 19:05:23.643418445 +0000 UTC m=+7860.550427914" observedRunningTime="2026-01-30 19:05:24.247721591 +0000 UTC m=+7861.154731060" watchObservedRunningTime="2026-01-30 19:05:24.256262957 +0000 UTC m=+7861.163272426" Jan 30 19:05:28 crc kubenswrapper[4712]: I0130 19:05:28.411836 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-g9dqs" Jan 30 19:05:28 crc kubenswrapper[4712]: I0130 19:05:28.412158 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g9dqs" Jan 30 19:05:28 crc kubenswrapper[4712]: I0130 19:05:28.481510 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g9dqs" Jan 30 19:05:29 crc kubenswrapper[4712]: I0130 19:05:29.360413 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g9dqs" Jan 30 19:05:29 crc kubenswrapper[4712]: I0130 19:05:29.426703 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g9dqs"] Jan 30 19:05:31 crc kubenswrapper[4712]: I0130 19:05:31.298695 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-g9dqs" podUID="8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01" containerName="registry-server" containerID="cri-o://de23146f9a16c608b124cc2c8fbd3f349485829a9e012d69e0be45d5e90c5881" gracePeriod=2 Jan 30 19:05:31 crc kubenswrapper[4712]: I0130 19:05:31.767479 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g9dqs" Jan 30 19:05:31 crc kubenswrapper[4712]: I0130 19:05:31.810848 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01-catalog-content\") pod \"8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01\" (UID: \"8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01\") " Jan 30 19:05:31 crc kubenswrapper[4712]: I0130 19:05:31.811033 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-snjgs\" (UniqueName: \"kubernetes.io/projected/8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01-kube-api-access-snjgs\") pod \"8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01\" (UID: \"8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01\") " Jan 30 19:05:31 crc kubenswrapper[4712]: I0130 19:05:31.811175 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01-utilities\") pod \"8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01\" (UID: \"8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01\") " Jan 30 19:05:31 crc kubenswrapper[4712]: I0130 19:05:31.811993 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01-utilities" (OuterVolumeSpecName: "utilities") pod "8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01" (UID: "8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 19:05:31 crc kubenswrapper[4712]: I0130 19:05:31.816598 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 19:05:31 crc kubenswrapper[4712]: I0130 19:05:31.834270 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01-kube-api-access-snjgs" (OuterVolumeSpecName: "kube-api-access-snjgs") pod "8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01" (UID: "8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01"). InnerVolumeSpecName "kube-api-access-snjgs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 19:05:31 crc kubenswrapper[4712]: I0130 19:05:31.881238 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01" (UID: "8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 19:05:31 crc kubenswrapper[4712]: I0130 19:05:31.919103 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 19:05:31 crc kubenswrapper[4712]: I0130 19:05:31.919152 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-snjgs\" (UniqueName: \"kubernetes.io/projected/8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01-kube-api-access-snjgs\") on node \"crc\" DevicePath \"\"" Jan 30 19:05:32 crc kubenswrapper[4712]: I0130 19:05:32.315307 4712 generic.go:334] "Generic (PLEG): container finished" podID="8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01" containerID="de23146f9a16c608b124cc2c8fbd3f349485829a9e012d69e0be45d5e90c5881" exitCode=0 Jan 30 19:05:32 crc kubenswrapper[4712]: I0130 19:05:32.315357 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g9dqs" event={"ID":"8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01","Type":"ContainerDied","Data":"de23146f9a16c608b124cc2c8fbd3f349485829a9e012d69e0be45d5e90c5881"} Jan 30 19:05:32 crc kubenswrapper[4712]: I0130 19:05:32.315389 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g9dqs" event={"ID":"8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01","Type":"ContainerDied","Data":"ce84a9878241cca4833be19bab322af1f0aa2ffa9f4bc4657b835504634120a9"} Jan 30 19:05:32 crc kubenswrapper[4712]: I0130 19:05:32.315410 4712 scope.go:117] "RemoveContainer" containerID="de23146f9a16c608b124cc2c8fbd3f349485829a9e012d69e0be45d5e90c5881" Jan 30 19:05:32 crc kubenswrapper[4712]: I0130 19:05:32.315554 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g9dqs" Jan 30 19:05:32 crc kubenswrapper[4712]: I0130 19:05:32.343938 4712 scope.go:117] "RemoveContainer" containerID="8d541181cac4ad955f88fc3635ca2a65851e2772687df48af5e310d3046e2b59" Jan 30 19:05:32 crc kubenswrapper[4712]: I0130 19:05:32.374525 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g9dqs"] Jan 30 19:05:32 crc kubenswrapper[4712]: I0130 19:05:32.379904 4712 scope.go:117] "RemoveContainer" containerID="450155c805c52e2462792db1e2918e5abcd2879b1ac74c9564ba4ae96a5115b8" Jan 30 19:05:32 crc kubenswrapper[4712]: I0130 19:05:32.385976 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-g9dqs"] Jan 30 19:05:32 crc kubenswrapper[4712]: I0130 19:05:32.447310 4712 scope.go:117] "RemoveContainer" containerID="de23146f9a16c608b124cc2c8fbd3f349485829a9e012d69e0be45d5e90c5881" Jan 30 19:05:32 crc kubenswrapper[4712]: E0130 19:05:32.447760 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de23146f9a16c608b124cc2c8fbd3f349485829a9e012d69e0be45d5e90c5881\": container with ID starting with de23146f9a16c608b124cc2c8fbd3f349485829a9e012d69e0be45d5e90c5881 not found: ID does not exist" containerID="de23146f9a16c608b124cc2c8fbd3f349485829a9e012d69e0be45d5e90c5881" Jan 30 19:05:32 crc kubenswrapper[4712]: I0130 19:05:32.447811 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de23146f9a16c608b124cc2c8fbd3f349485829a9e012d69e0be45d5e90c5881"} err="failed to get container status \"de23146f9a16c608b124cc2c8fbd3f349485829a9e012d69e0be45d5e90c5881\": rpc error: code = NotFound desc = could not find container \"de23146f9a16c608b124cc2c8fbd3f349485829a9e012d69e0be45d5e90c5881\": container with ID starting with de23146f9a16c608b124cc2c8fbd3f349485829a9e012d69e0be45d5e90c5881 not found: ID does not exist" Jan 30 19:05:32 crc kubenswrapper[4712]: I0130 19:05:32.447837 4712 scope.go:117] "RemoveContainer" containerID="8d541181cac4ad955f88fc3635ca2a65851e2772687df48af5e310d3046e2b59" Jan 30 19:05:32 crc kubenswrapper[4712]: E0130 19:05:32.448291 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d541181cac4ad955f88fc3635ca2a65851e2772687df48af5e310d3046e2b59\": container with ID starting with 8d541181cac4ad955f88fc3635ca2a65851e2772687df48af5e310d3046e2b59 not found: ID does not exist" containerID="8d541181cac4ad955f88fc3635ca2a65851e2772687df48af5e310d3046e2b59" Jan 30 19:05:32 crc kubenswrapper[4712]: I0130 19:05:32.448324 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d541181cac4ad955f88fc3635ca2a65851e2772687df48af5e310d3046e2b59"} err="failed to get container status \"8d541181cac4ad955f88fc3635ca2a65851e2772687df48af5e310d3046e2b59\": rpc error: code = NotFound desc = could not find container \"8d541181cac4ad955f88fc3635ca2a65851e2772687df48af5e310d3046e2b59\": container with ID starting with 8d541181cac4ad955f88fc3635ca2a65851e2772687df48af5e310d3046e2b59 not found: ID does not exist" Jan 30 19:05:32 crc kubenswrapper[4712]: I0130 19:05:32.448342 4712 scope.go:117] "RemoveContainer" containerID="450155c805c52e2462792db1e2918e5abcd2879b1ac74c9564ba4ae96a5115b8" Jan 30 19:05:32 crc kubenswrapper[4712]: E0130 19:05:32.448704 4712 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"450155c805c52e2462792db1e2918e5abcd2879b1ac74c9564ba4ae96a5115b8\": container with ID starting with 450155c805c52e2462792db1e2918e5abcd2879b1ac74c9564ba4ae96a5115b8 not found: ID does not exist" containerID="450155c805c52e2462792db1e2918e5abcd2879b1ac74c9564ba4ae96a5115b8" Jan 30 19:05:32 crc kubenswrapper[4712]: I0130 19:05:32.448757 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"450155c805c52e2462792db1e2918e5abcd2879b1ac74c9564ba4ae96a5115b8"} err="failed to get container status \"450155c805c52e2462792db1e2918e5abcd2879b1ac74c9564ba4ae96a5115b8\": rpc error: code = NotFound desc = could not find container \"450155c805c52e2462792db1e2918e5abcd2879b1ac74c9564ba4ae96a5115b8\": container with ID starting with 450155c805c52e2462792db1e2918e5abcd2879b1ac74c9564ba4ae96a5115b8 not found: ID does not exist" Jan 30 19:05:33 crc kubenswrapper[4712]: I0130 19:05:33.836123 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01" path="/var/lib/kubelet/pods/8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01/volumes" Jan 30 19:05:35 crc kubenswrapper[4712]: I0130 19:05:35.801311 4712 scope.go:117] "RemoveContainer" containerID="34acba06beb62928f94a8bb7527f7617f2f6979eb8c058280c48ac4a45c2a2fd" Jan 30 19:05:35 crc kubenswrapper[4712]: E0130 19:05:35.801928 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:05:46 crc kubenswrapper[4712]: I0130 19:05:46.800074 4712 scope.go:117] "RemoveContainer" containerID="34acba06beb62928f94a8bb7527f7617f2f6979eb8c058280c48ac4a45c2a2fd" Jan 30 19:05:46 crc kubenswrapper[4712]: E0130 19:05:46.801795 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:05:58 crc kubenswrapper[4712]: I0130 19:05:58.800266 4712 scope.go:117] "RemoveContainer" containerID="34acba06beb62928f94a8bb7527f7617f2f6979eb8c058280c48ac4a45c2a2fd" Jan 30 19:05:58 crc kubenswrapper[4712]: E0130 19:05:58.801581 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:06:10 crc kubenswrapper[4712]: I0130 19:06:10.800852 4712 scope.go:117] "RemoveContainer" containerID="34acba06beb62928f94a8bb7527f7617f2f6979eb8c058280c48ac4a45c2a2fd" Jan 30 19:06:10 crc kubenswrapper[4712]: E0130 19:06:10.801720 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:06:23 crc kubenswrapper[4712]: I0130 19:06:23.813697 4712 scope.go:117] "RemoveContainer" containerID="34acba06beb62928f94a8bb7527f7617f2f6979eb8c058280c48ac4a45c2a2fd" Jan 30 19:06:23 crc kubenswrapper[4712]: E0130 19:06:23.816986 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:06:37 crc kubenswrapper[4712]: I0130 19:06:37.800489 4712 scope.go:117] "RemoveContainer" containerID="34acba06beb62928f94a8bb7527f7617f2f6979eb8c058280c48ac4a45c2a2fd" Jan 30 19:06:37 crc kubenswrapper[4712]: E0130 19:06:37.801222 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:06:51 crc kubenswrapper[4712]: I0130 19:06:51.799645 4712 scope.go:117] "RemoveContainer" containerID="34acba06beb62928f94a8bb7527f7617f2f6979eb8c058280c48ac4a45c2a2fd" Jan 30 19:06:51 crc kubenswrapper[4712]: E0130 19:06:51.800468 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:07:03 crc kubenswrapper[4712]: I0130 19:07:03.807435 4712 scope.go:117] "RemoveContainer" containerID="34acba06beb62928f94a8bb7527f7617f2f6979eb8c058280c48ac4a45c2a2fd" Jan 30 19:07:03 crc kubenswrapper[4712]: E0130 19:07:03.808357 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:07:16 crc kubenswrapper[4712]: I0130 19:07:16.800067 4712 scope.go:117] "RemoveContainer" containerID="34acba06beb62928f94a8bb7527f7617f2f6979eb8c058280c48ac4a45c2a2fd" Jan 30 19:07:16 crc kubenswrapper[4712]: E0130 19:07:16.800914 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:07:27 crc kubenswrapper[4712]: I0130 19:07:27.800454 4712 scope.go:117] "RemoveContainer" containerID="34acba06beb62928f94a8bb7527f7617f2f6979eb8c058280c48ac4a45c2a2fd" Jan 30 19:07:27 crc kubenswrapper[4712]: E0130 19:07:27.801479 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:07:42 crc kubenswrapper[4712]: I0130 19:07:42.799446 4712 scope.go:117] "RemoveContainer" containerID="34acba06beb62928f94a8bb7527f7617f2f6979eb8c058280c48ac4a45c2a2fd" Jan 30 19:07:42 crc kubenswrapper[4712]: E0130 19:07:42.800172 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:07:57 crc kubenswrapper[4712]: I0130 19:07:57.800897 4712 scope.go:117] "RemoveContainer" containerID="34acba06beb62928f94a8bb7527f7617f2f6979eb8c058280c48ac4a45c2a2fd" Jan 30 19:07:57 crc kubenswrapper[4712]: E0130 19:07:57.801866 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:08:01 crc kubenswrapper[4712]: I0130 19:08:01.372191 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cp678"] Jan 30 19:08:01 crc kubenswrapper[4712]: E0130 19:08:01.373189 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01" containerName="registry-server" Jan 30 19:08:01 crc kubenswrapper[4712]: I0130 19:08:01.373205 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01" containerName="registry-server" Jan 30 19:08:01 crc kubenswrapper[4712]: E0130 19:08:01.373248 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01" containerName="extract-content" Jan 30 19:08:01 crc kubenswrapper[4712]: I0130 19:08:01.373255 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01" containerName="extract-content" Jan 30 19:08:01 crc kubenswrapper[4712]: E0130 19:08:01.373273 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01" containerName="extract-utilities" Jan 30 19:08:01 crc kubenswrapper[4712]: I0130 19:08:01.373281 4712 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01" containerName="extract-utilities" Jan 30 19:08:01 crc kubenswrapper[4712]: I0130 19:08:01.373500 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a2ccd45-3c8e-49d0-a1a7-0f1c1dc86f01" containerName="registry-server" Jan 30 19:08:01 crc kubenswrapper[4712]: I0130 19:08:01.375683 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cp678" Jan 30 19:08:01 crc kubenswrapper[4712]: I0130 19:08:01.404856 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cp678"] Jan 30 19:08:01 crc kubenswrapper[4712]: I0130 19:08:01.493020 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb08aec5-58ee-406a-8f04-60b2ae88597e-catalog-content\") pod \"redhat-marketplace-cp678\" (UID: \"eb08aec5-58ee-406a-8f04-60b2ae88597e\") " pod="openshift-marketplace/redhat-marketplace-cp678" Jan 30 19:08:01 crc kubenswrapper[4712]: I0130 19:08:01.493515 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lbsz\" (UniqueName: \"kubernetes.io/projected/eb08aec5-58ee-406a-8f04-60b2ae88597e-kube-api-access-8lbsz\") pod \"redhat-marketplace-cp678\" (UID: \"eb08aec5-58ee-406a-8f04-60b2ae88597e\") " pod="openshift-marketplace/redhat-marketplace-cp678" Jan 30 19:08:01 crc kubenswrapper[4712]: I0130 19:08:01.493571 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb08aec5-58ee-406a-8f04-60b2ae88597e-utilities\") pod \"redhat-marketplace-cp678\" (UID: \"eb08aec5-58ee-406a-8f04-60b2ae88597e\") " pod="openshift-marketplace/redhat-marketplace-cp678" Jan 30 19:08:01 crc kubenswrapper[4712]: I0130 19:08:01.595140 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb08aec5-58ee-406a-8f04-60b2ae88597e-catalog-content\") pod \"redhat-marketplace-cp678\" (UID: \"eb08aec5-58ee-406a-8f04-60b2ae88597e\") " pod="openshift-marketplace/redhat-marketplace-cp678" Jan 30 19:08:01 crc kubenswrapper[4712]: I0130 19:08:01.595485 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8lbsz\" (UniqueName: \"kubernetes.io/projected/eb08aec5-58ee-406a-8f04-60b2ae88597e-kube-api-access-8lbsz\") pod \"redhat-marketplace-cp678\" (UID: \"eb08aec5-58ee-406a-8f04-60b2ae88597e\") " pod="openshift-marketplace/redhat-marketplace-cp678" Jan 30 19:08:01 crc kubenswrapper[4712]: I0130 19:08:01.595626 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb08aec5-58ee-406a-8f04-60b2ae88597e-catalog-content\") pod \"redhat-marketplace-cp678\" (UID: \"eb08aec5-58ee-406a-8f04-60b2ae88597e\") " pod="openshift-marketplace/redhat-marketplace-cp678" Jan 30 19:08:01 crc kubenswrapper[4712]: I0130 19:08:01.595641 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb08aec5-58ee-406a-8f04-60b2ae88597e-utilities\") pod \"redhat-marketplace-cp678\" (UID: \"eb08aec5-58ee-406a-8f04-60b2ae88597e\") " pod="openshift-marketplace/redhat-marketplace-cp678" Jan 30 19:08:01 crc kubenswrapper[4712]: I0130 19:08:01.596178 4712 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb08aec5-58ee-406a-8f04-60b2ae88597e-utilities\") pod \"redhat-marketplace-cp678\" (UID: \"eb08aec5-58ee-406a-8f04-60b2ae88597e\") " pod="openshift-marketplace/redhat-marketplace-cp678" Jan 30 19:08:01 crc kubenswrapper[4712]: I0130 19:08:01.624170 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8lbsz\" (UniqueName: \"kubernetes.io/projected/eb08aec5-58ee-406a-8f04-60b2ae88597e-kube-api-access-8lbsz\") pod \"redhat-marketplace-cp678\" (UID: \"eb08aec5-58ee-406a-8f04-60b2ae88597e\") " pod="openshift-marketplace/redhat-marketplace-cp678" Jan 30 19:08:01 crc kubenswrapper[4712]: I0130 19:08:01.701317 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cp678" Jan 30 19:08:02 crc kubenswrapper[4712]: W0130 19:08:02.199842 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb08aec5_58ee_406a_8f04_60b2ae88597e.slice/crio-e57329572c5dac3a9f20e6517ccc9960e75acb87faeb5cd255d0084b1fd12e0a WatchSource:0}: Error finding container e57329572c5dac3a9f20e6517ccc9960e75acb87faeb5cd255d0084b1fd12e0a: Status 404 returned error can't find the container with id e57329572c5dac3a9f20e6517ccc9960e75acb87faeb5cd255d0084b1fd12e0a Jan 30 19:08:02 crc kubenswrapper[4712]: I0130 19:08:02.209330 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cp678"] Jan 30 19:08:02 crc kubenswrapper[4712]: I0130 19:08:02.993667 4712 generic.go:334] "Generic (PLEG): container finished" podID="eb08aec5-58ee-406a-8f04-60b2ae88597e" containerID="17e7b0cccf9a5173119b5a8963f0adf445c5378122f7ab160abebe0469b0d90d" exitCode=0 Jan 30 19:08:02 crc kubenswrapper[4712]: I0130 19:08:02.993764 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cp678" event={"ID":"eb08aec5-58ee-406a-8f04-60b2ae88597e","Type":"ContainerDied","Data":"17e7b0cccf9a5173119b5a8963f0adf445c5378122f7ab160abebe0469b0d90d"} Jan 30 19:08:02 crc kubenswrapper[4712]: I0130 19:08:02.995006 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cp678" event={"ID":"eb08aec5-58ee-406a-8f04-60b2ae88597e","Type":"ContainerStarted","Data":"e57329572c5dac3a9f20e6517ccc9960e75acb87faeb5cd255d0084b1fd12e0a"} Jan 30 19:08:05 crc kubenswrapper[4712]: I0130 19:08:05.026460 4712 generic.go:334] "Generic (PLEG): container finished" podID="eb08aec5-58ee-406a-8f04-60b2ae88597e" containerID="e3fad97c230159b8db558526bd272bdb2e334e1e22c7d36e6dba04b0cccce665" exitCode=0 Jan 30 19:08:05 crc kubenswrapper[4712]: I0130 19:08:05.026527 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cp678" event={"ID":"eb08aec5-58ee-406a-8f04-60b2ae88597e","Type":"ContainerDied","Data":"e3fad97c230159b8db558526bd272bdb2e334e1e22c7d36e6dba04b0cccce665"} Jan 30 19:08:06 crc kubenswrapper[4712]: I0130 19:08:06.038695 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cp678" event={"ID":"eb08aec5-58ee-406a-8f04-60b2ae88597e","Type":"ContainerStarted","Data":"77705100ab533f05c3b82a166bcf3b31fcf85d391622d02d2dff0b4ba083cafe"} Jan 30 19:08:06 crc kubenswrapper[4712]: I0130 19:08:06.055911 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-cp678" podStartSLOduration=2.612744716 podStartE2EDuration="5.055893875s" podCreationTimestamp="2026-01-30 19:08:01 +0000 UTC" firstStartedPulling="2026-01-30 19:08:02.996978804 +0000 UTC m=+8019.903988313" lastFinishedPulling="2026-01-30 19:08:05.440128013 +0000 UTC m=+8022.347137472" observedRunningTime="2026-01-30 19:08:06.055357622 +0000 UTC m=+8022.962367081" watchObservedRunningTime="2026-01-30 19:08:06.055893875 +0000 UTC m=+8022.962903344" Jan 30 19:08:10 crc kubenswrapper[4712]: I0130 19:08:10.800202 4712 scope.go:117] "RemoveContainer" containerID="34acba06beb62928f94a8bb7527f7617f2f6979eb8c058280c48ac4a45c2a2fd" Jan 30 19:08:10 crc kubenswrapper[4712]: E0130 19:08:10.802126 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:08:11 crc kubenswrapper[4712]: I0130 19:08:11.702353 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cp678" Jan 30 19:08:11 crc kubenswrapper[4712]: I0130 19:08:11.703166 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cp678" Jan 30 19:08:11 crc kubenswrapper[4712]: I0130 19:08:11.766052 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cp678" Jan 30 19:08:12 crc kubenswrapper[4712]: I0130 19:08:12.150163 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cp678" Jan 30 19:08:12 crc kubenswrapper[4712]: I0130 19:08:12.208774 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cp678"] Jan 30 19:08:14 crc kubenswrapper[4712]: I0130 19:08:14.121157 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cp678" podUID="eb08aec5-58ee-406a-8f04-60b2ae88597e" containerName="registry-server" containerID="cri-o://77705100ab533f05c3b82a166bcf3b31fcf85d391622d02d2dff0b4ba083cafe" gracePeriod=2 Jan 30 19:08:14 crc kubenswrapper[4712]: E0130 19:08:14.367902 4712 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb08aec5_58ee_406a_8f04_60b2ae88597e.slice/crio-77705100ab533f05c3b82a166bcf3b31fcf85d391622d02d2dff0b4ba083cafe.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb08aec5_58ee_406a_8f04_60b2ae88597e.slice/crio-conmon-77705100ab533f05c3b82a166bcf3b31fcf85d391622d02d2dff0b4ba083cafe.scope\": RecentStats: unable to find data in memory cache]" Jan 30 19:08:14 crc kubenswrapper[4712]: I0130 19:08:14.623118 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cp678" Jan 30 19:08:14 crc kubenswrapper[4712]: I0130 19:08:14.716615 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb08aec5-58ee-406a-8f04-60b2ae88597e-utilities\") pod \"eb08aec5-58ee-406a-8f04-60b2ae88597e\" (UID: \"eb08aec5-58ee-406a-8f04-60b2ae88597e\") " Jan 30 19:08:14 crc kubenswrapper[4712]: I0130 19:08:14.716778 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lbsz\" (UniqueName: \"kubernetes.io/projected/eb08aec5-58ee-406a-8f04-60b2ae88597e-kube-api-access-8lbsz\") pod \"eb08aec5-58ee-406a-8f04-60b2ae88597e\" (UID: \"eb08aec5-58ee-406a-8f04-60b2ae88597e\") " Jan 30 19:08:14 crc kubenswrapper[4712]: I0130 19:08:14.716865 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb08aec5-58ee-406a-8f04-60b2ae88597e-catalog-content\") pod \"eb08aec5-58ee-406a-8f04-60b2ae88597e\" (UID: \"eb08aec5-58ee-406a-8f04-60b2ae88597e\") " Jan 30 19:08:14 crc kubenswrapper[4712]: I0130 19:08:14.718003 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb08aec5-58ee-406a-8f04-60b2ae88597e-utilities" (OuterVolumeSpecName: "utilities") pod "eb08aec5-58ee-406a-8f04-60b2ae88597e" (UID: "eb08aec5-58ee-406a-8f04-60b2ae88597e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 19:08:14 crc kubenswrapper[4712]: I0130 19:08:14.728770 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb08aec5-58ee-406a-8f04-60b2ae88597e-kube-api-access-8lbsz" (OuterVolumeSpecName: "kube-api-access-8lbsz") pod "eb08aec5-58ee-406a-8f04-60b2ae88597e" (UID: "eb08aec5-58ee-406a-8f04-60b2ae88597e"). InnerVolumeSpecName "kube-api-access-8lbsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 19:08:14 crc kubenswrapper[4712]: I0130 19:08:14.736255 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8lbsz\" (UniqueName: \"kubernetes.io/projected/eb08aec5-58ee-406a-8f04-60b2ae88597e-kube-api-access-8lbsz\") on node \"crc\" DevicePath \"\"" Jan 30 19:08:14 crc kubenswrapper[4712]: I0130 19:08:14.736295 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb08aec5-58ee-406a-8f04-60b2ae88597e-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 19:08:14 crc kubenswrapper[4712]: I0130 19:08:14.753738 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb08aec5-58ee-406a-8f04-60b2ae88597e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb08aec5-58ee-406a-8f04-60b2ae88597e" (UID: "eb08aec5-58ee-406a-8f04-60b2ae88597e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 19:08:14 crc kubenswrapper[4712]: I0130 19:08:14.838147 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb08aec5-58ee-406a-8f04-60b2ae88597e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 19:08:15 crc kubenswrapper[4712]: I0130 19:08:15.130466 4712 generic.go:334] "Generic (PLEG): container finished" podID="eb08aec5-58ee-406a-8f04-60b2ae88597e" containerID="77705100ab533f05c3b82a166bcf3b31fcf85d391622d02d2dff0b4ba083cafe" exitCode=0 Jan 30 19:08:15 crc kubenswrapper[4712]: I0130 19:08:15.130511 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cp678" event={"ID":"eb08aec5-58ee-406a-8f04-60b2ae88597e","Type":"ContainerDied","Data":"77705100ab533f05c3b82a166bcf3b31fcf85d391622d02d2dff0b4ba083cafe"} Jan 30 19:08:15 crc kubenswrapper[4712]: I0130 19:08:15.130547 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cp678" event={"ID":"eb08aec5-58ee-406a-8f04-60b2ae88597e","Type":"ContainerDied","Data":"e57329572c5dac3a9f20e6517ccc9960e75acb87faeb5cd255d0084b1fd12e0a"} Jan 30 19:08:15 crc kubenswrapper[4712]: I0130 19:08:15.130551 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cp678" Jan 30 19:08:15 crc kubenswrapper[4712]: I0130 19:08:15.130569 4712 scope.go:117] "RemoveContainer" containerID="77705100ab533f05c3b82a166bcf3b31fcf85d391622d02d2dff0b4ba083cafe" Jan 30 19:08:15 crc kubenswrapper[4712]: I0130 19:08:15.165770 4712 scope.go:117] "RemoveContainer" containerID="e3fad97c230159b8db558526bd272bdb2e334e1e22c7d36e6dba04b0cccce665" Jan 30 19:08:15 crc kubenswrapper[4712]: I0130 19:08:15.184591 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cp678"] Jan 30 19:08:15 crc kubenswrapper[4712]: I0130 19:08:15.192156 4712 scope.go:117] "RemoveContainer" containerID="17e7b0cccf9a5173119b5a8963f0adf445c5378122f7ab160abebe0469b0d90d" Jan 30 19:08:15 crc kubenswrapper[4712]: I0130 19:08:15.204452 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cp678"] Jan 30 19:08:15 crc kubenswrapper[4712]: I0130 19:08:15.238702 4712 scope.go:117] "RemoveContainer" containerID="77705100ab533f05c3b82a166bcf3b31fcf85d391622d02d2dff0b4ba083cafe" Jan 30 19:08:15 crc kubenswrapper[4712]: E0130 19:08:15.239626 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77705100ab533f05c3b82a166bcf3b31fcf85d391622d02d2dff0b4ba083cafe\": container with ID starting with 77705100ab533f05c3b82a166bcf3b31fcf85d391622d02d2dff0b4ba083cafe not found: ID does not exist" containerID="77705100ab533f05c3b82a166bcf3b31fcf85d391622d02d2dff0b4ba083cafe" Jan 30 19:08:15 crc kubenswrapper[4712]: I0130 19:08:15.239727 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77705100ab533f05c3b82a166bcf3b31fcf85d391622d02d2dff0b4ba083cafe"} err="failed to get container status \"77705100ab533f05c3b82a166bcf3b31fcf85d391622d02d2dff0b4ba083cafe\": rpc error: code = NotFound desc = could not find container \"77705100ab533f05c3b82a166bcf3b31fcf85d391622d02d2dff0b4ba083cafe\": container with ID starting with 77705100ab533f05c3b82a166bcf3b31fcf85d391622d02d2dff0b4ba083cafe not found: ID does not exist" Jan 30 19:08:15 
crc kubenswrapper[4712]: I0130 19:08:15.239771 4712 scope.go:117] "RemoveContainer" containerID="e3fad97c230159b8db558526bd272bdb2e334e1e22c7d36e6dba04b0cccce665" Jan 30 19:08:15 crc kubenswrapper[4712]: E0130 19:08:15.240353 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e3fad97c230159b8db558526bd272bdb2e334e1e22c7d36e6dba04b0cccce665\": container with ID starting with e3fad97c230159b8db558526bd272bdb2e334e1e22c7d36e6dba04b0cccce665 not found: ID does not exist" containerID="e3fad97c230159b8db558526bd272bdb2e334e1e22c7d36e6dba04b0cccce665" Jan 30 19:08:15 crc kubenswrapper[4712]: I0130 19:08:15.240432 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e3fad97c230159b8db558526bd272bdb2e334e1e22c7d36e6dba04b0cccce665"} err="failed to get container status \"e3fad97c230159b8db558526bd272bdb2e334e1e22c7d36e6dba04b0cccce665\": rpc error: code = NotFound desc = could not find container \"e3fad97c230159b8db558526bd272bdb2e334e1e22c7d36e6dba04b0cccce665\": container with ID starting with e3fad97c230159b8db558526bd272bdb2e334e1e22c7d36e6dba04b0cccce665 not found: ID does not exist" Jan 30 19:08:15 crc kubenswrapper[4712]: I0130 19:08:15.240462 4712 scope.go:117] "RemoveContainer" containerID="17e7b0cccf9a5173119b5a8963f0adf445c5378122f7ab160abebe0469b0d90d" Jan 30 19:08:15 crc kubenswrapper[4712]: E0130 19:08:15.240741 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"17e7b0cccf9a5173119b5a8963f0adf445c5378122f7ab160abebe0469b0d90d\": container with ID starting with 17e7b0cccf9a5173119b5a8963f0adf445c5378122f7ab160abebe0469b0d90d not found: ID does not exist" containerID="17e7b0cccf9a5173119b5a8963f0adf445c5378122f7ab160abebe0469b0d90d" Jan 30 19:08:15 crc kubenswrapper[4712]: I0130 19:08:15.240872 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"17e7b0cccf9a5173119b5a8963f0adf445c5378122f7ab160abebe0469b0d90d"} err="failed to get container status \"17e7b0cccf9a5173119b5a8963f0adf445c5378122f7ab160abebe0469b0d90d\": rpc error: code = NotFound desc = could not find container \"17e7b0cccf9a5173119b5a8963f0adf445c5378122f7ab160abebe0469b0d90d\": container with ID starting with 17e7b0cccf9a5173119b5a8963f0adf445c5378122f7ab160abebe0469b0d90d not found: ID does not exist" Jan 30 19:08:15 crc kubenswrapper[4712]: I0130 19:08:15.815844 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb08aec5-58ee-406a-8f04-60b2ae88597e" path="/var/lib/kubelet/pods/eb08aec5-58ee-406a-8f04-60b2ae88597e/volumes" Jan 30 19:08:24 crc kubenswrapper[4712]: I0130 19:08:24.799684 4712 scope.go:117] "RemoveContainer" containerID="34acba06beb62928f94a8bb7527f7617f2f6979eb8c058280c48ac4a45c2a2fd" Jan 30 19:08:24 crc kubenswrapper[4712]: E0130 19:08:24.800290 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:08:38 crc kubenswrapper[4712]: I0130 19:08:38.800791 4712 scope.go:117] "RemoveContainer" 
containerID="34acba06beb62928f94a8bb7527f7617f2f6979eb8c058280c48ac4a45c2a2fd" Jan 30 19:08:38 crc kubenswrapper[4712]: E0130 19:08:38.802232 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:08:50 crc kubenswrapper[4712]: I0130 19:08:50.800605 4712 scope.go:117] "RemoveContainer" containerID="34acba06beb62928f94a8bb7527f7617f2f6979eb8c058280c48ac4a45c2a2fd" Jan 30 19:08:50 crc kubenswrapper[4712]: E0130 19:08:50.801647 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:09:05 crc kubenswrapper[4712]: I0130 19:09:05.800062 4712 scope.go:117] "RemoveContainer" containerID="34acba06beb62928f94a8bb7527f7617f2f6979eb8c058280c48ac4a45c2a2fd" Jan 30 19:09:05 crc kubenswrapper[4712]: E0130 19:09:05.800887 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:09:20 crc kubenswrapper[4712]: I0130 19:09:20.800020 4712 scope.go:117] "RemoveContainer" containerID="34acba06beb62928f94a8bb7527f7617f2f6979eb8c058280c48ac4a45c2a2fd" Jan 30 19:09:20 crc kubenswrapper[4712]: E0130 19:09:20.800954 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:09:34 crc kubenswrapper[4712]: I0130 19:09:34.799502 4712 scope.go:117] "RemoveContainer" containerID="34acba06beb62928f94a8bb7527f7617f2f6979eb8c058280c48ac4a45c2a2fd" Jan 30 19:09:34 crc kubenswrapper[4712]: E0130 19:09:34.800372 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:09:45 crc kubenswrapper[4712]: I0130 19:09:45.799940 4712 scope.go:117] "RemoveContainer" containerID="34acba06beb62928f94a8bb7527f7617f2f6979eb8c058280c48ac4a45c2a2fd" Jan 30 19:09:45 crc kubenswrapper[4712]: E0130 19:09:45.801246 4712 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:09:58 crc kubenswrapper[4712]: I0130 19:09:58.799973 4712 scope.go:117] "RemoveContainer" containerID="34acba06beb62928f94a8bb7527f7617f2f6979eb8c058280c48ac4a45c2a2fd" Jan 30 19:09:58 crc kubenswrapper[4712]: E0130 19:09:58.800686 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:10:09 crc kubenswrapper[4712]: I0130 19:10:09.799716 4712 scope.go:117] "RemoveContainer" containerID="34acba06beb62928f94a8bb7527f7617f2f6979eb8c058280c48ac4a45c2a2fd" Jan 30 19:10:10 crc kubenswrapper[4712]: I0130 19:10:10.349055 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerStarted","Data":"0a1644e7958883534d01788ae171ff3fc1121ba5a7eb61b16fb7c21ba730d3d1"} Jan 30 19:10:29 crc kubenswrapper[4712]: I0130 19:10:29.306973 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hjvsc"] Jan 30 19:10:29 crc kubenswrapper[4712]: E0130 19:10:29.308141 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb08aec5-58ee-406a-8f04-60b2ae88597e" containerName="registry-server" Jan 30 19:10:29 crc kubenswrapper[4712]: I0130 19:10:29.308160 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb08aec5-58ee-406a-8f04-60b2ae88597e" containerName="registry-server" Jan 30 19:10:29 crc kubenswrapper[4712]: E0130 19:10:29.308175 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb08aec5-58ee-406a-8f04-60b2ae88597e" containerName="extract-content" Jan 30 19:10:29 crc kubenswrapper[4712]: I0130 19:10:29.308183 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb08aec5-58ee-406a-8f04-60b2ae88597e" containerName="extract-content" Jan 30 19:10:29 crc kubenswrapper[4712]: E0130 19:10:29.308212 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb08aec5-58ee-406a-8f04-60b2ae88597e" containerName="extract-utilities" Jan 30 19:10:29 crc kubenswrapper[4712]: I0130 19:10:29.308222 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb08aec5-58ee-406a-8f04-60b2ae88597e" containerName="extract-utilities" Jan 30 19:10:29 crc kubenswrapper[4712]: I0130 19:10:29.308455 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb08aec5-58ee-406a-8f04-60b2ae88597e" containerName="registry-server" Jan 30 19:10:29 crc kubenswrapper[4712]: I0130 19:10:29.310964 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hjvsc" Jan 30 19:10:29 crc kubenswrapper[4712]: I0130 19:10:29.324181 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hjvsc"] Jan 30 19:10:29 crc kubenswrapper[4712]: I0130 19:10:29.438948 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88d28b0b-5dbc-452c-aa82-3e78104bed32-utilities\") pod \"redhat-operators-hjvsc\" (UID: \"88d28b0b-5dbc-452c-aa82-3e78104bed32\") " pod="openshift-marketplace/redhat-operators-hjvsc" Jan 30 19:10:29 crc kubenswrapper[4712]: I0130 19:10:29.439003 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b862s\" (UniqueName: \"kubernetes.io/projected/88d28b0b-5dbc-452c-aa82-3e78104bed32-kube-api-access-b862s\") pod \"redhat-operators-hjvsc\" (UID: \"88d28b0b-5dbc-452c-aa82-3e78104bed32\") " pod="openshift-marketplace/redhat-operators-hjvsc" Jan 30 19:10:29 crc kubenswrapper[4712]: I0130 19:10:29.439048 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88d28b0b-5dbc-452c-aa82-3e78104bed32-catalog-content\") pod \"redhat-operators-hjvsc\" (UID: \"88d28b0b-5dbc-452c-aa82-3e78104bed32\") " pod="openshift-marketplace/redhat-operators-hjvsc" Jan 30 19:10:29 crc kubenswrapper[4712]: I0130 19:10:29.541595 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88d28b0b-5dbc-452c-aa82-3e78104bed32-utilities\") pod \"redhat-operators-hjvsc\" (UID: \"88d28b0b-5dbc-452c-aa82-3e78104bed32\") " pod="openshift-marketplace/redhat-operators-hjvsc" Jan 30 19:10:29 crc kubenswrapper[4712]: I0130 19:10:29.541666 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b862s\" (UniqueName: \"kubernetes.io/projected/88d28b0b-5dbc-452c-aa82-3e78104bed32-kube-api-access-b862s\") pod \"redhat-operators-hjvsc\" (UID: \"88d28b0b-5dbc-452c-aa82-3e78104bed32\") " pod="openshift-marketplace/redhat-operators-hjvsc" Jan 30 19:10:29 crc kubenswrapper[4712]: I0130 19:10:29.541731 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88d28b0b-5dbc-452c-aa82-3e78104bed32-catalog-content\") pod \"redhat-operators-hjvsc\" (UID: \"88d28b0b-5dbc-452c-aa82-3e78104bed32\") " pod="openshift-marketplace/redhat-operators-hjvsc" Jan 30 19:10:29 crc kubenswrapper[4712]: I0130 19:10:29.542212 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88d28b0b-5dbc-452c-aa82-3e78104bed32-utilities\") pod \"redhat-operators-hjvsc\" (UID: \"88d28b0b-5dbc-452c-aa82-3e78104bed32\") " pod="openshift-marketplace/redhat-operators-hjvsc" Jan 30 19:10:29 crc kubenswrapper[4712]: I0130 19:10:29.542243 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88d28b0b-5dbc-452c-aa82-3e78104bed32-catalog-content\") pod \"redhat-operators-hjvsc\" (UID: \"88d28b0b-5dbc-452c-aa82-3e78104bed32\") " pod="openshift-marketplace/redhat-operators-hjvsc" Jan 30 19:10:29 crc kubenswrapper[4712]: I0130 19:10:29.563782 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-b862s\" (UniqueName: \"kubernetes.io/projected/88d28b0b-5dbc-452c-aa82-3e78104bed32-kube-api-access-b862s\") pod \"redhat-operators-hjvsc\" (UID: \"88d28b0b-5dbc-452c-aa82-3e78104bed32\") " pod="openshift-marketplace/redhat-operators-hjvsc" Jan 30 19:10:29 crc kubenswrapper[4712]: I0130 19:10:29.631605 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hjvsc" Jan 30 19:10:30 crc kubenswrapper[4712]: I0130 19:10:30.168320 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hjvsc"] Jan 30 19:10:30 crc kubenswrapper[4712]: I0130 19:10:30.543642 4712 generic.go:334] "Generic (PLEG): container finished" podID="88d28b0b-5dbc-452c-aa82-3e78104bed32" containerID="20b50f32c3511a9b223733111d9f37fd676397395c67215c04ac69d969c68e2d" exitCode=0 Jan 30 19:10:30 crc kubenswrapper[4712]: I0130 19:10:30.543837 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hjvsc" event={"ID":"88d28b0b-5dbc-452c-aa82-3e78104bed32","Type":"ContainerDied","Data":"20b50f32c3511a9b223733111d9f37fd676397395c67215c04ac69d969c68e2d"} Jan 30 19:10:30 crc kubenswrapper[4712]: I0130 19:10:30.543921 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hjvsc" event={"ID":"88d28b0b-5dbc-452c-aa82-3e78104bed32","Type":"ContainerStarted","Data":"72edb8adce29db8e4a893fb8500ae2612080b05cfd208fbe763a113d77159b1e"} Jan 30 19:10:30 crc kubenswrapper[4712]: I0130 19:10:30.545898 4712 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 19:10:32 crc kubenswrapper[4712]: I0130 19:10:32.588235 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hjvsc" event={"ID":"88d28b0b-5dbc-452c-aa82-3e78104bed32","Type":"ContainerStarted","Data":"3128796f317eadc1a164fd0d10c0d9cb6cdfdee40e6d18e7fbc06c3b0f76c5da"} Jan 30 19:10:37 crc kubenswrapper[4712]: I0130 19:10:37.644151 4712 generic.go:334] "Generic (PLEG): container finished" podID="88d28b0b-5dbc-452c-aa82-3e78104bed32" containerID="3128796f317eadc1a164fd0d10c0d9cb6cdfdee40e6d18e7fbc06c3b0f76c5da" exitCode=0 Jan 30 19:10:37 crc kubenswrapper[4712]: I0130 19:10:37.644301 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hjvsc" event={"ID":"88d28b0b-5dbc-452c-aa82-3e78104bed32","Type":"ContainerDied","Data":"3128796f317eadc1a164fd0d10c0d9cb6cdfdee40e6d18e7fbc06c3b0f76c5da"} Jan 30 19:10:38 crc kubenswrapper[4712]: I0130 19:10:38.662125 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hjvsc" event={"ID":"88d28b0b-5dbc-452c-aa82-3e78104bed32","Type":"ContainerStarted","Data":"759045a8366e120648ca2dc992502de2cbabc1f158fadffc9baac528ccebc8ed"} Jan 30 19:10:38 crc kubenswrapper[4712]: I0130 19:10:38.701393 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hjvsc" podStartSLOduration=2.161624037 podStartE2EDuration="9.701377177s" podCreationTimestamp="2026-01-30 19:10:29 +0000 UTC" firstStartedPulling="2026-01-30 19:10:30.545624679 +0000 UTC m=+8167.452634148" lastFinishedPulling="2026-01-30 19:10:38.085377819 +0000 UTC m=+8174.992387288" observedRunningTime="2026-01-30 19:10:38.690927525 +0000 UTC m=+8175.597936994" watchObservedRunningTime="2026-01-30 19:10:38.701377177 +0000 UTC m=+8175.608386646" Jan 30 19:10:39 crc 
kubenswrapper[4712]: I0130 19:10:39.632394 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hjvsc" Jan 30 19:10:39 crc kubenswrapper[4712]: I0130 19:10:39.632775 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hjvsc" Jan 30 19:10:40 crc kubenswrapper[4712]: I0130 19:10:40.686899 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hjvsc" podUID="88d28b0b-5dbc-452c-aa82-3e78104bed32" containerName="registry-server" probeResult="failure" output=< Jan 30 19:10:40 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 19:10:40 crc kubenswrapper[4712]: > Jan 30 19:10:50 crc kubenswrapper[4712]: I0130 19:10:50.723475 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hjvsc" podUID="88d28b0b-5dbc-452c-aa82-3e78104bed32" containerName="registry-server" probeResult="failure" output=< Jan 30 19:10:50 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 19:10:50 crc kubenswrapper[4712]: > Jan 30 19:11:00 crc kubenswrapper[4712]: I0130 19:11:00.698914 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hjvsc" podUID="88d28b0b-5dbc-452c-aa82-3e78104bed32" containerName="registry-server" probeResult="failure" output=< Jan 30 19:11:00 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 19:11:00 crc kubenswrapper[4712]: > Jan 30 19:11:10 crc kubenswrapper[4712]: I0130 19:11:10.700192 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hjvsc" podUID="88d28b0b-5dbc-452c-aa82-3e78104bed32" containerName="registry-server" probeResult="failure" output=< Jan 30 19:11:10 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 19:11:10 crc kubenswrapper[4712]: > Jan 30 19:11:19 crc kubenswrapper[4712]: I0130 19:11:19.689059 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hjvsc" Jan 30 19:11:19 crc kubenswrapper[4712]: I0130 19:11:19.763156 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hjvsc" Jan 30 19:11:19 crc kubenswrapper[4712]: I0130 19:11:19.933464 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hjvsc"] Jan 30 19:11:21 crc kubenswrapper[4712]: I0130 19:11:21.118350 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hjvsc" podUID="88d28b0b-5dbc-452c-aa82-3e78104bed32" containerName="registry-server" containerID="cri-o://759045a8366e120648ca2dc992502de2cbabc1f158fadffc9baac528ccebc8ed" gracePeriod=2 Jan 30 19:11:22 crc kubenswrapper[4712]: I0130 19:11:22.215241 4712 generic.go:334] "Generic (PLEG): container finished" podID="88d28b0b-5dbc-452c-aa82-3e78104bed32" containerID="759045a8366e120648ca2dc992502de2cbabc1f158fadffc9baac528ccebc8ed" exitCode=0 Jan 30 19:11:22 crc kubenswrapper[4712]: I0130 19:11:22.215360 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hjvsc" event={"ID":"88d28b0b-5dbc-452c-aa82-3e78104bed32","Type":"ContainerDied","Data":"759045a8366e120648ca2dc992502de2cbabc1f158fadffc9baac528ccebc8ed"} Jan 30 19:11:22 crc kubenswrapper[4712]: I0130 
Jan 30 19:11:22 crc kubenswrapper[4712]: I0130 19:11:22.709123 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88d28b0b-5dbc-452c-aa82-3e78104bed32-utilities\") pod \"88d28b0b-5dbc-452c-aa82-3e78104bed32\" (UID: \"88d28b0b-5dbc-452c-aa82-3e78104bed32\") "
Jan 30 19:11:22 crc kubenswrapper[4712]: I0130 19:11:22.709192 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b862s\" (UniqueName: \"kubernetes.io/projected/88d28b0b-5dbc-452c-aa82-3e78104bed32-kube-api-access-b862s\") pod \"88d28b0b-5dbc-452c-aa82-3e78104bed32\" (UID: \"88d28b0b-5dbc-452c-aa82-3e78104bed32\") "
Jan 30 19:11:22 crc kubenswrapper[4712]: I0130 19:11:22.709248 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88d28b0b-5dbc-452c-aa82-3e78104bed32-catalog-content\") pod \"88d28b0b-5dbc-452c-aa82-3e78104bed32\" (UID: \"88d28b0b-5dbc-452c-aa82-3e78104bed32\") "
Jan 30 19:11:22 crc kubenswrapper[4712]: I0130 19:11:22.710098 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88d28b0b-5dbc-452c-aa82-3e78104bed32-utilities" (OuterVolumeSpecName: "utilities") pod "88d28b0b-5dbc-452c-aa82-3e78104bed32" (UID: "88d28b0b-5dbc-452c-aa82-3e78104bed32"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 19:11:22 crc kubenswrapper[4712]: I0130 19:11:22.719370 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88d28b0b-5dbc-452c-aa82-3e78104bed32-kube-api-access-b862s" (OuterVolumeSpecName: "kube-api-access-b862s") pod "88d28b0b-5dbc-452c-aa82-3e78104bed32" (UID: "88d28b0b-5dbc-452c-aa82-3e78104bed32"). InnerVolumeSpecName "kube-api-access-b862s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 19:11:22 crc kubenswrapper[4712]: I0130 19:11:22.812737 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88d28b0b-5dbc-452c-aa82-3e78104bed32-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 19:11:22 crc kubenswrapper[4712]: I0130 19:11:22.816139 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b862s\" (UniqueName: \"kubernetes.io/projected/88d28b0b-5dbc-452c-aa82-3e78104bed32-kube-api-access-b862s\") on node \"crc\" DevicePath \"\""
Jan 30 19:11:22 crc kubenswrapper[4712]: I0130 19:11:22.845921 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88d28b0b-5dbc-452c-aa82-3e78104bed32-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "88d28b0b-5dbc-452c-aa82-3e78104bed32" (UID: "88d28b0b-5dbc-452c-aa82-3e78104bed32"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 19:11:22 crc kubenswrapper[4712]: I0130 19:11:22.918216 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88d28b0b-5dbc-452c-aa82-3e78104bed32-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 19:11:23 crc kubenswrapper[4712]: I0130 19:11:23.227949 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hjvsc" event={"ID":"88d28b0b-5dbc-452c-aa82-3e78104bed32","Type":"ContainerDied","Data":"72edb8adce29db8e4a893fb8500ae2612080b05cfd208fbe763a113d77159b1e"}
Jan 30 19:11:23 crc kubenswrapper[4712]: I0130 19:11:23.228009 4712 scope.go:117] "RemoveContainer" containerID="759045a8366e120648ca2dc992502de2cbabc1f158fadffc9baac528ccebc8ed"
Jan 30 19:11:23 crc kubenswrapper[4712]: I0130 19:11:23.228975 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hjvsc"
Jan 30 19:11:23 crc kubenswrapper[4712]: I0130 19:11:23.266912 4712 scope.go:117] "RemoveContainer" containerID="3128796f317eadc1a164fd0d10c0d9cb6cdfdee40e6d18e7fbc06c3b0f76c5da"
Jan 30 19:11:23 crc kubenswrapper[4712]: I0130 19:11:23.268592 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hjvsc"]
Jan 30 19:11:23 crc kubenswrapper[4712]: I0130 19:11:23.280844 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hjvsc"]
Jan 30 19:11:23 crc kubenswrapper[4712]: I0130 19:11:23.288405 4712 scope.go:117] "RemoveContainer" containerID="20b50f32c3511a9b223733111d9f37fd676397395c67215c04ac69d969c68e2d"
Jan 30 19:11:23 crc kubenswrapper[4712]: I0130 19:11:23.810740 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88d28b0b-5dbc-452c-aa82-3e78104bed32" path="/var/lib/kubelet/pods/88d28b0b-5dbc-452c-aa82-3e78104bed32/volumes"
Jan 30 19:11:51 crc kubenswrapper[4712]: I0130 19:11:51.567078 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-njlw7"]
Jan 30 19:11:51 crc kubenswrapper[4712]: E0130 19:11:51.567914 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88d28b0b-5dbc-452c-aa82-3e78104bed32" containerName="registry-server"
Jan 30 19:11:51 crc kubenswrapper[4712]: I0130 19:11:51.567926 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="88d28b0b-5dbc-452c-aa82-3e78104bed32" containerName="registry-server"
Jan 30 19:11:51 crc kubenswrapper[4712]: E0130 19:11:51.567946 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88d28b0b-5dbc-452c-aa82-3e78104bed32" containerName="extract-utilities"
Jan 30 19:11:51 crc kubenswrapper[4712]: I0130 19:11:51.567954 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="88d28b0b-5dbc-452c-aa82-3e78104bed32" containerName="extract-utilities"
Jan 30 19:11:51 crc kubenswrapper[4712]: E0130 19:11:51.567974 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88d28b0b-5dbc-452c-aa82-3e78104bed32" containerName="extract-content"
Jan 30 19:11:51 crc kubenswrapper[4712]: I0130 19:11:51.567980 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="88d28b0b-5dbc-452c-aa82-3e78104bed32" containerName="extract-content"
Jan 30 19:11:51 crc kubenswrapper[4712]: I0130 19:11:51.568161 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="88d28b0b-5dbc-452c-aa82-3e78104bed32" containerName="registry-server"
Jan 30 19:11:51 crc kubenswrapper[4712]: I0130 19:11:51.569451 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-njlw7"
Jan 30 19:11:51 crc kubenswrapper[4712]: I0130 19:11:51.576941 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shj5n\" (UniqueName: \"kubernetes.io/projected/9e76cb50-55cf-4db9-9e52-3592e7ca4837-kube-api-access-shj5n\") pod \"certified-operators-njlw7\" (UID: \"9e76cb50-55cf-4db9-9e52-3592e7ca4837\") " pod="openshift-marketplace/certified-operators-njlw7"
Jan 30 19:11:51 crc kubenswrapper[4712]: I0130 19:11:51.577060 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e76cb50-55cf-4db9-9e52-3592e7ca4837-catalog-content\") pod \"certified-operators-njlw7\" (UID: \"9e76cb50-55cf-4db9-9e52-3592e7ca4837\") " pod="openshift-marketplace/certified-operators-njlw7"
Jan 30 19:11:51 crc kubenswrapper[4712]: I0130 19:11:51.577093 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e76cb50-55cf-4db9-9e52-3592e7ca4837-utilities\") pod \"certified-operators-njlw7\" (UID: \"9e76cb50-55cf-4db9-9e52-3592e7ca4837\") " pod="openshift-marketplace/certified-operators-njlw7"
Jan 30 19:11:51 crc kubenswrapper[4712]: I0130 19:11:51.588081 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-njlw7"]
Jan 30 19:11:51 crc kubenswrapper[4712]: I0130 19:11:51.678572 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e76cb50-55cf-4db9-9e52-3592e7ca4837-catalog-content\") pod \"certified-operators-njlw7\" (UID: \"9e76cb50-55cf-4db9-9e52-3592e7ca4837\") " pod="openshift-marketplace/certified-operators-njlw7"
Jan 30 19:11:51 crc kubenswrapper[4712]: I0130 19:11:51.678623 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e76cb50-55cf-4db9-9e52-3592e7ca4837-utilities\") pod \"certified-operators-njlw7\" (UID: \"9e76cb50-55cf-4db9-9e52-3592e7ca4837\") " pod="openshift-marketplace/certified-operators-njlw7"
Jan 30 19:11:51 crc kubenswrapper[4712]: I0130 19:11:51.678726 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-shj5n\" (UniqueName: \"kubernetes.io/projected/9e76cb50-55cf-4db9-9e52-3592e7ca4837-kube-api-access-shj5n\") pod \"certified-operators-njlw7\" (UID: \"9e76cb50-55cf-4db9-9e52-3592e7ca4837\") " pod="openshift-marketplace/certified-operators-njlw7"
Jan 30 19:11:51 crc kubenswrapper[4712]: I0130 19:11:51.679313 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e76cb50-55cf-4db9-9e52-3592e7ca4837-utilities\") pod \"certified-operators-njlw7\" (UID: \"9e76cb50-55cf-4db9-9e52-3592e7ca4837\") " pod="openshift-marketplace/certified-operators-njlw7"
Jan 30 19:11:51 crc kubenswrapper[4712]: I0130 19:11:51.679626 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e76cb50-55cf-4db9-9e52-3592e7ca4837-catalog-content\") pod \"certified-operators-njlw7\" (UID: \"9e76cb50-55cf-4db9-9e52-3592e7ca4837\") " pod="openshift-marketplace/certified-operators-njlw7"
Jan 30 19:11:51 crc kubenswrapper[4712]: I0130 19:11:51.697960 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-shj5n\" (UniqueName: \"kubernetes.io/projected/9e76cb50-55cf-4db9-9e52-3592e7ca4837-kube-api-access-shj5n\") pod \"certified-operators-njlw7\" (UID: \"9e76cb50-55cf-4db9-9e52-3592e7ca4837\") " pod="openshift-marketplace/certified-operators-njlw7"
Jan 30 19:11:51 crc kubenswrapper[4712]: I0130 19:11:51.907734 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-njlw7"
Jan 30 19:11:52 crc kubenswrapper[4712]: I0130 19:11:52.387565 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-njlw7"]
Jan 30 19:11:52 crc kubenswrapper[4712]: I0130 19:11:52.539849 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njlw7" event={"ID":"9e76cb50-55cf-4db9-9e52-3592e7ca4837","Type":"ContainerStarted","Data":"bd6d6711b9c516215fe408bc7620324341767f864afe97372145fe0348a5532a"}
Jan 30 19:11:53 crc kubenswrapper[4712]: I0130 19:11:53.550816 4712 generic.go:334] "Generic (PLEG): container finished" podID="9e76cb50-55cf-4db9-9e52-3592e7ca4837" containerID="baf545decf7573ffa163347110d1fc85ec7d88b1788aadecab0e6d01b998d846" exitCode=0
Jan 30 19:11:53 crc kubenswrapper[4712]: I0130 19:11:53.550975 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njlw7" event={"ID":"9e76cb50-55cf-4db9-9e52-3592e7ca4837","Type":"ContainerDied","Data":"baf545decf7573ffa163347110d1fc85ec7d88b1788aadecab0e6d01b998d846"}
Jan 30 19:11:54 crc kubenswrapper[4712]: I0130 19:11:54.561297 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njlw7" event={"ID":"9e76cb50-55cf-4db9-9e52-3592e7ca4837","Type":"ContainerStarted","Data":"6d5d1fd8b90f28234d27981f4604d79074eb6083ef2fb8859d32f4320aff5e74"}
Jan 30 19:11:56 crc kubenswrapper[4712]: I0130 19:11:56.579334 4712 generic.go:334] "Generic (PLEG): container finished" podID="9e76cb50-55cf-4db9-9e52-3592e7ca4837" containerID="6d5d1fd8b90f28234d27981f4604d79074eb6083ef2fb8859d32f4320aff5e74" exitCode=0
Jan 30 19:11:56 crc kubenswrapper[4712]: I0130 19:11:56.579379 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njlw7" event={"ID":"9e76cb50-55cf-4db9-9e52-3592e7ca4837","Type":"ContainerDied","Data":"6d5d1fd8b90f28234d27981f4604d79074eb6083ef2fb8859d32f4320aff5e74"}
Jan 30 19:11:57 crc kubenswrapper[4712]: I0130 19:11:57.590687 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njlw7" event={"ID":"9e76cb50-55cf-4db9-9e52-3592e7ca4837","Type":"ContainerStarted","Data":"fbf59af93942e82f8ecd2df05219e2507e97a72a97acda513f1e4d436f6176d9"}
Jan 30 19:11:57 crc kubenswrapper[4712]: I0130 19:11:57.613155 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-njlw7" podStartSLOduration=3.144025629 podStartE2EDuration="6.613138081s" podCreationTimestamp="2026-01-30 19:11:51 +0000 UTC" firstStartedPulling="2026-01-30 19:11:53.554172463 +0000 UTC m=+8250.461181932" lastFinishedPulling="2026-01-30 19:11:57.023284905 +0000 UTC m=+8253.930294384" observedRunningTime="2026-01-30 19:11:57.610538218 +0000 UTC m=+8254.517547687" watchObservedRunningTime="2026-01-30 19:11:57.613138081 +0000 UTC m=+8254.520147550"
Jan 30 19:12:01 crc kubenswrapper[4712]: I0130 19:12:01.908269 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-njlw7"
Jan 30 19:12:01 crc kubenswrapper[4712]: I0130 19:12:01.908893 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-njlw7"
Jan 30 19:12:01 crc kubenswrapper[4712]: I0130 19:12:01.994026 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-njlw7"
Jan 30 19:12:02 crc kubenswrapper[4712]: I0130 19:12:02.696882 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-njlw7"
Jan 30 19:12:02 crc kubenswrapper[4712]: I0130 19:12:02.767193 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-njlw7"]
Jan 30 19:12:04 crc kubenswrapper[4712]: I0130 19:12:04.670140 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-njlw7" podUID="9e76cb50-55cf-4db9-9e52-3592e7ca4837" containerName="registry-server" containerID="cri-o://fbf59af93942e82f8ecd2df05219e2507e97a72a97acda513f1e4d436f6176d9" gracePeriod=2
Jan 30 19:12:05 crc kubenswrapper[4712]: I0130 19:12:05.146638 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-njlw7"
Jan 30 19:12:05 crc kubenswrapper[4712]: I0130 19:12:05.223078 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e76cb50-55cf-4db9-9e52-3592e7ca4837-utilities\") pod \"9e76cb50-55cf-4db9-9e52-3592e7ca4837\" (UID: \"9e76cb50-55cf-4db9-9e52-3592e7ca4837\") "
Jan 30 19:12:05 crc kubenswrapper[4712]: I0130 19:12:05.223181 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e76cb50-55cf-4db9-9e52-3592e7ca4837-catalog-content\") pod \"9e76cb50-55cf-4db9-9e52-3592e7ca4837\" (UID: \"9e76cb50-55cf-4db9-9e52-3592e7ca4837\") "
Jan 30 19:12:05 crc kubenswrapper[4712]: I0130 19:12:05.223217 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-shj5n\" (UniqueName: \"kubernetes.io/projected/9e76cb50-55cf-4db9-9e52-3592e7ca4837-kube-api-access-shj5n\") pod \"9e76cb50-55cf-4db9-9e52-3592e7ca4837\" (UID: \"9e76cb50-55cf-4db9-9e52-3592e7ca4837\") "
Jan 30 19:12:05 crc kubenswrapper[4712]: I0130 19:12:05.223712 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e76cb50-55cf-4db9-9e52-3592e7ca4837-utilities" (OuterVolumeSpecName: "utilities") pod "9e76cb50-55cf-4db9-9e52-3592e7ca4837" (UID: "9e76cb50-55cf-4db9-9e52-3592e7ca4837"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 19:12:05 crc kubenswrapper[4712]: I0130 19:12:05.224028 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9e76cb50-55cf-4db9-9e52-3592e7ca4837-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 19:12:05 crc kubenswrapper[4712]: I0130 19:12:05.231023 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e76cb50-55cf-4db9-9e52-3592e7ca4837-kube-api-access-shj5n" (OuterVolumeSpecName: "kube-api-access-shj5n") pod "9e76cb50-55cf-4db9-9e52-3592e7ca4837" (UID: "9e76cb50-55cf-4db9-9e52-3592e7ca4837"). InnerVolumeSpecName "kube-api-access-shj5n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 19:12:05 crc kubenswrapper[4712]: I0130 19:12:05.282240 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e76cb50-55cf-4db9-9e52-3592e7ca4837-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9e76cb50-55cf-4db9-9e52-3592e7ca4837" (UID: "9e76cb50-55cf-4db9-9e52-3592e7ca4837"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 19:12:05 crc kubenswrapper[4712]: I0130 19:12:05.326716 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9e76cb50-55cf-4db9-9e52-3592e7ca4837-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 19:12:05 crc kubenswrapper[4712]: I0130 19:12:05.326776 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-shj5n\" (UniqueName: \"kubernetes.io/projected/9e76cb50-55cf-4db9-9e52-3592e7ca4837-kube-api-access-shj5n\") on node \"crc\" DevicePath \"\""
Jan 30 19:12:05 crc kubenswrapper[4712]: I0130 19:12:05.681764 4712 generic.go:334] "Generic (PLEG): container finished" podID="9e76cb50-55cf-4db9-9e52-3592e7ca4837" containerID="fbf59af93942e82f8ecd2df05219e2507e97a72a97acda513f1e4d436f6176d9" exitCode=0
Jan 30 19:12:05 crc kubenswrapper[4712]: I0130 19:12:05.681873 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-njlw7"
Jan 30 19:12:05 crc kubenswrapper[4712]: I0130 19:12:05.681886 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njlw7" event={"ID":"9e76cb50-55cf-4db9-9e52-3592e7ca4837","Type":"ContainerDied","Data":"fbf59af93942e82f8ecd2df05219e2507e97a72a97acda513f1e4d436f6176d9"}
Jan 30 19:12:05 crc kubenswrapper[4712]: I0130 19:12:05.682891 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-njlw7" event={"ID":"9e76cb50-55cf-4db9-9e52-3592e7ca4837","Type":"ContainerDied","Data":"bd6d6711b9c516215fe408bc7620324341767f864afe97372145fe0348a5532a"}
Jan 30 19:12:05 crc kubenswrapper[4712]: I0130 19:12:05.682942 4712 scope.go:117] "RemoveContainer" containerID="fbf59af93942e82f8ecd2df05219e2507e97a72a97acda513f1e4d436f6176d9"
Jan 30 19:12:05 crc kubenswrapper[4712]: I0130 19:12:05.714249 4712 scope.go:117] "RemoveContainer" containerID="6d5d1fd8b90f28234d27981f4604d79074eb6083ef2fb8859d32f4320aff5e74"
Jan 30 19:12:05 crc kubenswrapper[4712]: I0130 19:12:05.743456 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-njlw7"]
Jan 30 19:12:05 crc kubenswrapper[4712]: I0130 19:12:05.758165 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-njlw7"]
Jan 30 19:12:05 crc kubenswrapper[4712]: I0130 19:12:05.760225 4712 scope.go:117] "RemoveContainer" containerID="baf545decf7573ffa163347110d1fc85ec7d88b1788aadecab0e6d01b998d846"
Jan 30 19:12:05 crc kubenswrapper[4712]: I0130 19:12:05.808969 4712 scope.go:117] "RemoveContainer" containerID="fbf59af93942e82f8ecd2df05219e2507e97a72a97acda513f1e4d436f6176d9"
Jan 30 19:12:05 crc kubenswrapper[4712]: E0130 19:12:05.809569 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbf59af93942e82f8ecd2df05219e2507e97a72a97acda513f1e4d436f6176d9\": container with ID starting with fbf59af93942e82f8ecd2df05219e2507e97a72a97acda513f1e4d436f6176d9 not found: ID does not exist" containerID="fbf59af93942e82f8ecd2df05219e2507e97a72a97acda513f1e4d436f6176d9"
Jan 30 19:12:05 crc kubenswrapper[4712]: I0130 19:12:05.809610 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbf59af93942e82f8ecd2df05219e2507e97a72a97acda513f1e4d436f6176d9"} err="failed to get container status \"fbf59af93942e82f8ecd2df05219e2507e97a72a97acda513f1e4d436f6176d9\": rpc error: code = NotFound desc = could not find container \"fbf59af93942e82f8ecd2df05219e2507e97a72a97acda513f1e4d436f6176d9\": container with ID starting with fbf59af93942e82f8ecd2df05219e2507e97a72a97acda513f1e4d436f6176d9 not found: ID does not exist"
Jan 30 19:12:05 crc kubenswrapper[4712]: I0130 19:12:05.809635 4712 scope.go:117] "RemoveContainer" containerID="6d5d1fd8b90f28234d27981f4604d79074eb6083ef2fb8859d32f4320aff5e74"
Jan 30 19:12:05 crc kubenswrapper[4712]: E0130 19:12:05.809915 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d5d1fd8b90f28234d27981f4604d79074eb6083ef2fb8859d32f4320aff5e74\": container with ID starting with 6d5d1fd8b90f28234d27981f4604d79074eb6083ef2fb8859d32f4320aff5e74 not found: ID does not exist" containerID="6d5d1fd8b90f28234d27981f4604d79074eb6083ef2fb8859d32f4320aff5e74"
Jan 30 19:12:05 crc kubenswrapper[4712]: I0130 19:12:05.809938 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d5d1fd8b90f28234d27981f4604d79074eb6083ef2fb8859d32f4320aff5e74"} err="failed to get container status \"6d5d1fd8b90f28234d27981f4604d79074eb6083ef2fb8859d32f4320aff5e74\": rpc error: code = NotFound desc = could not find container \"6d5d1fd8b90f28234d27981f4604d79074eb6083ef2fb8859d32f4320aff5e74\": container with ID starting with 6d5d1fd8b90f28234d27981f4604d79074eb6083ef2fb8859d32f4320aff5e74 not found: ID does not exist"
Jan 30 19:12:05 crc kubenswrapper[4712]: I0130 19:12:05.809952 4712 scope.go:117] "RemoveContainer" containerID="baf545decf7573ffa163347110d1fc85ec7d88b1788aadecab0e6d01b998d846"
Jan 30 19:12:05 crc kubenswrapper[4712]: E0130 19:12:05.810597 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"baf545decf7573ffa163347110d1fc85ec7d88b1788aadecab0e6d01b998d846\": container with ID starting with baf545decf7573ffa163347110d1fc85ec7d88b1788aadecab0e6d01b998d846 not found: ID does not exist" containerID="baf545decf7573ffa163347110d1fc85ec7d88b1788aadecab0e6d01b998d846"
Jan 30 19:12:05 crc kubenswrapper[4712]: I0130 19:12:05.810623 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"baf545decf7573ffa163347110d1fc85ec7d88b1788aadecab0e6d01b998d846"} err="failed to get container status \"baf545decf7573ffa163347110d1fc85ec7d88b1788aadecab0e6d01b998d846\": rpc error: code = NotFound desc = could not find container \"baf545decf7573ffa163347110d1fc85ec7d88b1788aadecab0e6d01b998d846\": container with ID starting with baf545decf7573ffa163347110d1fc85ec7d88b1788aadecab0e6d01b998d846 not found: ID does not exist"
Jan 30 19:12:05 crc kubenswrapper[4712]: I0130 19:12:05.812823 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e76cb50-55cf-4db9-9e52-3592e7ca4837" path="/var/lib/kubelet/pods/9e76cb50-55cf-4db9-9e52-3592e7ca4837/volumes"
Jan 30 19:12:36 crc kubenswrapper[4712]: I0130 19:12:36.270943 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 19:12:36 crc kubenswrapper[4712]: I0130 19:12:36.273971 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 19:13:06 crc kubenswrapper[4712]: I0130 19:13:06.271586 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 19:13:06 crc kubenswrapper[4712]: I0130 19:13:06.272351 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 19:13:36 crc kubenswrapper[4712]: I0130 19:13:36.271510 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 19:13:36 crc kubenswrapper[4712]: I0130 19:13:36.272896 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 19:13:36 crc kubenswrapper[4712]: I0130 19:13:36.272950 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7"
Jan 30 19:13:36 crc kubenswrapper[4712]: I0130 19:13:36.273841 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0a1644e7958883534d01788ae171ff3fc1121ba5a7eb61b16fb7c21ba730d3d1"} pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 19:13:36 crc kubenswrapper[4712]: I0130 19:13:36.273913 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" containerID="cri-o://0a1644e7958883534d01788ae171ff3fc1121ba5a7eb61b16fb7c21ba730d3d1" gracePeriod=600
Jan 30 19:13:36 crc kubenswrapper[4712]: I0130 19:13:36.742408 4712 generic.go:334] "Generic (PLEG): container finished" podID="75ff6334-72a0-4748-bba6-0efb493c8033" containerID="0a1644e7958883534d01788ae171ff3fc1121ba5a7eb61b16fb7c21ba730d3d1" exitCode=0
Jan 30 19:13:36 crc kubenswrapper[4712]: I0130 19:13:36.743031 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerDied","Data":"0a1644e7958883534d01788ae171ff3fc1121ba5a7eb61b16fb7c21ba730d3d1"}
Jan 30 19:13:36 crc kubenswrapper[4712]: I0130 19:13:36.744013 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerStarted","Data":"0a181628296f0743982093f4bb011e4195b8710b359c5bef2c9edca8fb5e8c5f"}
Jan 30 19:13:36 crc kubenswrapper[4712]: I0130 19:13:36.744667 4712 scope.go:117] "RemoveContainer" containerID="34acba06beb62928f94a8bb7527f7617f2f6979eb8c058280c48ac4a45c2a2fd"
Jan 30 19:15:00 crc kubenswrapper[4712]: I0130 19:15:00.158122 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496675-szb4z"]
Jan 30 19:15:00 crc kubenswrapper[4712]: E0130 19:15:00.159004 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e76cb50-55cf-4db9-9e52-3592e7ca4837" containerName="extract-utilities"
Jan 30 19:15:00 crc kubenswrapper[4712]: I0130 19:15:00.159016 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e76cb50-55cf-4db9-9e52-3592e7ca4837" containerName="extract-utilities"
Jan 30 19:15:00 crc kubenswrapper[4712]: E0130 19:15:00.159034 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e76cb50-55cf-4db9-9e52-3592e7ca4837" containerName="extract-content"
Jan 30 19:15:00 crc kubenswrapper[4712]: I0130 19:15:00.159040 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e76cb50-55cf-4db9-9e52-3592e7ca4837" containerName="extract-content"
Jan 30 19:15:00 crc kubenswrapper[4712]: E0130 19:15:00.159062 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9e76cb50-55cf-4db9-9e52-3592e7ca4837" containerName="registry-server"
Jan 30 19:15:00 crc kubenswrapper[4712]: I0130 19:15:00.159069 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="9e76cb50-55cf-4db9-9e52-3592e7ca4837" containerName="registry-server"
Jan 30 19:15:00 crc kubenswrapper[4712]: I0130 19:15:00.159226 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="9e76cb50-55cf-4db9-9e52-3592e7ca4837" containerName="registry-server"
Jan 30 19:15:00 crc kubenswrapper[4712]: I0130 19:15:00.159849 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496675-szb4z"
Jan 30 19:15:00 crc kubenswrapper[4712]: I0130 19:15:00.165889 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 30 19:15:00 crc kubenswrapper[4712]: I0130 19:15:00.167372 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 30 19:15:00 crc kubenswrapper[4712]: I0130 19:15:00.225326 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496675-szb4z"]
Jan 30 19:15:00 crc kubenswrapper[4712]: I0130 19:15:00.306454 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdd2t\" (UniqueName: \"kubernetes.io/projected/c9b1e620-fced-4dcd-b6eb-ab76e32c0301-kube-api-access-xdd2t\") pod \"collect-profiles-29496675-szb4z\" (UID: \"c9b1e620-fced-4dcd-b6eb-ab76e32c0301\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496675-szb4z"
Jan 30 19:15:00 crc kubenswrapper[4712]: I0130 19:15:00.306511 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c9b1e620-fced-4dcd-b6eb-ab76e32c0301-secret-volume\") pod \"collect-profiles-29496675-szb4z\" (UID: \"c9b1e620-fced-4dcd-b6eb-ab76e32c0301\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496675-szb4z"
Jan 30 19:15:00 crc kubenswrapper[4712]: I0130 19:15:00.306549 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9b1e620-fced-4dcd-b6eb-ab76e32c0301-config-volume\") pod \"collect-profiles-29496675-szb4z\" (UID: \"c9b1e620-fced-4dcd-b6eb-ab76e32c0301\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496675-szb4z"
Jan 30 19:15:00 crc kubenswrapper[4712]: I0130 19:15:00.408103 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdd2t\" (UniqueName: \"kubernetes.io/projected/c9b1e620-fced-4dcd-b6eb-ab76e32c0301-kube-api-access-xdd2t\") pod \"collect-profiles-29496675-szb4z\" (UID: \"c9b1e620-fced-4dcd-b6eb-ab76e32c0301\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496675-szb4z"
Jan 30 19:15:00 crc kubenswrapper[4712]: I0130 19:15:00.408401 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c9b1e620-fced-4dcd-b6eb-ab76e32c0301-secret-volume\") pod \"collect-profiles-29496675-szb4z\" (UID: \"c9b1e620-fced-4dcd-b6eb-ab76e32c0301\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496675-szb4z"
"operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c9b1e620-fced-4dcd-b6eb-ab76e32c0301-secret-volume\") pod \"collect-profiles-29496675-szb4z\" (UID: \"c9b1e620-fced-4dcd-b6eb-ab76e32c0301\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496675-szb4z" Jan 30 19:15:00 crc kubenswrapper[4712]: I0130 19:15:00.408432 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9b1e620-fced-4dcd-b6eb-ab76e32c0301-config-volume\") pod \"collect-profiles-29496675-szb4z\" (UID: \"c9b1e620-fced-4dcd-b6eb-ab76e32c0301\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496675-szb4z" Jan 30 19:15:00 crc kubenswrapper[4712]: I0130 19:15:00.409292 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9b1e620-fced-4dcd-b6eb-ab76e32c0301-config-volume\") pod \"collect-profiles-29496675-szb4z\" (UID: \"c9b1e620-fced-4dcd-b6eb-ab76e32c0301\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496675-szb4z" Jan 30 19:15:00 crc kubenswrapper[4712]: I0130 19:15:00.414348 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c9b1e620-fced-4dcd-b6eb-ab76e32c0301-secret-volume\") pod \"collect-profiles-29496675-szb4z\" (UID: \"c9b1e620-fced-4dcd-b6eb-ab76e32c0301\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496675-szb4z" Jan 30 19:15:00 crc kubenswrapper[4712]: I0130 19:15:00.426753 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdd2t\" (UniqueName: \"kubernetes.io/projected/c9b1e620-fced-4dcd-b6eb-ab76e32c0301-kube-api-access-xdd2t\") pod \"collect-profiles-29496675-szb4z\" (UID: \"c9b1e620-fced-4dcd-b6eb-ab76e32c0301\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496675-szb4z" Jan 30 19:15:00 crc kubenswrapper[4712]: I0130 19:15:00.522082 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496675-szb4z" Jan 30 19:15:01 crc kubenswrapper[4712]: I0130 19:15:01.005690 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496675-szb4z"] Jan 30 19:15:01 crc kubenswrapper[4712]: I0130 19:15:01.619698 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496675-szb4z" event={"ID":"c9b1e620-fced-4dcd-b6eb-ab76e32c0301","Type":"ContainerStarted","Data":"869427332ef2363a2c04b24728f9598dcdc0e4710dc5dd6ef1f84432a3497074"} Jan 30 19:15:01 crc kubenswrapper[4712]: I0130 19:15:01.620061 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496675-szb4z" event={"ID":"c9b1e620-fced-4dcd-b6eb-ab76e32c0301","Type":"ContainerStarted","Data":"b919d3ded3bdc45ca041bcc5c5bd7c0bc580c592736b542f40d4ed9b35415357"} Jan 30 19:15:01 crc kubenswrapper[4712]: I0130 19:15:01.637666 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496675-szb4z" podStartSLOduration=1.6376522599999999 podStartE2EDuration="1.63765226s" podCreationTimestamp="2026-01-30 19:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 19:15:01.63305777 +0000 UTC m=+8438.540067239" watchObservedRunningTime="2026-01-30 19:15:01.63765226 +0000 UTC m=+8438.544661729" Jan 30 19:15:02 crc kubenswrapper[4712]: I0130 19:15:02.630160 4712 generic.go:334] "Generic (PLEG): container finished" podID="c9b1e620-fced-4dcd-b6eb-ab76e32c0301" containerID="869427332ef2363a2c04b24728f9598dcdc0e4710dc5dd6ef1f84432a3497074" exitCode=0 Jan 30 19:15:02 crc kubenswrapper[4712]: I0130 19:15:02.630480 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496675-szb4z" event={"ID":"c9b1e620-fced-4dcd-b6eb-ab76e32c0301","Type":"ContainerDied","Data":"869427332ef2363a2c04b24728f9598dcdc0e4710dc5dd6ef1f84432a3497074"} Jan 30 19:15:04 crc kubenswrapper[4712]: I0130 19:15:04.017332 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496675-szb4z" Jan 30 19:15:04 crc kubenswrapper[4712]: I0130 19:15:04.075415 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9b1e620-fced-4dcd-b6eb-ab76e32c0301-config-volume\") pod \"c9b1e620-fced-4dcd-b6eb-ab76e32c0301\" (UID: \"c9b1e620-fced-4dcd-b6eb-ab76e32c0301\") " Jan 30 19:15:04 crc kubenswrapper[4712]: I0130 19:15:04.075471 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdd2t\" (UniqueName: \"kubernetes.io/projected/c9b1e620-fced-4dcd-b6eb-ab76e32c0301-kube-api-access-xdd2t\") pod \"c9b1e620-fced-4dcd-b6eb-ab76e32c0301\" (UID: \"c9b1e620-fced-4dcd-b6eb-ab76e32c0301\") " Jan 30 19:15:04 crc kubenswrapper[4712]: I0130 19:15:04.075569 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c9b1e620-fced-4dcd-b6eb-ab76e32c0301-secret-volume\") pod \"c9b1e620-fced-4dcd-b6eb-ab76e32c0301\" (UID: \"c9b1e620-fced-4dcd-b6eb-ab76e32c0301\") " Jan 30 19:15:04 crc kubenswrapper[4712]: I0130 19:15:04.077166 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9b1e620-fced-4dcd-b6eb-ab76e32c0301-config-volume" (OuterVolumeSpecName: "config-volume") pod "c9b1e620-fced-4dcd-b6eb-ab76e32c0301" (UID: "c9b1e620-fced-4dcd-b6eb-ab76e32c0301"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 19:15:04 crc kubenswrapper[4712]: I0130 19:15:04.084610 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9b1e620-fced-4dcd-b6eb-ab76e32c0301-kube-api-access-xdd2t" (OuterVolumeSpecName: "kube-api-access-xdd2t") pod "c9b1e620-fced-4dcd-b6eb-ab76e32c0301" (UID: "c9b1e620-fced-4dcd-b6eb-ab76e32c0301"). InnerVolumeSpecName "kube-api-access-xdd2t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 19:15:04 crc kubenswrapper[4712]: I0130 19:15:04.084657 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9b1e620-fced-4dcd-b6eb-ab76e32c0301-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c9b1e620-fced-4dcd-b6eb-ab76e32c0301" (UID: "c9b1e620-fced-4dcd-b6eb-ab76e32c0301"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 19:15:04 crc kubenswrapper[4712]: I0130 19:15:04.177973 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdd2t\" (UniqueName: \"kubernetes.io/projected/c9b1e620-fced-4dcd-b6eb-ab76e32c0301-kube-api-access-xdd2t\") on node \"crc\" DevicePath \"\"" Jan 30 19:15:04 crc kubenswrapper[4712]: I0130 19:15:04.178006 4712 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c9b1e620-fced-4dcd-b6eb-ab76e32c0301-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 19:15:04 crc kubenswrapper[4712]: I0130 19:15:04.178015 4712 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c9b1e620-fced-4dcd-b6eb-ab76e32c0301-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 19:15:04 crc kubenswrapper[4712]: I0130 19:15:04.660081 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496675-szb4z" event={"ID":"c9b1e620-fced-4dcd-b6eb-ab76e32c0301","Type":"ContainerDied","Data":"b919d3ded3bdc45ca041bcc5c5bd7c0bc580c592736b542f40d4ed9b35415357"} Jan 30 19:15:04 crc kubenswrapper[4712]: I0130 19:15:04.660117 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b919d3ded3bdc45ca041bcc5c5bd7c0bc580c592736b542f40d4ed9b35415357" Jan 30 19:15:04 crc kubenswrapper[4712]: I0130 19:15:04.660207 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496675-szb4z" Jan 30 19:15:04 crc kubenswrapper[4712]: I0130 19:15:04.747207 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496630-2xjc7"] Jan 30 19:15:04 crc kubenswrapper[4712]: I0130 19:15:04.776956 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496630-2xjc7"] Jan 30 19:15:05 crc kubenswrapper[4712]: I0130 19:15:05.814398 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7016a028-3d59-4c19-af25-90d601a927fe" path="/var/lib/kubelet/pods/7016a028-3d59-4c19-af25-90d601a927fe/volumes" Jan 30 19:15:36 crc kubenswrapper[4712]: I0130 19:15:36.271377 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 19:15:36 crc kubenswrapper[4712]: I0130 19:15:36.272089 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 19:15:51 crc kubenswrapper[4712]: I0130 19:15:51.548959 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-754xh"] Jan 30 19:15:51 crc kubenswrapper[4712]: E0130 19:15:51.550221 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9b1e620-fced-4dcd-b6eb-ab76e32c0301" containerName="collect-profiles" Jan 30 19:15:51 crc kubenswrapper[4712]: I0130 19:15:51.550245 4712 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="c9b1e620-fced-4dcd-b6eb-ab76e32c0301" containerName="collect-profiles" Jan 30 19:15:51 crc kubenswrapper[4712]: I0130 19:15:51.550573 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9b1e620-fced-4dcd-b6eb-ab76e32c0301" containerName="collect-profiles" Jan 30 19:15:51 crc kubenswrapper[4712]: I0130 19:15:51.560061 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-754xh" Jan 30 19:15:51 crc kubenswrapper[4712]: I0130 19:15:51.584973 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-754xh"] Jan 30 19:15:51 crc kubenswrapper[4712]: I0130 19:15:51.704146 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8p9z\" (UniqueName: \"kubernetes.io/projected/3bd3856d-f80f-4f1b-9c66-35cd400ab0d0-kube-api-access-f8p9z\") pod \"community-operators-754xh\" (UID: \"3bd3856d-f80f-4f1b-9c66-35cd400ab0d0\") " pod="openshift-marketplace/community-operators-754xh" Jan 30 19:15:51 crc kubenswrapper[4712]: I0130 19:15:51.704519 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bd3856d-f80f-4f1b-9c66-35cd400ab0d0-utilities\") pod \"community-operators-754xh\" (UID: \"3bd3856d-f80f-4f1b-9c66-35cd400ab0d0\") " pod="openshift-marketplace/community-operators-754xh" Jan 30 19:15:51 crc kubenswrapper[4712]: I0130 19:15:51.704604 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bd3856d-f80f-4f1b-9c66-35cd400ab0d0-catalog-content\") pod \"community-operators-754xh\" (UID: \"3bd3856d-f80f-4f1b-9c66-35cd400ab0d0\") " pod="openshift-marketplace/community-operators-754xh" Jan 30 19:15:51 crc kubenswrapper[4712]: I0130 19:15:51.807283 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bd3856d-f80f-4f1b-9c66-35cd400ab0d0-catalog-content\") pod \"community-operators-754xh\" (UID: \"3bd3856d-f80f-4f1b-9c66-35cd400ab0d0\") " pod="openshift-marketplace/community-operators-754xh" Jan 30 19:15:51 crc kubenswrapper[4712]: I0130 19:15:51.807533 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8p9z\" (UniqueName: \"kubernetes.io/projected/3bd3856d-f80f-4f1b-9c66-35cd400ab0d0-kube-api-access-f8p9z\") pod \"community-operators-754xh\" (UID: \"3bd3856d-f80f-4f1b-9c66-35cd400ab0d0\") " pod="openshift-marketplace/community-operators-754xh" Jan 30 19:15:51 crc kubenswrapper[4712]: I0130 19:15:51.807582 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bd3856d-f80f-4f1b-9c66-35cd400ab0d0-utilities\") pod \"community-operators-754xh\" (UID: \"3bd3856d-f80f-4f1b-9c66-35cd400ab0d0\") " pod="openshift-marketplace/community-operators-754xh" Jan 30 19:15:51 crc kubenswrapper[4712]: I0130 19:15:51.807770 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bd3856d-f80f-4f1b-9c66-35cd400ab0d0-catalog-content\") pod \"community-operators-754xh\" (UID: \"3bd3856d-f80f-4f1b-9c66-35cd400ab0d0\") " pod="openshift-marketplace/community-operators-754xh" Jan 30 19:15:51 crc kubenswrapper[4712]: I0130 19:15:51.807984 4712 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bd3856d-f80f-4f1b-9c66-35cd400ab0d0-utilities\") pod \"community-operators-754xh\" (UID: \"3bd3856d-f80f-4f1b-9c66-35cd400ab0d0\") " pod="openshift-marketplace/community-operators-754xh" Jan 30 19:15:51 crc kubenswrapper[4712]: I0130 19:15:51.826321 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8p9z\" (UniqueName: \"kubernetes.io/projected/3bd3856d-f80f-4f1b-9c66-35cd400ab0d0-kube-api-access-f8p9z\") pod \"community-operators-754xh\" (UID: \"3bd3856d-f80f-4f1b-9c66-35cd400ab0d0\") " pod="openshift-marketplace/community-operators-754xh" Jan 30 19:15:51 crc kubenswrapper[4712]: I0130 19:15:51.928513 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-754xh" Jan 30 19:15:52 crc kubenswrapper[4712]: I0130 19:15:52.552503 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-754xh"] Jan 30 19:15:53 crc kubenswrapper[4712]: I0130 19:15:53.119008 4712 generic.go:334] "Generic (PLEG): container finished" podID="3bd3856d-f80f-4f1b-9c66-35cd400ab0d0" containerID="0b494bc4eed59e307e49a02301e6082e8af83b9391efcd7f1720840146e926d7" exitCode=0 Jan 30 19:15:53 crc kubenswrapper[4712]: I0130 19:15:53.119116 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-754xh" event={"ID":"3bd3856d-f80f-4f1b-9c66-35cd400ab0d0","Type":"ContainerDied","Data":"0b494bc4eed59e307e49a02301e6082e8af83b9391efcd7f1720840146e926d7"} Jan 30 19:15:53 crc kubenswrapper[4712]: I0130 19:15:53.119723 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-754xh" event={"ID":"3bd3856d-f80f-4f1b-9c66-35cd400ab0d0","Type":"ContainerStarted","Data":"5d287273b6089388094c1f719ac12246e968fc4a6fd7d0ddd9c98e3bad98ff24"} Jan 30 19:15:53 crc kubenswrapper[4712]: I0130 19:15:53.122575 4712 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 19:15:53 crc kubenswrapper[4712]: E0130 19:15:53.417014 4712 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.246:56254->38.102.83.246:35825: read tcp 38.102.83.246:56254->38.102.83.246:35825: read: connection reset by peer Jan 30 19:15:54 crc kubenswrapper[4712]: I0130 19:15:54.133321 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-754xh" event={"ID":"3bd3856d-f80f-4f1b-9c66-35cd400ab0d0","Type":"ContainerStarted","Data":"e55e562663ad58ce44f333d3c0fc80cf30c2e1f19adc7696ac4785aa1d4afb4c"} Jan 30 19:15:56 crc kubenswrapper[4712]: I0130 19:15:56.151548 4712 generic.go:334] "Generic (PLEG): container finished" podID="3bd3856d-f80f-4f1b-9c66-35cd400ab0d0" containerID="e55e562663ad58ce44f333d3c0fc80cf30c2e1f19adc7696ac4785aa1d4afb4c" exitCode=0 Jan 30 19:15:56 crc kubenswrapper[4712]: I0130 19:15:56.152045 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-754xh" event={"ID":"3bd3856d-f80f-4f1b-9c66-35cd400ab0d0","Type":"ContainerDied","Data":"e55e562663ad58ce44f333d3c0fc80cf30c2e1f19adc7696ac4785aa1d4afb4c"} Jan 30 19:15:57 crc kubenswrapper[4712]: I0130 19:15:57.163536 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-754xh" 
event={"ID":"3bd3856d-f80f-4f1b-9c66-35cd400ab0d0","Type":"ContainerStarted","Data":"a9631d35a66f42576d9f02a03897597067f6fb057253857474a7957084798d30"} Jan 30 19:16:01 crc kubenswrapper[4712]: I0130 19:16:01.929037 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-754xh" Jan 30 19:16:01 crc kubenswrapper[4712]: I0130 19:16:01.929535 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-754xh" Jan 30 19:16:02 crc kubenswrapper[4712]: I0130 19:16:02.984976 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-754xh" podUID="3bd3856d-f80f-4f1b-9c66-35cd400ab0d0" containerName="registry-server" probeResult="failure" output=< Jan 30 19:16:02 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 19:16:02 crc kubenswrapper[4712]: > Jan 30 19:16:04 crc kubenswrapper[4712]: I0130 19:16:04.427844 4712 scope.go:117] "RemoveContainer" containerID="3141a92b7f63f2d4fdb2a8084d09bb950a9dc6f02f6dbe982010ac4cc721e7bf" Jan 30 19:16:06 crc kubenswrapper[4712]: I0130 19:16:06.270917 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 19:16:06 crc kubenswrapper[4712]: I0130 19:16:06.271291 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 19:16:11 crc kubenswrapper[4712]: I0130 19:16:11.978561 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-754xh" Jan 30 19:16:12 crc kubenswrapper[4712]: I0130 19:16:12.006891 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-754xh" podStartSLOduration=17.512843705 podStartE2EDuration="21.006867989s" podCreationTimestamp="2026-01-30 19:15:51 +0000 UTC" firstStartedPulling="2026-01-30 19:15:53.122251619 +0000 UTC m=+8490.029261098" lastFinishedPulling="2026-01-30 19:15:56.616275903 +0000 UTC m=+8493.523285382" observedRunningTime="2026-01-30 19:15:57.181247657 +0000 UTC m=+8494.088257126" watchObservedRunningTime="2026-01-30 19:16:12.006867989 +0000 UTC m=+8508.913877458" Jan 30 19:16:12 crc kubenswrapper[4712]: I0130 19:16:12.039197 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-754xh" Jan 30 19:16:12 crc kubenswrapper[4712]: I0130 19:16:12.221783 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-754xh"] Jan 30 19:16:13 crc kubenswrapper[4712]: I0130 19:16:13.325200 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-754xh" podUID="3bd3856d-f80f-4f1b-9c66-35cd400ab0d0" containerName="registry-server" containerID="cri-o://a9631d35a66f42576d9f02a03897597067f6fb057253857474a7957084798d30" gracePeriod=2 Jan 30 19:16:13 crc kubenswrapper[4712]: I0130 19:16:13.894492 4712 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/community-operators-754xh" Jan 30 19:16:13 crc kubenswrapper[4712]: I0130 19:16:13.986359 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bd3856d-f80f-4f1b-9c66-35cd400ab0d0-utilities\") pod \"3bd3856d-f80f-4f1b-9c66-35cd400ab0d0\" (UID: \"3bd3856d-f80f-4f1b-9c66-35cd400ab0d0\") " Jan 30 19:16:13 crc kubenswrapper[4712]: I0130 19:16:13.986421 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bd3856d-f80f-4f1b-9c66-35cd400ab0d0-catalog-content\") pod \"3bd3856d-f80f-4f1b-9c66-35cd400ab0d0\" (UID: \"3bd3856d-f80f-4f1b-9c66-35cd400ab0d0\") " Jan 30 19:16:13 crc kubenswrapper[4712]: I0130 19:16:13.986558 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8p9z\" (UniqueName: \"kubernetes.io/projected/3bd3856d-f80f-4f1b-9c66-35cd400ab0d0-kube-api-access-f8p9z\") pod \"3bd3856d-f80f-4f1b-9c66-35cd400ab0d0\" (UID: \"3bd3856d-f80f-4f1b-9c66-35cd400ab0d0\") " Jan 30 19:16:13 crc kubenswrapper[4712]: I0130 19:16:13.987271 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bd3856d-f80f-4f1b-9c66-35cd400ab0d0-utilities" (OuterVolumeSpecName: "utilities") pod "3bd3856d-f80f-4f1b-9c66-35cd400ab0d0" (UID: "3bd3856d-f80f-4f1b-9c66-35cd400ab0d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 19:16:14 crc kubenswrapper[4712]: I0130 19:16:14.012908 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bd3856d-f80f-4f1b-9c66-35cd400ab0d0-kube-api-access-f8p9z" (OuterVolumeSpecName: "kube-api-access-f8p9z") pod "3bd3856d-f80f-4f1b-9c66-35cd400ab0d0" (UID: "3bd3856d-f80f-4f1b-9c66-35cd400ab0d0"). InnerVolumeSpecName "kube-api-access-f8p9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 19:16:14 crc kubenswrapper[4712]: I0130 19:16:14.074373 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bd3856d-f80f-4f1b-9c66-35cd400ab0d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3bd3856d-f80f-4f1b-9c66-35cd400ab0d0" (UID: "3bd3856d-f80f-4f1b-9c66-35cd400ab0d0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 19:16:14 crc kubenswrapper[4712]: I0130 19:16:14.088685 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bd3856d-f80f-4f1b-9c66-35cd400ab0d0-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 19:16:14 crc kubenswrapper[4712]: I0130 19:16:14.088720 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bd3856d-f80f-4f1b-9c66-35cd400ab0d0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 19:16:14 crc kubenswrapper[4712]: I0130 19:16:14.088734 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8p9z\" (UniqueName: \"kubernetes.io/projected/3bd3856d-f80f-4f1b-9c66-35cd400ab0d0-kube-api-access-f8p9z\") on node \"crc\" DevicePath \"\"" Jan 30 19:16:14 crc kubenswrapper[4712]: I0130 19:16:14.336496 4712 generic.go:334] "Generic (PLEG): container finished" podID="3bd3856d-f80f-4f1b-9c66-35cd400ab0d0" containerID="a9631d35a66f42576d9f02a03897597067f6fb057253857474a7957084798d30" exitCode=0 Jan 30 19:16:14 crc kubenswrapper[4712]: I0130 19:16:14.336552 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-754xh" event={"ID":"3bd3856d-f80f-4f1b-9c66-35cd400ab0d0","Type":"ContainerDied","Data":"a9631d35a66f42576d9f02a03897597067f6fb057253857474a7957084798d30"} Jan 30 19:16:14 crc kubenswrapper[4712]: I0130 19:16:14.336570 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-754xh" Jan 30 19:16:14 crc kubenswrapper[4712]: I0130 19:16:14.336588 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-754xh" event={"ID":"3bd3856d-f80f-4f1b-9c66-35cd400ab0d0","Type":"ContainerDied","Data":"5d287273b6089388094c1f719ac12246e968fc4a6fd7d0ddd9c98e3bad98ff24"} Jan 30 19:16:14 crc kubenswrapper[4712]: I0130 19:16:14.336615 4712 scope.go:117] "RemoveContainer" containerID="a9631d35a66f42576d9f02a03897597067f6fb057253857474a7957084798d30" Jan 30 19:16:14 crc kubenswrapper[4712]: I0130 19:16:14.365187 4712 scope.go:117] "RemoveContainer" containerID="e55e562663ad58ce44f333d3c0fc80cf30c2e1f19adc7696ac4785aa1d4afb4c" Jan 30 19:16:14 crc kubenswrapper[4712]: I0130 19:16:14.374227 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-754xh"] Jan 30 19:16:14 crc kubenswrapper[4712]: I0130 19:16:14.382077 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-754xh"] Jan 30 19:16:14 crc kubenswrapper[4712]: I0130 19:16:14.397104 4712 scope.go:117] "RemoveContainer" containerID="0b494bc4eed59e307e49a02301e6082e8af83b9391efcd7f1720840146e926d7" Jan 30 19:16:14 crc kubenswrapper[4712]: I0130 19:16:14.460280 4712 scope.go:117] "RemoveContainer" containerID="a9631d35a66f42576d9f02a03897597067f6fb057253857474a7957084798d30" Jan 30 19:16:14 crc kubenswrapper[4712]: E0130 19:16:14.460819 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a9631d35a66f42576d9f02a03897597067f6fb057253857474a7957084798d30\": container with ID starting with a9631d35a66f42576d9f02a03897597067f6fb057253857474a7957084798d30 not found: ID does not exist" containerID="a9631d35a66f42576d9f02a03897597067f6fb057253857474a7957084798d30" Jan 30 19:16:14 crc kubenswrapper[4712]: I0130 19:16:14.460865 
4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9631d35a66f42576d9f02a03897597067f6fb057253857474a7957084798d30"} err="failed to get container status \"a9631d35a66f42576d9f02a03897597067f6fb057253857474a7957084798d30\": rpc error: code = NotFound desc = could not find container \"a9631d35a66f42576d9f02a03897597067f6fb057253857474a7957084798d30\": container with ID starting with a9631d35a66f42576d9f02a03897597067f6fb057253857474a7957084798d30 not found: ID does not exist" Jan 30 19:16:14 crc kubenswrapper[4712]: I0130 19:16:14.460893 4712 scope.go:117] "RemoveContainer" containerID="e55e562663ad58ce44f333d3c0fc80cf30c2e1f19adc7696ac4785aa1d4afb4c" Jan 30 19:16:14 crc kubenswrapper[4712]: E0130 19:16:14.463911 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e55e562663ad58ce44f333d3c0fc80cf30c2e1f19adc7696ac4785aa1d4afb4c\": container with ID starting with e55e562663ad58ce44f333d3c0fc80cf30c2e1f19adc7696ac4785aa1d4afb4c not found: ID does not exist" containerID="e55e562663ad58ce44f333d3c0fc80cf30c2e1f19adc7696ac4785aa1d4afb4c" Jan 30 19:16:14 crc kubenswrapper[4712]: I0130 19:16:14.463942 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e55e562663ad58ce44f333d3c0fc80cf30c2e1f19adc7696ac4785aa1d4afb4c"} err="failed to get container status \"e55e562663ad58ce44f333d3c0fc80cf30c2e1f19adc7696ac4785aa1d4afb4c\": rpc error: code = NotFound desc = could not find container \"e55e562663ad58ce44f333d3c0fc80cf30c2e1f19adc7696ac4785aa1d4afb4c\": container with ID starting with e55e562663ad58ce44f333d3c0fc80cf30c2e1f19adc7696ac4785aa1d4afb4c not found: ID does not exist" Jan 30 19:16:14 crc kubenswrapper[4712]: I0130 19:16:14.463959 4712 scope.go:117] "RemoveContainer" containerID="0b494bc4eed59e307e49a02301e6082e8af83b9391efcd7f1720840146e926d7" Jan 30 19:16:14 crc kubenswrapper[4712]: E0130 19:16:14.464233 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b494bc4eed59e307e49a02301e6082e8af83b9391efcd7f1720840146e926d7\": container with ID starting with 0b494bc4eed59e307e49a02301e6082e8af83b9391efcd7f1720840146e926d7 not found: ID does not exist" containerID="0b494bc4eed59e307e49a02301e6082e8af83b9391efcd7f1720840146e926d7" Jan 30 19:16:14 crc kubenswrapper[4712]: I0130 19:16:14.464257 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b494bc4eed59e307e49a02301e6082e8af83b9391efcd7f1720840146e926d7"} err="failed to get container status \"0b494bc4eed59e307e49a02301e6082e8af83b9391efcd7f1720840146e926d7\": rpc error: code = NotFound desc = could not find container \"0b494bc4eed59e307e49a02301e6082e8af83b9391efcd7f1720840146e926d7\": container with ID starting with 0b494bc4eed59e307e49a02301e6082e8af83b9391efcd7f1720840146e926d7 not found: ID does not exist" Jan 30 19:16:15 crc kubenswrapper[4712]: I0130 19:16:15.810268 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bd3856d-f80f-4f1b-9c66-35cd400ab0d0" path="/var/lib/kubelet/pods/3bd3856d-f80f-4f1b-9c66-35cd400ab0d0/volumes" Jan 30 19:16:36 crc kubenswrapper[4712]: I0130 19:16:36.271621 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 19:16:36 crc kubenswrapper[4712]: I0130 19:16:36.273894 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 19:16:36 crc kubenswrapper[4712]: I0130 19:16:36.274642 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" Jan 30 19:16:36 crc kubenswrapper[4712]: I0130 19:16:36.276056 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"0a181628296f0743982093f4bb011e4195b8710b359c5bef2c9edca8fb5e8c5f"} pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 19:16:36 crc kubenswrapper[4712]: I0130 19:16:36.276440 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" containerID="cri-o://0a181628296f0743982093f4bb011e4195b8710b359c5bef2c9edca8fb5e8c5f" gracePeriod=600 Jan 30 19:16:36 crc kubenswrapper[4712]: E0130 19:16:36.409173 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:16:36 crc kubenswrapper[4712]: I0130 19:16:36.611510 4712 generic.go:334] "Generic (PLEG): container finished" podID="75ff6334-72a0-4748-bba6-0efb493c8033" containerID="0a181628296f0743982093f4bb011e4195b8710b359c5bef2c9edca8fb5e8c5f" exitCode=0 Jan 30 19:16:36 crc kubenswrapper[4712]: I0130 19:16:36.611550 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerDied","Data":"0a181628296f0743982093f4bb011e4195b8710b359c5bef2c9edca8fb5e8c5f"} Jan 30 19:16:36 crc kubenswrapper[4712]: I0130 19:16:36.611970 4712 scope.go:117] "RemoveContainer" containerID="0a1644e7958883534d01788ae171ff3fc1121ba5a7eb61b16fb7c21ba730d3d1" Jan 30 19:16:36 crc kubenswrapper[4712]: I0130 19:16:36.612903 4712 scope.go:117] "RemoveContainer" containerID="0a181628296f0743982093f4bb011e4195b8710b359c5bef2c9edca8fb5e8c5f" Jan 30 19:16:36 crc kubenswrapper[4712]: E0130 19:16:36.613357 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:16:47 crc kubenswrapper[4712]: I0130 19:16:47.800481 4712 scope.go:117] "RemoveContainer" 
containerID="0a181628296f0743982093f4bb011e4195b8710b359c5bef2c9edca8fb5e8c5f" Jan 30 19:16:47 crc kubenswrapper[4712]: E0130 19:16:47.801618 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:17:01 crc kubenswrapper[4712]: I0130 19:17:01.801086 4712 scope.go:117] "RemoveContainer" containerID="0a181628296f0743982093f4bb011e4195b8710b359c5bef2c9edca8fb5e8c5f" Jan 30 19:17:01 crc kubenswrapper[4712]: E0130 19:17:01.802661 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:17:12 crc kubenswrapper[4712]: I0130 19:17:12.800040 4712 scope.go:117] "RemoveContainer" containerID="0a181628296f0743982093f4bb011e4195b8710b359c5bef2c9edca8fb5e8c5f" Jan 30 19:17:12 crc kubenswrapper[4712]: E0130 19:17:12.801054 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:17:27 crc kubenswrapper[4712]: I0130 19:17:27.799530 4712 scope.go:117] "RemoveContainer" containerID="0a181628296f0743982093f4bb011e4195b8710b359c5bef2c9edca8fb5e8c5f" Jan 30 19:17:27 crc kubenswrapper[4712]: E0130 19:17:27.801881 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:17:38 crc kubenswrapper[4712]: I0130 19:17:38.800180 4712 scope.go:117] "RemoveContainer" containerID="0a181628296f0743982093f4bb011e4195b8710b359c5bef2c9edca8fb5e8c5f" Jan 30 19:17:38 crc kubenswrapper[4712]: E0130 19:17:38.801255 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:17:49 crc kubenswrapper[4712]: I0130 19:17:49.805080 4712 scope.go:117] "RemoveContainer" containerID="0a181628296f0743982093f4bb011e4195b8710b359c5bef2c9edca8fb5e8c5f" Jan 30 19:17:49 crc kubenswrapper[4712]: E0130 19:17:49.805896 4712 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:18:04 crc kubenswrapper[4712]: I0130 19:18:04.799727 4712 scope.go:117] "RemoveContainer" containerID="0a181628296f0743982093f4bb011e4195b8710b359c5bef2c9edca8fb5e8c5f" Jan 30 19:18:04 crc kubenswrapper[4712]: E0130 19:18:04.800576 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:18:15 crc kubenswrapper[4712]: I0130 19:18:15.802192 4712 scope.go:117] "RemoveContainer" containerID="0a181628296f0743982093f4bb011e4195b8710b359c5bef2c9edca8fb5e8c5f" Jan 30 19:18:15 crc kubenswrapper[4712]: E0130 19:18:15.804038 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:18:28 crc kubenswrapper[4712]: I0130 19:18:28.800588 4712 scope.go:117] "RemoveContainer" containerID="0a181628296f0743982093f4bb011e4195b8710b359c5bef2c9edca8fb5e8c5f" Jan 30 19:18:28 crc kubenswrapper[4712]: E0130 19:18:28.801862 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:18:42 crc kubenswrapper[4712]: I0130 19:18:42.800724 4712 scope.go:117] "RemoveContainer" containerID="0a181628296f0743982093f4bb011e4195b8710b359c5bef2c9edca8fb5e8c5f" Jan 30 19:18:42 crc kubenswrapper[4712]: E0130 19:18:42.803664 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:18:53 crc kubenswrapper[4712]: I0130 19:18:53.806882 4712 scope.go:117] "RemoveContainer" containerID="0a181628296f0743982093f4bb011e4195b8710b359c5bef2c9edca8fb5e8c5f" Jan 30 19:18:53 crc kubenswrapper[4712]: E0130 19:18:53.807669 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:19:04 crc kubenswrapper[4712]: I0130 19:19:04.816651 4712 scope.go:117] "RemoveContainer" containerID="0a181628296f0743982093f4bb011e4195b8710b359c5bef2c9edca8fb5e8c5f" Jan 30 19:19:04 crc kubenswrapper[4712]: E0130 19:19:04.817292 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:19:19 crc kubenswrapper[4712]: I0130 19:19:19.800949 4712 scope.go:117] "RemoveContainer" containerID="0a181628296f0743982093f4bb011e4195b8710b359c5bef2c9edca8fb5e8c5f" Jan 30 19:19:19 crc kubenswrapper[4712]: E0130 19:19:19.801835 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:19:31 crc kubenswrapper[4712]: I0130 19:19:31.800420 4712 scope.go:117] "RemoveContainer" containerID="0a181628296f0743982093f4bb011e4195b8710b359c5bef2c9edca8fb5e8c5f" Jan 30 19:19:31 crc kubenswrapper[4712]: E0130 19:19:31.801565 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:19:45 crc kubenswrapper[4712]: I0130 19:19:45.801417 4712 scope.go:117] "RemoveContainer" containerID="0a181628296f0743982093f4bb011e4195b8710b359c5bef2c9edca8fb5e8c5f" Jan 30 19:19:45 crc kubenswrapper[4712]: E0130 19:19:45.802215 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:20:00 crc kubenswrapper[4712]: I0130 19:20:00.800050 4712 scope.go:117] "RemoveContainer" containerID="0a181628296f0743982093f4bb011e4195b8710b359c5bef2c9edca8fb5e8c5f" Jan 30 19:20:00 crc kubenswrapper[4712]: E0130 19:20:00.800697 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" 
podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:20:12 crc kubenswrapper[4712]: I0130 19:20:12.801506 4712 scope.go:117] "RemoveContainer" containerID="0a181628296f0743982093f4bb011e4195b8710b359c5bef2c9edca8fb5e8c5f" Jan 30 19:20:12 crc kubenswrapper[4712]: E0130 19:20:12.804740 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:20:23 crc kubenswrapper[4712]: I0130 19:20:23.822660 4712 scope.go:117] "RemoveContainer" containerID="0a181628296f0743982093f4bb011e4195b8710b359c5bef2c9edca8fb5e8c5f" Jan 30 19:20:23 crc kubenswrapper[4712]: E0130 19:20:23.825888 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:20:38 crc kubenswrapper[4712]: I0130 19:20:38.799874 4712 scope.go:117] "RemoveContainer" containerID="0a181628296f0743982093f4bb011e4195b8710b359c5bef2c9edca8fb5e8c5f" Jan 30 19:20:38 crc kubenswrapper[4712]: E0130 19:20:38.800525 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:20:51 crc kubenswrapper[4712]: I0130 19:20:51.800138 4712 scope.go:117] "RemoveContainer" containerID="0a181628296f0743982093f4bb011e4195b8710b359c5bef2c9edca8fb5e8c5f" Jan 30 19:20:51 crc kubenswrapper[4712]: E0130 19:20:51.801051 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:21:05 crc kubenswrapper[4712]: I0130 19:21:05.799837 4712 scope.go:117] "RemoveContainer" containerID="0a181628296f0743982093f4bb011e4195b8710b359c5bef2c9edca8fb5e8c5f" Jan 30 19:21:05 crc kubenswrapper[4712]: E0130 19:21:05.801116 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:21:16 crc kubenswrapper[4712]: I0130 19:21:16.799444 4712 scope.go:117] "RemoveContainer" 
containerID="0a181628296f0743982093f4bb011e4195b8710b359c5bef2c9edca8fb5e8c5f" Jan 30 19:21:16 crc kubenswrapper[4712]: E0130 19:21:16.800141 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:21:27 crc kubenswrapper[4712]: I0130 19:21:27.800780 4712 scope.go:117] "RemoveContainer" containerID="0a181628296f0743982093f4bb011e4195b8710b359c5bef2c9edca8fb5e8c5f" Jan 30 19:21:27 crc kubenswrapper[4712]: E0130 19:21:27.802175 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:21:41 crc kubenswrapper[4712]: I0130 19:21:41.800009 4712 scope.go:117] "RemoveContainer" containerID="0a181628296f0743982093f4bb011e4195b8710b359c5bef2c9edca8fb5e8c5f" Jan 30 19:21:42 crc kubenswrapper[4712]: I0130 19:21:42.230241 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerStarted","Data":"ded2923292e5c2c34d6fa2b092da2bd4640ab4ed507f66065f6089b3f0817bdb"} Jan 30 19:21:59 crc kubenswrapper[4712]: I0130 19:21:59.619273 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gths8"] Jan 30 19:21:59 crc kubenswrapper[4712]: E0130 19:21:59.620372 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bd3856d-f80f-4f1b-9c66-35cd400ab0d0" containerName="registry-server" Jan 30 19:21:59 crc kubenswrapper[4712]: I0130 19:21:59.620394 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bd3856d-f80f-4f1b-9c66-35cd400ab0d0" containerName="registry-server" Jan 30 19:21:59 crc kubenswrapper[4712]: E0130 19:21:59.620419 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bd3856d-f80f-4f1b-9c66-35cd400ab0d0" containerName="extract-content" Jan 30 19:21:59 crc kubenswrapper[4712]: I0130 19:21:59.620428 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bd3856d-f80f-4f1b-9c66-35cd400ab0d0" containerName="extract-content" Jan 30 19:21:59 crc kubenswrapper[4712]: E0130 19:21:59.620460 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bd3856d-f80f-4f1b-9c66-35cd400ab0d0" containerName="extract-utilities" Jan 30 19:21:59 crc kubenswrapper[4712]: I0130 19:21:59.620469 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bd3856d-f80f-4f1b-9c66-35cd400ab0d0" containerName="extract-utilities" Jan 30 19:21:59 crc kubenswrapper[4712]: I0130 19:21:59.620740 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bd3856d-f80f-4f1b-9c66-35cd400ab0d0" containerName="registry-server" Jan 30 19:21:59 crc kubenswrapper[4712]: I0130 19:21:59.622766 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gths8" Jan 30 19:21:59 crc kubenswrapper[4712]: I0130 19:21:59.642024 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gths8"] Jan 30 19:21:59 crc kubenswrapper[4712]: I0130 19:21:59.736566 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msjxl\" (UniqueName: \"kubernetes.io/projected/13d822c3-54d5-490f-8daa-53351e358ab5-kube-api-access-msjxl\") pod \"redhat-operators-gths8\" (UID: \"13d822c3-54d5-490f-8daa-53351e358ab5\") " pod="openshift-marketplace/redhat-operators-gths8" Jan 30 19:21:59 crc kubenswrapper[4712]: I0130 19:21:59.736655 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13d822c3-54d5-490f-8daa-53351e358ab5-utilities\") pod \"redhat-operators-gths8\" (UID: \"13d822c3-54d5-490f-8daa-53351e358ab5\") " pod="openshift-marketplace/redhat-operators-gths8" Jan 30 19:21:59 crc kubenswrapper[4712]: I0130 19:21:59.736780 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13d822c3-54d5-490f-8daa-53351e358ab5-catalog-content\") pod \"redhat-operators-gths8\" (UID: \"13d822c3-54d5-490f-8daa-53351e358ab5\") " pod="openshift-marketplace/redhat-operators-gths8" Jan 30 19:21:59 crc kubenswrapper[4712]: I0130 19:21:59.838340 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msjxl\" (UniqueName: \"kubernetes.io/projected/13d822c3-54d5-490f-8daa-53351e358ab5-kube-api-access-msjxl\") pod \"redhat-operators-gths8\" (UID: \"13d822c3-54d5-490f-8daa-53351e358ab5\") " pod="openshift-marketplace/redhat-operators-gths8" Jan 30 19:21:59 crc kubenswrapper[4712]: I0130 19:21:59.838392 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13d822c3-54d5-490f-8daa-53351e358ab5-utilities\") pod \"redhat-operators-gths8\" (UID: \"13d822c3-54d5-490f-8daa-53351e358ab5\") " pod="openshift-marketplace/redhat-operators-gths8" Jan 30 19:21:59 crc kubenswrapper[4712]: I0130 19:21:59.838498 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13d822c3-54d5-490f-8daa-53351e358ab5-catalog-content\") pod \"redhat-operators-gths8\" (UID: \"13d822c3-54d5-490f-8daa-53351e358ab5\") " pod="openshift-marketplace/redhat-operators-gths8" Jan 30 19:21:59 crc kubenswrapper[4712]: I0130 19:21:59.839051 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13d822c3-54d5-490f-8daa-53351e358ab5-catalog-content\") pod \"redhat-operators-gths8\" (UID: \"13d822c3-54d5-490f-8daa-53351e358ab5\") " pod="openshift-marketplace/redhat-operators-gths8" Jan 30 19:21:59 crc kubenswrapper[4712]: I0130 19:21:59.839102 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13d822c3-54d5-490f-8daa-53351e358ab5-utilities\") pod \"redhat-operators-gths8\" (UID: \"13d822c3-54d5-490f-8daa-53351e358ab5\") " pod="openshift-marketplace/redhat-operators-gths8" Jan 30 19:21:59 crc kubenswrapper[4712]: I0130 19:21:59.867364 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-msjxl\" (UniqueName: \"kubernetes.io/projected/13d822c3-54d5-490f-8daa-53351e358ab5-kube-api-access-msjxl\") pod \"redhat-operators-gths8\" (UID: \"13d822c3-54d5-490f-8daa-53351e358ab5\") " pod="openshift-marketplace/redhat-operators-gths8" Jan 30 19:21:59 crc kubenswrapper[4712]: I0130 19:21:59.946612 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gths8" Jan 30 19:22:00 crc kubenswrapper[4712]: I0130 19:22:00.435284 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gths8"] Jan 30 19:22:01 crc kubenswrapper[4712]: I0130 19:22:01.426434 4712 generic.go:334] "Generic (PLEG): container finished" podID="13d822c3-54d5-490f-8daa-53351e358ab5" containerID="36c96fc974776a97273f9829fd74badc58119a5aae254d3a445ea2accecdfe6c" exitCode=0 Jan 30 19:22:01 crc kubenswrapper[4712]: I0130 19:22:01.426690 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gths8" event={"ID":"13d822c3-54d5-490f-8daa-53351e358ab5","Type":"ContainerDied","Data":"36c96fc974776a97273f9829fd74badc58119a5aae254d3a445ea2accecdfe6c"} Jan 30 19:22:01 crc kubenswrapper[4712]: I0130 19:22:01.427516 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gths8" event={"ID":"13d822c3-54d5-490f-8daa-53351e358ab5","Type":"ContainerStarted","Data":"602068e64b91b76eaeee02d1a6b3266b987327c7c4b434fbbe4601c6cf03799b"} Jan 30 19:22:01 crc kubenswrapper[4712]: I0130 19:22:01.432041 4712 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 19:22:02 crc kubenswrapper[4712]: I0130 19:22:02.441949 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gths8" event={"ID":"13d822c3-54d5-490f-8daa-53351e358ab5","Type":"ContainerStarted","Data":"c2267da890f894359e898035f4f62c275fbcd3814255d1675b4f17327773fb5e"} Jan 30 19:22:07 crc kubenswrapper[4712]: I0130 19:22:07.498812 4712 generic.go:334] "Generic (PLEG): container finished" podID="13d822c3-54d5-490f-8daa-53351e358ab5" containerID="c2267da890f894359e898035f4f62c275fbcd3814255d1675b4f17327773fb5e" exitCode=0 Jan 30 19:22:07 crc kubenswrapper[4712]: I0130 19:22:07.498872 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gths8" event={"ID":"13d822c3-54d5-490f-8daa-53351e358ab5","Type":"ContainerDied","Data":"c2267da890f894359e898035f4f62c275fbcd3814255d1675b4f17327773fb5e"} Jan 30 19:22:08 crc kubenswrapper[4712]: I0130 19:22:08.515228 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gths8" event={"ID":"13d822c3-54d5-490f-8daa-53351e358ab5","Type":"ContainerStarted","Data":"8a3070623a2d7cf0ef558007adbda2ed1122e9f10c91286d6007f9954164ecb9"} Jan 30 19:22:08 crc kubenswrapper[4712]: I0130 19:22:08.542548 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gths8" podStartSLOduration=3.050386026 podStartE2EDuration="9.542526865s" podCreationTimestamp="2026-01-30 19:21:59 +0000 UTC" firstStartedPulling="2026-01-30 19:22:01.431542762 +0000 UTC m=+8858.338552271" lastFinishedPulling="2026-01-30 19:22:07.923683641 +0000 UTC m=+8864.830693110" observedRunningTime="2026-01-30 19:22:08.534387778 +0000 UTC m=+8865.441397277" watchObservedRunningTime="2026-01-30 19:22:08.542526865 +0000 UTC m=+8865.449536344" Jan 30 19:22:09 crc 
kubenswrapper[4712]: I0130 19:22:09.947676 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gths8" Jan 30 19:22:09 crc kubenswrapper[4712]: I0130 19:22:09.948086 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gths8" Jan 30 19:22:11 crc kubenswrapper[4712]: I0130 19:22:11.010432 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gths8" podUID="13d822c3-54d5-490f-8daa-53351e358ab5" containerName="registry-server" probeResult="failure" output=< Jan 30 19:22:11 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 19:22:11 crc kubenswrapper[4712]: > Jan 30 19:22:20 crc kubenswrapper[4712]: I0130 19:22:20.989589 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gths8" podUID="13d822c3-54d5-490f-8daa-53351e358ab5" containerName="registry-server" probeResult="failure" output=< Jan 30 19:22:20 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 19:22:20 crc kubenswrapper[4712]: > Jan 30 19:22:31 crc kubenswrapper[4712]: I0130 19:22:31.011569 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gths8" podUID="13d822c3-54d5-490f-8daa-53351e358ab5" containerName="registry-server" probeResult="failure" output=< Jan 30 19:22:31 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 19:22:31 crc kubenswrapper[4712]: > Jan 30 19:22:40 crc kubenswrapper[4712]: I0130 19:22:39.999428 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gths8" Jan 30 19:22:40 crc kubenswrapper[4712]: I0130 19:22:40.074874 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gths8" Jan 30 19:22:40 crc kubenswrapper[4712]: I0130 19:22:40.243837 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gths8"] Jan 30 19:22:41 crc kubenswrapper[4712]: I0130 19:22:41.847076 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gths8" podUID="13d822c3-54d5-490f-8daa-53351e358ab5" containerName="registry-server" containerID="cri-o://8a3070623a2d7cf0ef558007adbda2ed1122e9f10c91286d6007f9954164ecb9" gracePeriod=2 Jan 30 19:22:42 crc kubenswrapper[4712]: I0130 19:22:42.484242 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gths8" Jan 30 19:22:42 crc kubenswrapper[4712]: I0130 19:22:42.655851 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msjxl\" (UniqueName: \"kubernetes.io/projected/13d822c3-54d5-490f-8daa-53351e358ab5-kube-api-access-msjxl\") pod \"13d822c3-54d5-490f-8daa-53351e358ab5\" (UID: \"13d822c3-54d5-490f-8daa-53351e358ab5\") " Jan 30 19:22:42 crc kubenswrapper[4712]: I0130 19:22:42.656140 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13d822c3-54d5-490f-8daa-53351e358ab5-catalog-content\") pod \"13d822c3-54d5-490f-8daa-53351e358ab5\" (UID: \"13d822c3-54d5-490f-8daa-53351e358ab5\") " Jan 30 19:22:42 crc kubenswrapper[4712]: I0130 19:22:42.656166 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13d822c3-54d5-490f-8daa-53351e358ab5-utilities\") pod \"13d822c3-54d5-490f-8daa-53351e358ab5\" (UID: \"13d822c3-54d5-490f-8daa-53351e358ab5\") " Jan 30 19:22:42 crc kubenswrapper[4712]: I0130 19:22:42.656772 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13d822c3-54d5-490f-8daa-53351e358ab5-utilities" (OuterVolumeSpecName: "utilities") pod "13d822c3-54d5-490f-8daa-53351e358ab5" (UID: "13d822c3-54d5-490f-8daa-53351e358ab5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 19:22:42 crc kubenswrapper[4712]: I0130 19:22:42.662976 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13d822c3-54d5-490f-8daa-53351e358ab5-kube-api-access-msjxl" (OuterVolumeSpecName: "kube-api-access-msjxl") pod "13d822c3-54d5-490f-8daa-53351e358ab5" (UID: "13d822c3-54d5-490f-8daa-53351e358ab5"). InnerVolumeSpecName "kube-api-access-msjxl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 19:22:42 crc kubenswrapper[4712]: I0130 19:22:42.758081 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13d822c3-54d5-490f-8daa-53351e358ab5-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 19:22:42 crc kubenswrapper[4712]: I0130 19:22:42.758109 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msjxl\" (UniqueName: \"kubernetes.io/projected/13d822c3-54d5-490f-8daa-53351e358ab5-kube-api-access-msjxl\") on node \"crc\" DevicePath \"\"" Jan 30 19:22:42 crc kubenswrapper[4712]: I0130 19:22:42.784198 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13d822c3-54d5-490f-8daa-53351e358ab5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "13d822c3-54d5-490f-8daa-53351e358ab5" (UID: "13d822c3-54d5-490f-8daa-53351e358ab5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 19:22:42 crc kubenswrapper[4712]: I0130 19:22:42.857168 4712 generic.go:334] "Generic (PLEG): container finished" podID="13d822c3-54d5-490f-8daa-53351e358ab5" containerID="8a3070623a2d7cf0ef558007adbda2ed1122e9f10c91286d6007f9954164ecb9" exitCode=0 Jan 30 19:22:42 crc kubenswrapper[4712]: I0130 19:22:42.857255 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gths8" event={"ID":"13d822c3-54d5-490f-8daa-53351e358ab5","Type":"ContainerDied","Data":"8a3070623a2d7cf0ef558007adbda2ed1122e9f10c91286d6007f9954164ecb9"} Jan 30 19:22:42 crc kubenswrapper[4712]: I0130 19:22:42.857293 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gths8" Jan 30 19:22:42 crc kubenswrapper[4712]: I0130 19:22:42.858016 4712 scope.go:117] "RemoveContainer" containerID="8a3070623a2d7cf0ef558007adbda2ed1122e9f10c91286d6007f9954164ecb9" Jan 30 19:22:42 crc kubenswrapper[4712]: I0130 19:22:42.857996 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gths8" event={"ID":"13d822c3-54d5-490f-8daa-53351e358ab5","Type":"ContainerDied","Data":"602068e64b91b76eaeee02d1a6b3266b987327c7c4b434fbbe4601c6cf03799b"} Jan 30 19:22:42 crc kubenswrapper[4712]: I0130 19:22:42.859564 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13d822c3-54d5-490f-8daa-53351e358ab5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 19:22:42 crc kubenswrapper[4712]: I0130 19:22:42.886261 4712 scope.go:117] "RemoveContainer" containerID="c2267da890f894359e898035f4f62c275fbcd3814255d1675b4f17327773fb5e" Jan 30 19:22:42 crc kubenswrapper[4712]: I0130 19:22:42.920980 4712 scope.go:117] "RemoveContainer" containerID="36c96fc974776a97273f9829fd74badc58119a5aae254d3a445ea2accecdfe6c" Jan 30 19:22:42 crc kubenswrapper[4712]: I0130 19:22:42.921189 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gths8"] Jan 30 19:22:42 crc kubenswrapper[4712]: I0130 19:22:42.931864 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gths8"] Jan 30 19:22:42 crc kubenswrapper[4712]: I0130 19:22:42.966192 4712 scope.go:117] "RemoveContainer" containerID="8a3070623a2d7cf0ef558007adbda2ed1122e9f10c91286d6007f9954164ecb9" Jan 30 19:22:42 crc kubenswrapper[4712]: E0130 19:22:42.966660 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a3070623a2d7cf0ef558007adbda2ed1122e9f10c91286d6007f9954164ecb9\": container with ID starting with 8a3070623a2d7cf0ef558007adbda2ed1122e9f10c91286d6007f9954164ecb9 not found: ID does not exist" containerID="8a3070623a2d7cf0ef558007adbda2ed1122e9f10c91286d6007f9954164ecb9" Jan 30 19:22:42 crc kubenswrapper[4712]: I0130 19:22:42.966697 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a3070623a2d7cf0ef558007adbda2ed1122e9f10c91286d6007f9954164ecb9"} err="failed to get container status \"8a3070623a2d7cf0ef558007adbda2ed1122e9f10c91286d6007f9954164ecb9\": rpc error: code = NotFound desc = could not find container \"8a3070623a2d7cf0ef558007adbda2ed1122e9f10c91286d6007f9954164ecb9\": container with ID starting with 8a3070623a2d7cf0ef558007adbda2ed1122e9f10c91286d6007f9954164ecb9 not found: ID does not exist" Jan 30 19:22:42 crc 
kubenswrapper[4712]: I0130 19:22:42.966724 4712 scope.go:117] "RemoveContainer" containerID="c2267da890f894359e898035f4f62c275fbcd3814255d1675b4f17327773fb5e" Jan 30 19:22:42 crc kubenswrapper[4712]: E0130 19:22:42.967121 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c2267da890f894359e898035f4f62c275fbcd3814255d1675b4f17327773fb5e\": container with ID starting with c2267da890f894359e898035f4f62c275fbcd3814255d1675b4f17327773fb5e not found: ID does not exist" containerID="c2267da890f894359e898035f4f62c275fbcd3814255d1675b4f17327773fb5e" Jan 30 19:22:42 crc kubenswrapper[4712]: I0130 19:22:42.967173 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2267da890f894359e898035f4f62c275fbcd3814255d1675b4f17327773fb5e"} err="failed to get container status \"c2267da890f894359e898035f4f62c275fbcd3814255d1675b4f17327773fb5e\": rpc error: code = NotFound desc = could not find container \"c2267da890f894359e898035f4f62c275fbcd3814255d1675b4f17327773fb5e\": container with ID starting with c2267da890f894359e898035f4f62c275fbcd3814255d1675b4f17327773fb5e not found: ID does not exist" Jan 30 19:22:42 crc kubenswrapper[4712]: I0130 19:22:42.967205 4712 scope.go:117] "RemoveContainer" containerID="36c96fc974776a97273f9829fd74badc58119a5aae254d3a445ea2accecdfe6c" Jan 30 19:22:42 crc kubenswrapper[4712]: E0130 19:22:42.967550 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"36c96fc974776a97273f9829fd74badc58119a5aae254d3a445ea2accecdfe6c\": container with ID starting with 36c96fc974776a97273f9829fd74badc58119a5aae254d3a445ea2accecdfe6c not found: ID does not exist" containerID="36c96fc974776a97273f9829fd74badc58119a5aae254d3a445ea2accecdfe6c" Jan 30 19:22:42 crc kubenswrapper[4712]: I0130 19:22:42.967575 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"36c96fc974776a97273f9829fd74badc58119a5aae254d3a445ea2accecdfe6c"} err="failed to get container status \"36c96fc974776a97273f9829fd74badc58119a5aae254d3a445ea2accecdfe6c\": rpc error: code = NotFound desc = could not find container \"36c96fc974776a97273f9829fd74badc58119a5aae254d3a445ea2accecdfe6c\": container with ID starting with 36c96fc974776a97273f9829fd74badc58119a5aae254d3a445ea2accecdfe6c not found: ID does not exist" Jan 30 19:22:43 crc kubenswrapper[4712]: I0130 19:22:43.825963 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13d822c3-54d5-490f-8daa-53351e358ab5" path="/var/lib/kubelet/pods/13d822c3-54d5-490f-8daa-53351e358ab5/volumes" Jan 30 19:22:58 crc kubenswrapper[4712]: I0130 19:22:58.542552 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bfjc2"] Jan 30 19:22:58 crc kubenswrapper[4712]: E0130 19:22:58.543480 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13d822c3-54d5-490f-8daa-53351e358ab5" containerName="registry-server" Jan 30 19:22:58 crc kubenswrapper[4712]: I0130 19:22:58.543495 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="13d822c3-54d5-490f-8daa-53351e358ab5" containerName="registry-server" Jan 30 19:22:58 crc kubenswrapper[4712]: E0130 19:22:58.543516 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13d822c3-54d5-490f-8daa-53351e358ab5" containerName="extract-utilities" Jan 30 19:22:58 crc kubenswrapper[4712]: I0130 19:22:58.543525 4712 
state_mem.go:107] "Deleted CPUSet assignment" podUID="13d822c3-54d5-490f-8daa-53351e358ab5" containerName="extract-utilities" Jan 30 19:22:58 crc kubenswrapper[4712]: E0130 19:22:58.543558 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13d822c3-54d5-490f-8daa-53351e358ab5" containerName="extract-content" Jan 30 19:22:58 crc kubenswrapper[4712]: I0130 19:22:58.543567 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="13d822c3-54d5-490f-8daa-53351e358ab5" containerName="extract-content" Jan 30 19:22:58 crc kubenswrapper[4712]: I0130 19:22:58.543809 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="13d822c3-54d5-490f-8daa-53351e358ab5" containerName="registry-server" Jan 30 19:22:58 crc kubenswrapper[4712]: I0130 19:22:58.545556 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bfjc2" Jan 30 19:22:58 crc kubenswrapper[4712]: I0130 19:22:58.566965 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bfjc2"] Jan 30 19:22:58 crc kubenswrapper[4712]: I0130 19:22:58.711145 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-729ss\" (UniqueName: \"kubernetes.io/projected/320cb49d-16cf-49f0-8236-e578627e9d3d-kube-api-access-729ss\") pod \"certified-operators-bfjc2\" (UID: \"320cb49d-16cf-49f0-8236-e578627e9d3d\") " pod="openshift-marketplace/certified-operators-bfjc2" Jan 30 19:22:58 crc kubenswrapper[4712]: I0130 19:22:58.711389 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/320cb49d-16cf-49f0-8236-e578627e9d3d-utilities\") pod \"certified-operators-bfjc2\" (UID: \"320cb49d-16cf-49f0-8236-e578627e9d3d\") " pod="openshift-marketplace/certified-operators-bfjc2" Jan 30 19:22:58 crc kubenswrapper[4712]: I0130 19:22:58.711570 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/320cb49d-16cf-49f0-8236-e578627e9d3d-catalog-content\") pod \"certified-operators-bfjc2\" (UID: \"320cb49d-16cf-49f0-8236-e578627e9d3d\") " pod="openshift-marketplace/certified-operators-bfjc2" Jan 30 19:22:58 crc kubenswrapper[4712]: I0130 19:22:58.813838 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/320cb49d-16cf-49f0-8236-e578627e9d3d-catalog-content\") pod \"certified-operators-bfjc2\" (UID: \"320cb49d-16cf-49f0-8236-e578627e9d3d\") " pod="openshift-marketplace/certified-operators-bfjc2" Jan 30 19:22:58 crc kubenswrapper[4712]: I0130 19:22:58.813945 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-729ss\" (UniqueName: \"kubernetes.io/projected/320cb49d-16cf-49f0-8236-e578627e9d3d-kube-api-access-729ss\") pod \"certified-operators-bfjc2\" (UID: \"320cb49d-16cf-49f0-8236-e578627e9d3d\") " pod="openshift-marketplace/certified-operators-bfjc2" Jan 30 19:22:58 crc kubenswrapper[4712]: I0130 19:22:58.813970 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/320cb49d-16cf-49f0-8236-e578627e9d3d-utilities\") pod \"certified-operators-bfjc2\" (UID: \"320cb49d-16cf-49f0-8236-e578627e9d3d\") " pod="openshift-marketplace/certified-operators-bfjc2" Jan 30 19:22:58 crc 
kubenswrapper[4712]: I0130 19:22:58.814459 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/320cb49d-16cf-49f0-8236-e578627e9d3d-utilities\") pod \"certified-operators-bfjc2\" (UID: \"320cb49d-16cf-49f0-8236-e578627e9d3d\") " pod="openshift-marketplace/certified-operators-bfjc2" Jan 30 19:22:58 crc kubenswrapper[4712]: I0130 19:22:58.814581 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/320cb49d-16cf-49f0-8236-e578627e9d3d-catalog-content\") pod \"certified-operators-bfjc2\" (UID: \"320cb49d-16cf-49f0-8236-e578627e9d3d\") " pod="openshift-marketplace/certified-operators-bfjc2" Jan 30 19:22:58 crc kubenswrapper[4712]: I0130 19:22:58.838816 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-729ss\" (UniqueName: \"kubernetes.io/projected/320cb49d-16cf-49f0-8236-e578627e9d3d-kube-api-access-729ss\") pod \"certified-operators-bfjc2\" (UID: \"320cb49d-16cf-49f0-8236-e578627e9d3d\") " pod="openshift-marketplace/certified-operators-bfjc2" Jan 30 19:22:58 crc kubenswrapper[4712]: I0130 19:22:58.882483 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bfjc2" Jan 30 19:22:59 crc kubenswrapper[4712]: I0130 19:22:59.447439 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bfjc2"] Jan 30 19:22:59 crc kubenswrapper[4712]: W0130 19:22:59.455256 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod320cb49d_16cf_49f0_8236_e578627e9d3d.slice/crio-28d28a6c8be46611edd0e1249fb90fafcabfa2d1a9618e7aeaf71e75bb38faab WatchSource:0}: Error finding container 28d28a6c8be46611edd0e1249fb90fafcabfa2d1a9618e7aeaf71e75bb38faab: Status 404 returned error can't find the container with id 28d28a6c8be46611edd0e1249fb90fafcabfa2d1a9618e7aeaf71e75bb38faab Jan 30 19:23:00 crc kubenswrapper[4712]: I0130 19:23:00.023923 4712 generic.go:334] "Generic (PLEG): container finished" podID="320cb49d-16cf-49f0-8236-e578627e9d3d" containerID="89941e3c51072e79658c3e41a8b5bfcb333c16d75b2776622eae14a35bb3e9ae" exitCode=0 Jan 30 19:23:00 crc kubenswrapper[4712]: I0130 19:23:00.023987 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bfjc2" event={"ID":"320cb49d-16cf-49f0-8236-e578627e9d3d","Type":"ContainerDied","Data":"89941e3c51072e79658c3e41a8b5bfcb333c16d75b2776622eae14a35bb3e9ae"} Jan 30 19:23:00 crc kubenswrapper[4712]: I0130 19:23:00.024026 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bfjc2" event={"ID":"320cb49d-16cf-49f0-8236-e578627e9d3d","Type":"ContainerStarted","Data":"28d28a6c8be46611edd0e1249fb90fafcabfa2d1a9618e7aeaf71e75bb38faab"} Jan 30 19:23:01 crc kubenswrapper[4712]: I0130 19:23:01.034196 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bfjc2" event={"ID":"320cb49d-16cf-49f0-8236-e578627e9d3d","Type":"ContainerStarted","Data":"16bce19f326955ae5dc2e2154e0adc1c3fd2e59a981bb1b9f072131f888396ab"} Jan 30 19:23:03 crc kubenswrapper[4712]: I0130 19:23:03.051640 4712 generic.go:334] "Generic (PLEG): container finished" podID="320cb49d-16cf-49f0-8236-e578627e9d3d" containerID="16bce19f326955ae5dc2e2154e0adc1c3fd2e59a981bb1b9f072131f888396ab" exitCode=0 Jan 30 19:23:03 crc 
kubenswrapper[4712]: I0130 19:23:03.051753 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bfjc2" event={"ID":"320cb49d-16cf-49f0-8236-e578627e9d3d","Type":"ContainerDied","Data":"16bce19f326955ae5dc2e2154e0adc1c3fd2e59a981bb1b9f072131f888396ab"} Jan 30 19:23:04 crc kubenswrapper[4712]: I0130 19:23:04.064211 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bfjc2" event={"ID":"320cb49d-16cf-49f0-8236-e578627e9d3d","Type":"ContainerStarted","Data":"6446ce14e2d5c4435e2b081820d19b104fdd51238c27b3f98b30ca95f43d7ebf"} Jan 30 19:23:04 crc kubenswrapper[4712]: I0130 19:23:04.095189 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bfjc2" podStartSLOduration=2.585091516 podStartE2EDuration="6.095166136s" podCreationTimestamp="2026-01-30 19:22:58 +0000 UTC" firstStartedPulling="2026-01-30 19:23:00.027994374 +0000 UTC m=+8916.935003843" lastFinishedPulling="2026-01-30 19:23:03.538068964 +0000 UTC m=+8920.445078463" observedRunningTime="2026-01-30 19:23:04.087082531 +0000 UTC m=+8920.994092010" watchObservedRunningTime="2026-01-30 19:23:04.095166136 +0000 UTC m=+8921.002175615" Jan 30 19:23:08 crc kubenswrapper[4712]: I0130 19:23:08.883514 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bfjc2" Jan 30 19:23:08 crc kubenswrapper[4712]: I0130 19:23:08.884068 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bfjc2" Jan 30 19:23:08 crc kubenswrapper[4712]: I0130 19:23:08.937133 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bfjc2" Jan 30 19:23:09 crc kubenswrapper[4712]: I0130 19:23:09.168430 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bfjc2" Jan 30 19:23:09 crc kubenswrapper[4712]: I0130 19:23:09.221903 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bfjc2"] Jan 30 19:23:11 crc kubenswrapper[4712]: I0130 19:23:11.141187 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-bfjc2" podUID="320cb49d-16cf-49f0-8236-e578627e9d3d" containerName="registry-server" containerID="cri-o://6446ce14e2d5c4435e2b081820d19b104fdd51238c27b3f98b30ca95f43d7ebf" gracePeriod=2 Jan 30 19:23:11 crc kubenswrapper[4712]: I0130 19:23:11.741153 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-bfjc2"
Jan 30 19:23:11 crc kubenswrapper[4712]: I0130 19:23:11.804117 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-729ss\" (UniqueName: \"kubernetes.io/projected/320cb49d-16cf-49f0-8236-e578627e9d3d-kube-api-access-729ss\") pod \"320cb49d-16cf-49f0-8236-e578627e9d3d\" (UID: \"320cb49d-16cf-49f0-8236-e578627e9d3d\") "
Jan 30 19:23:11 crc kubenswrapper[4712]: I0130 19:23:11.804513 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/320cb49d-16cf-49f0-8236-e578627e9d3d-catalog-content\") pod \"320cb49d-16cf-49f0-8236-e578627e9d3d\" (UID: \"320cb49d-16cf-49f0-8236-e578627e9d3d\") "
Jan 30 19:23:11 crc kubenswrapper[4712]: I0130 19:23:11.804703 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/320cb49d-16cf-49f0-8236-e578627e9d3d-utilities\") pod \"320cb49d-16cf-49f0-8236-e578627e9d3d\" (UID: \"320cb49d-16cf-49f0-8236-e578627e9d3d\") "
Jan 30 19:23:11 crc kubenswrapper[4712]: I0130 19:23:11.806203 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/320cb49d-16cf-49f0-8236-e578627e9d3d-utilities" (OuterVolumeSpecName: "utilities") pod "320cb49d-16cf-49f0-8236-e578627e9d3d" (UID: "320cb49d-16cf-49f0-8236-e578627e9d3d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 19:23:11 crc kubenswrapper[4712]: I0130 19:23:11.816248 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/320cb49d-16cf-49f0-8236-e578627e9d3d-kube-api-access-729ss" (OuterVolumeSpecName: "kube-api-access-729ss") pod "320cb49d-16cf-49f0-8236-e578627e9d3d" (UID: "320cb49d-16cf-49f0-8236-e578627e9d3d"). InnerVolumeSpecName "kube-api-access-729ss". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 19:23:11 crc kubenswrapper[4712]: I0130 19:23:11.869265 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/320cb49d-16cf-49f0-8236-e578627e9d3d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "320cb49d-16cf-49f0-8236-e578627e9d3d" (UID: "320cb49d-16cf-49f0-8236-e578627e9d3d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 19:23:11 crc kubenswrapper[4712]: I0130 19:23:11.907743 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-729ss\" (UniqueName: \"kubernetes.io/projected/320cb49d-16cf-49f0-8236-e578627e9d3d-kube-api-access-729ss\") on node \"crc\" DevicePath \"\""
Jan 30 19:23:11 crc kubenswrapper[4712]: I0130 19:23:11.907817 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/320cb49d-16cf-49f0-8236-e578627e9d3d-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 19:23:11 crc kubenswrapper[4712]: I0130 19:23:11.907830 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/320cb49d-16cf-49f0-8236-e578627e9d3d-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 19:23:12 crc kubenswrapper[4712]: I0130 19:23:12.153101 4712 generic.go:334] "Generic (PLEG): container finished" podID="320cb49d-16cf-49f0-8236-e578627e9d3d" containerID="6446ce14e2d5c4435e2b081820d19b104fdd51238c27b3f98b30ca95f43d7ebf" exitCode=0
Jan 30 19:23:12 crc kubenswrapper[4712]: I0130 19:23:12.153346 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bfjc2" event={"ID":"320cb49d-16cf-49f0-8236-e578627e9d3d","Type":"ContainerDied","Data":"6446ce14e2d5c4435e2b081820d19b104fdd51238c27b3f98b30ca95f43d7ebf"}
Jan 30 19:23:12 crc kubenswrapper[4712]: I0130 19:23:12.153376 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bfjc2"
Jan 30 19:23:12 crc kubenswrapper[4712]: I0130 19:23:12.153413 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bfjc2" event={"ID":"320cb49d-16cf-49f0-8236-e578627e9d3d","Type":"ContainerDied","Data":"28d28a6c8be46611edd0e1249fb90fafcabfa2d1a9618e7aeaf71e75bb38faab"}
Jan 30 19:23:12 crc kubenswrapper[4712]: I0130 19:23:12.153465 4712 scope.go:117] "RemoveContainer" containerID="6446ce14e2d5c4435e2b081820d19b104fdd51238c27b3f98b30ca95f43d7ebf"
Jan 30 19:23:12 crc kubenswrapper[4712]: I0130 19:23:12.183752 4712 scope.go:117] "RemoveContainer" containerID="16bce19f326955ae5dc2e2154e0adc1c3fd2e59a981bb1b9f072131f888396ab"
Jan 30 19:23:12 crc kubenswrapper[4712]: I0130 19:23:12.219241 4712 scope.go:117] "RemoveContainer" containerID="89941e3c51072e79658c3e41a8b5bfcb333c16d75b2776622eae14a35bb3e9ae"
Jan 30 19:23:12 crc kubenswrapper[4712]: I0130 19:23:12.224007 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bfjc2"]
Jan 30 19:23:12 crc kubenswrapper[4712]: I0130 19:23:12.234076 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bfjc2"]
Jan 30 19:23:12 crc kubenswrapper[4712]: I0130 19:23:12.266208 4712 scope.go:117] "RemoveContainer" containerID="6446ce14e2d5c4435e2b081820d19b104fdd51238c27b3f98b30ca95f43d7ebf"
Jan 30 19:23:12 crc kubenswrapper[4712]: E0130 19:23:12.267176 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6446ce14e2d5c4435e2b081820d19b104fdd51238c27b3f98b30ca95f43d7ebf\": container with ID starting with 6446ce14e2d5c4435e2b081820d19b104fdd51238c27b3f98b30ca95f43d7ebf not found: ID does not exist" containerID="6446ce14e2d5c4435e2b081820d19b104fdd51238c27b3f98b30ca95f43d7ebf"
Jan 30 19:23:12 crc kubenswrapper[4712]: I0130 19:23:12.267223 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6446ce14e2d5c4435e2b081820d19b104fdd51238c27b3f98b30ca95f43d7ebf"} err="failed to get container status \"6446ce14e2d5c4435e2b081820d19b104fdd51238c27b3f98b30ca95f43d7ebf\": rpc error: code = NotFound desc = could not find container \"6446ce14e2d5c4435e2b081820d19b104fdd51238c27b3f98b30ca95f43d7ebf\": container with ID starting with 6446ce14e2d5c4435e2b081820d19b104fdd51238c27b3f98b30ca95f43d7ebf not found: ID does not exist"
Jan 30 19:23:12 crc kubenswrapper[4712]: I0130 19:23:12.267250 4712 scope.go:117] "RemoveContainer" containerID="16bce19f326955ae5dc2e2154e0adc1c3fd2e59a981bb1b9f072131f888396ab"
Jan 30 19:23:12 crc kubenswrapper[4712]: E0130 19:23:12.267641 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16bce19f326955ae5dc2e2154e0adc1c3fd2e59a981bb1b9f072131f888396ab\": container with ID starting with 16bce19f326955ae5dc2e2154e0adc1c3fd2e59a981bb1b9f072131f888396ab not found: ID does not exist" containerID="16bce19f326955ae5dc2e2154e0adc1c3fd2e59a981bb1b9f072131f888396ab"
Jan 30 19:23:12 crc kubenswrapper[4712]: I0130 19:23:12.267756 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16bce19f326955ae5dc2e2154e0adc1c3fd2e59a981bb1b9f072131f888396ab"} err="failed to get container status \"16bce19f326955ae5dc2e2154e0adc1c3fd2e59a981bb1b9f072131f888396ab\": rpc error: code = NotFound desc = could not find container \"16bce19f326955ae5dc2e2154e0adc1c3fd2e59a981bb1b9f072131f888396ab\": container with ID starting with 16bce19f326955ae5dc2e2154e0adc1c3fd2e59a981bb1b9f072131f888396ab not found: ID does not exist"
Jan 30 19:23:12 crc kubenswrapper[4712]: I0130 19:23:12.267881 4712 scope.go:117] "RemoveContainer" containerID="89941e3c51072e79658c3e41a8b5bfcb333c16d75b2776622eae14a35bb3e9ae"
Jan 30 19:23:12 crc kubenswrapper[4712]: E0130 19:23:12.268338 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89941e3c51072e79658c3e41a8b5bfcb333c16d75b2776622eae14a35bb3e9ae\": container with ID starting with 89941e3c51072e79658c3e41a8b5bfcb333c16d75b2776622eae14a35bb3e9ae not found: ID does not exist" containerID="89941e3c51072e79658c3e41a8b5bfcb333c16d75b2776622eae14a35bb3e9ae"
Jan 30 19:23:12 crc kubenswrapper[4712]: I0130 19:23:12.268454 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89941e3c51072e79658c3e41a8b5bfcb333c16d75b2776622eae14a35bb3e9ae"} err="failed to get container status \"89941e3c51072e79658c3e41a8b5bfcb333c16d75b2776622eae14a35bb3e9ae\": rpc error: code = NotFound desc = could not find container \"89941e3c51072e79658c3e41a8b5bfcb333c16d75b2776622eae14a35bb3e9ae\": container with ID starting with 89941e3c51072e79658c3e41a8b5bfcb333c16d75b2776622eae14a35bb3e9ae not found: ID does not exist"
Jan 30 19:23:13 crc kubenswrapper[4712]: I0130 19:23:13.813502 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="320cb49d-16cf-49f0-8236-e578627e9d3d" path="/var/lib/kubelet/pods/320cb49d-16cf-49f0-8236-e578627e9d3d/volumes"
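Annotation: the "DeleteContainer returned error ... NotFound" lines above are benign. The kubelet deletes each dead container and then re-queries its status; if the runtime already garbage-collected it, the gRPC call comes back NotFound and the kubelet moves on. A minimal sketch of that idempotent-delete pattern (not the kubelet's actual code; runtimeService and removeIdempotent are hypothetical names):

```go
// Sketch only: treating CRI "NotFound" as success makes container deletion
// idempotent, which is why the errors above are logged and then ignored.
package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// runtimeService stands in for the CRI runtime client (hypothetical).
type runtimeService interface {
	RemoveContainer(id string) error
}

// removeIdempotent deletes a container, treating "not found" as success.
func removeIdempotent(rt runtimeService, id string) error {
	err := rt.RemoveContainer(id)
	if err == nil {
		return nil
	}
	if s, ok := status.FromError(err); ok && s.Code() == codes.NotFound {
		return nil // already gone; nothing left to delete
	}
	return fmt.Errorf("removing container %q: %w", id, err)
}

// fakeRuntime always reports NotFound, like the runtime above after the
// container has already been removed.
type fakeRuntime struct{}

func (fakeRuntime) RemoveContainer(id string) error {
	return status.Errorf(codes.NotFound, "could not find container %q", id)
}

func main() {
	err := removeIdempotent(fakeRuntime{}, "6446ce14e2d5")
	fmt.Println("removal error:", err) // prints: removal error: <nil>
}
```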
Jan 30 19:24:06 crc kubenswrapper[4712]: I0130 19:24:06.271677 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 19:24:06 crc kubenswrapper[4712]: I0130 19:24:06.272324 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 19:24:36 crc kubenswrapper[4712]: I0130 19:24:36.270933 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 19:24:36 crc kubenswrapper[4712]: I0130 19:24:36.271535 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 19:24:49 crc kubenswrapper[4712]: I0130 19:24:49.251701 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-rw8sk"]
Jan 30 19:24:49 crc kubenswrapper[4712]: E0130 19:24:49.252826 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="320cb49d-16cf-49f0-8236-e578627e9d3d" containerName="registry-server"
Jan 30 19:24:49 crc kubenswrapper[4712]: I0130 19:24:49.252842 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="320cb49d-16cf-49f0-8236-e578627e9d3d" containerName="registry-server"
Jan 30 19:24:49 crc kubenswrapper[4712]: E0130 19:24:49.252891 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="320cb49d-16cf-49f0-8236-e578627e9d3d" containerName="extract-utilities"
Jan 30 19:24:49 crc kubenswrapper[4712]: I0130 19:24:49.252900 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="320cb49d-16cf-49f0-8236-e578627e9d3d" containerName="extract-utilities"
Jan 30 19:24:49 crc kubenswrapper[4712]: E0130 19:24:49.252917 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="320cb49d-16cf-49f0-8236-e578627e9d3d" containerName="extract-content"
Jan 30 19:24:49 crc kubenswrapper[4712]: I0130 19:24:49.252927 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="320cb49d-16cf-49f0-8236-e578627e9d3d" containerName="extract-content"
Jan 30 19:24:49 crc kubenswrapper[4712]: I0130 19:24:49.253136 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="320cb49d-16cf-49f0-8236-e578627e9d3d" containerName="registry-server"
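Annotation: the liveness failures above mean nothing is listening on 127.0.0.1:8798 while machine-config-daemon is down. A stand-in for that kind of HTTP health check, not the kubelet prober itself (probeHTTP is an illustrative name; the URL is taken from the log):

```go
// Sketch of an HTTP liveness check along the lines of the failing probe
// above; "connection refused" surfaces as an error from client.Get.
package main

import (
	"fmt"
	"net/http"
	"time"
)

// probeHTTP returns nil when the endpoint answers with a non-error status.
func probeHTTP(url string, timeout time.Duration) error {
	client := &http.Client{Timeout: timeout}
	resp, err := client.Get(url)
	if err != nil {
		return fmt.Errorf("probe failed: %w", err) // e.g. connection refused
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 400 {
		return fmt.Errorf("probe failed: status %d", resp.StatusCode)
	}
	return nil
}

func main() {
	if err := probeHTTP("http://127.0.0.1:8798/health", time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("healthy")
	}
}
```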
Jan 30 19:24:49 crc kubenswrapper[4712]: I0130 19:24:49.254725 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rw8sk"
Jan 30 19:24:49 crc kubenswrapper[4712]: I0130 19:24:49.334512 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rw8sk"]
Jan 30 19:24:49 crc kubenswrapper[4712]: I0130 19:24:49.417408 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8vh5\" (UniqueName: \"kubernetes.io/projected/c4302936-4225-44d3-a0ee-fee436100a38-kube-api-access-r8vh5\") pod \"redhat-marketplace-rw8sk\" (UID: \"c4302936-4225-44d3-a0ee-fee436100a38\") " pod="openshift-marketplace/redhat-marketplace-rw8sk"
Jan 30 19:24:49 crc kubenswrapper[4712]: I0130 19:24:49.417535 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4302936-4225-44d3-a0ee-fee436100a38-catalog-content\") pod \"redhat-marketplace-rw8sk\" (UID: \"c4302936-4225-44d3-a0ee-fee436100a38\") " pod="openshift-marketplace/redhat-marketplace-rw8sk"
Jan 30 19:24:49 crc kubenswrapper[4712]: I0130 19:24:49.417607 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4302936-4225-44d3-a0ee-fee436100a38-utilities\") pod \"redhat-marketplace-rw8sk\" (UID: \"c4302936-4225-44d3-a0ee-fee436100a38\") " pod="openshift-marketplace/redhat-marketplace-rw8sk"
Jan 30 19:24:49 crc kubenswrapper[4712]: I0130 19:24:49.519202 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4302936-4225-44d3-a0ee-fee436100a38-catalog-content\") pod \"redhat-marketplace-rw8sk\" (UID: \"c4302936-4225-44d3-a0ee-fee436100a38\") " pod="openshift-marketplace/redhat-marketplace-rw8sk"
Jan 30 19:24:49 crc kubenswrapper[4712]: I0130 19:24:49.519662 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4302936-4225-44d3-a0ee-fee436100a38-utilities\") pod \"redhat-marketplace-rw8sk\" (UID: \"c4302936-4225-44d3-a0ee-fee436100a38\") " pod="openshift-marketplace/redhat-marketplace-rw8sk"
Jan 30 19:24:49 crc kubenswrapper[4712]: I0130 19:24:49.519713 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4302936-4225-44d3-a0ee-fee436100a38-catalog-content\") pod \"redhat-marketplace-rw8sk\" (UID: \"c4302936-4225-44d3-a0ee-fee436100a38\") " pod="openshift-marketplace/redhat-marketplace-rw8sk"
Jan 30 19:24:49 crc kubenswrapper[4712]: I0130 19:24:49.520022 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8vh5\" (UniqueName: \"kubernetes.io/projected/c4302936-4225-44d3-a0ee-fee436100a38-kube-api-access-r8vh5\") pod \"redhat-marketplace-rw8sk\" (UID: \"c4302936-4225-44d3-a0ee-fee436100a38\") " pod="openshift-marketplace/redhat-marketplace-rw8sk"
Jan 30 19:24:49 crc kubenswrapper[4712]: I0130 19:24:49.520258 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4302936-4225-44d3-a0ee-fee436100a38-utilities\") pod \"redhat-marketplace-rw8sk\" (UID: \"c4302936-4225-44d3-a0ee-fee436100a38\") " pod="openshift-marketplace/redhat-marketplace-rw8sk"
Jan 30 19:24:49 crc kubenswrapper[4712]: I0130 19:24:49.546010 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8vh5\" (UniqueName: \"kubernetes.io/projected/c4302936-4225-44d3-a0ee-fee436100a38-kube-api-access-r8vh5\") pod \"redhat-marketplace-rw8sk\" (UID: \"c4302936-4225-44d3-a0ee-fee436100a38\") " pod="openshift-marketplace/redhat-marketplace-rw8sk"
Jan 30 19:24:49 crc kubenswrapper[4712]: I0130 19:24:49.582389 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rw8sk"
Jan 30 19:24:50 crc kubenswrapper[4712]: I0130 19:24:50.107275 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-rw8sk"]
Jan 30 19:24:50 crc kubenswrapper[4712]: I0130 19:24:50.223753 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rw8sk" event={"ID":"c4302936-4225-44d3-a0ee-fee436100a38","Type":"ContainerStarted","Data":"bd664e0f214e44df8a5bc35135aa4b3dff7592ae36ee784e644fb15df0912d7f"}
Jan 30 19:24:51 crc kubenswrapper[4712]: I0130 19:24:51.233537 4712 generic.go:334] "Generic (PLEG): container finished" podID="c4302936-4225-44d3-a0ee-fee436100a38" containerID="8d7b6a1ed8539e5ec5bba3b3f96889bb42feba9a0ef14cbd5f89b3d6875301d2" exitCode=0
Jan 30 19:24:51 crc kubenswrapper[4712]: I0130 19:24:51.233609 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rw8sk" event={"ID":"c4302936-4225-44d3-a0ee-fee436100a38","Type":"ContainerDied","Data":"8d7b6a1ed8539e5ec5bba3b3f96889bb42feba9a0ef14cbd5f89b3d6875301d2"}
Jan 30 19:24:52 crc kubenswrapper[4712]: I0130 19:24:52.252619 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rw8sk" event={"ID":"c4302936-4225-44d3-a0ee-fee436100a38","Type":"ContainerStarted","Data":"719e2728616a17d90883f0ef88962b8dc7ad91e5ab8d5d8dc6755c4a8cf95867"}
Jan 30 19:24:53 crc kubenswrapper[4712]: I0130 19:24:53.263046 4712 generic.go:334] "Generic (PLEG): container finished" podID="c4302936-4225-44d3-a0ee-fee436100a38" containerID="719e2728616a17d90883f0ef88962b8dc7ad91e5ab8d5d8dc6755c4a8cf95867" exitCode=0
Jan 30 19:24:53 crc kubenswrapper[4712]: I0130 19:24:53.263112 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rw8sk" event={"ID":"c4302936-4225-44d3-a0ee-fee436100a38","Type":"ContainerDied","Data":"719e2728616a17d90883f0ef88962b8dc7ad91e5ab8d5d8dc6755c4a8cf95867"}
Jan 30 19:24:54 crc kubenswrapper[4712]: I0130 19:24:54.278404 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rw8sk" event={"ID":"c4302936-4225-44d3-a0ee-fee436100a38","Type":"ContainerStarted","Data":"1707fed0bd0e0d6b3e93d92c2337f2844148b43d6f1f3eae89fdfd2b060b3371"}
Jan 30 19:24:54 crc kubenswrapper[4712]: I0130 19:24:54.309627 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-rw8sk" podStartSLOduration=2.839158317 podStartE2EDuration="5.309605944s" podCreationTimestamp="2026-01-30 19:24:49 +0000 UTC" firstStartedPulling="2026-01-30 19:24:51.235729756 +0000 UTC m=+9028.142739225" lastFinishedPulling="2026-01-30 19:24:53.706177343 +0000 UTC m=+9030.613186852" observedRunningTime="2026-01-30 19:24:54.299568202 +0000 UTC m=+9031.206577681" watchObservedRunningTime="2026-01-30 19:24:54.309605944 +0000 UTC m=+9031.216615413"
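Annotation: the tracker line above is internally consistent. The pod was created at 19:24:49 and observed running at 19:24:54.299, giving the 5.31 s podStartE2EDuration; image pulling ran from 19:24:51.236 to 19:24:53.706, about 2.47 s; and 5.31 - 2.47 = 2.84 s is exactly the reported podStartSLOduration, i.e. the SLO figure excludes image-pull time.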
pod="openshift-marketplace/redhat-marketplace-rw8sk" Jan 30 19:24:59 crc kubenswrapper[4712]: I0130 19:24:59.583623 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-rw8sk" Jan 30 19:24:59 crc kubenswrapper[4712]: I0130 19:24:59.631458 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-rw8sk" Jan 30 19:25:00 crc kubenswrapper[4712]: I0130 19:25:00.425904 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-rw8sk" Jan 30 19:25:00 crc kubenswrapper[4712]: I0130 19:25:00.517678 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rw8sk"] Jan 30 19:25:02 crc kubenswrapper[4712]: I0130 19:25:02.359036 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-rw8sk" podUID="c4302936-4225-44d3-a0ee-fee436100a38" containerName="registry-server" containerID="cri-o://1707fed0bd0e0d6b3e93d92c2337f2844148b43d6f1f3eae89fdfd2b060b3371" gracePeriod=2 Jan 30 19:25:02 crc kubenswrapper[4712]: I0130 19:25:02.830939 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-rw8sk" Jan 30 19:25:02 crc kubenswrapper[4712]: I0130 19:25:02.989873 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4302936-4225-44d3-a0ee-fee436100a38-utilities\") pod \"c4302936-4225-44d3-a0ee-fee436100a38\" (UID: \"c4302936-4225-44d3-a0ee-fee436100a38\") " Jan 30 19:25:02 crc kubenswrapper[4712]: I0130 19:25:02.990150 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4302936-4225-44d3-a0ee-fee436100a38-catalog-content\") pod \"c4302936-4225-44d3-a0ee-fee436100a38\" (UID: \"c4302936-4225-44d3-a0ee-fee436100a38\") " Jan 30 19:25:02 crc kubenswrapper[4712]: I0130 19:25:02.990806 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4302936-4225-44d3-a0ee-fee436100a38-utilities" (OuterVolumeSpecName: "utilities") pod "c4302936-4225-44d3-a0ee-fee436100a38" (UID: "c4302936-4225-44d3-a0ee-fee436100a38"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 19:25:02 crc kubenswrapper[4712]: I0130 19:25:02.991214 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8vh5\" (UniqueName: \"kubernetes.io/projected/c4302936-4225-44d3-a0ee-fee436100a38-kube-api-access-r8vh5\") pod \"c4302936-4225-44d3-a0ee-fee436100a38\" (UID: \"c4302936-4225-44d3-a0ee-fee436100a38\") " Jan 30 19:25:02 crc kubenswrapper[4712]: I0130 19:25:02.992029 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4302936-4225-44d3-a0ee-fee436100a38-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 19:25:02 crc kubenswrapper[4712]: I0130 19:25:02.996049 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4302936-4225-44d3-a0ee-fee436100a38-kube-api-access-r8vh5" (OuterVolumeSpecName: "kube-api-access-r8vh5") pod "c4302936-4225-44d3-a0ee-fee436100a38" (UID: "c4302936-4225-44d3-a0ee-fee436100a38"). InnerVolumeSpecName "kube-api-access-r8vh5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 19:25:03 crc kubenswrapper[4712]: I0130 19:25:03.011083 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4302936-4225-44d3-a0ee-fee436100a38-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4302936-4225-44d3-a0ee-fee436100a38" (UID: "c4302936-4225-44d3-a0ee-fee436100a38"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 19:25:03 crc kubenswrapper[4712]: I0130 19:25:03.093776 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4302936-4225-44d3-a0ee-fee436100a38-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 19:25:03 crc kubenswrapper[4712]: I0130 19:25:03.093821 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8vh5\" (UniqueName: \"kubernetes.io/projected/c4302936-4225-44d3-a0ee-fee436100a38-kube-api-access-r8vh5\") on node \"crc\" DevicePath \"\"" Jan 30 19:25:03 crc kubenswrapper[4712]: I0130 19:25:03.370275 4712 generic.go:334] "Generic (PLEG): container finished" podID="c4302936-4225-44d3-a0ee-fee436100a38" containerID="1707fed0bd0e0d6b3e93d92c2337f2844148b43d6f1f3eae89fdfd2b060b3371" exitCode=0 Jan 30 19:25:03 crc kubenswrapper[4712]: I0130 19:25:03.370335 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rw8sk" event={"ID":"c4302936-4225-44d3-a0ee-fee436100a38","Type":"ContainerDied","Data":"1707fed0bd0e0d6b3e93d92c2337f2844148b43d6f1f3eae89fdfd2b060b3371"} Jan 30 19:25:03 crc kubenswrapper[4712]: I0130 19:25:03.370399 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-rw8sk" event={"ID":"c4302936-4225-44d3-a0ee-fee436100a38","Type":"ContainerDied","Data":"bd664e0f214e44df8a5bc35135aa4b3dff7592ae36ee784e644fb15df0912d7f"} Jan 30 19:25:03 crc kubenswrapper[4712]: I0130 19:25:03.370432 4712 scope.go:117] "RemoveContainer" containerID="1707fed0bd0e0d6b3e93d92c2337f2844148b43d6f1f3eae89fdfd2b060b3371" Jan 30 19:25:03 crc kubenswrapper[4712]: I0130 19:25:03.370349 4712 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 19:25:03 crc kubenswrapper[4712]: I0130 19:25:03.396491 4712 scope.go:117] "RemoveContainer" containerID="719e2728616a17d90883f0ef88962b8dc7ad91e5ab8d5d8dc6755c4a8cf95867"
Jan 30 19:25:03 crc kubenswrapper[4712]: I0130 19:25:03.420396 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-rw8sk"]
Jan 30 19:25:03 crc kubenswrapper[4712]: I0130 19:25:03.429010 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-rw8sk"]
Jan 30 19:25:03 crc kubenswrapper[4712]: I0130 19:25:03.438119 4712 scope.go:117] "RemoveContainer" containerID="8d7b6a1ed8539e5ec5bba3b3f96889bb42feba9a0ef14cbd5f89b3d6875301d2"
Jan 30 19:25:03 crc kubenswrapper[4712]: I0130 19:25:03.473816 4712 scope.go:117] "RemoveContainer" containerID="1707fed0bd0e0d6b3e93d92c2337f2844148b43d6f1f3eae89fdfd2b060b3371"
Jan 30 19:25:03 crc kubenswrapper[4712]: E0130 19:25:03.474380 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1707fed0bd0e0d6b3e93d92c2337f2844148b43d6f1f3eae89fdfd2b060b3371\": container with ID starting with 1707fed0bd0e0d6b3e93d92c2337f2844148b43d6f1f3eae89fdfd2b060b3371 not found: ID does not exist" containerID="1707fed0bd0e0d6b3e93d92c2337f2844148b43d6f1f3eae89fdfd2b060b3371"
Jan 30 19:25:03 crc kubenswrapper[4712]: I0130 19:25:03.474454 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1707fed0bd0e0d6b3e93d92c2337f2844148b43d6f1f3eae89fdfd2b060b3371"} err="failed to get container status \"1707fed0bd0e0d6b3e93d92c2337f2844148b43d6f1f3eae89fdfd2b060b3371\": rpc error: code = NotFound desc = could not find container \"1707fed0bd0e0d6b3e93d92c2337f2844148b43d6f1f3eae89fdfd2b060b3371\": container with ID starting with 1707fed0bd0e0d6b3e93d92c2337f2844148b43d6f1f3eae89fdfd2b060b3371 not found: ID does not exist"
Jan 30 19:25:03 crc kubenswrapper[4712]: I0130 19:25:03.474486 4712 scope.go:117] "RemoveContainer" containerID="719e2728616a17d90883f0ef88962b8dc7ad91e5ab8d5d8dc6755c4a8cf95867"
Jan 30 19:25:03 crc kubenswrapper[4712]: E0130 19:25:03.475056 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"719e2728616a17d90883f0ef88962b8dc7ad91e5ab8d5d8dc6755c4a8cf95867\": container with ID starting with 719e2728616a17d90883f0ef88962b8dc7ad91e5ab8d5d8dc6755c4a8cf95867 not found: ID does not exist" containerID="719e2728616a17d90883f0ef88962b8dc7ad91e5ab8d5d8dc6755c4a8cf95867"
Jan 30 19:25:03 crc kubenswrapper[4712]: I0130 19:25:03.475107 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"719e2728616a17d90883f0ef88962b8dc7ad91e5ab8d5d8dc6755c4a8cf95867"} err="failed to get container status \"719e2728616a17d90883f0ef88962b8dc7ad91e5ab8d5d8dc6755c4a8cf95867\": rpc error: code = NotFound desc = could not find container \"719e2728616a17d90883f0ef88962b8dc7ad91e5ab8d5d8dc6755c4a8cf95867\": container with ID starting with 719e2728616a17d90883f0ef88962b8dc7ad91e5ab8d5d8dc6755c4a8cf95867 not found: ID does not exist"
Jan 30 19:25:03 crc kubenswrapper[4712]: I0130 19:25:03.475236 4712 scope.go:117] "RemoveContainer" containerID="8d7b6a1ed8539e5ec5bba3b3f96889bb42feba9a0ef14cbd5f89b3d6875301d2"
Jan 30 19:25:03 crc kubenswrapper[4712]: E0130 19:25:03.475771 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d7b6a1ed8539e5ec5bba3b3f96889bb42feba9a0ef14cbd5f89b3d6875301d2\": container with ID starting with 8d7b6a1ed8539e5ec5bba3b3f96889bb42feba9a0ef14cbd5f89b3d6875301d2 not found: ID does not exist" containerID="8d7b6a1ed8539e5ec5bba3b3f96889bb42feba9a0ef14cbd5f89b3d6875301d2"
Jan 30 19:25:03 crc kubenswrapper[4712]: I0130 19:25:03.475834 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d7b6a1ed8539e5ec5bba3b3f96889bb42feba9a0ef14cbd5f89b3d6875301d2"} err="failed to get container status \"8d7b6a1ed8539e5ec5bba3b3f96889bb42feba9a0ef14cbd5f89b3d6875301d2\": rpc error: code = NotFound desc = could not find container \"8d7b6a1ed8539e5ec5bba3b3f96889bb42feba9a0ef14cbd5f89b3d6875301d2\": container with ID starting with 8d7b6a1ed8539e5ec5bba3b3f96889bb42feba9a0ef14cbd5f89b3d6875301d2 not found: ID does not exist"
Jan 30 19:25:03 crc kubenswrapper[4712]: I0130 19:25:03.809704 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4302936-4225-44d3-a0ee-fee436100a38" path="/var/lib/kubelet/pods/c4302936-4225-44d3-a0ee-fee436100a38/volumes"
Jan 30 19:25:06 crc kubenswrapper[4712]: I0130 19:25:06.271591 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 19:25:06 crc kubenswrapper[4712]: I0130 19:25:06.272216 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 19:25:06 crc kubenswrapper[4712]: I0130 19:25:06.272300 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7"
Jan 30 19:25:06 crc kubenswrapper[4712]: I0130 19:25:06.273692 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ded2923292e5c2c34d6fa2b092da2bd4640ab4ed507f66065f6089b3f0817bdb"} pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 19:25:06 crc kubenswrapper[4712]: I0130 19:25:06.273862 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" containerID="cri-o://ded2923292e5c2c34d6fa2b092da2bd4640ab4ed507f66065f6089b3f0817bdb" gracePeriod=600
Jan 30 19:25:06 crc kubenswrapper[4712]: I0130 19:25:06.416306 4712 generic.go:334] "Generic (PLEG): container finished" podID="75ff6334-72a0-4748-bba6-0efb493c8033" containerID="ded2923292e5c2c34d6fa2b092da2bd4640ab4ed507f66065f6089b3f0817bdb" exitCode=0
event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerDied","Data":"ded2923292e5c2c34d6fa2b092da2bd4640ab4ed507f66065f6089b3f0817bdb"} Jan 30 19:25:06 crc kubenswrapper[4712]: I0130 19:25:06.416384 4712 scope.go:117] "RemoveContainer" containerID="0a181628296f0743982093f4bb011e4195b8710b359c5bef2c9edca8fb5e8c5f" Jan 30 19:25:07 crc kubenswrapper[4712]: I0130 19:25:07.430780 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerStarted","Data":"6a889e7319743d48cc17eaf2cf4494f46c54ed34e204e063c57d043a9f8356be"} Jan 30 19:26:07 crc kubenswrapper[4712]: I0130 19:26:07.965515 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fd5nb"] Jan 30 19:26:07 crc kubenswrapper[4712]: E0130 19:26:07.966486 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4302936-4225-44d3-a0ee-fee436100a38" containerName="extract-utilities" Jan 30 19:26:07 crc kubenswrapper[4712]: I0130 19:26:07.966500 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4302936-4225-44d3-a0ee-fee436100a38" containerName="extract-utilities" Jan 30 19:26:07 crc kubenswrapper[4712]: E0130 19:26:07.966529 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4302936-4225-44d3-a0ee-fee436100a38" containerName="registry-server" Jan 30 19:26:07 crc kubenswrapper[4712]: I0130 19:26:07.966536 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4302936-4225-44d3-a0ee-fee436100a38" containerName="registry-server" Jan 30 19:26:07 crc kubenswrapper[4712]: E0130 19:26:07.966554 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4302936-4225-44d3-a0ee-fee436100a38" containerName="extract-content" Jan 30 19:26:07 crc kubenswrapper[4712]: I0130 19:26:07.966559 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4302936-4225-44d3-a0ee-fee436100a38" containerName="extract-content" Jan 30 19:26:07 crc kubenswrapper[4712]: I0130 19:26:07.966733 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4302936-4225-44d3-a0ee-fee436100a38" containerName="registry-server" Jan 30 19:26:07 crc kubenswrapper[4712]: I0130 19:26:07.968096 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fd5nb" Jan 30 19:26:07 crc kubenswrapper[4712]: I0130 19:26:07.986000 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fd5nb"] Jan 30 19:26:08 crc kubenswrapper[4712]: I0130 19:26:08.106550 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0861a85-b521-484f-a52f-cdf5bbf127a2-catalog-content\") pod \"community-operators-fd5nb\" (UID: \"f0861a85-b521-484f-a52f-cdf5bbf127a2\") " pod="openshift-marketplace/community-operators-fd5nb" Jan 30 19:26:08 crc kubenswrapper[4712]: I0130 19:26:08.106762 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbb5r\" (UniqueName: \"kubernetes.io/projected/f0861a85-b521-484f-a52f-cdf5bbf127a2-kube-api-access-rbb5r\") pod \"community-operators-fd5nb\" (UID: \"f0861a85-b521-484f-a52f-cdf5bbf127a2\") " pod="openshift-marketplace/community-operators-fd5nb" Jan 30 19:26:08 crc kubenswrapper[4712]: I0130 19:26:08.106838 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0861a85-b521-484f-a52f-cdf5bbf127a2-utilities\") pod \"community-operators-fd5nb\" (UID: \"f0861a85-b521-484f-a52f-cdf5bbf127a2\") " pod="openshift-marketplace/community-operators-fd5nb" Jan 30 19:26:08 crc kubenswrapper[4712]: I0130 19:26:08.208231 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0861a85-b521-484f-a52f-cdf5bbf127a2-catalog-content\") pod \"community-operators-fd5nb\" (UID: \"f0861a85-b521-484f-a52f-cdf5bbf127a2\") " pod="openshift-marketplace/community-operators-fd5nb" Jan 30 19:26:08 crc kubenswrapper[4712]: I0130 19:26:08.208376 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbb5r\" (UniqueName: \"kubernetes.io/projected/f0861a85-b521-484f-a52f-cdf5bbf127a2-kube-api-access-rbb5r\") pod \"community-operators-fd5nb\" (UID: \"f0861a85-b521-484f-a52f-cdf5bbf127a2\") " pod="openshift-marketplace/community-operators-fd5nb" Jan 30 19:26:08 crc kubenswrapper[4712]: I0130 19:26:08.208401 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0861a85-b521-484f-a52f-cdf5bbf127a2-utilities\") pod \"community-operators-fd5nb\" (UID: \"f0861a85-b521-484f-a52f-cdf5bbf127a2\") " pod="openshift-marketplace/community-operators-fd5nb" Jan 30 19:26:08 crc kubenswrapper[4712]: I0130 19:26:08.208962 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0861a85-b521-484f-a52f-cdf5bbf127a2-utilities\") pod \"community-operators-fd5nb\" (UID: \"f0861a85-b521-484f-a52f-cdf5bbf127a2\") " pod="openshift-marketplace/community-operators-fd5nb" Jan 30 19:26:08 crc kubenswrapper[4712]: I0130 19:26:08.209182 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0861a85-b521-484f-a52f-cdf5bbf127a2-catalog-content\") pod \"community-operators-fd5nb\" (UID: \"f0861a85-b521-484f-a52f-cdf5bbf127a2\") " pod="openshift-marketplace/community-operators-fd5nb" Jan 30 19:26:08 crc kubenswrapper[4712]: I0130 19:26:08.231707 4712 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rbb5r\" (UniqueName: \"kubernetes.io/projected/f0861a85-b521-484f-a52f-cdf5bbf127a2-kube-api-access-rbb5r\") pod \"community-operators-fd5nb\" (UID: \"f0861a85-b521-484f-a52f-cdf5bbf127a2\") " pod="openshift-marketplace/community-operators-fd5nb" Jan 30 19:26:08 crc kubenswrapper[4712]: I0130 19:26:08.311591 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fd5nb" Jan 30 19:26:08 crc kubenswrapper[4712]: I0130 19:26:08.776031 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fd5nb"] Jan 30 19:26:09 crc kubenswrapper[4712]: I0130 19:26:09.063246 4712 generic.go:334] "Generic (PLEG): container finished" podID="f0861a85-b521-484f-a52f-cdf5bbf127a2" containerID="a08998149e2bdbb96dcbf8c6875f2393a4fc941e3a149e75b90e2a07b21430bf" exitCode=0 Jan 30 19:26:09 crc kubenswrapper[4712]: I0130 19:26:09.063284 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fd5nb" event={"ID":"f0861a85-b521-484f-a52f-cdf5bbf127a2","Type":"ContainerDied","Data":"a08998149e2bdbb96dcbf8c6875f2393a4fc941e3a149e75b90e2a07b21430bf"} Jan 30 19:26:09 crc kubenswrapper[4712]: I0130 19:26:09.063312 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fd5nb" event={"ID":"f0861a85-b521-484f-a52f-cdf5bbf127a2","Type":"ContainerStarted","Data":"23340175cd2cf4286799998b24bc78013480ba07203da750671dcee12f24f0c4"} Jan 30 19:26:10 crc kubenswrapper[4712]: I0130 19:26:10.074822 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fd5nb" event={"ID":"f0861a85-b521-484f-a52f-cdf5bbf127a2","Type":"ContainerStarted","Data":"2cd4036eead91e98368fc075db515cf65c00fff173ce85b9977d8bf03bd040ba"} Jan 30 19:26:12 crc kubenswrapper[4712]: I0130 19:26:12.111636 4712 generic.go:334] "Generic (PLEG): container finished" podID="f0861a85-b521-484f-a52f-cdf5bbf127a2" containerID="2cd4036eead91e98368fc075db515cf65c00fff173ce85b9977d8bf03bd040ba" exitCode=0 Jan 30 19:26:12 crc kubenswrapper[4712]: I0130 19:26:12.111736 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fd5nb" event={"ID":"f0861a85-b521-484f-a52f-cdf5bbf127a2","Type":"ContainerDied","Data":"2cd4036eead91e98368fc075db515cf65c00fff173ce85b9977d8bf03bd040ba"} Jan 30 19:26:13 crc kubenswrapper[4712]: I0130 19:26:13.124893 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fd5nb" event={"ID":"f0861a85-b521-484f-a52f-cdf5bbf127a2","Type":"ContainerStarted","Data":"e1af03729afebb45bb8084e8acc5ae26e7e3b16e472015b76b940e4b9a88e90f"} Jan 30 19:26:13 crc kubenswrapper[4712]: I0130 19:26:13.151210 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fd5nb" podStartSLOduration=2.454951286 podStartE2EDuration="6.151187705s" podCreationTimestamp="2026-01-30 19:26:07 +0000 UTC" firstStartedPulling="2026-01-30 19:26:09.065067476 +0000 UTC m=+9105.972076945" lastFinishedPulling="2026-01-30 19:26:12.761303895 +0000 UTC m=+9109.668313364" observedRunningTime="2026-01-30 19:26:13.143784597 +0000 UTC m=+9110.050794066" watchObservedRunningTime="2026-01-30 19:26:13.151187705 +0000 UTC m=+9110.058197174" Jan 30 19:26:18 crc kubenswrapper[4712]: I0130 19:26:18.312392 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/community-operators-fd5nb" Jan 30 19:26:18 crc kubenswrapper[4712]: I0130 19:26:18.313059 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fd5nb" Jan 30 19:26:19 crc kubenswrapper[4712]: I0130 19:26:19.374229 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-fd5nb" podUID="f0861a85-b521-484f-a52f-cdf5bbf127a2" containerName="registry-server" probeResult="failure" output=< Jan 30 19:26:19 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 19:26:19 crc kubenswrapper[4712]: > Jan 30 19:26:28 crc kubenswrapper[4712]: I0130 19:26:28.377641 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fd5nb" Jan 30 19:26:28 crc kubenswrapper[4712]: I0130 19:26:28.443208 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fd5nb" Jan 30 19:26:28 crc kubenswrapper[4712]: I0130 19:26:28.634839 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fd5nb"] Jan 30 19:26:30 crc kubenswrapper[4712]: I0130 19:26:30.275591 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fd5nb" podUID="f0861a85-b521-484f-a52f-cdf5bbf127a2" containerName="registry-server" containerID="cri-o://e1af03729afebb45bb8084e8acc5ae26e7e3b16e472015b76b940e4b9a88e90f" gracePeriod=2 Jan 30 19:26:30 crc kubenswrapper[4712]: I0130 19:26:30.775919 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fd5nb" Jan 30 19:26:30 crc kubenswrapper[4712]: I0130 19:26:30.863182 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbb5r\" (UniqueName: \"kubernetes.io/projected/f0861a85-b521-484f-a52f-cdf5bbf127a2-kube-api-access-rbb5r\") pod \"f0861a85-b521-484f-a52f-cdf5bbf127a2\" (UID: \"f0861a85-b521-484f-a52f-cdf5bbf127a2\") " Jan 30 19:26:30 crc kubenswrapper[4712]: I0130 19:26:30.863256 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0861a85-b521-484f-a52f-cdf5bbf127a2-catalog-content\") pod \"f0861a85-b521-484f-a52f-cdf5bbf127a2\" (UID: \"f0861a85-b521-484f-a52f-cdf5bbf127a2\") " Jan 30 19:26:30 crc kubenswrapper[4712]: I0130 19:26:30.863283 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0861a85-b521-484f-a52f-cdf5bbf127a2-utilities\") pod \"f0861a85-b521-484f-a52f-cdf5bbf127a2\" (UID: \"f0861a85-b521-484f-a52f-cdf5bbf127a2\") " Jan 30 19:26:30 crc kubenswrapper[4712]: I0130 19:26:30.864152 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0861a85-b521-484f-a52f-cdf5bbf127a2-utilities" (OuterVolumeSpecName: "utilities") pod "f0861a85-b521-484f-a52f-cdf5bbf127a2" (UID: "f0861a85-b521-484f-a52f-cdf5bbf127a2"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 19:26:30 crc kubenswrapper[4712]: I0130 19:26:30.874137 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0861a85-b521-484f-a52f-cdf5bbf127a2-kube-api-access-rbb5r" (OuterVolumeSpecName: "kube-api-access-rbb5r") pod "f0861a85-b521-484f-a52f-cdf5bbf127a2" (UID: "f0861a85-b521-484f-a52f-cdf5bbf127a2"). InnerVolumeSpecName "kube-api-access-rbb5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 19:26:30 crc kubenswrapper[4712]: I0130 19:26:30.920637 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0861a85-b521-484f-a52f-cdf5bbf127a2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f0861a85-b521-484f-a52f-cdf5bbf127a2" (UID: "f0861a85-b521-484f-a52f-cdf5bbf127a2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 19:26:30 crc kubenswrapper[4712]: I0130 19:26:30.965773 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0861a85-b521-484f-a52f-cdf5bbf127a2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 19:26:30 crc kubenswrapper[4712]: I0130 19:26:30.965821 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0861a85-b521-484f-a52f-cdf5bbf127a2-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 19:26:30 crc kubenswrapper[4712]: I0130 19:26:30.965834 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbb5r\" (UniqueName: \"kubernetes.io/projected/f0861a85-b521-484f-a52f-cdf5bbf127a2-kube-api-access-rbb5r\") on node \"crc\" DevicePath \"\"" Jan 30 19:26:31 crc kubenswrapper[4712]: I0130 19:26:31.285654 4712 generic.go:334] "Generic (PLEG): container finished" podID="f0861a85-b521-484f-a52f-cdf5bbf127a2" containerID="e1af03729afebb45bb8084e8acc5ae26e7e3b16e472015b76b940e4b9a88e90f" exitCode=0 Jan 30 19:26:31 crc kubenswrapper[4712]: I0130 19:26:31.285692 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fd5nb" event={"ID":"f0861a85-b521-484f-a52f-cdf5bbf127a2","Type":"ContainerDied","Data":"e1af03729afebb45bb8084e8acc5ae26e7e3b16e472015b76b940e4b9a88e90f"} Jan 30 19:26:31 crc kubenswrapper[4712]: I0130 19:26:31.285718 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fd5nb" event={"ID":"f0861a85-b521-484f-a52f-cdf5bbf127a2","Type":"ContainerDied","Data":"23340175cd2cf4286799998b24bc78013480ba07203da750671dcee12f24f0c4"} Jan 30 19:26:31 crc kubenswrapper[4712]: I0130 19:26:31.285733 4712 scope.go:117] "RemoveContainer" containerID="e1af03729afebb45bb8084e8acc5ae26e7e3b16e472015b76b940e4b9a88e90f" Jan 30 19:26:31 crc kubenswrapper[4712]: I0130 19:26:31.285745 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fd5nb" Jan 30 19:26:31 crc kubenswrapper[4712]: I0130 19:26:31.310552 4712 scope.go:117] "RemoveContainer" containerID="2cd4036eead91e98368fc075db515cf65c00fff173ce85b9977d8bf03bd040ba" Jan 30 19:26:31 crc kubenswrapper[4712]: I0130 19:26:31.348469 4712 scope.go:117] "RemoveContainer" containerID="a08998149e2bdbb96dcbf8c6875f2393a4fc941e3a149e75b90e2a07b21430bf" Jan 30 19:26:31 crc kubenswrapper[4712]: I0130 19:26:31.350742 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fd5nb"] Jan 30 19:26:31 crc kubenswrapper[4712]: I0130 19:26:31.360671 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fd5nb"] Jan 30 19:26:31 crc kubenswrapper[4712]: I0130 19:26:31.369881 4712 scope.go:117] "RemoveContainer" containerID="e1af03729afebb45bb8084e8acc5ae26e7e3b16e472015b76b940e4b9a88e90f" Jan 30 19:26:31 crc kubenswrapper[4712]: E0130 19:26:31.370320 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e1af03729afebb45bb8084e8acc5ae26e7e3b16e472015b76b940e4b9a88e90f\": container with ID starting with e1af03729afebb45bb8084e8acc5ae26e7e3b16e472015b76b940e4b9a88e90f not found: ID does not exist" containerID="e1af03729afebb45bb8084e8acc5ae26e7e3b16e472015b76b940e4b9a88e90f" Jan 30 19:26:31 crc kubenswrapper[4712]: I0130 19:26:31.370348 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e1af03729afebb45bb8084e8acc5ae26e7e3b16e472015b76b940e4b9a88e90f"} err="failed to get container status \"e1af03729afebb45bb8084e8acc5ae26e7e3b16e472015b76b940e4b9a88e90f\": rpc error: code = NotFound desc = could not find container \"e1af03729afebb45bb8084e8acc5ae26e7e3b16e472015b76b940e4b9a88e90f\": container with ID starting with e1af03729afebb45bb8084e8acc5ae26e7e3b16e472015b76b940e4b9a88e90f not found: ID does not exist" Jan 30 19:26:31 crc kubenswrapper[4712]: I0130 19:26:31.370370 4712 scope.go:117] "RemoveContainer" containerID="2cd4036eead91e98368fc075db515cf65c00fff173ce85b9977d8bf03bd040ba" Jan 30 19:26:31 crc kubenswrapper[4712]: E0130 19:26:31.370702 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cd4036eead91e98368fc075db515cf65c00fff173ce85b9977d8bf03bd040ba\": container with ID starting with 2cd4036eead91e98368fc075db515cf65c00fff173ce85b9977d8bf03bd040ba not found: ID does not exist" containerID="2cd4036eead91e98368fc075db515cf65c00fff173ce85b9977d8bf03bd040ba" Jan 30 19:26:31 crc kubenswrapper[4712]: I0130 19:26:31.370726 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cd4036eead91e98368fc075db515cf65c00fff173ce85b9977d8bf03bd040ba"} err="failed to get container status \"2cd4036eead91e98368fc075db515cf65c00fff173ce85b9977d8bf03bd040ba\": rpc error: code = NotFound desc = could not find container \"2cd4036eead91e98368fc075db515cf65c00fff173ce85b9977d8bf03bd040ba\": container with ID starting with 2cd4036eead91e98368fc075db515cf65c00fff173ce85b9977d8bf03bd040ba not found: ID does not exist" Jan 30 19:26:31 crc kubenswrapper[4712]: I0130 19:26:31.370742 4712 scope.go:117] "RemoveContainer" containerID="a08998149e2bdbb96dcbf8c6875f2393a4fc941e3a149e75b90e2a07b21430bf" Jan 30 19:26:31 crc kubenswrapper[4712]: E0130 19:26:31.370995 4712 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a08998149e2bdbb96dcbf8c6875f2393a4fc941e3a149e75b90e2a07b21430bf\": container with ID starting with a08998149e2bdbb96dcbf8c6875f2393a4fc941e3a149e75b90e2a07b21430bf not found: ID does not exist" containerID="a08998149e2bdbb96dcbf8c6875f2393a4fc941e3a149e75b90e2a07b21430bf" Jan 30 19:26:31 crc kubenswrapper[4712]: I0130 19:26:31.371016 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a08998149e2bdbb96dcbf8c6875f2393a4fc941e3a149e75b90e2a07b21430bf"} err="failed to get container status \"a08998149e2bdbb96dcbf8c6875f2393a4fc941e3a149e75b90e2a07b21430bf\": rpc error: code = NotFound desc = could not find container \"a08998149e2bdbb96dcbf8c6875f2393a4fc941e3a149e75b90e2a07b21430bf\": container with ID starting with a08998149e2bdbb96dcbf8c6875f2393a4fc941e3a149e75b90e2a07b21430bf not found: ID does not exist" Jan 30 19:26:31 crc kubenswrapper[4712]: I0130 19:26:31.812826 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0861a85-b521-484f-a52f-cdf5bbf127a2" path="/var/lib/kubelet/pods/f0861a85-b521-484f-a52f-cdf5bbf127a2/volumes" Jan 30 19:27:06 crc kubenswrapper[4712]: I0130 19:27:06.270841 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 19:27:06 crc kubenswrapper[4712]: I0130 19:27:06.271293 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 19:27:36 crc kubenswrapper[4712]: I0130 19:27:36.271424 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 19:27:36 crc kubenswrapper[4712]: I0130 19:27:36.272061 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 19:28:06 crc kubenswrapper[4712]: I0130 19:28:06.270898 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 19:28:06 crc kubenswrapper[4712]: I0130 19:28:06.273623 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 19:28:06 crc kubenswrapper[4712]: I0130 19:28:06.273870 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" Jan 30 19:28:06 crc kubenswrapper[4712]: I0130 19:28:06.275906 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6a889e7319743d48cc17eaf2cf4494f46c54ed34e204e063c57d043a9f8356be"} pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 19:28:06 crc kubenswrapper[4712]: I0130 19:28:06.276372 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" containerID="cri-o://6a889e7319743d48cc17eaf2cf4494f46c54ed34e204e063c57d043a9f8356be" gracePeriod=600 Jan 30 19:28:06 crc kubenswrapper[4712]: E0130 19:28:06.402736 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:28:07 crc kubenswrapper[4712]: I0130 19:28:07.272581 4712 generic.go:334] "Generic (PLEG): container finished" podID="75ff6334-72a0-4748-bba6-0efb493c8033" containerID="6a889e7319743d48cc17eaf2cf4494f46c54ed34e204e063c57d043a9f8356be" exitCode=0 Jan 30 19:28:07 crc kubenswrapper[4712]: I0130 19:28:07.272644 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerDied","Data":"6a889e7319743d48cc17eaf2cf4494f46c54ed34e204e063c57d043a9f8356be"} Jan 30 19:28:07 crc kubenswrapper[4712]: I0130 19:28:07.273185 4712 scope.go:117] "RemoveContainer" containerID="ded2923292e5c2c34d6fa2b092da2bd4640ab4ed507f66065f6089b3f0817bdb" Jan 30 19:28:07 crc kubenswrapper[4712]: I0130 19:28:07.274458 4712 scope.go:117] "RemoveContainer" containerID="6a889e7319743d48cc17eaf2cf4494f46c54ed34e204e063c57d043a9f8356be" Jan 30 19:28:07 crc kubenswrapper[4712]: E0130 19:28:07.274768 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:28:20 crc kubenswrapper[4712]: I0130 19:28:20.800434 4712 scope.go:117] "RemoveContainer" containerID="6a889e7319743d48cc17eaf2cf4494f46c54ed34e204e063c57d043a9f8356be" Jan 30 19:28:20 crc kubenswrapper[4712]: E0130 19:28:20.803701 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:28:33 crc 
kubenswrapper[4712]: I0130 19:28:33.812790 4712 scope.go:117] "RemoveContainer" containerID="6a889e7319743d48cc17eaf2cf4494f46c54ed34e204e063c57d043a9f8356be" Jan 30 19:28:33 crc kubenswrapper[4712]: E0130 19:28:33.813767 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:28:49 crc kubenswrapper[4712]: I0130 19:28:49.801931 4712 scope.go:117] "RemoveContainer" containerID="6a889e7319743d48cc17eaf2cf4494f46c54ed34e204e063c57d043a9f8356be" Jan 30 19:28:49 crc kubenswrapper[4712]: E0130 19:28:49.802724 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:29:04 crc kubenswrapper[4712]: I0130 19:29:04.799513 4712 scope.go:117] "RemoveContainer" containerID="6a889e7319743d48cc17eaf2cf4494f46c54ed34e204e063c57d043a9f8356be" Jan 30 19:29:04 crc kubenswrapper[4712]: E0130 19:29:04.800264 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:29:19 crc kubenswrapper[4712]: I0130 19:29:19.800482 4712 scope.go:117] "RemoveContainer" containerID="6a889e7319743d48cc17eaf2cf4494f46c54ed34e204e063c57d043a9f8356be" Jan 30 19:29:19 crc kubenswrapper[4712]: E0130 19:29:19.801304 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:29:33 crc kubenswrapper[4712]: I0130 19:29:33.812494 4712 scope.go:117] "RemoveContainer" containerID="6a889e7319743d48cc17eaf2cf4494f46c54ed34e204e063c57d043a9f8356be" Jan 30 19:29:33 crc kubenswrapper[4712]: E0130 19:29:33.813874 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:29:41 crc kubenswrapper[4712]: E0130 19:29:41.683480 4712 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.246:52208->38.102.83.246:35825: read tcp 
38.102.83.246:52208->38.102.83.246:35825: read: connection reset by peer Jan 30 19:29:47 crc kubenswrapper[4712]: I0130 19:29:47.801953 4712 scope.go:117] "RemoveContainer" containerID="6a889e7319743d48cc17eaf2cf4494f46c54ed34e204e063c57d043a9f8356be" Jan 30 19:29:47 crc kubenswrapper[4712]: E0130 19:29:47.802524 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:29:59 crc kubenswrapper[4712]: I0130 19:29:59.799888 4712 scope.go:117] "RemoveContainer" containerID="6a889e7319743d48cc17eaf2cf4494f46c54ed34e204e063c57d043a9f8356be" Jan 30 19:29:59 crc kubenswrapper[4712]: E0130 19:29:59.800773 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:30:00 crc kubenswrapper[4712]: I0130 19:30:00.164310 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496690-gfn25"] Jan 30 19:30:00 crc kubenswrapper[4712]: E0130 19:30:00.165323 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0861a85-b521-484f-a52f-cdf5bbf127a2" containerName="extract-utilities" Jan 30 19:30:00 crc kubenswrapper[4712]: I0130 19:30:00.165420 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0861a85-b521-484f-a52f-cdf5bbf127a2" containerName="extract-utilities" Jan 30 19:30:00 crc kubenswrapper[4712]: E0130 19:30:00.165497 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0861a85-b521-484f-a52f-cdf5bbf127a2" containerName="registry-server" Jan 30 19:30:00 crc kubenswrapper[4712]: I0130 19:30:00.165570 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0861a85-b521-484f-a52f-cdf5bbf127a2" containerName="registry-server" Jan 30 19:30:00 crc kubenswrapper[4712]: E0130 19:30:00.165662 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0861a85-b521-484f-a52f-cdf5bbf127a2" containerName="extract-content" Jan 30 19:30:00 crc kubenswrapper[4712]: I0130 19:30:00.165757 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0861a85-b521-484f-a52f-cdf5bbf127a2" containerName="extract-content" Jan 30 19:30:00 crc kubenswrapper[4712]: I0130 19:30:00.166077 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0861a85-b521-484f-a52f-cdf5bbf127a2" containerName="registry-server" Jan 30 19:30:00 crc kubenswrapper[4712]: I0130 19:30:00.166970 4712 util.go:30] "No sandbox for pod can be found. 
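Annotation: the CrashLoopBackOff messages above repeat because the kubelet keeps re-syncing the pod (roughly every 10-15 s here) while the restart is gated by a back-off window that has already hit its cap, hence "back-off 5m0s" on every attempt. A sketch of that delay growth, with constants I am assuming (base 10 s, doubling per restart, 5 m cap; only the cap is visible in this log):

```go
// Sketch of CrashLoopBackOff-style delay growth: exponential, capped.
package main

import (
	"fmt"
	"time"
)

// backoff returns the wait before the next restart attempt.
func backoff(base, limit time.Duration, restarts int) time.Duration {
	d := base
	for i := 0; i < restarts; i++ {
		d *= 2
		if d > limit {
			return limit // cap reached, as in "back-off 5m0s" above
		}
	}
	return d
}

func main() {
	for r := 0; r <= 6; r++ {
		fmt.Printf("restart %d -> wait %s\n", r, backoff(10*time.Second, 5*time.Minute, r))
	}
	// restart 0 -> 10s, 1 -> 20s, ..., 5 and above -> 5m0s (capped)
}
```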
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496690-gfn25" Jan 30 19:30:00 crc kubenswrapper[4712]: I0130 19:30:00.176771 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 19:30:00 crc kubenswrapper[4712]: I0130 19:30:00.177134 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 19:30:00 crc kubenswrapper[4712]: I0130 19:30:00.189832 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496690-gfn25"] Jan 30 19:30:00 crc kubenswrapper[4712]: I0130 19:30:00.262624 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgbm9\" (UniqueName: \"kubernetes.io/projected/b5f95fe7-59ed-4dbf-abb3-4c2b21cce321-kube-api-access-pgbm9\") pod \"collect-profiles-29496690-gfn25\" (UID: \"b5f95fe7-59ed-4dbf-abb3-4c2b21cce321\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496690-gfn25" Jan 30 19:30:00 crc kubenswrapper[4712]: I0130 19:30:00.262829 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5f95fe7-59ed-4dbf-abb3-4c2b21cce321-secret-volume\") pod \"collect-profiles-29496690-gfn25\" (UID: \"b5f95fe7-59ed-4dbf-abb3-4c2b21cce321\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496690-gfn25" Jan 30 19:30:00 crc kubenswrapper[4712]: I0130 19:30:00.263061 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5f95fe7-59ed-4dbf-abb3-4c2b21cce321-config-volume\") pod \"collect-profiles-29496690-gfn25\" (UID: \"b5f95fe7-59ed-4dbf-abb3-4c2b21cce321\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496690-gfn25" Jan 30 19:30:00 crc kubenswrapper[4712]: I0130 19:30:00.364366 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgbm9\" (UniqueName: \"kubernetes.io/projected/b5f95fe7-59ed-4dbf-abb3-4c2b21cce321-kube-api-access-pgbm9\") pod \"collect-profiles-29496690-gfn25\" (UID: \"b5f95fe7-59ed-4dbf-abb3-4c2b21cce321\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496690-gfn25" Jan 30 19:30:00 crc kubenswrapper[4712]: I0130 19:30:00.364435 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5f95fe7-59ed-4dbf-abb3-4c2b21cce321-secret-volume\") pod \"collect-profiles-29496690-gfn25\" (UID: \"b5f95fe7-59ed-4dbf-abb3-4c2b21cce321\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496690-gfn25" Jan 30 19:30:00 crc kubenswrapper[4712]: I0130 19:30:00.364503 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5f95fe7-59ed-4dbf-abb3-4c2b21cce321-config-volume\") pod \"collect-profiles-29496690-gfn25\" (UID: \"b5f95fe7-59ed-4dbf-abb3-4c2b21cce321\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496690-gfn25" Jan 30 19:30:00 crc kubenswrapper[4712]: I0130 19:30:00.365273 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5f95fe7-59ed-4dbf-abb3-4c2b21cce321-config-volume\") pod 
\"collect-profiles-29496690-gfn25\" (UID: \"b5f95fe7-59ed-4dbf-abb3-4c2b21cce321\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496690-gfn25" Jan 30 19:30:00 crc kubenswrapper[4712]: I0130 19:30:00.376544 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5f95fe7-59ed-4dbf-abb3-4c2b21cce321-secret-volume\") pod \"collect-profiles-29496690-gfn25\" (UID: \"b5f95fe7-59ed-4dbf-abb3-4c2b21cce321\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496690-gfn25" Jan 30 19:30:00 crc kubenswrapper[4712]: I0130 19:30:00.396644 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgbm9\" (UniqueName: \"kubernetes.io/projected/b5f95fe7-59ed-4dbf-abb3-4c2b21cce321-kube-api-access-pgbm9\") pod \"collect-profiles-29496690-gfn25\" (UID: \"b5f95fe7-59ed-4dbf-abb3-4c2b21cce321\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496690-gfn25" Jan 30 19:30:00 crc kubenswrapper[4712]: I0130 19:30:00.496459 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496690-gfn25" Jan 30 19:30:01 crc kubenswrapper[4712]: I0130 19:30:01.062742 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496690-gfn25"] Jan 30 19:30:01 crc kubenswrapper[4712]: I0130 19:30:01.520065 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496690-gfn25" event={"ID":"b5f95fe7-59ed-4dbf-abb3-4c2b21cce321","Type":"ContainerStarted","Data":"c279049692c9ab6f762edd197070cec6f41dabe2450251f339b0706a53c0204e"} Jan 30 19:30:01 crc kubenswrapper[4712]: I0130 19:30:01.520116 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496690-gfn25" event={"ID":"b5f95fe7-59ed-4dbf-abb3-4c2b21cce321","Type":"ContainerStarted","Data":"92a05794a7bbf3e26ce6a2233dad5f1e8bcc30dd2b831aee3e946c825f1955c0"} Jan 30 19:30:01 crc kubenswrapper[4712]: I0130 19:30:01.537872 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496690-gfn25" podStartSLOduration=1.537858663 podStartE2EDuration="1.537858663s" podCreationTimestamp="2026-01-30 19:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 19:30:01.535268331 +0000 UTC m=+9338.442277810" watchObservedRunningTime="2026-01-30 19:30:01.537858663 +0000 UTC m=+9338.444868132" Jan 30 19:30:02 crc kubenswrapper[4712]: I0130 19:30:02.530741 4712 generic.go:334] "Generic (PLEG): container finished" podID="b5f95fe7-59ed-4dbf-abb3-4c2b21cce321" containerID="c279049692c9ab6f762edd197070cec6f41dabe2450251f339b0706a53c0204e" exitCode=0 Jan 30 19:30:02 crc kubenswrapper[4712]: I0130 19:30:02.530852 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496690-gfn25" event={"ID":"b5f95fe7-59ed-4dbf-abb3-4c2b21cce321","Type":"ContainerDied","Data":"c279049692c9ab6f762edd197070cec6f41dabe2450251f339b0706a53c0204e"} Jan 30 19:30:03 crc kubenswrapper[4712]: I0130 19:30:03.949458 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496690-gfn25" Jan 30 19:30:04 crc kubenswrapper[4712]: I0130 19:30:04.036841 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5f95fe7-59ed-4dbf-abb3-4c2b21cce321-secret-volume\") pod \"b5f95fe7-59ed-4dbf-abb3-4c2b21cce321\" (UID: \"b5f95fe7-59ed-4dbf-abb3-4c2b21cce321\") " Jan 30 19:30:04 crc kubenswrapper[4712]: I0130 19:30:04.036918 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5f95fe7-59ed-4dbf-abb3-4c2b21cce321-config-volume\") pod \"b5f95fe7-59ed-4dbf-abb3-4c2b21cce321\" (UID: \"b5f95fe7-59ed-4dbf-abb3-4c2b21cce321\") " Jan 30 19:30:04 crc kubenswrapper[4712]: I0130 19:30:04.037020 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgbm9\" (UniqueName: \"kubernetes.io/projected/b5f95fe7-59ed-4dbf-abb3-4c2b21cce321-kube-api-access-pgbm9\") pod \"b5f95fe7-59ed-4dbf-abb3-4c2b21cce321\" (UID: \"b5f95fe7-59ed-4dbf-abb3-4c2b21cce321\") " Jan 30 19:30:04 crc kubenswrapper[4712]: I0130 19:30:04.037458 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5f95fe7-59ed-4dbf-abb3-4c2b21cce321-config-volume" (OuterVolumeSpecName: "config-volume") pod "b5f95fe7-59ed-4dbf-abb3-4c2b21cce321" (UID: "b5f95fe7-59ed-4dbf-abb3-4c2b21cce321"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 19:30:04 crc kubenswrapper[4712]: I0130 19:30:04.041940 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5f95fe7-59ed-4dbf-abb3-4c2b21cce321-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b5f95fe7-59ed-4dbf-abb3-4c2b21cce321" (UID: "b5f95fe7-59ed-4dbf-abb3-4c2b21cce321"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 19:30:04 crc kubenswrapper[4712]: I0130 19:30:04.051318 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5f95fe7-59ed-4dbf-abb3-4c2b21cce321-kube-api-access-pgbm9" (OuterVolumeSpecName: "kube-api-access-pgbm9") pod "b5f95fe7-59ed-4dbf-abb3-4c2b21cce321" (UID: "b5f95fe7-59ed-4dbf-abb3-4c2b21cce321"). InnerVolumeSpecName "kube-api-access-pgbm9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 19:30:04 crc kubenswrapper[4712]: I0130 19:30:04.139019 4712 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5f95fe7-59ed-4dbf-abb3-4c2b21cce321-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 19:30:04 crc kubenswrapper[4712]: I0130 19:30:04.139046 4712 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5f95fe7-59ed-4dbf-abb3-4c2b21cce321-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 19:30:04 crc kubenswrapper[4712]: I0130 19:30:04.139057 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgbm9\" (UniqueName: \"kubernetes.io/projected/b5f95fe7-59ed-4dbf-abb3-4c2b21cce321-kube-api-access-pgbm9\") on node \"crc\" DevicePath \"\"" Jan 30 19:30:04 crc kubenswrapper[4712]: I0130 19:30:04.548323 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496690-gfn25" event={"ID":"b5f95fe7-59ed-4dbf-abb3-4c2b21cce321","Type":"ContainerDied","Data":"92a05794a7bbf3e26ce6a2233dad5f1e8bcc30dd2b831aee3e946c825f1955c0"} Jan 30 19:30:04 crc kubenswrapper[4712]: I0130 19:30:04.548360 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92a05794a7bbf3e26ce6a2233dad5f1e8bcc30dd2b831aee3e946c825f1955c0" Jan 30 19:30:04 crc kubenswrapper[4712]: I0130 19:30:04.548384 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496690-gfn25" Jan 30 19:30:04 crc kubenswrapper[4712]: I0130 19:30:04.619202 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496645-qkjs5"] Jan 30 19:30:04 crc kubenswrapper[4712]: I0130 19:30:04.626620 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496645-qkjs5"] Jan 30 19:30:05 crc kubenswrapper[4712]: I0130 19:30:05.814768 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65cb453f-1797-4e40-9cb8-612b3beaa871" path="/var/lib/kubelet/pods/65cb453f-1797-4e40-9cb8-612b3beaa871/volumes" Jan 30 19:30:11 crc kubenswrapper[4712]: I0130 19:30:11.800436 4712 scope.go:117] "RemoveContainer" containerID="6a889e7319743d48cc17eaf2cf4494f46c54ed34e204e063c57d043a9f8356be" Jan 30 19:30:11 crc kubenswrapper[4712]: E0130 19:30:11.801311 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:30:24 crc kubenswrapper[4712]: I0130 19:30:24.799765 4712 scope.go:117] "RemoveContainer" containerID="6a889e7319743d48cc17eaf2cf4494f46c54ed34e204e063c57d043a9f8356be" Jan 30 19:30:24 crc kubenswrapper[4712]: E0130 19:30:24.800542 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:30:35 crc kubenswrapper[4712]: I0130 19:30:35.799692 4712 scope.go:117] "RemoveContainer" containerID="6a889e7319743d48cc17eaf2cf4494f46c54ed34e204e063c57d043a9f8356be" Jan 30 19:30:35 crc kubenswrapper[4712]: E0130 19:30:35.800622 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:30:50 crc kubenswrapper[4712]: I0130 19:30:50.800303 4712 scope.go:117] "RemoveContainer" containerID="6a889e7319743d48cc17eaf2cf4494f46c54ed34e204e063c57d043a9f8356be" Jan 30 19:30:50 crc kubenswrapper[4712]: E0130 19:30:50.801480 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:31:04 crc kubenswrapper[4712]: I0130 19:31:04.886695 4712 scope.go:117] "RemoveContainer" containerID="25eda162b96d3d5a73e884be7a1e55e70e6cfb9a9a2e94ce712f34e4eeda991b" Jan 30 19:31:05 crc kubenswrapper[4712]: I0130 19:31:05.800388 4712 scope.go:117] "RemoveContainer" containerID="6a889e7319743d48cc17eaf2cf4494f46c54ed34e204e063c57d043a9f8356be" Jan 30 19:31:05 crc kubenswrapper[4712]: E0130 19:31:05.800885 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:31:16 crc kubenswrapper[4712]: I0130 19:31:16.801347 4712 scope.go:117] "RemoveContainer" containerID="6a889e7319743d48cc17eaf2cf4494f46c54ed34e204e063c57d043a9f8356be" Jan 30 19:31:16 crc kubenswrapper[4712]: E0130 19:31:16.802271 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:31:30 crc kubenswrapper[4712]: I0130 19:31:30.800045 4712 scope.go:117] "RemoveContainer" containerID="6a889e7319743d48cc17eaf2cf4494f46c54ed34e204e063c57d043a9f8356be" Jan 30 19:31:30 crc kubenswrapper[4712]: E0130 19:31:30.800895 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:31:42 crc kubenswrapper[4712]: I0130 19:31:42.799782 4712 scope.go:117] "RemoveContainer" containerID="6a889e7319743d48cc17eaf2cf4494f46c54ed34e204e063c57d043a9f8356be" Jan 30 19:31:42 crc kubenswrapper[4712]: E0130 19:31:42.800643 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:31:53 crc kubenswrapper[4712]: I0130 19:31:53.806419 4712 scope.go:117] "RemoveContainer" containerID="6a889e7319743d48cc17eaf2cf4494f46c54ed34e204e063c57d043a9f8356be" Jan 30 19:31:53 crc kubenswrapper[4712]: E0130 19:31:53.807262 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:32:06 crc kubenswrapper[4712]: I0130 19:32:06.799668 4712 scope.go:117] "RemoveContainer" containerID="6a889e7319743d48cc17eaf2cf4494f46c54ed34e204e063c57d043a9f8356be" Jan 30 19:32:06 crc kubenswrapper[4712]: E0130 19:32:06.801226 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:32:09 crc kubenswrapper[4712]: I0130 19:32:09.048376 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-c6zlh"] Jan 30 19:32:09 crc kubenswrapper[4712]: E0130 19:32:09.049008 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5f95fe7-59ed-4dbf-abb3-4c2b21cce321" containerName="collect-profiles" Jan 30 19:32:09 crc kubenswrapper[4712]: I0130 19:32:09.049020 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f95fe7-59ed-4dbf-abb3-4c2b21cce321" containerName="collect-profiles" Jan 30 19:32:09 crc kubenswrapper[4712]: I0130 19:32:09.049195 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5f95fe7-59ed-4dbf-abb3-4c2b21cce321" containerName="collect-profiles" Jan 30 19:32:09 crc kubenswrapper[4712]: I0130 19:32:09.050592 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c6zlh" Jan 30 19:32:09 crc kubenswrapper[4712]: I0130 19:32:09.076503 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c6zlh"] Jan 30 19:32:09 crc kubenswrapper[4712]: I0130 19:32:09.263820 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n29bt\" (UniqueName: \"kubernetes.io/projected/b6dda193-3f46-4e68-858d-9a8c2393acd3-kube-api-access-n29bt\") pod \"redhat-operators-c6zlh\" (UID: \"b6dda193-3f46-4e68-858d-9a8c2393acd3\") " pod="openshift-marketplace/redhat-operators-c6zlh" Jan 30 19:32:09 crc kubenswrapper[4712]: I0130 19:32:09.264028 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6dda193-3f46-4e68-858d-9a8c2393acd3-utilities\") pod \"redhat-operators-c6zlh\" (UID: \"b6dda193-3f46-4e68-858d-9a8c2393acd3\") " pod="openshift-marketplace/redhat-operators-c6zlh" Jan 30 19:32:09 crc kubenswrapper[4712]: I0130 19:32:09.264171 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6dda193-3f46-4e68-858d-9a8c2393acd3-catalog-content\") pod \"redhat-operators-c6zlh\" (UID: \"b6dda193-3f46-4e68-858d-9a8c2393acd3\") " pod="openshift-marketplace/redhat-operators-c6zlh" Jan 30 19:32:09 crc kubenswrapper[4712]: I0130 19:32:09.366278 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n29bt\" (UniqueName: \"kubernetes.io/projected/b6dda193-3f46-4e68-858d-9a8c2393acd3-kube-api-access-n29bt\") pod \"redhat-operators-c6zlh\" (UID: \"b6dda193-3f46-4e68-858d-9a8c2393acd3\") " pod="openshift-marketplace/redhat-operators-c6zlh" Jan 30 19:32:09 crc kubenswrapper[4712]: I0130 19:32:09.366384 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6dda193-3f46-4e68-858d-9a8c2393acd3-utilities\") pod \"redhat-operators-c6zlh\" (UID: \"b6dda193-3f46-4e68-858d-9a8c2393acd3\") " pod="openshift-marketplace/redhat-operators-c6zlh" Jan 30 19:32:09 crc kubenswrapper[4712]: I0130 19:32:09.366454 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6dda193-3f46-4e68-858d-9a8c2393acd3-catalog-content\") pod \"redhat-operators-c6zlh\" (UID: \"b6dda193-3f46-4e68-858d-9a8c2393acd3\") " pod="openshift-marketplace/redhat-operators-c6zlh" Jan 30 19:32:09 crc kubenswrapper[4712]: I0130 19:32:09.367051 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6dda193-3f46-4e68-858d-9a8c2393acd3-catalog-content\") pod \"redhat-operators-c6zlh\" (UID: \"b6dda193-3f46-4e68-858d-9a8c2393acd3\") " pod="openshift-marketplace/redhat-operators-c6zlh" Jan 30 19:32:09 crc kubenswrapper[4712]: I0130 19:32:09.367149 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6dda193-3f46-4e68-858d-9a8c2393acd3-utilities\") pod \"redhat-operators-c6zlh\" (UID: \"b6dda193-3f46-4e68-858d-9a8c2393acd3\") " pod="openshift-marketplace/redhat-operators-c6zlh" Jan 30 19:32:09 crc kubenswrapper[4712]: I0130 19:32:09.399858 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-n29bt\" (UniqueName: \"kubernetes.io/projected/b6dda193-3f46-4e68-858d-9a8c2393acd3-kube-api-access-n29bt\") pod \"redhat-operators-c6zlh\" (UID: \"b6dda193-3f46-4e68-858d-9a8c2393acd3\") " pod="openshift-marketplace/redhat-operators-c6zlh" Jan 30 19:32:09 crc kubenswrapper[4712]: I0130 19:32:09.676347 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c6zlh" Jan 30 19:32:10 crc kubenswrapper[4712]: I0130 19:32:10.645886 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c6zlh"] Jan 30 19:32:10 crc kubenswrapper[4712]: I0130 19:32:10.843387 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6zlh" event={"ID":"b6dda193-3f46-4e68-858d-9a8c2393acd3","Type":"ContainerStarted","Data":"6bf029cd7eddd43c4a48208b11e201e9591a0b74e44ca1d9227ef2c3e2074abd"} Jan 30 19:32:11 crc kubenswrapper[4712]: I0130 19:32:11.855946 4712 generic.go:334] "Generic (PLEG): container finished" podID="b6dda193-3f46-4e68-858d-9a8c2393acd3" containerID="997389f5461ff45f56a5ffdfa59c474594a9c77b814cbc64d07776ff1d0564e8" exitCode=0 Jan 30 19:32:11 crc kubenswrapper[4712]: I0130 19:32:11.856022 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6zlh" event={"ID":"b6dda193-3f46-4e68-858d-9a8c2393acd3","Type":"ContainerDied","Data":"997389f5461ff45f56a5ffdfa59c474594a9c77b814cbc64d07776ff1d0564e8"} Jan 30 19:32:11 crc kubenswrapper[4712]: I0130 19:32:11.858247 4712 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 19:32:12 crc kubenswrapper[4712]: I0130 19:32:12.867064 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6zlh" event={"ID":"b6dda193-3f46-4e68-858d-9a8c2393acd3","Type":"ContainerStarted","Data":"49fdfa8393daf8e7e3124581c41bb729f9a21904c6afaa257f21c1eadced501b"} Jan 30 19:32:18 crc kubenswrapper[4712]: I0130 19:32:18.966487 4712 generic.go:334] "Generic (PLEG): container finished" podID="b6dda193-3f46-4e68-858d-9a8c2393acd3" containerID="49fdfa8393daf8e7e3124581c41bb729f9a21904c6afaa257f21c1eadced501b" exitCode=0 Jan 30 19:32:18 crc kubenswrapper[4712]: I0130 19:32:18.966771 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6zlh" event={"ID":"b6dda193-3f46-4e68-858d-9a8c2393acd3","Type":"ContainerDied","Data":"49fdfa8393daf8e7e3124581c41bb729f9a21904c6afaa257f21c1eadced501b"} Jan 30 19:32:19 crc kubenswrapper[4712]: I0130 19:32:19.800108 4712 scope.go:117] "RemoveContainer" containerID="6a889e7319743d48cc17eaf2cf4494f46c54ed34e204e063c57d043a9f8356be" Jan 30 19:32:19 crc kubenswrapper[4712]: E0130 19:32:19.800384 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:32:20 crc kubenswrapper[4712]: I0130 19:32:20.989325 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6zlh" 
event={"ID":"b6dda193-3f46-4e68-858d-9a8c2393acd3","Type":"ContainerStarted","Data":"eb580686446bcc5e69217f091c04a35f41cc7d04d6b7cc71f6122e563e1e9842"} Jan 30 19:32:21 crc kubenswrapper[4712]: I0130 19:32:21.019264 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-c6zlh" podStartSLOduration=4.497673105 podStartE2EDuration="12.019243511s" podCreationTimestamp="2026-01-30 19:32:09 +0000 UTC" firstStartedPulling="2026-01-30 19:32:11.858028633 +0000 UTC m=+9468.765038102" lastFinishedPulling="2026-01-30 19:32:19.379599029 +0000 UTC m=+9476.286608508" observedRunningTime="2026-01-30 19:32:21.009248059 +0000 UTC m=+9477.916257538" watchObservedRunningTime="2026-01-30 19:32:21.019243511 +0000 UTC m=+9477.926252980" Jan 30 19:32:29 crc kubenswrapper[4712]: I0130 19:32:29.677376 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-c6zlh" Jan 30 19:32:29 crc kubenswrapper[4712]: I0130 19:32:29.678001 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-c6zlh" Jan 30 19:32:30 crc kubenswrapper[4712]: I0130 19:32:30.729693 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-c6zlh" podUID="b6dda193-3f46-4e68-858d-9a8c2393acd3" containerName="registry-server" probeResult="failure" output=< Jan 30 19:32:30 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 19:32:30 crc kubenswrapper[4712]: > Jan 30 19:32:33 crc kubenswrapper[4712]: I0130 19:32:33.809175 4712 scope.go:117] "RemoveContainer" containerID="6a889e7319743d48cc17eaf2cf4494f46c54ed34e204e063c57d043a9f8356be" Jan 30 19:32:33 crc kubenswrapper[4712]: E0130 19:32:33.809787 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:32:40 crc kubenswrapper[4712]: I0130 19:32:40.751782 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-c6zlh" podUID="b6dda193-3f46-4e68-858d-9a8c2393acd3" containerName="registry-server" probeResult="failure" output=< Jan 30 19:32:40 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 19:32:40 crc kubenswrapper[4712]: > Jan 30 19:32:47 crc kubenswrapper[4712]: I0130 19:32:47.799912 4712 scope.go:117] "RemoveContainer" containerID="6a889e7319743d48cc17eaf2cf4494f46c54ed34e204e063c57d043a9f8356be" Jan 30 19:32:47 crc kubenswrapper[4712]: E0130 19:32:47.800590 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:32:49 crc kubenswrapper[4712]: I0130 19:32:49.751504 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-c6zlh" Jan 30 19:32:49 crc 
Jan 30 19:32:49 crc kubenswrapper[4712]: I0130 19:32:49.985684 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c6zlh"]
Jan 30 19:32:51 crc kubenswrapper[4712]: I0130 19:32:51.277683 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-c6zlh" podUID="b6dda193-3f46-4e68-858d-9a8c2393acd3" containerName="registry-server" containerID="cri-o://eb580686446bcc5e69217f091c04a35f41cc7d04d6b7cc71f6122e563e1e9842" gracePeriod=2
Jan 30 19:32:52 crc kubenswrapper[4712]: I0130 19:32:52.292756 4712 generic.go:334] "Generic (PLEG): container finished" podID="b6dda193-3f46-4e68-858d-9a8c2393acd3" containerID="eb580686446bcc5e69217f091c04a35f41cc7d04d6b7cc71f6122e563e1e9842" exitCode=0
Jan 30 19:32:52 crc kubenswrapper[4712]: I0130 19:32:52.292825 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6zlh" event={"ID":"b6dda193-3f46-4e68-858d-9a8c2393acd3","Type":"ContainerDied","Data":"eb580686446bcc5e69217f091c04a35f41cc7d04d6b7cc71f6122e563e1e9842"}
Jan 30 19:32:52 crc kubenswrapper[4712]: I0130 19:32:52.837603 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c6zlh"
Jan 30 19:32:52 crc kubenswrapper[4712]: I0130 19:32:52.932118 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6dda193-3f46-4e68-858d-9a8c2393acd3-catalog-content\") pod \"b6dda193-3f46-4e68-858d-9a8c2393acd3\" (UID: \"b6dda193-3f46-4e68-858d-9a8c2393acd3\") "
Jan 30 19:32:52 crc kubenswrapper[4712]: I0130 19:32:52.932203 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6dda193-3f46-4e68-858d-9a8c2393acd3-utilities\") pod \"b6dda193-3f46-4e68-858d-9a8c2393acd3\" (UID: \"b6dda193-3f46-4e68-858d-9a8c2393acd3\") "
Jan 30 19:32:52 crc kubenswrapper[4712]: I0130 19:32:52.932229 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n29bt\" (UniqueName: \"kubernetes.io/projected/b6dda193-3f46-4e68-858d-9a8c2393acd3-kube-api-access-n29bt\") pod \"b6dda193-3f46-4e68-858d-9a8c2393acd3\" (UID: \"b6dda193-3f46-4e68-858d-9a8c2393acd3\") "
Jan 30 19:32:52 crc kubenswrapper[4712]: I0130 19:32:52.933678 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6dda193-3f46-4e68-858d-9a8c2393acd3-utilities" (OuterVolumeSpecName: "utilities") pod "b6dda193-3f46-4e68-858d-9a8c2393acd3" (UID: "b6dda193-3f46-4e68-858d-9a8c2393acd3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 19:32:52 crc kubenswrapper[4712]: I0130 19:32:52.939908 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6dda193-3f46-4e68-858d-9a8c2393acd3-kube-api-access-n29bt" (OuterVolumeSpecName: "kube-api-access-n29bt") pod "b6dda193-3f46-4e68-858d-9a8c2393acd3" (UID: "b6dda193-3f46-4e68-858d-9a8c2393acd3"). InnerVolumeSpecName "kube-api-access-n29bt".
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 19:32:53 crc kubenswrapper[4712]: I0130 19:32:53.021643 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6dda193-3f46-4e68-858d-9a8c2393acd3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b6dda193-3f46-4e68-858d-9a8c2393acd3" (UID: "b6dda193-3f46-4e68-858d-9a8c2393acd3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 19:32:53 crc kubenswrapper[4712]: I0130 19:32:53.034526 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6dda193-3f46-4e68-858d-9a8c2393acd3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 19:32:53 crc kubenswrapper[4712]: I0130 19:32:53.034556 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6dda193-3f46-4e68-858d-9a8c2393acd3-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 19:32:53 crc kubenswrapper[4712]: I0130 19:32:53.034567 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n29bt\" (UniqueName: \"kubernetes.io/projected/b6dda193-3f46-4e68-858d-9a8c2393acd3-kube-api-access-n29bt\") on node \"crc\" DevicePath \"\"" Jan 30 19:32:53 crc kubenswrapper[4712]: I0130 19:32:53.312033 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c6zlh" event={"ID":"b6dda193-3f46-4e68-858d-9a8c2393acd3","Type":"ContainerDied","Data":"6bf029cd7eddd43c4a48208b11e201e9591a0b74e44ca1d9227ef2c3e2074abd"} Jan 30 19:32:53 crc kubenswrapper[4712]: I0130 19:32:53.312086 4712 scope.go:117] "RemoveContainer" containerID="eb580686446bcc5e69217f091c04a35f41cc7d04d6b7cc71f6122e563e1e9842" Jan 30 19:32:53 crc kubenswrapper[4712]: I0130 19:32:53.312088 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-c6zlh" Jan 30 19:32:53 crc kubenswrapper[4712]: I0130 19:32:53.349259 4712 scope.go:117] "RemoveContainer" containerID="49fdfa8393daf8e7e3124581c41bb729f9a21904c6afaa257f21c1eadced501b" Jan 30 19:32:53 crc kubenswrapper[4712]: I0130 19:32:53.371060 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c6zlh"] Jan 30 19:32:53 crc kubenswrapper[4712]: I0130 19:32:53.379553 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-c6zlh"] Jan 30 19:32:53 crc kubenswrapper[4712]: I0130 19:32:53.392115 4712 scope.go:117] "RemoveContainer" containerID="997389f5461ff45f56a5ffdfa59c474594a9c77b814cbc64d07776ff1d0564e8" Jan 30 19:32:53 crc kubenswrapper[4712]: I0130 19:32:53.815831 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6dda193-3f46-4e68-858d-9a8c2393acd3" path="/var/lib/kubelet/pods/b6dda193-3f46-4e68-858d-9a8c2393acd3/volumes" Jan 30 19:33:02 crc kubenswrapper[4712]: I0130 19:33:02.800512 4712 scope.go:117] "RemoveContainer" containerID="6a889e7319743d48cc17eaf2cf4494f46c54ed34e204e063c57d043a9f8356be" Jan 30 19:33:02 crc kubenswrapper[4712]: E0130 19:33:02.801274 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:33:14 crc kubenswrapper[4712]: I0130 19:33:14.800235 4712 scope.go:117] "RemoveContainer" containerID="6a889e7319743d48cc17eaf2cf4494f46c54ed34e204e063c57d043a9f8356be" Jan 30 19:33:15 crc kubenswrapper[4712]: I0130 19:33:15.547197 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerStarted","Data":"b0b59dd12e8e2668cb2792082f7493bf71f4598d402f0dae8885c0b33a7f6e02"} Jan 30 19:33:54 crc kubenswrapper[4712]: I0130 19:33:54.839480 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zl9l7"] Jan 30 19:33:54 crc kubenswrapper[4712]: E0130 19:33:54.840779 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6dda193-3f46-4e68-858d-9a8c2393acd3" containerName="extract-utilities" Jan 30 19:33:54 crc kubenswrapper[4712]: I0130 19:33:54.840803 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6dda193-3f46-4e68-858d-9a8c2393acd3" containerName="extract-utilities" Jan 30 19:33:54 crc kubenswrapper[4712]: E0130 19:33:54.840849 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6dda193-3f46-4e68-858d-9a8c2393acd3" containerName="registry-server" Jan 30 19:33:54 crc kubenswrapper[4712]: I0130 19:33:54.840858 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6dda193-3f46-4e68-858d-9a8c2393acd3" containerName="registry-server" Jan 30 19:33:54 crc kubenswrapper[4712]: E0130 19:33:54.840873 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6dda193-3f46-4e68-858d-9a8c2393acd3" containerName="extract-content" Jan 30 19:33:54 crc kubenswrapper[4712]: I0130 19:33:54.840882 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6dda193-3f46-4e68-858d-9a8c2393acd3" 
containerName="extract-content" Jan 30 19:33:54 crc kubenswrapper[4712]: I0130 19:33:54.841167 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6dda193-3f46-4e68-858d-9a8c2393acd3" containerName="registry-server" Jan 30 19:33:54 crc kubenswrapper[4712]: I0130 19:33:54.844569 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zl9l7" Jan 30 19:33:54 crc kubenswrapper[4712]: I0130 19:33:54.878455 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zl9l7"] Jan 30 19:33:54 crc kubenswrapper[4712]: I0130 19:33:54.929884 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65a712f0-f66c-4865-8d68-295cb0b8bd8e-catalog-content\") pod \"certified-operators-zl9l7\" (UID: \"65a712f0-f66c-4865-8d68-295cb0b8bd8e\") " pod="openshift-marketplace/certified-operators-zl9l7" Jan 30 19:33:54 crc kubenswrapper[4712]: I0130 19:33:54.930229 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9wcs\" (UniqueName: \"kubernetes.io/projected/65a712f0-f66c-4865-8d68-295cb0b8bd8e-kube-api-access-s9wcs\") pod \"certified-operators-zl9l7\" (UID: \"65a712f0-f66c-4865-8d68-295cb0b8bd8e\") " pod="openshift-marketplace/certified-operators-zl9l7" Jan 30 19:33:54 crc kubenswrapper[4712]: I0130 19:33:54.930265 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65a712f0-f66c-4865-8d68-295cb0b8bd8e-utilities\") pod \"certified-operators-zl9l7\" (UID: \"65a712f0-f66c-4865-8d68-295cb0b8bd8e\") " pod="openshift-marketplace/certified-operators-zl9l7" Jan 30 19:33:55 crc kubenswrapper[4712]: I0130 19:33:55.032547 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65a712f0-f66c-4865-8d68-295cb0b8bd8e-catalog-content\") pod \"certified-operators-zl9l7\" (UID: \"65a712f0-f66c-4865-8d68-295cb0b8bd8e\") " pod="openshift-marketplace/certified-operators-zl9l7" Jan 30 19:33:55 crc kubenswrapper[4712]: I0130 19:33:55.032806 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9wcs\" (UniqueName: \"kubernetes.io/projected/65a712f0-f66c-4865-8d68-295cb0b8bd8e-kube-api-access-s9wcs\") pod \"certified-operators-zl9l7\" (UID: \"65a712f0-f66c-4865-8d68-295cb0b8bd8e\") " pod="openshift-marketplace/certified-operators-zl9l7" Jan 30 19:33:55 crc kubenswrapper[4712]: I0130 19:33:55.032913 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65a712f0-f66c-4865-8d68-295cb0b8bd8e-utilities\") pod \"certified-operators-zl9l7\" (UID: \"65a712f0-f66c-4865-8d68-295cb0b8bd8e\") " pod="openshift-marketplace/certified-operators-zl9l7" Jan 30 19:33:55 crc kubenswrapper[4712]: I0130 19:33:55.033200 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65a712f0-f66c-4865-8d68-295cb0b8bd8e-catalog-content\") pod \"certified-operators-zl9l7\" (UID: \"65a712f0-f66c-4865-8d68-295cb0b8bd8e\") " pod="openshift-marketplace/certified-operators-zl9l7" Jan 30 19:33:55 crc kubenswrapper[4712]: I0130 19:33:55.033439 4712 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65a712f0-f66c-4865-8d68-295cb0b8bd8e-utilities\") pod \"certified-operators-zl9l7\" (UID: \"65a712f0-f66c-4865-8d68-295cb0b8bd8e\") " pod="openshift-marketplace/certified-operators-zl9l7" Jan 30 19:33:55 crc kubenswrapper[4712]: I0130 19:33:55.067348 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9wcs\" (UniqueName: \"kubernetes.io/projected/65a712f0-f66c-4865-8d68-295cb0b8bd8e-kube-api-access-s9wcs\") pod \"certified-operators-zl9l7\" (UID: \"65a712f0-f66c-4865-8d68-295cb0b8bd8e\") " pod="openshift-marketplace/certified-operators-zl9l7" Jan 30 19:33:55 crc kubenswrapper[4712]: I0130 19:33:55.172240 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zl9l7" Jan 30 19:33:55 crc kubenswrapper[4712]: I0130 19:33:55.717735 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zl9l7"] Jan 30 19:33:55 crc kubenswrapper[4712]: I0130 19:33:55.988420 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zl9l7" event={"ID":"65a712f0-f66c-4865-8d68-295cb0b8bd8e","Type":"ContainerStarted","Data":"c29e18d2a7b1d2fcad2e1d4b262f3d70c4a0f8fece49b63ec6e439d6ffe0c71c"} Jan 30 19:33:57 crc kubenswrapper[4712]: I0130 19:33:57.002540 4712 generic.go:334] "Generic (PLEG): container finished" podID="65a712f0-f66c-4865-8d68-295cb0b8bd8e" containerID="86119f2294cd2b29098760fb0a2a2f92decdbc885ed3936148ccd8e93d05c64e" exitCode=0 Jan 30 19:33:57 crc kubenswrapper[4712]: I0130 19:33:57.003214 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zl9l7" event={"ID":"65a712f0-f66c-4865-8d68-295cb0b8bd8e","Type":"ContainerDied","Data":"86119f2294cd2b29098760fb0a2a2f92decdbc885ed3936148ccd8e93d05c64e"} Jan 30 19:33:59 crc kubenswrapper[4712]: I0130 19:33:59.030266 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zl9l7" event={"ID":"65a712f0-f66c-4865-8d68-295cb0b8bd8e","Type":"ContainerStarted","Data":"61e53f3364db05d3506580026214659d57a76f301dffff372cb593a3dc105891"} Jan 30 19:34:01 crc kubenswrapper[4712]: I0130 19:34:01.054641 4712 generic.go:334] "Generic (PLEG): container finished" podID="65a712f0-f66c-4865-8d68-295cb0b8bd8e" containerID="61e53f3364db05d3506580026214659d57a76f301dffff372cb593a3dc105891" exitCode=0 Jan 30 19:34:01 crc kubenswrapper[4712]: I0130 19:34:01.054762 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zl9l7" event={"ID":"65a712f0-f66c-4865-8d68-295cb0b8bd8e","Type":"ContainerDied","Data":"61e53f3364db05d3506580026214659d57a76f301dffff372cb593a3dc105891"} Jan 30 19:34:02 crc kubenswrapper[4712]: I0130 19:34:02.072083 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zl9l7" event={"ID":"65a712f0-f66c-4865-8d68-295cb0b8bd8e","Type":"ContainerStarted","Data":"7ad806d8898339dc1166ba6c6a918f0d04c44ebb4857efd4f9716d2c8c019834"} Jan 30 19:34:02 crc kubenswrapper[4712]: I0130 19:34:02.101746 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zl9l7" podStartSLOduration=3.611711873 podStartE2EDuration="8.101728223s" podCreationTimestamp="2026-01-30 19:33:54 +0000 UTC" firstStartedPulling="2026-01-30 19:33:57.008040896 +0000 UTC m=+9573.915050375" 
lastFinishedPulling="2026-01-30 19:34:01.498057256 +0000 UTC m=+9578.405066725" observedRunningTime="2026-01-30 19:34:02.095762129 +0000 UTC m=+9579.002771608" watchObservedRunningTime="2026-01-30 19:34:02.101728223 +0000 UTC m=+9579.008737702"
Jan 30 19:34:05 crc kubenswrapper[4712]: I0130 19:34:05.173381 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zl9l7"
Jan 30 19:34:05 crc kubenswrapper[4712]: I0130 19:34:05.174527 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zl9l7"
Jan 30 19:34:06 crc kubenswrapper[4712]: I0130 19:34:06.216734 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-zl9l7" podUID="65a712f0-f66c-4865-8d68-295cb0b8bd8e" containerName="registry-server" probeResult="failure" output=<
Jan 30 19:34:06 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 19:34:06 crc kubenswrapper[4712]: >
Jan 30 19:34:15 crc kubenswrapper[4712]: I0130 19:34:15.223749 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zl9l7"
Jan 30 19:34:15 crc kubenswrapper[4712]: I0130 19:34:15.294846 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zl9l7"
Jan 30 19:34:15 crc kubenswrapper[4712]: I0130 19:34:15.462143 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zl9l7"]
Jan 30 19:34:17 crc kubenswrapper[4712]: I0130 19:34:17.212373 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zl9l7" podUID="65a712f0-f66c-4865-8d68-295cb0b8bd8e" containerName="registry-server" containerID="cri-o://7ad806d8898339dc1166ba6c6a918f0d04c44ebb4857efd4f9716d2c8c019834" gracePeriod=2
Jan 30 19:34:17 crc kubenswrapper[4712]: I0130 19:34:17.769718 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zl9l7"
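Both catalog pods fail their first startup probes the same way: the registry-server port cannot be reached within 1s, then the probe succeeds once the server finishes loading. A minimal sketch of that failure mode as a plain TCP dial with a 1s timeout (an illustrative stand-in, not the kubelet's prober; the real check and its port come from the container's probe spec):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// probeOnce mimics the failure mode in the log: if a TCP connection to
// addr cannot be established within timeout, the probe attempt fails.
func probeOnce(addr string, timeout time.Duration) error {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return fmt.Errorf("timeout: failed to connect service %q within %v", addr, timeout)
	}
	conn.Close()
	return nil
}

func main() {
	// Before registry-server is listening, this fails just like the
	// probeResult="failure" entries above; afterwards it returns nil.
	if err := probeOnce(":50051", time.Second); err != nil {
		fmt.Println(err)
	}
}
```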
Need to start a new one" pod="openshift-marketplace/certified-operators-zl9l7" Jan 30 19:34:17 crc kubenswrapper[4712]: I0130 19:34:17.866226 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9wcs\" (UniqueName: \"kubernetes.io/projected/65a712f0-f66c-4865-8d68-295cb0b8bd8e-kube-api-access-s9wcs\") pod \"65a712f0-f66c-4865-8d68-295cb0b8bd8e\" (UID: \"65a712f0-f66c-4865-8d68-295cb0b8bd8e\") " Jan 30 19:34:17 crc kubenswrapper[4712]: I0130 19:34:17.866353 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65a712f0-f66c-4865-8d68-295cb0b8bd8e-utilities\") pod \"65a712f0-f66c-4865-8d68-295cb0b8bd8e\" (UID: \"65a712f0-f66c-4865-8d68-295cb0b8bd8e\") " Jan 30 19:34:17 crc kubenswrapper[4712]: I0130 19:34:17.866481 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65a712f0-f66c-4865-8d68-295cb0b8bd8e-catalog-content\") pod \"65a712f0-f66c-4865-8d68-295cb0b8bd8e\" (UID: \"65a712f0-f66c-4865-8d68-295cb0b8bd8e\") " Jan 30 19:34:17 crc kubenswrapper[4712]: I0130 19:34:17.867776 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65a712f0-f66c-4865-8d68-295cb0b8bd8e-utilities" (OuterVolumeSpecName: "utilities") pod "65a712f0-f66c-4865-8d68-295cb0b8bd8e" (UID: "65a712f0-f66c-4865-8d68-295cb0b8bd8e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 19:34:17 crc kubenswrapper[4712]: I0130 19:34:17.873266 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/65a712f0-f66c-4865-8d68-295cb0b8bd8e-kube-api-access-s9wcs" (OuterVolumeSpecName: "kube-api-access-s9wcs") pod "65a712f0-f66c-4865-8d68-295cb0b8bd8e" (UID: "65a712f0-f66c-4865-8d68-295cb0b8bd8e"). InnerVolumeSpecName "kube-api-access-s9wcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 19:34:17 crc kubenswrapper[4712]: I0130 19:34:17.914875 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/65a712f0-f66c-4865-8d68-295cb0b8bd8e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "65a712f0-f66c-4865-8d68-295cb0b8bd8e" (UID: "65a712f0-f66c-4865-8d68-295cb0b8bd8e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 19:34:17 crc kubenswrapper[4712]: I0130 19:34:17.969562 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/65a712f0-f66c-4865-8d68-295cb0b8bd8e-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 19:34:17 crc kubenswrapper[4712]: I0130 19:34:17.969595 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/65a712f0-f66c-4865-8d68-295cb0b8bd8e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 19:34:17 crc kubenswrapper[4712]: I0130 19:34:17.969605 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9wcs\" (UniqueName: \"kubernetes.io/projected/65a712f0-f66c-4865-8d68-295cb0b8bd8e-kube-api-access-s9wcs\") on node \"crc\" DevicePath \"\"" Jan 30 19:34:18 crc kubenswrapper[4712]: I0130 19:34:18.224576 4712 generic.go:334] "Generic (PLEG): container finished" podID="65a712f0-f66c-4865-8d68-295cb0b8bd8e" containerID="7ad806d8898339dc1166ba6c6a918f0d04c44ebb4857efd4f9716d2c8c019834" exitCode=0 Jan 30 19:34:18 crc kubenswrapper[4712]: I0130 19:34:18.224621 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zl9l7" event={"ID":"65a712f0-f66c-4865-8d68-295cb0b8bd8e","Type":"ContainerDied","Data":"7ad806d8898339dc1166ba6c6a918f0d04c44ebb4857efd4f9716d2c8c019834"} Jan 30 19:34:18 crc kubenswrapper[4712]: I0130 19:34:18.224640 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zl9l7" Jan 30 19:34:18 crc kubenswrapper[4712]: I0130 19:34:18.224659 4712 scope.go:117] "RemoveContainer" containerID="7ad806d8898339dc1166ba6c6a918f0d04c44ebb4857efd4f9716d2c8c019834" Jan 30 19:34:18 crc kubenswrapper[4712]: I0130 19:34:18.224647 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zl9l7" event={"ID":"65a712f0-f66c-4865-8d68-295cb0b8bd8e","Type":"ContainerDied","Data":"c29e18d2a7b1d2fcad2e1d4b262f3d70c4a0f8fece49b63ec6e439d6ffe0c71c"} Jan 30 19:34:18 crc kubenswrapper[4712]: I0130 19:34:18.261231 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zl9l7"] Jan 30 19:34:18 crc kubenswrapper[4712]: I0130 19:34:18.277428 4712 scope.go:117] "RemoveContainer" containerID="61e53f3364db05d3506580026214659d57a76f301dffff372cb593a3dc105891" Jan 30 19:34:18 crc kubenswrapper[4712]: I0130 19:34:18.277938 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zl9l7"] Jan 30 19:34:18 crc kubenswrapper[4712]: I0130 19:34:18.329641 4712 scope.go:117] "RemoveContainer" containerID="86119f2294cd2b29098760fb0a2a2f92decdbc885ed3936148ccd8e93d05c64e" Jan 30 19:34:18 crc kubenswrapper[4712]: I0130 19:34:18.367492 4712 scope.go:117] "RemoveContainer" containerID="7ad806d8898339dc1166ba6c6a918f0d04c44ebb4857efd4f9716d2c8c019834" Jan 30 19:34:18 crc kubenswrapper[4712]: E0130 19:34:18.367817 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ad806d8898339dc1166ba6c6a918f0d04c44ebb4857efd4f9716d2c8c019834\": container with ID starting with 7ad806d8898339dc1166ba6c6a918f0d04c44ebb4857efd4f9716d2c8c019834 not found: ID does not exist" containerID="7ad806d8898339dc1166ba6c6a918f0d04c44ebb4857efd4f9716d2c8c019834" Jan 30 19:34:18 crc kubenswrapper[4712]: I0130 19:34:18.367845 
4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ad806d8898339dc1166ba6c6a918f0d04c44ebb4857efd4f9716d2c8c019834"} err="failed to get container status \"7ad806d8898339dc1166ba6c6a918f0d04c44ebb4857efd4f9716d2c8c019834\": rpc error: code = NotFound desc = could not find container \"7ad806d8898339dc1166ba6c6a918f0d04c44ebb4857efd4f9716d2c8c019834\": container with ID starting with 7ad806d8898339dc1166ba6c6a918f0d04c44ebb4857efd4f9716d2c8c019834 not found: ID does not exist"
Jan 30 19:34:18 crc kubenswrapper[4712]: I0130 19:34:18.367863 4712 scope.go:117] "RemoveContainer" containerID="61e53f3364db05d3506580026214659d57a76f301dffff372cb593a3dc105891"
Jan 30 19:34:18 crc kubenswrapper[4712]: E0130 19:34:18.368094 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61e53f3364db05d3506580026214659d57a76f301dffff372cb593a3dc105891\": container with ID starting with 61e53f3364db05d3506580026214659d57a76f301dffff372cb593a3dc105891 not found: ID does not exist" containerID="61e53f3364db05d3506580026214659d57a76f301dffff372cb593a3dc105891"
Jan 30 19:34:18 crc kubenswrapper[4712]: I0130 19:34:18.368112 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61e53f3364db05d3506580026214659d57a76f301dffff372cb593a3dc105891"} err="failed to get container status \"61e53f3364db05d3506580026214659d57a76f301dffff372cb593a3dc105891\": rpc error: code = NotFound desc = could not find container \"61e53f3364db05d3506580026214659d57a76f301dffff372cb593a3dc105891\": container with ID starting with 61e53f3364db05d3506580026214659d57a76f301dffff372cb593a3dc105891 not found: ID does not exist"
Jan 30 19:34:18 crc kubenswrapper[4712]: I0130 19:34:18.368124 4712 scope.go:117] "RemoveContainer" containerID="86119f2294cd2b29098760fb0a2a2f92decdbc885ed3936148ccd8e93d05c64e"
Jan 30 19:34:18 crc kubenswrapper[4712]: E0130 19:34:18.368302 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86119f2294cd2b29098760fb0a2a2f92decdbc885ed3936148ccd8e93d05c64e\": container with ID starting with 86119f2294cd2b29098760fb0a2a2f92decdbc885ed3936148ccd8e93d05c64e not found: ID does not exist" containerID="86119f2294cd2b29098760fb0a2a2f92decdbc885ed3936148ccd8e93d05c64e"
Jan 30 19:34:18 crc kubenswrapper[4712]: I0130 19:34:18.368316 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86119f2294cd2b29098760fb0a2a2f92decdbc885ed3936148ccd8e93d05c64e"} err="failed to get container status \"86119f2294cd2b29098760fb0a2a2f92decdbc885ed3936148ccd8e93d05c64e\": rpc error: code = NotFound desc = could not find container \"86119f2294cd2b29098760fb0a2a2f92decdbc885ed3936148ccd8e93d05c64e\": container with ID starting with 86119f2294cd2b29098760fb0a2a2f92decdbc885ed3936148ccd8e93d05c64e not found: ID does not exist"
Jan 30 19:34:19 crc kubenswrapper[4712]: I0130 19:34:19.815351 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="65a712f0-f66c-4865-8d68-295cb0b8bd8e" path="/var/lib/kubelet/pods/65a712f0-f66c-4865-8d68-295cb0b8bd8e/volumes"
Jan 30 19:35:36 crc kubenswrapper[4712]: I0130 19:35:36.271187 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
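The RemoveContainer / NotFound / "DeleteContainer returned error" triplets above are a benign race: by the time the second delete attempt asked the runtime for status, the container was already gone. The usual pattern for such cleanup is to treat NotFound as success; a minimal sketch under a hypothetical runtime interface (not the kubelet's CRI client):

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for a gRPC NotFound from the container runtime.
var errNotFound = errors.New("container not found: ID does not exist")

// removeContainer is a hypothetical runtime call that fails with
// errNotFound once the container has already been deleted.
func removeContainer(id string, alive map[string]bool) error {
	if !alive[id] {
		return errNotFound
	}
	delete(alive, id)
	return nil
}

// cleanup treats NotFound as success, so the repeated deletes seen in
// the log are harmless no-ops rather than real failures.
func cleanup(id string, alive map[string]bool) error {
	if err := removeContainer(id, alive); err != nil && !errors.Is(err, errNotFound) {
		return err
	}
	return nil
}

func main() {
	alive := map[string]bool{"7ad806d8": true}
	fmt.Println(cleanup("7ad806d8", alive)) // <nil>: deleted
	fmt.Println(cleanup("7ad806d8", alive)) // <nil>: already gone, NotFound swallowed
}
```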
Jan 30 19:35:36 crc kubenswrapper[4712]: I0130 19:35:36.271187 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 19:35:36 crc kubenswrapper[4712]: I0130 19:35:36.273393 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 19:36:06 crc kubenswrapper[4712]: I0130 19:36:06.275113 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 19:36:06 crc kubenswrapper[4712]: I0130 19:36:06.275738 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 19:36:36 crc kubenswrapper[4712]: I0130 19:36:36.273942 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 19:36:36 crc kubenswrapper[4712]: I0130 19:36:36.274741 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 19:36:36 crc kubenswrapper[4712]: I0130 19:36:36.275133 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7"
Jan 30 19:36:36 crc kubenswrapper[4712]: I0130 19:36:36.276778 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b0b59dd12e8e2668cb2792082f7493bf71f4598d402f0dae8885c0b33a7f6e02"} pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 19:36:36 crc kubenswrapper[4712]: I0130 19:36:36.277033 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" containerID="cri-o://b0b59dd12e8e2668cb2792082f7493bf71f4598d402f0dae8885c0b33a7f6e02" gracePeriod=600
Jan 30 19:36:36 crc kubenswrapper[4712]: I0130 19:36:36.681932 4712 generic.go:334] "Generic (PLEG): container finished" podID="75ff6334-72a0-4748-bba6-0efb493c8033" containerID="b0b59dd12e8e2668cb2792082f7493bf71f4598d402f0dae8885c0b33a7f6e02" exitCode=0
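Note: the liveness probe that fails above is an HTTP GET against the daemon's health endpoint; the URL comes straight from the log, and "connection refused" means nothing is listening on the port at all. A minimal reproduction of the same check from the node; the 1 s timeout is an assumption (the kubelet's default probe timeout), not read from the pod spec:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Same request the kubelet's HTTP liveness probe issues above.
	// Endpoint taken from the log; 1s timeout assumed.
	client := &http.Client{Timeout: 1 * time.Second}
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		// With the daemon down this prints the same
		// "connection refused" the prober records.
		fmt.Println("probe failure:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe result:", resp.Status) // any 2xx/3xx counts as healthy
}
```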
event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerDied","Data":"b0b59dd12e8e2668cb2792082f7493bf71f4598d402f0dae8885c0b33a7f6e02"} Jan 30 19:36:36 crc kubenswrapper[4712]: I0130 19:36:36.682278 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerStarted","Data":"2c8b560d207a3359d63ff4a34832deb7e088d954e1791890f611bfcc0b7209d8"} Jan 30 19:36:36 crc kubenswrapper[4712]: I0130 19:36:36.682296 4712 scope.go:117] "RemoveContainer" containerID="6a889e7319743d48cc17eaf2cf4494f46c54ed34e204e063c57d043a9f8356be" Jan 30 19:38:36 crc kubenswrapper[4712]: I0130 19:38:36.271954 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 19:38:36 crc kubenswrapper[4712]: I0130 19:38:36.272589 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 19:39:06 crc kubenswrapper[4712]: I0130 19:39:06.271496 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 19:39:06 crc kubenswrapper[4712]: I0130 19:39:06.272406 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 19:39:36 crc kubenswrapper[4712]: I0130 19:39:36.270588 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 19:39:36 crc kubenswrapper[4712]: I0130 19:39:36.271247 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 19:39:36 crc kubenswrapper[4712]: I0130 19:39:36.271291 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" Jan 30 19:39:36 crc kubenswrapper[4712]: I0130 19:39:36.272068 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2c8b560d207a3359d63ff4a34832deb7e088d954e1791890f611bfcc0b7209d8"} pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 
Jan 30 19:39:36 crc kubenswrapper[4712]: I0130 19:39:36.272127 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" containerID="cri-o://2c8b560d207a3359d63ff4a34832deb7e088d954e1791890f611bfcc0b7209d8" gracePeriod=600
Jan 30 19:39:36 crc kubenswrapper[4712]: E0130 19:39:36.756688 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 19:39:36 crc kubenswrapper[4712]: I0130 19:39:36.781008 4712 generic.go:334] "Generic (PLEG): container finished" podID="75ff6334-72a0-4748-bba6-0efb493c8033" containerID="2c8b560d207a3359d63ff4a34832deb7e088d954e1791890f611bfcc0b7209d8" exitCode=0
Jan 30 19:39:36 crc kubenswrapper[4712]: I0130 19:39:36.781047 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerDied","Data":"2c8b560d207a3359d63ff4a34832deb7e088d954e1791890f611bfcc0b7209d8"}
Jan 30 19:39:36 crc kubenswrapper[4712]: I0130 19:39:36.781079 4712 scope.go:117] "RemoveContainer" containerID="b0b59dd12e8e2668cb2792082f7493bf71f4598d402f0dae8885c0b33a7f6e02"
Jan 30 19:39:36 crc kubenswrapper[4712]: I0130 19:39:36.781647 4712 scope.go:117] "RemoveContainer" containerID="2c8b560d207a3359d63ff4a34832deb7e088d954e1791890f611bfcc0b7209d8"
Jan 30 19:39:36 crc kubenswrapper[4712]: E0130 19:39:36.781943 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 19:39:51 crc kubenswrapper[4712]: I0130 19:39:51.800220 4712 scope.go:117] "RemoveContainer" containerID="2c8b560d207a3359d63ff4a34832deb7e088d954e1791890f611bfcc0b7209d8"
Jan 30 19:39:51 crc kubenswrapper[4712]: E0130 19:39:51.801248 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 19:40:06 crc kubenswrapper[4712]: I0130 19:40:06.800206 4712 scope.go:117] "RemoveContainer" containerID="2c8b560d207a3359d63ff4a34832deb7e088d954e1791890f611bfcc0b7209d8"
Jan 30 19:40:06 crc kubenswrapper[4712]: E0130 19:40:06.801151 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
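Note: the "back-off 5m0s" in these records is the ceiling of the kubelet's CrashLoopBackOff schedule, in which the restart delay doubles from a 10 s base up to a 5 m cap (the repeated "Error syncing pod, skipping" lines are just the sync loop revisiting the pod while the back-off timer runs). A sketch of that schedule, assuming the documented base and cap; the kubelet also applies jitter, omitted here:

```go
package main

import (
	"fmt"
	"time"
)

// crashLoopDelay returns the back-off before restart attempt n (0-based),
// assuming a 10s base doubling up to a 5m cap; jitter omitted.
func crashLoopDelay(n int) time.Duration {
	const (
		base     = 10 * time.Second
		maxDelay = 5 * time.Minute
	)
	d := base
	for i := 0; i < n; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for n := 0; n < 7; n++ {
		// Prints: 10s 20s 40s 1m20s 2m40s 5m0s 5m0s
		fmt.Printf("attempt %d: wait %v\n", n, crashLoopDelay(n))
	}
}
```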
pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:40:20 crc kubenswrapper[4712]: I0130 19:40:20.799717 4712 scope.go:117] "RemoveContainer" containerID="2c8b560d207a3359d63ff4a34832deb7e088d954e1791890f611bfcc0b7209d8" Jan 30 19:40:20 crc kubenswrapper[4712]: E0130 19:40:20.800650 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:40:34 crc kubenswrapper[4712]: I0130 19:40:34.800101 4712 scope.go:117] "RemoveContainer" containerID="2c8b560d207a3359d63ff4a34832deb7e088d954e1791890f611bfcc0b7209d8" Jan 30 19:40:34 crc kubenswrapper[4712]: E0130 19:40:34.800776 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:40:49 crc kubenswrapper[4712]: I0130 19:40:49.801699 4712 scope.go:117] "RemoveContainer" containerID="2c8b560d207a3359d63ff4a34832deb7e088d954e1791890f611bfcc0b7209d8" Jan 30 19:40:49 crc kubenswrapper[4712]: E0130 19:40:49.802367 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:40:50 crc kubenswrapper[4712]: I0130 19:40:50.516752 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-rpn6q"] Jan 30 19:40:50 crc kubenswrapper[4712]: E0130 19:40:50.517517 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65a712f0-f66c-4865-8d68-295cb0b8bd8e" containerName="extract-utilities" Jan 30 19:40:50 crc kubenswrapper[4712]: I0130 19:40:50.517546 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="65a712f0-f66c-4865-8d68-295cb0b8bd8e" containerName="extract-utilities" Jan 30 19:40:50 crc kubenswrapper[4712]: E0130 19:40:50.517568 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65a712f0-f66c-4865-8d68-295cb0b8bd8e" containerName="extract-content" Jan 30 19:40:50 crc kubenswrapper[4712]: I0130 19:40:50.517581 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="65a712f0-f66c-4865-8d68-295cb0b8bd8e" containerName="extract-content" Jan 30 19:40:50 crc kubenswrapper[4712]: E0130 19:40:50.517624 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="65a712f0-f66c-4865-8d68-295cb0b8bd8e" containerName="registry-server" Jan 30 19:40:50 crc kubenswrapper[4712]: I0130 19:40:50.517638 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="65a712f0-f66c-4865-8d68-295cb0b8bd8e" containerName="registry-server" Jan 30 19:40:50 crc kubenswrapper[4712]: I0130 
Jan 30 19:40:50 crc kubenswrapper[4712]: I0130 19:40:50.521313 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rpn6q"
Jan 30 19:40:50 crc kubenswrapper[4712]: I0130 19:40:50.531814 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rpn6q"]
Jan 30 19:40:50 crc kubenswrapper[4712]: I0130 19:40:50.545090 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vzh6\" (UniqueName: \"kubernetes.io/projected/574400d9-05a8-4614-b463-11aa4da0fe5e-kube-api-access-4vzh6\") pod \"community-operators-rpn6q\" (UID: \"574400d9-05a8-4614-b463-11aa4da0fe5e\") " pod="openshift-marketplace/community-operators-rpn6q"
Jan 30 19:40:50 crc kubenswrapper[4712]: I0130 19:40:50.545215 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/574400d9-05a8-4614-b463-11aa4da0fe5e-utilities\") pod \"community-operators-rpn6q\" (UID: \"574400d9-05a8-4614-b463-11aa4da0fe5e\") " pod="openshift-marketplace/community-operators-rpn6q"
Jan 30 19:40:50 crc kubenswrapper[4712]: I0130 19:40:50.545264 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/574400d9-05a8-4614-b463-11aa4da0fe5e-catalog-content\") pod \"community-operators-rpn6q\" (UID: \"574400d9-05a8-4614-b463-11aa4da0fe5e\") " pod="openshift-marketplace/community-operators-rpn6q"
Jan 30 19:40:50 crc kubenswrapper[4712]: I0130 19:40:50.647709 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vzh6\" (UniqueName: \"kubernetes.io/projected/574400d9-05a8-4614-b463-11aa4da0fe5e-kube-api-access-4vzh6\") pod \"community-operators-rpn6q\" (UID: \"574400d9-05a8-4614-b463-11aa4da0fe5e\") " pod="openshift-marketplace/community-operators-rpn6q"
Jan 30 19:40:50 crc kubenswrapper[4712]: I0130 19:40:50.647968 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/574400d9-05a8-4614-b463-11aa4da0fe5e-utilities\") pod \"community-operators-rpn6q\" (UID: \"574400d9-05a8-4614-b463-11aa4da0fe5e\") " pod="openshift-marketplace/community-operators-rpn6q"
Jan 30 19:40:50 crc kubenswrapper[4712]: I0130 19:40:50.648527 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/574400d9-05a8-4614-b463-11aa4da0fe5e-utilities\") pod \"community-operators-rpn6q\" (UID: \"574400d9-05a8-4614-b463-11aa4da0fe5e\") " pod="openshift-marketplace/community-operators-rpn6q"
Jan 30 19:40:50 crc kubenswrapper[4712]: I0130 19:40:50.648660 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/574400d9-05a8-4614-b463-11aa4da0fe5e-catalog-content\") pod \"community-operators-rpn6q\" (UID: \"574400d9-05a8-4614-b463-11aa4da0fe5e\") " pod="openshift-marketplace/community-operators-rpn6q"
Jan 30 19:40:50 crc kubenswrapper[4712]: I0130 19:40:50.649008 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/574400d9-05a8-4614-b463-11aa4da0fe5e-catalog-content\") pod \"community-operators-rpn6q\" (UID: \"574400d9-05a8-4614-b463-11aa4da0fe5e\") " pod="openshift-marketplace/community-operators-rpn6q"
Jan 30 19:40:50 crc kubenswrapper[4712]: I0130 19:40:50.669076 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vzh6\" (UniqueName: \"kubernetes.io/projected/574400d9-05a8-4614-b463-11aa4da0fe5e-kube-api-access-4vzh6\") pod \"community-operators-rpn6q\" (UID: \"574400d9-05a8-4614-b463-11aa4da0fe5e\") " pod="openshift-marketplace/community-operators-rpn6q"
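Note: the three volumes verified and mounted above map to two emptyDir volumes declared in the catalog pod's spec plus the kube-api-access-* projected service-account token, which the API server injects rather than the pod author. A sketch of the declared part using the standard k8s.io/api types (assumed available as a module dependency):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// The two emptyDir volumes the reconciler mounts above. The
	// kube-api-access-4vzh6 projected token volume is auto-injected,
	// so it does not appear in the spec fragment.
	vols := []corev1.Volume{
		{Name: "utilities", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
		{Name: "catalog-content", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
	}
	for _, v := range vols {
		fmt.Printf("volume %q -> kubernetes.io/empty-dir\n", v.Name)
	}
}
```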
\"kubernetes.io/empty-dir/574400d9-05a8-4614-b463-11aa4da0fe5e-catalog-content\") pod \"community-operators-rpn6q\" (UID: \"574400d9-05a8-4614-b463-11aa4da0fe5e\") " pod="openshift-marketplace/community-operators-rpn6q" Jan 30 19:40:50 crc kubenswrapper[4712]: I0130 19:40:50.669076 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vzh6\" (UniqueName: \"kubernetes.io/projected/574400d9-05a8-4614-b463-11aa4da0fe5e-kube-api-access-4vzh6\") pod \"community-operators-rpn6q\" (UID: \"574400d9-05a8-4614-b463-11aa4da0fe5e\") " pod="openshift-marketplace/community-operators-rpn6q" Jan 30 19:40:50 crc kubenswrapper[4712]: I0130 19:40:50.850648 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rpn6q" Jan 30 19:40:51 crc kubenswrapper[4712]: I0130 19:40:51.409658 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-rpn6q"] Jan 30 19:40:52 crc kubenswrapper[4712]: I0130 19:40:52.187552 4712 generic.go:334] "Generic (PLEG): container finished" podID="574400d9-05a8-4614-b463-11aa4da0fe5e" containerID="b6b1802c6269ed1429ed38eb27648709fe64e6cb65dc73c8efd12843024e9e88" exitCode=0 Jan 30 19:40:52 crc kubenswrapper[4712]: I0130 19:40:52.187630 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rpn6q" event={"ID":"574400d9-05a8-4614-b463-11aa4da0fe5e","Type":"ContainerDied","Data":"b6b1802c6269ed1429ed38eb27648709fe64e6cb65dc73c8efd12843024e9e88"} Jan 30 19:40:52 crc kubenswrapper[4712]: I0130 19:40:52.188684 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rpn6q" event={"ID":"574400d9-05a8-4614-b463-11aa4da0fe5e","Type":"ContainerStarted","Data":"986e1f0dff888d293de1a8df1f80b3c989b5591156d28880fe8ee7cfdbfaabf8"} Jan 30 19:40:52 crc kubenswrapper[4712]: I0130 19:40:52.189694 4712 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 19:40:52 crc kubenswrapper[4712]: I0130 19:40:52.906755 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nczkd"] Jan 30 19:40:52 crc kubenswrapper[4712]: I0130 19:40:52.910720 4712 util.go:30] "No sandbox for pod can be found. 
Jan 30 19:40:52 crc kubenswrapper[4712]: I0130 19:40:52.923776 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nczkd"]
Jan 30 19:40:53 crc kubenswrapper[4712]: I0130 19:40:53.095224 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvsvf\" (UniqueName: \"kubernetes.io/projected/62e84a0e-3c1e-402c-86e3-007373aac6e5-kube-api-access-bvsvf\") pod \"redhat-marketplace-nczkd\" (UID: \"62e84a0e-3c1e-402c-86e3-007373aac6e5\") " pod="openshift-marketplace/redhat-marketplace-nczkd"
Jan 30 19:40:53 crc kubenswrapper[4712]: I0130 19:40:53.095373 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62e84a0e-3c1e-402c-86e3-007373aac6e5-catalog-content\") pod \"redhat-marketplace-nczkd\" (UID: \"62e84a0e-3c1e-402c-86e3-007373aac6e5\") " pod="openshift-marketplace/redhat-marketplace-nczkd"
Jan 30 19:40:53 crc kubenswrapper[4712]: I0130 19:40:53.095416 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62e84a0e-3c1e-402c-86e3-007373aac6e5-utilities\") pod \"redhat-marketplace-nczkd\" (UID: \"62e84a0e-3c1e-402c-86e3-007373aac6e5\") " pod="openshift-marketplace/redhat-marketplace-nczkd"
Jan 30 19:40:53 crc kubenswrapper[4712]: I0130 19:40:53.197096 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62e84a0e-3c1e-402c-86e3-007373aac6e5-utilities\") pod \"redhat-marketplace-nczkd\" (UID: \"62e84a0e-3c1e-402c-86e3-007373aac6e5\") " pod="openshift-marketplace/redhat-marketplace-nczkd"
Jan 30 19:40:53 crc kubenswrapper[4712]: I0130 19:40:53.197495 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62e84a0e-3c1e-402c-86e3-007373aac6e5-utilities\") pod \"redhat-marketplace-nczkd\" (UID: \"62e84a0e-3c1e-402c-86e3-007373aac6e5\") " pod="openshift-marketplace/redhat-marketplace-nczkd"
Jan 30 19:40:53 crc kubenswrapper[4712]: I0130 19:40:53.197991 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvsvf\" (UniqueName: \"kubernetes.io/projected/62e84a0e-3c1e-402c-86e3-007373aac6e5-kube-api-access-bvsvf\") pod \"redhat-marketplace-nczkd\" (UID: \"62e84a0e-3c1e-402c-86e3-007373aac6e5\") " pod="openshift-marketplace/redhat-marketplace-nczkd"
Jan 30 19:40:53 crc kubenswrapper[4712]: I0130 19:40:53.198434 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62e84a0e-3c1e-402c-86e3-007373aac6e5-catalog-content\") pod \"redhat-marketplace-nczkd\" (UID: \"62e84a0e-3c1e-402c-86e3-007373aac6e5\") " pod="openshift-marketplace/redhat-marketplace-nczkd"
Jan 30 19:40:53 crc kubenswrapper[4712]: I0130 19:40:53.198503 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rpn6q" event={"ID":"574400d9-05a8-4614-b463-11aa4da0fe5e","Type":"ContainerStarted","Data":"2b7b769aa5d4c3985beebf0d31482851b037adc7ebcf57cd564f5f2fa6fe9a2a"}
Jan 30 19:40:53 crc kubenswrapper[4712]: I0130 19:40:53.198753 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62e84a0e-3c1e-402c-86e3-007373aac6e5-catalog-content\") pod \"redhat-marketplace-nczkd\" (UID: \"62e84a0e-3c1e-402c-86e3-007373aac6e5\") " pod="openshift-marketplace/redhat-marketplace-nczkd"
\"kubernetes.io/empty-dir/62e84a0e-3c1e-402c-86e3-007373aac6e5-catalog-content\") pod \"redhat-marketplace-nczkd\" (UID: \"62e84a0e-3c1e-402c-86e3-007373aac6e5\") " pod="openshift-marketplace/redhat-marketplace-nczkd" Jan 30 19:40:53 crc kubenswrapper[4712]: I0130 19:40:53.218339 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvsvf\" (UniqueName: \"kubernetes.io/projected/62e84a0e-3c1e-402c-86e3-007373aac6e5-kube-api-access-bvsvf\") pod \"redhat-marketplace-nczkd\" (UID: \"62e84a0e-3c1e-402c-86e3-007373aac6e5\") " pod="openshift-marketplace/redhat-marketplace-nczkd" Jan 30 19:40:53 crc kubenswrapper[4712]: I0130 19:40:53.238469 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nczkd" Jan 30 19:40:53 crc kubenswrapper[4712]: I0130 19:40:53.781335 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nczkd"] Jan 30 19:40:54 crc kubenswrapper[4712]: I0130 19:40:54.212073 4712 generic.go:334] "Generic (PLEG): container finished" podID="62e84a0e-3c1e-402c-86e3-007373aac6e5" containerID="886f3c0033dc9f544dcf979e55c8c18b37f008db8d604d114d7e4e2ddd5f76f8" exitCode=0 Jan 30 19:40:54 crc kubenswrapper[4712]: I0130 19:40:54.212362 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nczkd" event={"ID":"62e84a0e-3c1e-402c-86e3-007373aac6e5","Type":"ContainerDied","Data":"886f3c0033dc9f544dcf979e55c8c18b37f008db8d604d114d7e4e2ddd5f76f8"} Jan 30 19:40:54 crc kubenswrapper[4712]: I0130 19:40:54.212523 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nczkd" event={"ID":"62e84a0e-3c1e-402c-86e3-007373aac6e5","Type":"ContainerStarted","Data":"40060fff91ee4ff33c34aba47fe0e8f1de2c4148ceac336943530161757b1bd1"} Jan 30 19:40:55 crc kubenswrapper[4712]: I0130 19:40:55.223481 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nczkd" event={"ID":"62e84a0e-3c1e-402c-86e3-007373aac6e5","Type":"ContainerStarted","Data":"cdb7d96c330ef8dfd0521fb772bc1d9c133f912518a12d1512cb4a9cd93b9f6c"} Jan 30 19:40:55 crc kubenswrapper[4712]: I0130 19:40:55.225263 4712 generic.go:334] "Generic (PLEG): container finished" podID="574400d9-05a8-4614-b463-11aa4da0fe5e" containerID="2b7b769aa5d4c3985beebf0d31482851b037adc7ebcf57cd564f5f2fa6fe9a2a" exitCode=0 Jan 30 19:40:55 crc kubenswrapper[4712]: I0130 19:40:55.225292 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rpn6q" event={"ID":"574400d9-05a8-4614-b463-11aa4da0fe5e","Type":"ContainerDied","Data":"2b7b769aa5d4c3985beebf0d31482851b037adc7ebcf57cd564f5f2fa6fe9a2a"} Jan 30 19:40:56 crc kubenswrapper[4712]: I0130 19:40:56.238879 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rpn6q" event={"ID":"574400d9-05a8-4614-b463-11aa4da0fe5e","Type":"ContainerStarted","Data":"84b708edb4a1bcbbc25454f01a95fe8aae2da41639936ed79440279aab33ed7a"} Jan 30 19:40:56 crc kubenswrapper[4712]: I0130 19:40:56.270365 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-rpn6q" podStartSLOduration=2.706393042 podStartE2EDuration="6.270319967s" podCreationTimestamp="2026-01-30 19:40:50 +0000 UTC" firstStartedPulling="2026-01-30 19:40:52.189419177 +0000 UTC m=+9989.096428646" lastFinishedPulling="2026-01-30 19:40:55.753346112 +0000 UTC 
Jan 30 19:40:57 crc kubenswrapper[4712]: I0130 19:40:57.250435 4712 generic.go:334] "Generic (PLEG): container finished" podID="62e84a0e-3c1e-402c-86e3-007373aac6e5" containerID="cdb7d96c330ef8dfd0521fb772bc1d9c133f912518a12d1512cb4a9cd93b9f6c" exitCode=0
Jan 30 19:40:57 crc kubenswrapper[4712]: I0130 19:40:57.250506 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nczkd" event={"ID":"62e84a0e-3c1e-402c-86e3-007373aac6e5","Type":"ContainerDied","Data":"cdb7d96c330ef8dfd0521fb772bc1d9c133f912518a12d1512cb4a9cd93b9f6c"}
Jan 30 19:40:58 crc kubenswrapper[4712]: I0130 19:40:58.262890 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nczkd" event={"ID":"62e84a0e-3c1e-402c-86e3-007373aac6e5","Type":"ContainerStarted","Data":"527cd1bd981f9db194ba258d1b8e870d969ef655457a13c9713d010aa18c8821"}
Jan 30 19:40:58 crc kubenswrapper[4712]: I0130 19:40:58.296861 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nczkd" podStartSLOduration=2.840498986 podStartE2EDuration="6.296840769s" podCreationTimestamp="2026-01-30 19:40:52 +0000 UTC" firstStartedPulling="2026-01-30 19:40:54.214206948 +0000 UTC m=+9991.121216417" lastFinishedPulling="2026-01-30 19:40:57.670548721 +0000 UTC m=+9994.577558200" observedRunningTime="2026-01-30 19:40:58.289329109 +0000 UTC m=+9995.196338598" watchObservedRunningTime="2026-01-30 19:40:58.296840769 +0000 UTC m=+9995.203850248"
Jan 30 19:41:00 crc kubenswrapper[4712]: I0130 19:41:00.851548 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-rpn6q"
Jan 30 19:41:00 crc kubenswrapper[4712]: I0130 19:41:00.852003 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-rpn6q"
Jan 30 19:41:01 crc kubenswrapper[4712]: I0130 19:41:01.900942 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-rpn6q" podUID="574400d9-05a8-4614-b463-11aa4da0fe5e" containerName="registry-server" probeResult="failure" output=<
Jan 30 19:41:01 crc kubenswrapper[4712]: 	timeout: failed to connect service ":50051" within 1s
Jan 30 19:41:01 crc kubenswrapper[4712]: >
Jan 30 19:41:02 crc kubenswrapper[4712]: I0130 19:41:02.799916 4712 scope.go:117] "RemoveContainer" containerID="2c8b560d207a3359d63ff4a34832deb7e088d954e1791890f611bfcc0b7209d8"
Jan 30 19:41:02 crc kubenswrapper[4712]: E0130 19:41:02.800387 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 19:41:03 crc kubenswrapper[4712]: I0130 19:41:03.239591 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nczkd"
Jan 30 19:41:03 crc kubenswrapper[4712]: I0130 19:41:03.242771 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nczkd"
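Note: the startup probe output above indicates a connect-with-timeout check against the registry-server's gRPC port 50051, which simply is not accepting connections yet while the catalog loads. A minimal equivalent of that reachability test; the address is illustrative (the kubelet probes the pod's own IP, and the real probe is an exec in the container):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Equivalent of the startup probe's check: can we reach the
	// registry-server's gRPC port within 1s?
	conn, err := net.DialTimeout("tcp", "127.0.0.1:50051", 1*time.Second)
	if err != nil {
		// Mirrors the probe's failure output in the log.
		fmt.Println(`timeout: failed to connect service ":50051" within 1s`)
		return
	}
	conn.Close()
	fmt.Println("registry-server is accepting connections")
}
```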
pod="openshift-marketplace/redhat-marketplace-nczkd" Jan 30 19:41:04 crc kubenswrapper[4712]: I0130 19:41:04.312145 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-nczkd" podUID="62e84a0e-3c1e-402c-86e3-007373aac6e5" containerName="registry-server" probeResult="failure" output=< Jan 30 19:41:04 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 19:41:04 crc kubenswrapper[4712]: > Jan 30 19:41:10 crc kubenswrapper[4712]: I0130 19:41:10.925258 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-rpn6q" Jan 30 19:41:10 crc kubenswrapper[4712]: I0130 19:41:10.989846 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-rpn6q" Jan 30 19:41:11 crc kubenswrapper[4712]: I0130 19:41:11.172514 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rpn6q"] Jan 30 19:41:12 crc kubenswrapper[4712]: I0130 19:41:12.395531 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-rpn6q" podUID="574400d9-05a8-4614-b463-11aa4da0fe5e" containerName="registry-server" containerID="cri-o://84b708edb4a1bcbbc25454f01a95fe8aae2da41639936ed79440279aab33ed7a" gracePeriod=2 Jan 30 19:41:13 crc kubenswrapper[4712]: I0130 19:41:13.029878 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-rpn6q" Jan 30 19:41:13 crc kubenswrapper[4712]: I0130 19:41:13.116256 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/574400d9-05a8-4614-b463-11aa4da0fe5e-catalog-content\") pod \"574400d9-05a8-4614-b463-11aa4da0fe5e\" (UID: \"574400d9-05a8-4614-b463-11aa4da0fe5e\") " Jan 30 19:41:13 crc kubenswrapper[4712]: I0130 19:41:13.116548 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/574400d9-05a8-4614-b463-11aa4da0fe5e-utilities\") pod \"574400d9-05a8-4614-b463-11aa4da0fe5e\" (UID: \"574400d9-05a8-4614-b463-11aa4da0fe5e\") " Jan 30 19:41:13 crc kubenswrapper[4712]: I0130 19:41:13.116694 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vzh6\" (UniqueName: \"kubernetes.io/projected/574400d9-05a8-4614-b463-11aa4da0fe5e-kube-api-access-4vzh6\") pod \"574400d9-05a8-4614-b463-11aa4da0fe5e\" (UID: \"574400d9-05a8-4614-b463-11aa4da0fe5e\") " Jan 30 19:41:13 crc kubenswrapper[4712]: I0130 19:41:13.117063 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/574400d9-05a8-4614-b463-11aa4da0fe5e-utilities" (OuterVolumeSpecName: "utilities") pod "574400d9-05a8-4614-b463-11aa4da0fe5e" (UID: "574400d9-05a8-4614-b463-11aa4da0fe5e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 19:41:13 crc kubenswrapper[4712]: I0130 19:41:13.117477 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/574400d9-05a8-4614-b463-11aa4da0fe5e-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 19:41:13 crc kubenswrapper[4712]: I0130 19:41:13.122464 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/574400d9-05a8-4614-b463-11aa4da0fe5e-kube-api-access-4vzh6" (OuterVolumeSpecName: "kube-api-access-4vzh6") pod "574400d9-05a8-4614-b463-11aa4da0fe5e" (UID: "574400d9-05a8-4614-b463-11aa4da0fe5e"). InnerVolumeSpecName "kube-api-access-4vzh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 19:41:13 crc kubenswrapper[4712]: I0130 19:41:13.168154 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/574400d9-05a8-4614-b463-11aa4da0fe5e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "574400d9-05a8-4614-b463-11aa4da0fe5e" (UID: "574400d9-05a8-4614-b463-11aa4da0fe5e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 19:41:13 crc kubenswrapper[4712]: I0130 19:41:13.219361 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vzh6\" (UniqueName: \"kubernetes.io/projected/574400d9-05a8-4614-b463-11aa4da0fe5e-kube-api-access-4vzh6\") on node \"crc\" DevicePath \"\"" Jan 30 19:41:13 crc kubenswrapper[4712]: I0130 19:41:13.219391 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/574400d9-05a8-4614-b463-11aa4da0fe5e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 19:41:13 crc kubenswrapper[4712]: I0130 19:41:13.286653 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nczkd" Jan 30 19:41:13 crc kubenswrapper[4712]: I0130 19:41:13.331843 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nczkd" Jan 30 19:41:13 crc kubenswrapper[4712]: I0130 19:41:13.406846 4712 generic.go:334] "Generic (PLEG): container finished" podID="574400d9-05a8-4614-b463-11aa4da0fe5e" containerID="84b708edb4a1bcbbc25454f01a95fe8aae2da41639936ed79440279aab33ed7a" exitCode=0 Jan 30 19:41:13 crc kubenswrapper[4712]: I0130 19:41:13.406916 4712 util.go:48] "No ready sandbox for pod can be found. 
Jan 30 19:41:13 crc kubenswrapper[4712]: I0130 19:41:13.406938 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rpn6q" event={"ID":"574400d9-05a8-4614-b463-11aa4da0fe5e","Type":"ContainerDied","Data":"84b708edb4a1bcbbc25454f01a95fe8aae2da41639936ed79440279aab33ed7a"}
Jan 30 19:41:13 crc kubenswrapper[4712]: I0130 19:41:13.406987 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-rpn6q" event={"ID":"574400d9-05a8-4614-b463-11aa4da0fe5e","Type":"ContainerDied","Data":"986e1f0dff888d293de1a8df1f80b3c989b5591156d28880fe8ee7cfdbfaabf8"}
Jan 30 19:41:13 crc kubenswrapper[4712]: I0130 19:41:13.407016 4712 scope.go:117] "RemoveContainer" containerID="84b708edb4a1bcbbc25454f01a95fe8aae2da41639936ed79440279aab33ed7a"
Jan 30 19:41:13 crc kubenswrapper[4712]: I0130 19:41:13.434775 4712 scope.go:117] "RemoveContainer" containerID="2b7b769aa5d4c3985beebf0d31482851b037adc7ebcf57cd564f5f2fa6fe9a2a"
Jan 30 19:41:13 crc kubenswrapper[4712]: I0130 19:41:13.449490 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-rpn6q"]
Jan 30 19:41:13 crc kubenswrapper[4712]: I0130 19:41:13.460833 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-rpn6q"]
Jan 30 19:41:13 crc kubenswrapper[4712]: I0130 19:41:13.474546 4712 scope.go:117] "RemoveContainer" containerID="b6b1802c6269ed1429ed38eb27648709fe64e6cb65dc73c8efd12843024e9e88"
Jan 30 19:41:13 crc kubenswrapper[4712]: I0130 19:41:13.510002 4712 scope.go:117] "RemoveContainer" containerID="84b708edb4a1bcbbc25454f01a95fe8aae2da41639936ed79440279aab33ed7a"
Jan 30 19:41:13 crc kubenswrapper[4712]: E0130 19:41:13.510480 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84b708edb4a1bcbbc25454f01a95fe8aae2da41639936ed79440279aab33ed7a\": container with ID starting with 84b708edb4a1bcbbc25454f01a95fe8aae2da41639936ed79440279aab33ed7a not found: ID does not exist" containerID="84b708edb4a1bcbbc25454f01a95fe8aae2da41639936ed79440279aab33ed7a"
Jan 30 19:41:13 crc kubenswrapper[4712]: I0130 19:41:13.510550 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84b708edb4a1bcbbc25454f01a95fe8aae2da41639936ed79440279aab33ed7a"} err="failed to get container status \"84b708edb4a1bcbbc25454f01a95fe8aae2da41639936ed79440279aab33ed7a\": rpc error: code = NotFound desc = could not find container \"84b708edb4a1bcbbc25454f01a95fe8aae2da41639936ed79440279aab33ed7a\": container with ID starting with 84b708edb4a1bcbbc25454f01a95fe8aae2da41639936ed79440279aab33ed7a not found: ID does not exist"
Jan 30 19:41:13 crc kubenswrapper[4712]: I0130 19:41:13.510589 4712 scope.go:117] "RemoveContainer" containerID="2b7b769aa5d4c3985beebf0d31482851b037adc7ebcf57cd564f5f2fa6fe9a2a"
Jan 30 19:41:13 crc kubenswrapper[4712]: E0130 19:41:13.511029 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b7b769aa5d4c3985beebf0d31482851b037adc7ebcf57cd564f5f2fa6fe9a2a\": container with ID starting with 2b7b769aa5d4c3985beebf0d31482851b037adc7ebcf57cd564f5f2fa6fe9a2a not found: ID does not exist" containerID="2b7b769aa5d4c3985beebf0d31482851b037adc7ebcf57cd564f5f2fa6fe9a2a"
Jan 30 19:41:13 crc kubenswrapper[4712]: I0130 19:41:13.511066 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b7b769aa5d4c3985beebf0d31482851b037adc7ebcf57cd564f5f2fa6fe9a2a"} err="failed to get container status \"2b7b769aa5d4c3985beebf0d31482851b037adc7ebcf57cd564f5f2fa6fe9a2a\": rpc error: code = NotFound desc = could not find container \"2b7b769aa5d4c3985beebf0d31482851b037adc7ebcf57cd564f5f2fa6fe9a2a\": container with ID starting with 2b7b769aa5d4c3985beebf0d31482851b037adc7ebcf57cd564f5f2fa6fe9a2a not found: ID does not exist"
Jan 30 19:41:13 crc kubenswrapper[4712]: I0130 19:41:13.511105 4712 scope.go:117] "RemoveContainer" containerID="b6b1802c6269ed1429ed38eb27648709fe64e6cb65dc73c8efd12843024e9e88"
Jan 30 19:41:13 crc kubenswrapper[4712]: E0130 19:41:13.511626 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6b1802c6269ed1429ed38eb27648709fe64e6cb65dc73c8efd12843024e9e88\": container with ID starting with b6b1802c6269ed1429ed38eb27648709fe64e6cb65dc73c8efd12843024e9e88 not found: ID does not exist" containerID="b6b1802c6269ed1429ed38eb27648709fe64e6cb65dc73c8efd12843024e9e88"
Jan 30 19:41:13 crc kubenswrapper[4712]: I0130 19:41:13.511682 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6b1802c6269ed1429ed38eb27648709fe64e6cb65dc73c8efd12843024e9e88"} err="failed to get container status \"b6b1802c6269ed1429ed38eb27648709fe64e6cb65dc73c8efd12843024e9e88\": rpc error: code = NotFound desc = could not find container \"b6b1802c6269ed1429ed38eb27648709fe64e6cb65dc73c8efd12843024e9e88\": container with ID starting with b6b1802c6269ed1429ed38eb27648709fe64e6cb65dc73c8efd12843024e9e88 not found: ID does not exist"
Jan 30 19:41:13 crc kubenswrapper[4712]: I0130 19:41:13.805818 4712 scope.go:117] "RemoveContainer" containerID="2c8b560d207a3359d63ff4a34832deb7e088d954e1791890f611bfcc0b7209d8"
Jan 30 19:41:13 crc kubenswrapper[4712]: E0130 19:41:13.806062 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 19:41:13 crc kubenswrapper[4712]: I0130 19:41:13.812984 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="574400d9-05a8-4614-b463-11aa4da0fe5e" path="/var/lib/kubelet/pods/574400d9-05a8-4614-b463-11aa4da0fe5e/volumes"
Jan 30 19:41:15 crc kubenswrapper[4712]: I0130 19:41:15.572056 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nczkd"]
Jan 30 19:41:15 crc kubenswrapper[4712]: I0130 19:41:15.572849 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nczkd" podUID="62e84a0e-3c1e-402c-86e3-007373aac6e5" containerName="registry-server" containerID="cri-o://527cd1bd981f9db194ba258d1b8e870d969ef655457a13c9713d010aa18c8821" gracePeriod=2
Jan 30 19:41:16 crc kubenswrapper[4712]: I0130 19:41:16.442981 4712 generic.go:334] "Generic (PLEG): container finished" podID="62e84a0e-3c1e-402c-86e3-007373aac6e5" containerID="527cd1bd981f9db194ba258d1b8e870d969ef655457a13c9713d010aa18c8821" exitCode=0
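Note: two different grace periods show up in the "Killing container with a grace period" records: the liveness-probe restart of machine-config-daemon used gracePeriod=600, while the catalog pod deletions here use gracePeriod=2. Both appear to come from each pod's terminationGracePeriodSeconds. A sketch of where that knob lives in the spec, using the standard k8s.io/api types (values taken from the log):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// gracePeriod in the kill records reflects the pod's
	// terminationGracePeriodSeconds: 2 for the short-lived
	// marketplace catalog pods, 600 for the machine-config daemon.
	short := int64(2)
	spec := corev1.PodSpec{TerminationGracePeriodSeconds: &short}
	fmt.Printf("gracePeriod=%d\n", *spec.TerminationGracePeriodSeconds)
}
```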
Jan 30 19:41:16 crc kubenswrapper[4712]: I0130 19:41:16.443085 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nczkd" event={"ID":"62e84a0e-3c1e-402c-86e3-007373aac6e5","Type":"ContainerDied","Data":"527cd1bd981f9db194ba258d1b8e870d969ef655457a13c9713d010aa18c8821"}
Jan 30 19:41:16 crc kubenswrapper[4712]: I0130 19:41:16.443352 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nczkd" event={"ID":"62e84a0e-3c1e-402c-86e3-007373aac6e5","Type":"ContainerDied","Data":"40060fff91ee4ff33c34aba47fe0e8f1de2c4148ceac336943530161757b1bd1"}
Jan 30 19:41:16 crc kubenswrapper[4712]: I0130 19:41:16.443368 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40060fff91ee4ff33c34aba47fe0e8f1de2c4148ceac336943530161757b1bd1"
Jan 30 19:41:16 crc kubenswrapper[4712]: I0130 19:41:16.704847 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nczkd"
Jan 30 19:41:16 crc kubenswrapper[4712]: I0130 19:41:16.832032 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62e84a0e-3c1e-402c-86e3-007373aac6e5-catalog-content\") pod \"62e84a0e-3c1e-402c-86e3-007373aac6e5\" (UID: \"62e84a0e-3c1e-402c-86e3-007373aac6e5\") "
Jan 30 19:41:16 crc kubenswrapper[4712]: I0130 19:41:16.832243 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62e84a0e-3c1e-402c-86e3-007373aac6e5-utilities\") pod \"62e84a0e-3c1e-402c-86e3-007373aac6e5\" (UID: \"62e84a0e-3c1e-402c-86e3-007373aac6e5\") "
Jan 30 19:41:16 crc kubenswrapper[4712]: I0130 19:41:16.832447 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvsvf\" (UniqueName: \"kubernetes.io/projected/62e84a0e-3c1e-402c-86e3-007373aac6e5-kube-api-access-bvsvf\") pod \"62e84a0e-3c1e-402c-86e3-007373aac6e5\" (UID: \"62e84a0e-3c1e-402c-86e3-007373aac6e5\") "
Jan 30 19:41:16 crc kubenswrapper[4712]: I0130 19:41:16.832811 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62e84a0e-3c1e-402c-86e3-007373aac6e5-utilities" (OuterVolumeSpecName: "utilities") pod "62e84a0e-3c1e-402c-86e3-007373aac6e5" (UID: "62e84a0e-3c1e-402c-86e3-007373aac6e5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 19:41:16 crc kubenswrapper[4712]: I0130 19:41:16.833630 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62e84a0e-3c1e-402c-86e3-007373aac6e5-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 19:41:16 crc kubenswrapper[4712]: I0130 19:41:16.844153 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62e84a0e-3c1e-402c-86e3-007373aac6e5-kube-api-access-bvsvf" (OuterVolumeSpecName: "kube-api-access-bvsvf") pod "62e84a0e-3c1e-402c-86e3-007373aac6e5" (UID: "62e84a0e-3c1e-402c-86e3-007373aac6e5"). InnerVolumeSpecName "kube-api-access-bvsvf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 19:41:16 crc kubenswrapper[4712]: I0130 19:41:16.860294 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62e84a0e-3c1e-402c-86e3-007373aac6e5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "62e84a0e-3c1e-402c-86e3-007373aac6e5" (UID: "62e84a0e-3c1e-402c-86e3-007373aac6e5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 19:41:16 crc kubenswrapper[4712]: I0130 19:41:16.935577 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvsvf\" (UniqueName: \"kubernetes.io/projected/62e84a0e-3c1e-402c-86e3-007373aac6e5-kube-api-access-bvsvf\") on node \"crc\" DevicePath \"\"" Jan 30 19:41:16 crc kubenswrapper[4712]: I0130 19:41:16.935974 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62e84a0e-3c1e-402c-86e3-007373aac6e5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 19:41:17 crc kubenswrapper[4712]: I0130 19:41:17.452978 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nczkd" Jan 30 19:41:17 crc kubenswrapper[4712]: I0130 19:41:17.486348 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nczkd"] Jan 30 19:41:17 crc kubenswrapper[4712]: I0130 19:41:17.498624 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nczkd"] Jan 30 19:41:17 crc kubenswrapper[4712]: I0130 19:41:17.811977 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62e84a0e-3c1e-402c-86e3-007373aac6e5" path="/var/lib/kubelet/pods/62e84a0e-3c1e-402c-86e3-007373aac6e5/volumes" Jan 30 19:41:26 crc kubenswrapper[4712]: I0130 19:41:26.800503 4712 scope.go:117] "RemoveContainer" containerID="2c8b560d207a3359d63ff4a34832deb7e088d954e1791890f611bfcc0b7209d8" Jan 30 19:41:26 crc kubenswrapper[4712]: E0130 19:41:26.801278 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:41:38 crc kubenswrapper[4712]: I0130 19:41:38.800134 4712 scope.go:117] "RemoveContainer" containerID="2c8b560d207a3359d63ff4a34832deb7e088d954e1791890f611bfcc0b7209d8" Jan 30 19:41:38 crc kubenswrapper[4712]: E0130 19:41:38.800917 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:41:52 crc kubenswrapper[4712]: I0130 19:41:52.799959 4712 scope.go:117] "RemoveContainer" containerID="2c8b560d207a3359d63ff4a34832deb7e088d954e1791890f611bfcc0b7209d8" Jan 30 19:41:52 crc kubenswrapper[4712]: E0130 19:41:52.800578 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:42:04 crc kubenswrapper[4712]: I0130 19:42:04.802359 4712 scope.go:117] "RemoveContainer" 
containerID="2c8b560d207a3359d63ff4a34832deb7e088d954e1791890f611bfcc0b7209d8" Jan 30 19:42:04 crc kubenswrapper[4712]: E0130 19:42:04.803629 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:42:19 crc kubenswrapper[4712]: I0130 19:42:19.804760 4712 scope.go:117] "RemoveContainer" containerID="2c8b560d207a3359d63ff4a34832deb7e088d954e1791890f611bfcc0b7209d8" Jan 30 19:42:19 crc kubenswrapper[4712]: E0130 19:42:19.805496 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:42:24 crc kubenswrapper[4712]: I0130 19:42:24.196377 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-c2jvh"] Jan 30 19:42:24 crc kubenswrapper[4712]: E0130 19:42:24.197411 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62e84a0e-3c1e-402c-86e3-007373aac6e5" containerName="extract-content" Jan 30 19:42:24 crc kubenswrapper[4712]: I0130 19:42:24.197424 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="62e84a0e-3c1e-402c-86e3-007373aac6e5" containerName="extract-content" Jan 30 19:42:24 crc kubenswrapper[4712]: E0130 19:42:24.197433 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62e84a0e-3c1e-402c-86e3-007373aac6e5" containerName="registry-server" Jan 30 19:42:24 crc kubenswrapper[4712]: I0130 19:42:24.197439 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="62e84a0e-3c1e-402c-86e3-007373aac6e5" containerName="registry-server" Jan 30 19:42:24 crc kubenswrapper[4712]: E0130 19:42:24.197453 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="574400d9-05a8-4614-b463-11aa4da0fe5e" containerName="registry-server" Jan 30 19:42:24 crc kubenswrapper[4712]: I0130 19:42:24.197460 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="574400d9-05a8-4614-b463-11aa4da0fe5e" containerName="registry-server" Jan 30 19:42:24 crc kubenswrapper[4712]: E0130 19:42:24.197470 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="574400d9-05a8-4614-b463-11aa4da0fe5e" containerName="extract-utilities" Jan 30 19:42:24 crc kubenswrapper[4712]: I0130 19:42:24.197476 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="574400d9-05a8-4614-b463-11aa4da0fe5e" containerName="extract-utilities" Jan 30 19:42:24 crc kubenswrapper[4712]: E0130 19:42:24.197512 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="574400d9-05a8-4614-b463-11aa4da0fe5e" containerName="extract-content" Jan 30 19:42:24 crc kubenswrapper[4712]: I0130 19:42:24.197517 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="574400d9-05a8-4614-b463-11aa4da0fe5e" containerName="extract-content" Jan 30 19:42:24 crc kubenswrapper[4712]: E0130 19:42:24.197528 4712 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="62e84a0e-3c1e-402c-86e3-007373aac6e5" containerName="extract-utilities" Jan 30 19:42:24 crc kubenswrapper[4712]: I0130 19:42:24.197533 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="62e84a0e-3c1e-402c-86e3-007373aac6e5" containerName="extract-utilities" Jan 30 19:42:24 crc kubenswrapper[4712]: I0130 19:42:24.197722 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="62e84a0e-3c1e-402c-86e3-007373aac6e5" containerName="registry-server" Jan 30 19:42:24 crc kubenswrapper[4712]: I0130 19:42:24.197747 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="574400d9-05a8-4614-b463-11aa4da0fe5e" containerName="registry-server" Jan 30 19:42:24 crc kubenswrapper[4712]: I0130 19:42:24.199128 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c2jvh" Jan 30 19:42:24 crc kubenswrapper[4712]: I0130 19:42:24.227707 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c2jvh"] Jan 30 19:42:24 crc kubenswrapper[4712]: I0130 19:42:24.268525 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f574d35e-8233-4fbf-8297-10772d412dd2-utilities\") pod \"redhat-operators-c2jvh\" (UID: \"f574d35e-8233-4fbf-8297-10772d412dd2\") " pod="openshift-marketplace/redhat-operators-c2jvh" Jan 30 19:42:24 crc kubenswrapper[4712]: I0130 19:42:24.268689 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db56w\" (UniqueName: \"kubernetes.io/projected/f574d35e-8233-4fbf-8297-10772d412dd2-kube-api-access-db56w\") pod \"redhat-operators-c2jvh\" (UID: \"f574d35e-8233-4fbf-8297-10772d412dd2\") " pod="openshift-marketplace/redhat-operators-c2jvh" Jan 30 19:42:24 crc kubenswrapper[4712]: I0130 19:42:24.268718 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f574d35e-8233-4fbf-8297-10772d412dd2-catalog-content\") pod \"redhat-operators-c2jvh\" (UID: \"f574d35e-8233-4fbf-8297-10772d412dd2\") " pod="openshift-marketplace/redhat-operators-c2jvh" Jan 30 19:42:24 crc kubenswrapper[4712]: I0130 19:42:24.370851 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f574d35e-8233-4fbf-8297-10772d412dd2-utilities\") pod \"redhat-operators-c2jvh\" (UID: \"f574d35e-8233-4fbf-8297-10772d412dd2\") " pod="openshift-marketplace/redhat-operators-c2jvh" Jan 30 19:42:24 crc kubenswrapper[4712]: I0130 19:42:24.371189 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-db56w\" (UniqueName: \"kubernetes.io/projected/f574d35e-8233-4fbf-8297-10772d412dd2-kube-api-access-db56w\") pod \"redhat-operators-c2jvh\" (UID: \"f574d35e-8233-4fbf-8297-10772d412dd2\") " pod="openshift-marketplace/redhat-operators-c2jvh" Jan 30 19:42:24 crc kubenswrapper[4712]: I0130 19:42:24.371303 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f574d35e-8233-4fbf-8297-10772d412dd2-catalog-content\") pod \"redhat-operators-c2jvh\" (UID: \"f574d35e-8233-4fbf-8297-10772d412dd2\") " pod="openshift-marketplace/redhat-operators-c2jvh" Jan 30 19:42:24 crc kubenswrapper[4712]: I0130 19:42:24.371538 4712 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f574d35e-8233-4fbf-8297-10772d412dd2-utilities\") pod \"redhat-operators-c2jvh\" (UID: \"f574d35e-8233-4fbf-8297-10772d412dd2\") " pod="openshift-marketplace/redhat-operators-c2jvh" Jan 30 19:42:24 crc kubenswrapper[4712]: I0130 19:42:24.371759 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f574d35e-8233-4fbf-8297-10772d412dd2-catalog-content\") pod \"redhat-operators-c2jvh\" (UID: \"f574d35e-8233-4fbf-8297-10772d412dd2\") " pod="openshift-marketplace/redhat-operators-c2jvh" Jan 30 19:42:24 crc kubenswrapper[4712]: I0130 19:42:24.395655 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-db56w\" (UniqueName: \"kubernetes.io/projected/f574d35e-8233-4fbf-8297-10772d412dd2-kube-api-access-db56w\") pod \"redhat-operators-c2jvh\" (UID: \"f574d35e-8233-4fbf-8297-10772d412dd2\") " pod="openshift-marketplace/redhat-operators-c2jvh" Jan 30 19:42:24 crc kubenswrapper[4712]: I0130 19:42:24.522917 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c2jvh" Jan 30 19:42:25 crc kubenswrapper[4712]: I0130 19:42:25.010712 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c2jvh"] Jan 30 19:42:25 crc kubenswrapper[4712]: W0130 19:42:25.025461 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf574d35e_8233_4fbf_8297_10772d412dd2.slice/crio-d6ecfbd452d0ff1839347be533a254f9eba03097b76c8fa4ba75318fae291aeb WatchSource:0}: Error finding container d6ecfbd452d0ff1839347be533a254f9eba03097b76c8fa4ba75318fae291aeb: Status 404 returned error can't find the container with id d6ecfbd452d0ff1839347be533a254f9eba03097b76c8fa4ba75318fae291aeb Jan 30 19:42:25 crc kubenswrapper[4712]: I0130 19:42:25.153100 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c2jvh" event={"ID":"f574d35e-8233-4fbf-8297-10772d412dd2","Type":"ContainerStarted","Data":"d6ecfbd452d0ff1839347be533a254f9eba03097b76c8fa4ba75318fae291aeb"} Jan 30 19:42:26 crc kubenswrapper[4712]: I0130 19:42:26.166877 4712 generic.go:334] "Generic (PLEG): container finished" podID="f574d35e-8233-4fbf-8297-10772d412dd2" containerID="7c62f2ab00b4643b1c23e8d76123933c666d91ce590907c15028d537509f75dc" exitCode=0 Jan 30 19:42:26 crc kubenswrapper[4712]: I0130 19:42:26.166972 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c2jvh" event={"ID":"f574d35e-8233-4fbf-8297-10772d412dd2","Type":"ContainerDied","Data":"7c62f2ab00b4643b1c23e8d76123933c666d91ce590907c15028d537509f75dc"} Jan 30 19:42:27 crc kubenswrapper[4712]: I0130 19:42:27.177857 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c2jvh" event={"ID":"f574d35e-8233-4fbf-8297-10772d412dd2","Type":"ContainerStarted","Data":"23505daa49fb29cf0180753507384e509211b56e5323b8d6a6f3ea224b1db4b4"} Jan 30 19:42:33 crc kubenswrapper[4712]: I0130 19:42:33.255393 4712 generic.go:334] "Generic (PLEG): container finished" podID="f574d35e-8233-4fbf-8297-10772d412dd2" containerID="23505daa49fb29cf0180753507384e509211b56e5323b8d6a6f3ea224b1db4b4" exitCode=0 Jan 30 19:42:33 crc kubenswrapper[4712]: I0130 19:42:33.255519 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-c2jvh" event={"ID":"f574d35e-8233-4fbf-8297-10772d412dd2","Type":"ContainerDied","Data":"23505daa49fb29cf0180753507384e509211b56e5323b8d6a6f3ea224b1db4b4"} Jan 30 19:42:34 crc kubenswrapper[4712]: I0130 19:42:34.270414 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c2jvh" event={"ID":"f574d35e-8233-4fbf-8297-10772d412dd2","Type":"ContainerStarted","Data":"012d38db07770bc6fd889679739665f618b6a1afc455a9e000ba20299f0a9a84"} Jan 30 19:42:34 crc kubenswrapper[4712]: I0130 19:42:34.302167 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-c2jvh" podStartSLOduration=2.782788197 podStartE2EDuration="10.302145932s" podCreationTimestamp="2026-01-30 19:42:24 +0000 UTC" firstStartedPulling="2026-01-30 19:42:26.17120571 +0000 UTC m=+10083.078215189" lastFinishedPulling="2026-01-30 19:42:33.690563445 +0000 UTC m=+10090.597572924" observedRunningTime="2026-01-30 19:42:34.294215822 +0000 UTC m=+10091.201225311" watchObservedRunningTime="2026-01-30 19:42:34.302145932 +0000 UTC m=+10091.209155401" Jan 30 19:42:34 crc kubenswrapper[4712]: I0130 19:42:34.523857 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-c2jvh" Jan 30 19:42:34 crc kubenswrapper[4712]: I0130 19:42:34.523917 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-c2jvh" Jan 30 19:42:34 crc kubenswrapper[4712]: I0130 19:42:34.800599 4712 scope.go:117] "RemoveContainer" containerID="2c8b560d207a3359d63ff4a34832deb7e088d954e1791890f611bfcc0b7209d8" Jan 30 19:42:34 crc kubenswrapper[4712]: E0130 19:42:34.801144 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:42:35 crc kubenswrapper[4712]: I0130 19:42:35.574427 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-c2jvh" podUID="f574d35e-8233-4fbf-8297-10772d412dd2" containerName="registry-server" probeResult="failure" output=< Jan 30 19:42:35 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 19:42:35 crc kubenswrapper[4712]: > Jan 30 19:42:45 crc kubenswrapper[4712]: I0130 19:42:45.600662 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-c2jvh" podUID="f574d35e-8233-4fbf-8297-10772d412dd2" containerName="registry-server" probeResult="failure" output=< Jan 30 19:42:45 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 19:42:45 crc kubenswrapper[4712]: > Jan 30 19:42:48 crc kubenswrapper[4712]: I0130 19:42:48.800046 4712 scope.go:117] "RemoveContainer" containerID="2c8b560d207a3359d63ff4a34832deb7e088d954e1791890f611bfcc0b7209d8" Jan 30 19:42:48 crc kubenswrapper[4712]: E0130 19:42:48.800841 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Jan 30 19:42:55 crc kubenswrapper[4712]: I0130 19:42:55.601363 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-c2jvh" podUID="f574d35e-8233-4fbf-8297-10772d412dd2" containerName="registry-server" probeResult="failure" output=<
Jan 30 19:42:55 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 19:42:55 crc kubenswrapper[4712]: >
Jan 30 19:43:00 crc kubenswrapper[4712]: I0130 19:43:00.800370 4712 scope.go:117] "RemoveContainer" containerID="2c8b560d207a3359d63ff4a34832deb7e088d954e1791890f611bfcc0b7209d8"
Jan 30 19:43:00 crc kubenswrapper[4712]: E0130 19:43:00.802643 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 19:43:04 crc kubenswrapper[4712]: I0130 19:43:04.716357 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-c2jvh"
Jan 30 19:43:04 crc kubenswrapper[4712]: I0130 19:43:04.781420 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-c2jvh"
Jan 30 19:43:04 crc kubenswrapper[4712]: I0130 19:43:04.955091 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c2jvh"]
Jan 30 19:43:06 crc kubenswrapper[4712]: I0130 19:43:06.605447 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-c2jvh" podUID="f574d35e-8233-4fbf-8297-10772d412dd2" containerName="registry-server" containerID="cri-o://012d38db07770bc6fd889679739665f618b6a1afc455a9e000ba20299f0a9a84" gracePeriod=2
Jan 30 19:43:07 crc kubenswrapper[4712]: I0130 19:43:07.066359 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c2jvh"
Jan 30 19:43:07 crc kubenswrapper[4712]: I0130 19:43:07.201268 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f574d35e-8233-4fbf-8297-10772d412dd2-utilities\") pod \"f574d35e-8233-4fbf-8297-10772d412dd2\" (UID: \"f574d35e-8233-4fbf-8297-10772d412dd2\") "
Jan 30 19:43:07 crc kubenswrapper[4712]: I0130 19:43:07.201344 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f574d35e-8233-4fbf-8297-10772d412dd2-catalog-content\") pod \"f574d35e-8233-4fbf-8297-10772d412dd2\" (UID: \"f574d35e-8233-4fbf-8297-10772d412dd2\") "
Jan 30 19:43:07 crc kubenswrapper[4712]: I0130 19:43:07.201380 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-db56w\" (UniqueName: \"kubernetes.io/projected/f574d35e-8233-4fbf-8297-10772d412dd2-kube-api-access-db56w\") pod \"f574d35e-8233-4fbf-8297-10772d412dd2\" (UID: \"f574d35e-8233-4fbf-8297-10772d412dd2\") "
Jan 30 19:43:07 crc kubenswrapper[4712]: I0130 19:43:07.202590 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f574d35e-8233-4fbf-8297-10772d412dd2-utilities" (OuterVolumeSpecName: "utilities") pod "f574d35e-8233-4fbf-8297-10772d412dd2" (UID: "f574d35e-8233-4fbf-8297-10772d412dd2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 19:43:07 crc kubenswrapper[4712]: I0130 19:43:07.212858 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f574d35e-8233-4fbf-8297-10772d412dd2-kube-api-access-db56w" (OuterVolumeSpecName: "kube-api-access-db56w") pod "f574d35e-8233-4fbf-8297-10772d412dd2" (UID: "f574d35e-8233-4fbf-8297-10772d412dd2"). InnerVolumeSpecName "kube-api-access-db56w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 19:43:07 crc kubenswrapper[4712]: I0130 19:43:07.306008 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f574d35e-8233-4fbf-8297-10772d412dd2-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 19:43:07 crc kubenswrapper[4712]: I0130 19:43:07.306033 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-db56w\" (UniqueName: \"kubernetes.io/projected/f574d35e-8233-4fbf-8297-10772d412dd2-kube-api-access-db56w\") on node \"crc\" DevicePath \"\""
Jan 30 19:43:07 crc kubenswrapper[4712]: I0130 19:43:07.317273 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f574d35e-8233-4fbf-8297-10772d412dd2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f574d35e-8233-4fbf-8297-10772d412dd2" (UID: "f574d35e-8233-4fbf-8297-10772d412dd2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 19:43:07 crc kubenswrapper[4712]: I0130 19:43:07.408232 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f574d35e-8233-4fbf-8297-10772d412dd2-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 19:43:07 crc kubenswrapper[4712]: I0130 19:43:07.616231 4712 generic.go:334] "Generic (PLEG): container finished" podID="f574d35e-8233-4fbf-8297-10772d412dd2" containerID="012d38db07770bc6fd889679739665f618b6a1afc455a9e000ba20299f0a9a84" exitCode=0
Jan 30 19:43:07 crc kubenswrapper[4712]: I0130 19:43:07.616273 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c2jvh" event={"ID":"f574d35e-8233-4fbf-8297-10772d412dd2","Type":"ContainerDied","Data":"012d38db07770bc6fd889679739665f618b6a1afc455a9e000ba20299f0a9a84"}
Jan 30 19:43:07 crc kubenswrapper[4712]: I0130 19:43:07.616298 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c2jvh" event={"ID":"f574d35e-8233-4fbf-8297-10772d412dd2","Type":"ContainerDied","Data":"d6ecfbd452d0ff1839347be533a254f9eba03097b76c8fa4ba75318fae291aeb"}
Jan 30 19:43:07 crc kubenswrapper[4712]: I0130 19:43:07.616314 4712 scope.go:117] "RemoveContainer" containerID="012d38db07770bc6fd889679739665f618b6a1afc455a9e000ba20299f0a9a84"
Jan 30 19:43:07 crc kubenswrapper[4712]: I0130 19:43:07.616424 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c2jvh"
Jan 30 19:43:07 crc kubenswrapper[4712]: I0130 19:43:07.667104 4712 scope.go:117] "RemoveContainer" containerID="23505daa49fb29cf0180753507384e509211b56e5323b8d6a6f3ea224b1db4b4"
Jan 30 19:43:07 crc kubenswrapper[4712]: I0130 19:43:07.671976 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c2jvh"]
Jan 30 19:43:07 crc kubenswrapper[4712]: I0130 19:43:07.683919 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-c2jvh"]
Jan 30 19:43:07 crc kubenswrapper[4712]: I0130 19:43:07.711265 4712 scope.go:117] "RemoveContainer" containerID="7c62f2ab00b4643b1c23e8d76123933c666d91ce590907c15028d537509f75dc"
Jan 30 19:43:07 crc kubenswrapper[4712]: I0130 19:43:07.761736 4712 scope.go:117] "RemoveContainer" containerID="012d38db07770bc6fd889679739665f618b6a1afc455a9e000ba20299f0a9a84"
Jan 30 19:43:07 crc kubenswrapper[4712]: E0130 19:43:07.762283 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"012d38db07770bc6fd889679739665f618b6a1afc455a9e000ba20299f0a9a84\": container with ID starting with 012d38db07770bc6fd889679739665f618b6a1afc455a9e000ba20299f0a9a84 not found: ID does not exist" containerID="012d38db07770bc6fd889679739665f618b6a1afc455a9e000ba20299f0a9a84"
Jan 30 19:43:07 crc kubenswrapper[4712]: I0130 19:43:07.762321 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"012d38db07770bc6fd889679739665f618b6a1afc455a9e000ba20299f0a9a84"} err="failed to get container status \"012d38db07770bc6fd889679739665f618b6a1afc455a9e000ba20299f0a9a84\": rpc error: code = NotFound desc = could not find container \"012d38db07770bc6fd889679739665f618b6a1afc455a9e000ba20299f0a9a84\": container with ID starting with 012d38db07770bc6fd889679739665f618b6a1afc455a9e000ba20299f0a9a84 not found: ID does not exist"
Jan 30 19:43:07 crc kubenswrapper[4712]: I0130 19:43:07.762350 4712 scope.go:117] "RemoveContainer" containerID="23505daa49fb29cf0180753507384e509211b56e5323b8d6a6f3ea224b1db4b4"
Jan 30 19:43:07 crc kubenswrapper[4712]: E0130 19:43:07.762659 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23505daa49fb29cf0180753507384e509211b56e5323b8d6a6f3ea224b1db4b4\": container with ID starting with 23505daa49fb29cf0180753507384e509211b56e5323b8d6a6f3ea224b1db4b4 not found: ID does not exist" containerID="23505daa49fb29cf0180753507384e509211b56e5323b8d6a6f3ea224b1db4b4"
Jan 30 19:43:07 crc kubenswrapper[4712]: I0130 19:43:07.762689 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23505daa49fb29cf0180753507384e509211b56e5323b8d6a6f3ea224b1db4b4"} err="failed to get container status \"23505daa49fb29cf0180753507384e509211b56e5323b8d6a6f3ea224b1db4b4\": rpc error: code = NotFound desc = could not find container \"23505daa49fb29cf0180753507384e509211b56e5323b8d6a6f3ea224b1db4b4\": container with ID starting with 23505daa49fb29cf0180753507384e509211b56e5323b8d6a6f3ea224b1db4b4 not found: ID does not exist"
Jan 30 19:43:07 crc kubenswrapper[4712]: I0130 19:43:07.762707 4712 scope.go:117] "RemoveContainer" containerID="7c62f2ab00b4643b1c23e8d76123933c666d91ce590907c15028d537509f75dc"
Jan 30 19:43:07 crc kubenswrapper[4712]: E0130 19:43:07.763070 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c62f2ab00b4643b1c23e8d76123933c666d91ce590907c15028d537509f75dc\": container with ID starting with 7c62f2ab00b4643b1c23e8d76123933c666d91ce590907c15028d537509f75dc not found: ID does not exist" containerID="7c62f2ab00b4643b1c23e8d76123933c666d91ce590907c15028d537509f75dc"
Jan 30 19:43:07 crc kubenswrapper[4712]: I0130 19:43:07.763100 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c62f2ab00b4643b1c23e8d76123933c666d91ce590907c15028d537509f75dc"} err="failed to get container status \"7c62f2ab00b4643b1c23e8d76123933c666d91ce590907c15028d537509f75dc\": rpc error: code = NotFound desc = could not find container \"7c62f2ab00b4643b1c23e8d76123933c666d91ce590907c15028d537509f75dc\": container with ID starting with 7c62f2ab00b4643b1c23e8d76123933c666d91ce590907c15028d537509f75dc not found: ID does not exist"
Jan 30 19:43:07 crc kubenswrapper[4712]: I0130 19:43:07.812071 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f574d35e-8233-4fbf-8297-10772d412dd2" path="/var/lib/kubelet/pods/f574d35e-8233-4fbf-8297-10772d412dd2/volumes"
Jan 30 19:43:12 crc kubenswrapper[4712]: I0130 19:43:12.800883 4712 scope.go:117] "RemoveContainer" containerID="2c8b560d207a3359d63ff4a34832deb7e088d954e1791890f611bfcc0b7209d8"
Jan 30 19:43:12 crc kubenswrapper[4712]: E0130 19:43:12.802745 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 19:43:24 crc kubenswrapper[4712]: I0130 19:43:24.800293 4712 scope.go:117] "RemoveContainer" containerID="2c8b560d207a3359d63ff4a34832deb7e088d954e1791890f611bfcc0b7209d8"
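
[editor's note] The RemoveContainer, "ContainerStatus from runtime service failed", "DeleteContainer returned error" sequences above are benign: by the time the kubelet retries the delete, CRI-O has already removed the container, so the gRPC call comes back NotFound and the kubelet treats the container as gone. A generic gRPC sketch of that idiom; removeContainer is a stand-in, not the kubelet's CRI client:

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// removeContainer stands in for a CRI RemoveContainer call that can race
// with the runtime's own cleanup and find the container already deleted.
func removeContainer(id string) error {
	return status.Errorf(codes.NotFound, "could not find container %q", id)
}

func main() {
	if err := removeContainer("012d38db0777"); err != nil {
		if status.Code(err) == codes.NotFound {
			// Already removed: the desired end state (container gone) holds,
			// so log the error and continue instead of failing the sync.
			fmt.Println("already gone:", err)
			return
		}
		fmt.Println("real failure:", err)
	}
}
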
Jan 30 19:43:24 crc kubenswrapper[4712]: E0130 19:43:24.800996 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 19:43:35 crc kubenswrapper[4712]: I0130 19:43:35.800202 4712 scope.go:117] "RemoveContainer" containerID="2c8b560d207a3359d63ff4a34832deb7e088d954e1791890f611bfcc0b7209d8"
Jan 30 19:43:35 crc kubenswrapper[4712]: E0130 19:43:35.802018 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 19:43:49 crc kubenswrapper[4712]: I0130 19:43:49.801090 4712 scope.go:117] "RemoveContainer" containerID="2c8b560d207a3359d63ff4a34832deb7e088d954e1791890f611bfcc0b7209d8"
Jan 30 19:43:49 crc kubenswrapper[4712]: E0130 19:43:49.802111 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 19:44:03 crc kubenswrapper[4712]: I0130 19:44:03.839021 4712 scope.go:117] "RemoveContainer" containerID="2c8b560d207a3359d63ff4a34832deb7e088d954e1791890f611bfcc0b7209d8"
Jan 30 19:44:03 crc kubenswrapper[4712]: E0130 19:44:03.839943 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 19:44:14 crc kubenswrapper[4712]: I0130 19:44:14.799874 4712 scope.go:117] "RemoveContainer" containerID="2c8b560d207a3359d63ff4a34832deb7e088d954e1791890f611bfcc0b7209d8"
Jan 30 19:44:14 crc kubenswrapper[4712]: E0130 19:44:14.800749 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 19:44:28 crc kubenswrapper[4712]: I0130 19:44:28.800108 4712 scope.go:117] "RemoveContainer" containerID="2c8b560d207a3359d63ff4a34832deb7e088d954e1791890f611bfcc0b7209d8"
Jan 30 19:44:28 crc kubenswrapper[4712]: E0130 19:44:28.801338 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
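
[editor's note] The rejected RemoveContainer attempts above, each answered with "back-off 5m0s restarting failed container", show the kubelet's restart backoff at its cap; the container is finally restarted at 19:44:39 below, once the window expires. Kubernetes documents the policy as an exponential delay starting at 10s and capped at five minutes; a sketch of that arithmetic:

package main

import (
	"fmt"
	"time"
)

func main() {
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: back-off %s\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay // later restarts all log "back-off 5m0s"
		}
	}
}
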
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:44:39 crc kubenswrapper[4712]: I0130 19:44:39.803802 4712 scope.go:117] "RemoveContainer" containerID="2c8b560d207a3359d63ff4a34832deb7e088d954e1791890f611bfcc0b7209d8" Jan 30 19:44:40 crc kubenswrapper[4712]: I0130 19:44:40.731860 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerStarted","Data":"b2d634b1aed3541b014253ef2c0ab0cf094a4fe0a36b2d3341d916eb07f25c62"} Jan 30 19:45:00 crc kubenswrapper[4712]: I0130 19:45:00.165677 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496705-hnzzt"] Jan 30 19:45:00 crc kubenswrapper[4712]: E0130 19:45:00.166651 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f574d35e-8233-4fbf-8297-10772d412dd2" containerName="extract-content" Jan 30 19:45:00 crc kubenswrapper[4712]: I0130 19:45:00.166665 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="f574d35e-8233-4fbf-8297-10772d412dd2" containerName="extract-content" Jan 30 19:45:00 crc kubenswrapper[4712]: E0130 19:45:00.166693 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f574d35e-8233-4fbf-8297-10772d412dd2" containerName="registry-server" Jan 30 19:45:00 crc kubenswrapper[4712]: I0130 19:45:00.166701 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="f574d35e-8233-4fbf-8297-10772d412dd2" containerName="registry-server" Jan 30 19:45:00 crc kubenswrapper[4712]: E0130 19:45:00.166710 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f574d35e-8233-4fbf-8297-10772d412dd2" containerName="extract-utilities" Jan 30 19:45:00 crc kubenswrapper[4712]: I0130 19:45:00.166716 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="f574d35e-8233-4fbf-8297-10772d412dd2" containerName="extract-utilities" Jan 30 19:45:00 crc kubenswrapper[4712]: I0130 19:45:00.166898 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="f574d35e-8233-4fbf-8297-10772d412dd2" containerName="registry-server" Jan 30 19:45:00 crc kubenswrapper[4712]: I0130 19:45:00.167581 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496705-hnzzt" Jan 30 19:45:00 crc kubenswrapper[4712]: I0130 19:45:00.178887 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 19:45:00 crc kubenswrapper[4712]: I0130 19:45:00.179071 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496705-hnzzt"] Jan 30 19:45:00 crc kubenswrapper[4712]: I0130 19:45:00.181039 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 19:45:00 crc kubenswrapper[4712]: I0130 19:45:00.272526 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20037386-6f8b-4998-ba1d-25a993410f6b-config-volume\") pod \"collect-profiles-29496705-hnzzt\" (UID: \"20037386-6f8b-4998-ba1d-25a993410f6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496705-hnzzt" Jan 30 19:45:00 crc kubenswrapper[4712]: I0130 19:45:00.272632 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20037386-6f8b-4998-ba1d-25a993410f6b-secret-volume\") pod \"collect-profiles-29496705-hnzzt\" (UID: \"20037386-6f8b-4998-ba1d-25a993410f6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496705-hnzzt" Jan 30 19:45:00 crc kubenswrapper[4712]: I0130 19:45:00.272656 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fbgf\" (UniqueName: \"kubernetes.io/projected/20037386-6f8b-4998-ba1d-25a993410f6b-kube-api-access-5fbgf\") pod \"collect-profiles-29496705-hnzzt\" (UID: \"20037386-6f8b-4998-ba1d-25a993410f6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496705-hnzzt" Jan 30 19:45:00 crc kubenswrapper[4712]: I0130 19:45:00.374563 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20037386-6f8b-4998-ba1d-25a993410f6b-secret-volume\") pod \"collect-profiles-29496705-hnzzt\" (UID: \"20037386-6f8b-4998-ba1d-25a993410f6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496705-hnzzt" Jan 30 19:45:00 crc kubenswrapper[4712]: I0130 19:45:00.374842 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fbgf\" (UniqueName: \"kubernetes.io/projected/20037386-6f8b-4998-ba1d-25a993410f6b-kube-api-access-5fbgf\") pod \"collect-profiles-29496705-hnzzt\" (UID: \"20037386-6f8b-4998-ba1d-25a993410f6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496705-hnzzt" Jan 30 19:45:00 crc kubenswrapper[4712]: I0130 19:45:00.375111 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20037386-6f8b-4998-ba1d-25a993410f6b-config-volume\") pod \"collect-profiles-29496705-hnzzt\" (UID: \"20037386-6f8b-4998-ba1d-25a993410f6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496705-hnzzt" Jan 30 19:45:00 crc kubenswrapper[4712]: I0130 19:45:00.376068 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20037386-6f8b-4998-ba1d-25a993410f6b-config-volume\") pod 
\"collect-profiles-29496705-hnzzt\" (UID: \"20037386-6f8b-4998-ba1d-25a993410f6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496705-hnzzt" Jan 30 19:45:00 crc kubenswrapper[4712]: I0130 19:45:00.386912 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20037386-6f8b-4998-ba1d-25a993410f6b-secret-volume\") pod \"collect-profiles-29496705-hnzzt\" (UID: \"20037386-6f8b-4998-ba1d-25a993410f6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496705-hnzzt" Jan 30 19:45:00 crc kubenswrapper[4712]: I0130 19:45:00.393046 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fbgf\" (UniqueName: \"kubernetes.io/projected/20037386-6f8b-4998-ba1d-25a993410f6b-kube-api-access-5fbgf\") pod \"collect-profiles-29496705-hnzzt\" (UID: \"20037386-6f8b-4998-ba1d-25a993410f6b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496705-hnzzt" Jan 30 19:45:00 crc kubenswrapper[4712]: I0130 19:45:00.511998 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496705-hnzzt" Jan 30 19:45:01 crc kubenswrapper[4712]: I0130 19:45:01.007027 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496705-hnzzt"] Jan 30 19:45:01 crc kubenswrapper[4712]: I0130 19:45:01.971734 4712 generic.go:334] "Generic (PLEG): container finished" podID="20037386-6f8b-4998-ba1d-25a993410f6b" containerID="a692591e809d643e500bb66a726b0c6e98ff6ae17ceeff131e822a521a1009bb" exitCode=0 Jan 30 19:45:01 crc kubenswrapper[4712]: I0130 19:45:01.971879 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496705-hnzzt" event={"ID":"20037386-6f8b-4998-ba1d-25a993410f6b","Type":"ContainerDied","Data":"a692591e809d643e500bb66a726b0c6e98ff6ae17ceeff131e822a521a1009bb"} Jan 30 19:45:01 crc kubenswrapper[4712]: I0130 19:45:01.972122 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496705-hnzzt" event={"ID":"20037386-6f8b-4998-ba1d-25a993410f6b","Type":"ContainerStarted","Data":"d8149098bf3eb482d1a660e5a6be7acc62f0db574e94b4c7f590fa450094c574"} Jan 30 19:45:03 crc kubenswrapper[4712]: I0130 19:45:03.368668 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496705-hnzzt" Jan 30 19:45:03 crc kubenswrapper[4712]: I0130 19:45:03.539931 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20037386-6f8b-4998-ba1d-25a993410f6b-secret-volume\") pod \"20037386-6f8b-4998-ba1d-25a993410f6b\" (UID: \"20037386-6f8b-4998-ba1d-25a993410f6b\") " Jan 30 19:45:03 crc kubenswrapper[4712]: I0130 19:45:03.540008 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fbgf\" (UniqueName: \"kubernetes.io/projected/20037386-6f8b-4998-ba1d-25a993410f6b-kube-api-access-5fbgf\") pod \"20037386-6f8b-4998-ba1d-25a993410f6b\" (UID: \"20037386-6f8b-4998-ba1d-25a993410f6b\") " Jan 30 19:45:03 crc kubenswrapper[4712]: I0130 19:45:03.541281 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20037386-6f8b-4998-ba1d-25a993410f6b-config-volume\") pod \"20037386-6f8b-4998-ba1d-25a993410f6b\" (UID: \"20037386-6f8b-4998-ba1d-25a993410f6b\") " Jan 30 19:45:03 crc kubenswrapper[4712]: I0130 19:45:03.541966 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20037386-6f8b-4998-ba1d-25a993410f6b-config-volume" (OuterVolumeSpecName: "config-volume") pod "20037386-6f8b-4998-ba1d-25a993410f6b" (UID: "20037386-6f8b-4998-ba1d-25a993410f6b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 19:45:03 crc kubenswrapper[4712]: I0130 19:45:03.542325 4712 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/20037386-6f8b-4998-ba1d-25a993410f6b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 19:45:03 crc kubenswrapper[4712]: I0130 19:45:03.546785 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20037386-6f8b-4998-ba1d-25a993410f6b-kube-api-access-5fbgf" (OuterVolumeSpecName: "kube-api-access-5fbgf") pod "20037386-6f8b-4998-ba1d-25a993410f6b" (UID: "20037386-6f8b-4998-ba1d-25a993410f6b"). InnerVolumeSpecName "kube-api-access-5fbgf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 19:45:03 crc kubenswrapper[4712]: I0130 19:45:03.548113 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20037386-6f8b-4998-ba1d-25a993410f6b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "20037386-6f8b-4998-ba1d-25a993410f6b" (UID: "20037386-6f8b-4998-ba1d-25a993410f6b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 19:45:03 crc kubenswrapper[4712]: I0130 19:45:03.644302 4712 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/20037386-6f8b-4998-ba1d-25a993410f6b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 19:45:03 crc kubenswrapper[4712]: I0130 19:45:03.644334 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fbgf\" (UniqueName: \"kubernetes.io/projected/20037386-6f8b-4998-ba1d-25a993410f6b-kube-api-access-5fbgf\") on node \"crc\" DevicePath \"\"" Jan 30 19:45:03 crc kubenswrapper[4712]: I0130 19:45:03.997543 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496705-hnzzt" event={"ID":"20037386-6f8b-4998-ba1d-25a993410f6b","Type":"ContainerDied","Data":"d8149098bf3eb482d1a660e5a6be7acc62f0db574e94b4c7f590fa450094c574"} Jan 30 19:45:03 crc kubenswrapper[4712]: I0130 19:45:03.997591 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d8149098bf3eb482d1a660e5a6be7acc62f0db574e94b4c7f590fa450094c574" Jan 30 19:45:03 crc kubenswrapper[4712]: I0130 19:45:03.997673 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496705-hnzzt" Jan 30 19:45:04 crc kubenswrapper[4712]: I0130 19:45:04.458370 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496660-q2gtw"] Jan 30 19:45:04 crc kubenswrapper[4712]: I0130 19:45:04.464993 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496660-q2gtw"] Jan 30 19:45:05 crc kubenswrapper[4712]: I0130 19:45:05.810438 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8156dff9-7413-4d68-b0d4-8ce3ad14d768" path="/var/lib/kubelet/pods/8156dff9-7413-4d68-b0d4-8ce3ad14d768/volumes" Jan 30 19:46:05 crc kubenswrapper[4712]: I0130 19:46:05.324430 4712 scope.go:117] "RemoveContainer" containerID="adeddce85d8795cccc57d650c6e18c15dda08423afea21ef37c4e0ee45270764" Jan 30 19:47:05 crc kubenswrapper[4712]: I0130 19:47:05.413160 4712 scope.go:117] "RemoveContainer" containerID="527cd1bd981f9db194ba258d1b8e870d969ef655457a13c9713d010aa18c8821" Jan 30 19:47:05 crc kubenswrapper[4712]: I0130 19:47:05.441165 4712 scope.go:117] "RemoveContainer" containerID="886f3c0033dc9f544dcf979e55c8c18b37f008db8d604d114d7e4e2ddd5f76f8" Jan 30 19:47:05 crc kubenswrapper[4712]: I0130 19:47:05.476572 4712 scope.go:117] "RemoveContainer" containerID="cdb7d96c330ef8dfd0521fb772bc1d9c133f912518a12d1512cb4a9cd93b9f6c" Jan 30 19:47:06 crc kubenswrapper[4712]: I0130 19:47:06.271139 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 19:47:06 crc kubenswrapper[4712]: I0130 19:47:06.271486 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 19:47:17 crc kubenswrapper[4712]: I0130 19:47:17.122417 4712 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-v9zpk"] Jan 30 19:47:17 crc kubenswrapper[4712]: E0130 19:47:17.123223 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20037386-6f8b-4998-ba1d-25a993410f6b" containerName="collect-profiles" Jan 30 19:47:17 crc kubenswrapper[4712]: I0130 19:47:17.123235 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="20037386-6f8b-4998-ba1d-25a993410f6b" containerName="collect-profiles" Jan 30 19:47:17 crc kubenswrapper[4712]: I0130 19:47:17.123422 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="20037386-6f8b-4998-ba1d-25a993410f6b" containerName="collect-profiles" Jan 30 19:47:17 crc kubenswrapper[4712]: I0130 19:47:17.128949 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v9zpk" Jan 30 19:47:17 crc kubenswrapper[4712]: I0130 19:47:17.164426 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v9zpk"] Jan 30 19:47:17 crc kubenswrapper[4712]: I0130 19:47:17.243857 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bce89198-a3ac-48ce-8b20-6f8f13e079de-utilities\") pod \"certified-operators-v9zpk\" (UID: \"bce89198-a3ac-48ce-8b20-6f8f13e079de\") " pod="openshift-marketplace/certified-operators-v9zpk" Jan 30 19:47:17 crc kubenswrapper[4712]: I0130 19:47:17.243921 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bce89198-a3ac-48ce-8b20-6f8f13e079de-catalog-content\") pod \"certified-operators-v9zpk\" (UID: \"bce89198-a3ac-48ce-8b20-6f8f13e079de\") " pod="openshift-marketplace/certified-operators-v9zpk" Jan 30 19:47:17 crc kubenswrapper[4712]: I0130 19:47:17.243978 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c8rk\" (UniqueName: \"kubernetes.io/projected/bce89198-a3ac-48ce-8b20-6f8f13e079de-kube-api-access-6c8rk\") pod \"certified-operators-v9zpk\" (UID: \"bce89198-a3ac-48ce-8b20-6f8f13e079de\") " pod="openshift-marketplace/certified-operators-v9zpk" Jan 30 19:47:17 crc kubenswrapper[4712]: I0130 19:47:17.345977 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bce89198-a3ac-48ce-8b20-6f8f13e079de-utilities\") pod \"certified-operators-v9zpk\" (UID: \"bce89198-a3ac-48ce-8b20-6f8f13e079de\") " pod="openshift-marketplace/certified-operators-v9zpk" Jan 30 19:47:17 crc kubenswrapper[4712]: I0130 19:47:17.346037 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bce89198-a3ac-48ce-8b20-6f8f13e079de-catalog-content\") pod \"certified-operators-v9zpk\" (UID: \"bce89198-a3ac-48ce-8b20-6f8f13e079de\") " pod="openshift-marketplace/certified-operators-v9zpk" Jan 30 19:47:17 crc kubenswrapper[4712]: I0130 19:47:17.346095 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6c8rk\" (UniqueName: \"kubernetes.io/projected/bce89198-a3ac-48ce-8b20-6f8f13e079de-kube-api-access-6c8rk\") pod \"certified-operators-v9zpk\" (UID: \"bce89198-a3ac-48ce-8b20-6f8f13e079de\") " pod="openshift-marketplace/certified-operators-v9zpk" Jan 30 19:47:17 crc kubenswrapper[4712]: 
I0130 19:47:17.346727 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bce89198-a3ac-48ce-8b20-6f8f13e079de-utilities\") pod \"certified-operators-v9zpk\" (UID: \"bce89198-a3ac-48ce-8b20-6f8f13e079de\") " pod="openshift-marketplace/certified-operators-v9zpk" Jan 30 19:47:17 crc kubenswrapper[4712]: I0130 19:47:17.346736 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bce89198-a3ac-48ce-8b20-6f8f13e079de-catalog-content\") pod \"certified-operators-v9zpk\" (UID: \"bce89198-a3ac-48ce-8b20-6f8f13e079de\") " pod="openshift-marketplace/certified-operators-v9zpk" Jan 30 19:47:17 crc kubenswrapper[4712]: I0130 19:47:17.369374 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6c8rk\" (UniqueName: \"kubernetes.io/projected/bce89198-a3ac-48ce-8b20-6f8f13e079de-kube-api-access-6c8rk\") pod \"certified-operators-v9zpk\" (UID: \"bce89198-a3ac-48ce-8b20-6f8f13e079de\") " pod="openshift-marketplace/certified-operators-v9zpk" Jan 30 19:47:17 crc kubenswrapper[4712]: I0130 19:47:17.478051 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v9zpk" Jan 30 19:47:17 crc kubenswrapper[4712]: I0130 19:47:17.998395 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v9zpk"] Jan 30 19:47:18 crc kubenswrapper[4712]: I0130 19:47:18.465150 4712 generic.go:334] "Generic (PLEG): container finished" podID="bce89198-a3ac-48ce-8b20-6f8f13e079de" containerID="f38a95687164349870c37badcb26b1428d1fba018eeaeb823abe33aa219dcb4b" exitCode=0 Jan 30 19:47:18 crc kubenswrapper[4712]: I0130 19:47:18.465214 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9zpk" event={"ID":"bce89198-a3ac-48ce-8b20-6f8f13e079de","Type":"ContainerDied","Data":"f38a95687164349870c37badcb26b1428d1fba018eeaeb823abe33aa219dcb4b"} Jan 30 19:47:18 crc kubenswrapper[4712]: I0130 19:47:18.465501 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9zpk" event={"ID":"bce89198-a3ac-48ce-8b20-6f8f13e079de","Type":"ContainerStarted","Data":"58d21d9956986443e2f5004dd1cfede0e0502688f0fa7fffdcab2c8499b9ee6b"} Jan 30 19:47:18 crc kubenswrapper[4712]: I0130 19:47:18.466619 4712 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 19:47:19 crc kubenswrapper[4712]: I0130 19:47:19.474744 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9zpk" event={"ID":"bce89198-a3ac-48ce-8b20-6f8f13e079de","Type":"ContainerStarted","Data":"7b187578cb50961a497622ab3065e5fa2e53cb3ca3ffecc1fe0b751c2c44871b"} Jan 30 19:47:21 crc kubenswrapper[4712]: E0130 19:47:21.345282 4712 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.246:56512->38.102.83.246:35825: write tcp 38.102.83.246:56512->38.102.83.246:35825: write: broken pipe Jan 30 19:47:22 crc kubenswrapper[4712]: I0130 19:47:22.505000 4712 generic.go:334] "Generic (PLEG): container finished" podID="bce89198-a3ac-48ce-8b20-6f8f13e079de" containerID="7b187578cb50961a497622ab3065e5fa2e53cb3ca3ffecc1fe0b751c2c44871b" exitCode=0 Jan 30 19:47:22 crc kubenswrapper[4712]: I0130 19:47:22.505354 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-v9zpk" event={"ID":"bce89198-a3ac-48ce-8b20-6f8f13e079de","Type":"ContainerDied","Data":"7b187578cb50961a497622ab3065e5fa2e53cb3ca3ffecc1fe0b751c2c44871b"} Jan 30 19:47:24 crc kubenswrapper[4712]: I0130 19:47:24.527286 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9zpk" event={"ID":"bce89198-a3ac-48ce-8b20-6f8f13e079de","Type":"ContainerStarted","Data":"25c7d99db82a84939ced0f2b308b84429f87c1c2b6f33f86793e2c7b60c89c96"} Jan 30 19:47:24 crc kubenswrapper[4712]: I0130 19:47:24.556286 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-v9zpk" podStartSLOduration=2.653721575 podStartE2EDuration="7.556266312s" podCreationTimestamp="2026-01-30 19:47:17 +0000 UTC" firstStartedPulling="2026-01-30 19:47:18.46643088 +0000 UTC m=+10375.373440349" lastFinishedPulling="2026-01-30 19:47:23.368975597 +0000 UTC m=+10380.275985086" observedRunningTime="2026-01-30 19:47:24.554183372 +0000 UTC m=+10381.461192861" watchObservedRunningTime="2026-01-30 19:47:24.556266312 +0000 UTC m=+10381.463275801" Jan 30 19:47:27 crc kubenswrapper[4712]: I0130 19:47:27.479032 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-v9zpk" Jan 30 19:47:27 crc kubenswrapper[4712]: I0130 19:47:27.479416 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-v9zpk" Jan 30 19:47:28 crc kubenswrapper[4712]: I0130 19:47:28.544305 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-v9zpk" podUID="bce89198-a3ac-48ce-8b20-6f8f13e079de" containerName="registry-server" probeResult="failure" output=< Jan 30 19:47:28 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 19:47:28 crc kubenswrapper[4712]: > Jan 30 19:47:36 crc kubenswrapper[4712]: I0130 19:47:36.271331 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 19:47:36 crc kubenswrapper[4712]: I0130 19:47:36.271925 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 19:47:38 crc kubenswrapper[4712]: I0130 19:47:38.527632 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-v9zpk" podUID="bce89198-a3ac-48ce-8b20-6f8f13e079de" containerName="registry-server" probeResult="failure" output=< Jan 30 19:47:38 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 19:47:38 crc kubenswrapper[4712]: > Jan 30 19:47:47 crc kubenswrapper[4712]: I0130 19:47:47.817759 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-v9zpk" Jan 30 19:47:47 crc kubenswrapper[4712]: I0130 19:47:47.872313 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-v9zpk" Jan 30 19:47:48 crc kubenswrapper[4712]: 
Jan 30 19:47:48 crc kubenswrapper[4712]: I0130 19:47:48.323335 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v9zpk"]
Jan 30 19:47:49 crc kubenswrapper[4712]: I0130 19:47:49.790970 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-v9zpk" podUID="bce89198-a3ac-48ce-8b20-6f8f13e079de" containerName="registry-server" containerID="cri-o://25c7d99db82a84939ced0f2b308b84429f87c1c2b6f33f86793e2c7b60c89c96" gracePeriod=2
Jan 30 19:47:50 crc kubenswrapper[4712]: I0130 19:47:50.411874 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v9zpk"
Jan 30 19:47:50 crc kubenswrapper[4712]: I0130 19:47:50.568558 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bce89198-a3ac-48ce-8b20-6f8f13e079de-utilities\") pod \"bce89198-a3ac-48ce-8b20-6f8f13e079de\" (UID: \"bce89198-a3ac-48ce-8b20-6f8f13e079de\") "
Jan 30 19:47:50 crc kubenswrapper[4712]: I0130 19:47:50.569143 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bce89198-a3ac-48ce-8b20-6f8f13e079de-catalog-content\") pod \"bce89198-a3ac-48ce-8b20-6f8f13e079de\" (UID: \"bce89198-a3ac-48ce-8b20-6f8f13e079de\") "
Jan 30 19:47:50 crc kubenswrapper[4712]: I0130 19:47:50.569419 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6c8rk\" (UniqueName: \"kubernetes.io/projected/bce89198-a3ac-48ce-8b20-6f8f13e079de-kube-api-access-6c8rk\") pod \"bce89198-a3ac-48ce-8b20-6f8f13e079de\" (UID: \"bce89198-a3ac-48ce-8b20-6f8f13e079de\") "
Jan 30 19:47:50 crc kubenswrapper[4712]: I0130 19:47:50.570522 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bce89198-a3ac-48ce-8b20-6f8f13e079de-utilities" (OuterVolumeSpecName: "utilities") pod "bce89198-a3ac-48ce-8b20-6f8f13e079de" (UID: "bce89198-a3ac-48ce-8b20-6f8f13e079de"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 19:47:50 crc kubenswrapper[4712]: I0130 19:47:50.577162 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bce89198-a3ac-48ce-8b20-6f8f13e079de-kube-api-access-6c8rk" (OuterVolumeSpecName: "kube-api-access-6c8rk") pod "bce89198-a3ac-48ce-8b20-6f8f13e079de" (UID: "bce89198-a3ac-48ce-8b20-6f8f13e079de"). InnerVolumeSpecName "kube-api-access-6c8rk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 19:47:50 crc kubenswrapper[4712]: I0130 19:47:50.626867 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bce89198-a3ac-48ce-8b20-6f8f13e079de-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bce89198-a3ac-48ce-8b20-6f8f13e079de" (UID: "bce89198-a3ac-48ce-8b20-6f8f13e079de"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 19:47:50 crc kubenswrapper[4712]: I0130 19:47:50.672775 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bce89198-a3ac-48ce-8b20-6f8f13e079de-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 19:47:50 crc kubenswrapper[4712]: I0130 19:47:50.672822 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bce89198-a3ac-48ce-8b20-6f8f13e079de-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 19:47:50 crc kubenswrapper[4712]: I0130 19:47:50.672835 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6c8rk\" (UniqueName: \"kubernetes.io/projected/bce89198-a3ac-48ce-8b20-6f8f13e079de-kube-api-access-6c8rk\") on node \"crc\" DevicePath \"\""
Jan 30 19:47:50 crc kubenswrapper[4712]: I0130 19:47:50.805045 4712 generic.go:334] "Generic (PLEG): container finished" podID="bce89198-a3ac-48ce-8b20-6f8f13e079de" containerID="25c7d99db82a84939ced0f2b308b84429f87c1c2b6f33f86793e2c7b60c89c96" exitCode=0
Jan 30 19:47:50 crc kubenswrapper[4712]: I0130 19:47:50.805085 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9zpk" event={"ID":"bce89198-a3ac-48ce-8b20-6f8f13e079de","Type":"ContainerDied","Data":"25c7d99db82a84939ced0f2b308b84429f87c1c2b6f33f86793e2c7b60c89c96"}
Jan 30 19:47:50 crc kubenswrapper[4712]: I0130 19:47:50.805113 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v9zpk" event={"ID":"bce89198-a3ac-48ce-8b20-6f8f13e079de","Type":"ContainerDied","Data":"58d21d9956986443e2f5004dd1cfede0e0502688f0fa7fffdcab2c8499b9ee6b"}
Jan 30 19:47:50 crc kubenswrapper[4712]: I0130 19:47:50.805129 4712 scope.go:117] "RemoveContainer" containerID="25c7d99db82a84939ced0f2b308b84429f87c1c2b6f33f86793e2c7b60c89c96"
Jan 30 19:47:50 crc kubenswrapper[4712]: I0130 19:47:50.805198 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v9zpk"
Jan 30 19:47:50 crc kubenswrapper[4712]: I0130 19:47:50.843379 4712 scope.go:117] "RemoveContainer" containerID="7b187578cb50961a497622ab3065e5fa2e53cb3ca3ffecc1fe0b751c2c44871b"
Jan 30 19:47:50 crc kubenswrapper[4712]: I0130 19:47:50.886043 4712 scope.go:117] "RemoveContainer" containerID="f38a95687164349870c37badcb26b1428d1fba018eeaeb823abe33aa219dcb4b"
Jan 30 19:47:50 crc kubenswrapper[4712]: I0130 19:47:50.897472 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-v9zpk"]
Jan 30 19:47:50 crc kubenswrapper[4712]: I0130 19:47:50.924568 4712 scope.go:117] "RemoveContainer" containerID="25c7d99db82a84939ced0f2b308b84429f87c1c2b6f33f86793e2c7b60c89c96"
Jan 30 19:47:50 crc kubenswrapper[4712]: I0130 19:47:50.926908 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-v9zpk"]
Jan 30 19:47:50 crc kubenswrapper[4712]: E0130 19:47:50.927052 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25c7d99db82a84939ced0f2b308b84429f87c1c2b6f33f86793e2c7b60c89c96\": container with ID starting with 25c7d99db82a84939ced0f2b308b84429f87c1c2b6f33f86793e2c7b60c89c96 not found: ID does not exist" containerID="25c7d99db82a84939ced0f2b308b84429f87c1c2b6f33f86793e2c7b60c89c96"
Jan 30 19:47:50 crc kubenswrapper[4712]: I0130 19:47:50.927258 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25c7d99db82a84939ced0f2b308b84429f87c1c2b6f33f86793e2c7b60c89c96"} err="failed to get container status \"25c7d99db82a84939ced0f2b308b84429f87c1c2b6f33f86793e2c7b60c89c96\": rpc error: code = NotFound desc = could not find container \"25c7d99db82a84939ced0f2b308b84429f87c1c2b6f33f86793e2c7b60c89c96\": container with ID starting with 25c7d99db82a84939ced0f2b308b84429f87c1c2b6f33f86793e2c7b60c89c96 not found: ID does not exist"
Jan 30 19:47:50 crc kubenswrapper[4712]: I0130 19:47:50.927409 4712 scope.go:117] "RemoveContainer" containerID="7b187578cb50961a497622ab3065e5fa2e53cb3ca3ffecc1fe0b751c2c44871b"
Jan 30 19:47:50 crc kubenswrapper[4712]: E0130 19:47:50.927873 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b187578cb50961a497622ab3065e5fa2e53cb3ca3ffecc1fe0b751c2c44871b\": container with ID starting with 7b187578cb50961a497622ab3065e5fa2e53cb3ca3ffecc1fe0b751c2c44871b not found: ID does not exist" containerID="7b187578cb50961a497622ab3065e5fa2e53cb3ca3ffecc1fe0b751c2c44871b"
Jan 30 19:47:50 crc kubenswrapper[4712]: I0130 19:47:50.927924 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b187578cb50961a497622ab3065e5fa2e53cb3ca3ffecc1fe0b751c2c44871b"} err="failed to get container status \"7b187578cb50961a497622ab3065e5fa2e53cb3ca3ffecc1fe0b751c2c44871b\": rpc error: code = NotFound desc = could not find container \"7b187578cb50961a497622ab3065e5fa2e53cb3ca3ffecc1fe0b751c2c44871b\": container with ID starting with 7b187578cb50961a497622ab3065e5fa2e53cb3ca3ffecc1fe0b751c2c44871b not found: ID does not exist"
Jan 30 19:47:50 crc kubenswrapper[4712]: I0130 19:47:50.927948 4712 scope.go:117] "RemoveContainer" containerID="f38a95687164349870c37badcb26b1428d1fba018eeaeb823abe33aa219dcb4b"
Jan 30 19:47:50 crc kubenswrapper[4712]: E0130 19:47:50.928407 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f38a95687164349870c37badcb26b1428d1fba018eeaeb823abe33aa219dcb4b\": container with ID starting with f38a95687164349870c37badcb26b1428d1fba018eeaeb823abe33aa219dcb4b not found: ID does not exist" containerID="f38a95687164349870c37badcb26b1428d1fba018eeaeb823abe33aa219dcb4b"
Jan 30 19:47:50 crc kubenswrapper[4712]: I0130 19:47:50.928430 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f38a95687164349870c37badcb26b1428d1fba018eeaeb823abe33aa219dcb4b"} err="failed to get container status \"f38a95687164349870c37badcb26b1428d1fba018eeaeb823abe33aa219dcb4b\": rpc error: code = NotFound desc = could not find container \"f38a95687164349870c37badcb26b1428d1fba018eeaeb823abe33aa219dcb4b\": container with ID starting with f38a95687164349870c37badcb26b1428d1fba018eeaeb823abe33aa219dcb4b not found: ID does not exist"
Jan 30 19:47:51 crc kubenswrapper[4712]: I0130 19:47:51.816360 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bce89198-a3ac-48ce-8b20-6f8f13e079de" path="/var/lib/kubelet/pods/bce89198-a3ac-48ce-8b20-6f8f13e079de/volumes"
Jan 30 19:48:06 crc kubenswrapper[4712]: I0130 19:48:06.270685 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 19:48:06 crc kubenswrapper[4712]: I0130 19:48:06.271233 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 19:48:06 crc kubenswrapper[4712]: I0130 19:48:06.271282 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7"
Jan 30 19:48:06 crc kubenswrapper[4712]: I0130 19:48:06.272063 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b2d634b1aed3541b014253ef2c0ab0cf094a4fe0a36b2d3341d916eb07f25c62"} pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 19:48:06 crc kubenswrapper[4712]: I0130 19:48:06.272130 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" containerID="cri-o://b2d634b1aed3541b014253ef2c0ab0cf094a4fe0a36b2d3341d916eb07f25c62" gracePeriod=600
Jan 30 19:48:07 crc kubenswrapper[4712]: I0130 19:48:07.005088 4712 generic.go:334] "Generic (PLEG): container finished" podID="75ff6334-72a0-4748-bba6-0efb493c8033" containerID="b2d634b1aed3541b014253ef2c0ab0cf094a4fe0a36b2d3341d916eb07f25c62" exitCode=0
Jan 30 19:48:07 crc kubenswrapper[4712]: I0130 19:48:07.005337 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerDied","Data":"b2d634b1aed3541b014253ef2c0ab0cf094a4fe0a36b2d3341d916eb07f25c62"}
event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerDied","Data":"b2d634b1aed3541b014253ef2c0ab0cf094a4fe0a36b2d3341d916eb07f25c62"} Jan 30 19:48:07 crc kubenswrapper[4712]: I0130 19:48:07.005696 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerStarted","Data":"b20dfc84036d077ad3a62b9837e2a9fb62cb43e5a13c9ed9a7ecbb288922bc2f"} Jan 30 19:48:07 crc kubenswrapper[4712]: I0130 19:48:07.005723 4712 scope.go:117] "RemoveContainer" containerID="2c8b560d207a3359d63ff4a34832deb7e088d954e1791890f611bfcc0b7209d8" Jan 30 19:50:06 crc kubenswrapper[4712]: I0130 19:50:06.271244 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 19:50:06 crc kubenswrapper[4712]: I0130 19:50:06.271712 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 19:50:36 crc kubenswrapper[4712]: I0130 19:50:36.271871 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 19:50:36 crc kubenswrapper[4712]: I0130 19:50:36.272470 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 19:51:06 crc kubenswrapper[4712]: I0130 19:51:06.271332 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 19:51:06 crc kubenswrapper[4712]: I0130 19:51:06.273014 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 19:51:06 crc kubenswrapper[4712]: I0130 19:51:06.273106 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" Jan 30 19:51:06 crc kubenswrapper[4712]: I0130 19:51:06.273961 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b20dfc84036d077ad3a62b9837e2a9fb62cb43e5a13c9ed9a7ecbb288922bc2f"} pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 
19:51:06 crc kubenswrapper[4712]: I0130 19:51:06.274037 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" containerID="cri-o://b20dfc84036d077ad3a62b9837e2a9fb62cb43e5a13c9ed9a7ecbb288922bc2f" gracePeriod=600 Jan 30 19:51:06 crc kubenswrapper[4712]: E0130 19:51:06.399296 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:51:07 crc kubenswrapper[4712]: I0130 19:51:07.039490 4712 generic.go:334] "Generic (PLEG): container finished" podID="75ff6334-72a0-4748-bba6-0efb493c8033" containerID="b20dfc84036d077ad3a62b9837e2a9fb62cb43e5a13c9ed9a7ecbb288922bc2f" exitCode=0 Jan 30 19:51:07 crc kubenswrapper[4712]: I0130 19:51:07.039551 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerDied","Data":"b20dfc84036d077ad3a62b9837e2a9fb62cb43e5a13c9ed9a7ecbb288922bc2f"} Jan 30 19:51:07 crc kubenswrapper[4712]: I0130 19:51:07.039588 4712 scope.go:117] "RemoveContainer" containerID="b2d634b1aed3541b014253ef2c0ab0cf094a4fe0a36b2d3341d916eb07f25c62" Jan 30 19:51:07 crc kubenswrapper[4712]: I0130 19:51:07.040512 4712 scope.go:117] "RemoveContainer" containerID="b20dfc84036d077ad3a62b9837e2a9fb62cb43e5a13c9ed9a7ecbb288922bc2f" Jan 30 19:51:07 crc kubenswrapper[4712]: E0130 19:51:07.042391 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:51:18 crc kubenswrapper[4712]: I0130 19:51:18.800592 4712 scope.go:117] "RemoveContainer" containerID="b20dfc84036d077ad3a62b9837e2a9fb62cb43e5a13c9ed9a7ecbb288922bc2f" Jan 30 19:51:18 crc kubenswrapper[4712]: E0130 19:51:18.801516 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:51:30 crc kubenswrapper[4712]: I0130 19:51:30.799750 4712 scope.go:117] "RemoveContainer" containerID="b20dfc84036d077ad3a62b9837e2a9fb62cb43e5a13c9ed9a7ecbb288922bc2f" Jan 30 19:51:30 crc kubenswrapper[4712]: E0130 19:51:30.800506 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:51:33 crc kubenswrapper[4712]: I0130 19:51:33.999381 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mvb4t"] Jan 30 19:51:34 crc kubenswrapper[4712]: E0130 19:51:34.000533 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bce89198-a3ac-48ce-8b20-6f8f13e079de" containerName="extract-content" Jan 30 19:51:34 crc kubenswrapper[4712]: I0130 19:51:34.000555 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="bce89198-a3ac-48ce-8b20-6f8f13e079de" containerName="extract-content" Jan 30 19:51:34 crc kubenswrapper[4712]: E0130 19:51:34.000608 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bce89198-a3ac-48ce-8b20-6f8f13e079de" containerName="registry-server" Jan 30 19:51:34 crc kubenswrapper[4712]: I0130 19:51:34.000623 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="bce89198-a3ac-48ce-8b20-6f8f13e079de" containerName="registry-server" Jan 30 19:51:34 crc kubenswrapper[4712]: E0130 19:51:34.000648 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bce89198-a3ac-48ce-8b20-6f8f13e079de" containerName="extract-utilities" Jan 30 19:51:34 crc kubenswrapper[4712]: I0130 19:51:34.000660 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="bce89198-a3ac-48ce-8b20-6f8f13e079de" containerName="extract-utilities" Jan 30 19:51:34 crc kubenswrapper[4712]: I0130 19:51:34.001004 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="bce89198-a3ac-48ce-8b20-6f8f13e079de" containerName="registry-server" Jan 30 19:51:34 crc kubenswrapper[4712]: I0130 19:51:34.002851 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mvb4t" Jan 30 19:51:34 crc kubenswrapper[4712]: I0130 19:51:34.018842 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mvb4t"] Jan 30 19:51:34 crc kubenswrapper[4712]: I0130 19:51:34.179144 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt49v\" (UniqueName: \"kubernetes.io/projected/440ac44f-cdc7-4fce-8217-95bc7748c54e-kube-api-access-jt49v\") pod \"redhat-marketplace-mvb4t\" (UID: \"440ac44f-cdc7-4fce-8217-95bc7748c54e\") " pod="openshift-marketplace/redhat-marketplace-mvb4t" Jan 30 19:51:34 crc kubenswrapper[4712]: I0130 19:51:34.179316 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/440ac44f-cdc7-4fce-8217-95bc7748c54e-utilities\") pod \"redhat-marketplace-mvb4t\" (UID: \"440ac44f-cdc7-4fce-8217-95bc7748c54e\") " pod="openshift-marketplace/redhat-marketplace-mvb4t" Jan 30 19:51:34 crc kubenswrapper[4712]: I0130 19:51:34.179372 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/440ac44f-cdc7-4fce-8217-95bc7748c54e-catalog-content\") pod \"redhat-marketplace-mvb4t\" (UID: \"440ac44f-cdc7-4fce-8217-95bc7748c54e\") " pod="openshift-marketplace/redhat-marketplace-mvb4t" Jan 30 19:51:34 crc kubenswrapper[4712]: I0130 19:51:34.281116 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jt49v\" (UniqueName: \"kubernetes.io/projected/440ac44f-cdc7-4fce-8217-95bc7748c54e-kube-api-access-jt49v\") pod \"redhat-marketplace-mvb4t\" (UID: \"440ac44f-cdc7-4fce-8217-95bc7748c54e\") " pod="openshift-marketplace/redhat-marketplace-mvb4t" Jan 30 19:51:34 crc kubenswrapper[4712]: I0130 19:51:34.281737 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/440ac44f-cdc7-4fce-8217-95bc7748c54e-utilities\") pod \"redhat-marketplace-mvb4t\" (UID: \"440ac44f-cdc7-4fce-8217-95bc7748c54e\") " pod="openshift-marketplace/redhat-marketplace-mvb4t" Jan 30 19:51:34 crc kubenswrapper[4712]: I0130 19:51:34.281882 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/440ac44f-cdc7-4fce-8217-95bc7748c54e-catalog-content\") pod \"redhat-marketplace-mvb4t\" (UID: \"440ac44f-cdc7-4fce-8217-95bc7748c54e\") " pod="openshift-marketplace/redhat-marketplace-mvb4t" Jan 30 19:51:34 crc kubenswrapper[4712]: I0130 19:51:34.282240 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/440ac44f-cdc7-4fce-8217-95bc7748c54e-catalog-content\") pod \"redhat-marketplace-mvb4t\" (UID: \"440ac44f-cdc7-4fce-8217-95bc7748c54e\") " pod="openshift-marketplace/redhat-marketplace-mvb4t" Jan 30 19:51:34 crc kubenswrapper[4712]: I0130 19:51:34.282451 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/440ac44f-cdc7-4fce-8217-95bc7748c54e-utilities\") pod \"redhat-marketplace-mvb4t\" (UID: \"440ac44f-cdc7-4fce-8217-95bc7748c54e\") " pod="openshift-marketplace/redhat-marketplace-mvb4t" Jan 30 19:51:34 crc kubenswrapper[4712]: I0130 19:51:34.312529 4712 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-jt49v\" (UniqueName: \"kubernetes.io/projected/440ac44f-cdc7-4fce-8217-95bc7748c54e-kube-api-access-jt49v\") pod \"redhat-marketplace-mvb4t\" (UID: \"440ac44f-cdc7-4fce-8217-95bc7748c54e\") " pod="openshift-marketplace/redhat-marketplace-mvb4t" Jan 30 19:51:34 crc kubenswrapper[4712]: I0130 19:51:34.323969 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mvb4t" Jan 30 19:51:34 crc kubenswrapper[4712]: I0130 19:51:34.818431 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mvb4t"] Jan 30 19:51:34 crc kubenswrapper[4712]: W0130 19:51:34.827600 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod440ac44f_cdc7_4fce_8217_95bc7748c54e.slice/crio-0bba06aa7856ad2d6ad89f227a8b3f7f1719409d13b92112fdb8d426d2362de7 WatchSource:0}: Error finding container 0bba06aa7856ad2d6ad89f227a8b3f7f1719409d13b92112fdb8d426d2362de7: Status 404 returned error can't find the container with id 0bba06aa7856ad2d6ad89f227a8b3f7f1719409d13b92112fdb8d426d2362de7 Jan 30 19:51:35 crc kubenswrapper[4712]: I0130 19:51:35.309336 4712 generic.go:334] "Generic (PLEG): container finished" podID="440ac44f-cdc7-4fce-8217-95bc7748c54e" containerID="a99468cee2b0b5184edc47233dccaf0771a33e2870dd755d07d60b9da662627b" exitCode=0 Jan 30 19:51:35 crc kubenswrapper[4712]: I0130 19:51:35.309544 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvb4t" event={"ID":"440ac44f-cdc7-4fce-8217-95bc7748c54e","Type":"ContainerDied","Data":"a99468cee2b0b5184edc47233dccaf0771a33e2870dd755d07d60b9da662627b"} Jan 30 19:51:35 crc kubenswrapper[4712]: I0130 19:51:35.309604 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvb4t" event={"ID":"440ac44f-cdc7-4fce-8217-95bc7748c54e","Type":"ContainerStarted","Data":"0bba06aa7856ad2d6ad89f227a8b3f7f1719409d13b92112fdb8d426d2362de7"} Jan 30 19:51:37 crc kubenswrapper[4712]: I0130 19:51:37.334134 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvb4t" event={"ID":"440ac44f-cdc7-4fce-8217-95bc7748c54e","Type":"ContainerStarted","Data":"62839b9d40bab0bbc8027d34454fb22d2d8e567d9f2d8a3361676d8eb15e69bd"} Jan 30 19:51:38 crc kubenswrapper[4712]: I0130 19:51:38.349633 4712 generic.go:334] "Generic (PLEG): container finished" podID="440ac44f-cdc7-4fce-8217-95bc7748c54e" containerID="62839b9d40bab0bbc8027d34454fb22d2d8e567d9f2d8a3361676d8eb15e69bd" exitCode=0 Jan 30 19:51:38 crc kubenswrapper[4712]: I0130 19:51:38.349698 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvb4t" event={"ID":"440ac44f-cdc7-4fce-8217-95bc7748c54e","Type":"ContainerDied","Data":"62839b9d40bab0bbc8027d34454fb22d2d8e567d9f2d8a3361676d8eb15e69bd"} Jan 30 19:51:39 crc kubenswrapper[4712]: I0130 19:51:39.361523 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvb4t" event={"ID":"440ac44f-cdc7-4fce-8217-95bc7748c54e","Type":"ContainerStarted","Data":"b9e5ee2815a9afd2ea195626cdb0ae664d55b43af09867b2e1e45f4554ffa00a"} Jan 30 19:51:39 crc kubenswrapper[4712]: I0130 19:51:39.396066 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mvb4t" podStartSLOduration=2.954483284 
podStartE2EDuration="6.396043355s" podCreationTimestamp="2026-01-30 19:51:33 +0000 UTC" firstStartedPulling="2026-01-30 19:51:35.312316407 +0000 UTC m=+10632.219325876" lastFinishedPulling="2026-01-30 19:51:38.753876468 +0000 UTC m=+10635.660885947" observedRunningTime="2026-01-30 19:51:39.381377774 +0000 UTC m=+10636.288387273" watchObservedRunningTime="2026-01-30 19:51:39.396043355 +0000 UTC m=+10636.303052834" Jan 30 19:51:41 crc kubenswrapper[4712]: I0130 19:51:41.800829 4712 scope.go:117] "RemoveContainer" containerID="b20dfc84036d077ad3a62b9837e2a9fb62cb43e5a13c9ed9a7ecbb288922bc2f" Jan 30 19:51:41 crc kubenswrapper[4712]: E0130 19:51:41.801730 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:51:44 crc kubenswrapper[4712]: I0130 19:51:44.324755 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mvb4t" Jan 30 19:51:44 crc kubenswrapper[4712]: I0130 19:51:44.325114 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mvb4t" Jan 30 19:51:44 crc kubenswrapper[4712]: I0130 19:51:44.384918 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mvb4t" Jan 30 19:51:44 crc kubenswrapper[4712]: I0130 19:51:44.470182 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mvb4t" Jan 30 19:51:44 crc kubenswrapper[4712]: I0130 19:51:44.626506 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mvb4t"] Jan 30 19:51:46 crc kubenswrapper[4712]: I0130 19:51:46.424490 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mvb4t" podUID="440ac44f-cdc7-4fce-8217-95bc7748c54e" containerName="registry-server" containerID="cri-o://b9e5ee2815a9afd2ea195626cdb0ae664d55b43af09867b2e1e45f4554ffa00a" gracePeriod=2 Jan 30 19:51:46 crc kubenswrapper[4712]: I0130 19:51:46.916223 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mvb4t" Jan 30 19:51:47 crc kubenswrapper[4712]: I0130 19:51:47.050353 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/440ac44f-cdc7-4fce-8217-95bc7748c54e-utilities\") pod \"440ac44f-cdc7-4fce-8217-95bc7748c54e\" (UID: \"440ac44f-cdc7-4fce-8217-95bc7748c54e\") " Jan 30 19:51:47 crc kubenswrapper[4712]: I0130 19:51:47.050526 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/440ac44f-cdc7-4fce-8217-95bc7748c54e-catalog-content\") pod \"440ac44f-cdc7-4fce-8217-95bc7748c54e\" (UID: \"440ac44f-cdc7-4fce-8217-95bc7748c54e\") " Jan 30 19:51:47 crc kubenswrapper[4712]: I0130 19:51:47.050558 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jt49v\" (UniqueName: \"kubernetes.io/projected/440ac44f-cdc7-4fce-8217-95bc7748c54e-kube-api-access-jt49v\") pod \"440ac44f-cdc7-4fce-8217-95bc7748c54e\" (UID: \"440ac44f-cdc7-4fce-8217-95bc7748c54e\") " Jan 30 19:51:47 crc kubenswrapper[4712]: I0130 19:51:47.052467 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/440ac44f-cdc7-4fce-8217-95bc7748c54e-utilities" (OuterVolumeSpecName: "utilities") pod "440ac44f-cdc7-4fce-8217-95bc7748c54e" (UID: "440ac44f-cdc7-4fce-8217-95bc7748c54e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 19:51:47 crc kubenswrapper[4712]: I0130 19:51:47.070786 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/440ac44f-cdc7-4fce-8217-95bc7748c54e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "440ac44f-cdc7-4fce-8217-95bc7748c54e" (UID: "440ac44f-cdc7-4fce-8217-95bc7748c54e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 19:51:47 crc kubenswrapper[4712]: I0130 19:51:47.152526 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/440ac44f-cdc7-4fce-8217-95bc7748c54e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 19:51:47 crc kubenswrapper[4712]: I0130 19:51:47.152555 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/440ac44f-cdc7-4fce-8217-95bc7748c54e-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 19:51:47 crc kubenswrapper[4712]: I0130 19:51:47.434429 4712 generic.go:334] "Generic (PLEG): container finished" podID="440ac44f-cdc7-4fce-8217-95bc7748c54e" containerID="b9e5ee2815a9afd2ea195626cdb0ae664d55b43af09867b2e1e45f4554ffa00a" exitCode=0 Jan 30 19:51:47 crc kubenswrapper[4712]: I0130 19:51:47.434468 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mvb4t" Jan 30 19:51:47 crc kubenswrapper[4712]: I0130 19:51:47.434550 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvb4t" event={"ID":"440ac44f-cdc7-4fce-8217-95bc7748c54e","Type":"ContainerDied","Data":"b9e5ee2815a9afd2ea195626cdb0ae664d55b43af09867b2e1e45f4554ffa00a"} Jan 30 19:51:47 crc kubenswrapper[4712]: I0130 19:51:47.434662 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mvb4t" event={"ID":"440ac44f-cdc7-4fce-8217-95bc7748c54e","Type":"ContainerDied","Data":"0bba06aa7856ad2d6ad89f227a8b3f7f1719409d13b92112fdb8d426d2362de7"} Jan 30 19:51:47 crc kubenswrapper[4712]: I0130 19:51:47.434883 4712 scope.go:117] "RemoveContainer" containerID="b9e5ee2815a9afd2ea195626cdb0ae664d55b43af09867b2e1e45f4554ffa00a" Jan 30 19:51:47 crc kubenswrapper[4712]: I0130 19:51:47.461534 4712 scope.go:117] "RemoveContainer" containerID="62839b9d40bab0bbc8027d34454fb22d2d8e567d9f2d8a3361676d8eb15e69bd" Jan 30 19:51:47 crc kubenswrapper[4712]: I0130 19:51:47.552009 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/440ac44f-cdc7-4fce-8217-95bc7748c54e-kube-api-access-jt49v" (OuterVolumeSpecName: "kube-api-access-jt49v") pod "440ac44f-cdc7-4fce-8217-95bc7748c54e" (UID: "440ac44f-cdc7-4fce-8217-95bc7748c54e"). InnerVolumeSpecName "kube-api-access-jt49v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 19:51:47 crc kubenswrapper[4712]: I0130 19:51:47.564449 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jt49v\" (UniqueName: \"kubernetes.io/projected/440ac44f-cdc7-4fce-8217-95bc7748c54e-kube-api-access-jt49v\") on node \"crc\" DevicePath \"\"" Jan 30 19:51:47 crc kubenswrapper[4712]: I0130 19:51:47.564563 4712 scope.go:117] "RemoveContainer" containerID="a99468cee2b0b5184edc47233dccaf0771a33e2870dd755d07d60b9da662627b" Jan 30 19:51:47 crc kubenswrapper[4712]: I0130 19:51:47.585145 4712 scope.go:117] "RemoveContainer" containerID="b9e5ee2815a9afd2ea195626cdb0ae664d55b43af09867b2e1e45f4554ffa00a" Jan 30 19:51:47 crc kubenswrapper[4712]: E0130 19:51:47.585565 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9e5ee2815a9afd2ea195626cdb0ae664d55b43af09867b2e1e45f4554ffa00a\": container with ID starting with b9e5ee2815a9afd2ea195626cdb0ae664d55b43af09867b2e1e45f4554ffa00a not found: ID does not exist" containerID="b9e5ee2815a9afd2ea195626cdb0ae664d55b43af09867b2e1e45f4554ffa00a" Jan 30 19:51:47 crc kubenswrapper[4712]: I0130 19:51:47.585595 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9e5ee2815a9afd2ea195626cdb0ae664d55b43af09867b2e1e45f4554ffa00a"} err="failed to get container status \"b9e5ee2815a9afd2ea195626cdb0ae664d55b43af09867b2e1e45f4554ffa00a\": rpc error: code = NotFound desc = could not find container \"b9e5ee2815a9afd2ea195626cdb0ae664d55b43af09867b2e1e45f4554ffa00a\": container with ID starting with b9e5ee2815a9afd2ea195626cdb0ae664d55b43af09867b2e1e45f4554ffa00a not found: ID does not exist" Jan 30 19:51:47 crc kubenswrapper[4712]: I0130 19:51:47.585615 4712 scope.go:117] "RemoveContainer" containerID="62839b9d40bab0bbc8027d34454fb22d2d8e567d9f2d8a3361676d8eb15e69bd" Jan 30 19:51:47 crc kubenswrapper[4712]: E0130 19:51:47.586027 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc 
error: code = NotFound desc = could not find container \"62839b9d40bab0bbc8027d34454fb22d2d8e567d9f2d8a3361676d8eb15e69bd\": container with ID starting with 62839b9d40bab0bbc8027d34454fb22d2d8e567d9f2d8a3361676d8eb15e69bd not found: ID does not exist" containerID="62839b9d40bab0bbc8027d34454fb22d2d8e567d9f2d8a3361676d8eb15e69bd" Jan 30 19:51:47 crc kubenswrapper[4712]: I0130 19:51:47.586050 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62839b9d40bab0bbc8027d34454fb22d2d8e567d9f2d8a3361676d8eb15e69bd"} err="failed to get container status \"62839b9d40bab0bbc8027d34454fb22d2d8e567d9f2d8a3361676d8eb15e69bd\": rpc error: code = NotFound desc = could not find container \"62839b9d40bab0bbc8027d34454fb22d2d8e567d9f2d8a3361676d8eb15e69bd\": container with ID starting with 62839b9d40bab0bbc8027d34454fb22d2d8e567d9f2d8a3361676d8eb15e69bd not found: ID does not exist" Jan 30 19:51:47 crc kubenswrapper[4712]: I0130 19:51:47.586080 4712 scope.go:117] "RemoveContainer" containerID="a99468cee2b0b5184edc47233dccaf0771a33e2870dd755d07d60b9da662627b" Jan 30 19:51:47 crc kubenswrapper[4712]: E0130 19:51:47.586423 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a99468cee2b0b5184edc47233dccaf0771a33e2870dd755d07d60b9da662627b\": container with ID starting with a99468cee2b0b5184edc47233dccaf0771a33e2870dd755d07d60b9da662627b not found: ID does not exist" containerID="a99468cee2b0b5184edc47233dccaf0771a33e2870dd755d07d60b9da662627b" Jan 30 19:51:47 crc kubenswrapper[4712]: I0130 19:51:47.586445 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a99468cee2b0b5184edc47233dccaf0771a33e2870dd755d07d60b9da662627b"} err="failed to get container status \"a99468cee2b0b5184edc47233dccaf0771a33e2870dd755d07d60b9da662627b\": rpc error: code = NotFound desc = could not find container \"a99468cee2b0b5184edc47233dccaf0771a33e2870dd755d07d60b9da662627b\": container with ID starting with a99468cee2b0b5184edc47233dccaf0771a33e2870dd755d07d60b9da662627b not found: ID does not exist" Jan 30 19:51:47 crc kubenswrapper[4712]: I0130 19:51:47.767984 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mvb4t"] Jan 30 19:51:47 crc kubenswrapper[4712]: I0130 19:51:47.775971 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mvb4t"] Jan 30 19:51:47 crc kubenswrapper[4712]: I0130 19:51:47.810737 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="440ac44f-cdc7-4fce-8217-95bc7748c54e" path="/var/lib/kubelet/pods/440ac44f-cdc7-4fce-8217-95bc7748c54e/volumes" Jan 30 19:51:53 crc kubenswrapper[4712]: I0130 19:51:53.806333 4712 scope.go:117] "RemoveContainer" containerID="b20dfc84036d077ad3a62b9837e2a9fb62cb43e5a13c9ed9a7ecbb288922bc2f" Jan 30 19:51:53 crc kubenswrapper[4712]: E0130 19:51:53.807068 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:52:07 crc kubenswrapper[4712]: I0130 19:52:07.800937 4712 scope.go:117] "RemoveContainer" 
containerID="b20dfc84036d077ad3a62b9837e2a9fb62cb43e5a13c9ed9a7ecbb288922bc2f" Jan 30 19:52:07 crc kubenswrapper[4712]: E0130 19:52:07.803464 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:52:19 crc kubenswrapper[4712]: I0130 19:52:19.800484 4712 scope.go:117] "RemoveContainer" containerID="b20dfc84036d077ad3a62b9837e2a9fb62cb43e5a13c9ed9a7ecbb288922bc2f" Jan 30 19:52:19 crc kubenswrapper[4712]: E0130 19:52:19.801474 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:52:31 crc kubenswrapper[4712]: I0130 19:52:31.800171 4712 scope.go:117] "RemoveContainer" containerID="b20dfc84036d077ad3a62b9837e2a9fb62cb43e5a13c9ed9a7ecbb288922bc2f" Jan 30 19:52:31 crc kubenswrapper[4712]: E0130 19:52:31.800839 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:52:43 crc kubenswrapper[4712]: I0130 19:52:43.810468 4712 scope.go:117] "RemoveContainer" containerID="b20dfc84036d077ad3a62b9837e2a9fb62cb43e5a13c9ed9a7ecbb288922bc2f" Jan 30 19:52:43 crc kubenswrapper[4712]: E0130 19:52:43.811233 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:52:58 crc kubenswrapper[4712]: I0130 19:52:58.799566 4712 scope.go:117] "RemoveContainer" containerID="b20dfc84036d077ad3a62b9837e2a9fb62cb43e5a13c9ed9a7ecbb288922bc2f" Jan 30 19:52:58 crc kubenswrapper[4712]: E0130 19:52:58.800481 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:53:12 crc kubenswrapper[4712]: I0130 19:53:12.800377 4712 scope.go:117] "RemoveContainer" containerID="b20dfc84036d077ad3a62b9837e2a9fb62cb43e5a13c9ed9a7ecbb288922bc2f" Jan 30 19:53:12 crc kubenswrapper[4712]: E0130 19:53:12.801559 4712 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:53:21 crc kubenswrapper[4712]: I0130 19:53:21.093914 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6s74f"] Jan 30 19:53:21 crc kubenswrapper[4712]: E0130 19:53:21.095192 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="440ac44f-cdc7-4fce-8217-95bc7748c54e" containerName="extract-content" Jan 30 19:53:21 crc kubenswrapper[4712]: I0130 19:53:21.095212 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="440ac44f-cdc7-4fce-8217-95bc7748c54e" containerName="extract-content" Jan 30 19:53:21 crc kubenswrapper[4712]: E0130 19:53:21.095228 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="440ac44f-cdc7-4fce-8217-95bc7748c54e" containerName="registry-server" Jan 30 19:53:21 crc kubenswrapper[4712]: I0130 19:53:21.095237 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="440ac44f-cdc7-4fce-8217-95bc7748c54e" containerName="registry-server" Jan 30 19:53:21 crc kubenswrapper[4712]: E0130 19:53:21.095253 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="440ac44f-cdc7-4fce-8217-95bc7748c54e" containerName="extract-utilities" Jan 30 19:53:21 crc kubenswrapper[4712]: I0130 19:53:21.095260 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="440ac44f-cdc7-4fce-8217-95bc7748c54e" containerName="extract-utilities" Jan 30 19:53:21 crc kubenswrapper[4712]: I0130 19:53:21.095470 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="440ac44f-cdc7-4fce-8217-95bc7748c54e" containerName="registry-server" Jan 30 19:53:21 crc kubenswrapper[4712]: I0130 19:53:21.097119 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6s74f" Jan 30 19:53:21 crc kubenswrapper[4712]: I0130 19:53:21.118609 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6s74f"] Jan 30 19:53:21 crc kubenswrapper[4712]: I0130 19:53:21.207693 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4552723e-db14-4e1b-9e2a-a59e4c0be843-utilities\") pod \"community-operators-6s74f\" (UID: \"4552723e-db14-4e1b-9e2a-a59e4c0be843\") " pod="openshift-marketplace/community-operators-6s74f" Jan 30 19:53:21 crc kubenswrapper[4712]: I0130 19:53:21.207888 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4552723e-db14-4e1b-9e2a-a59e4c0be843-catalog-content\") pod \"community-operators-6s74f\" (UID: \"4552723e-db14-4e1b-9e2a-a59e4c0be843\") " pod="openshift-marketplace/community-operators-6s74f" Jan 30 19:53:21 crc kubenswrapper[4712]: I0130 19:53:21.207927 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgbng\" (UniqueName: \"kubernetes.io/projected/4552723e-db14-4e1b-9e2a-a59e4c0be843-kube-api-access-vgbng\") pod \"community-operators-6s74f\" (UID: \"4552723e-db14-4e1b-9e2a-a59e4c0be843\") " pod="openshift-marketplace/community-operators-6s74f" Jan 30 19:53:21 crc kubenswrapper[4712]: I0130 19:53:21.309329 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4552723e-db14-4e1b-9e2a-a59e4c0be843-utilities\") pod \"community-operators-6s74f\" (UID: \"4552723e-db14-4e1b-9e2a-a59e4c0be843\") " pod="openshift-marketplace/community-operators-6s74f" Jan 30 19:53:21 crc kubenswrapper[4712]: I0130 19:53:21.309459 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4552723e-db14-4e1b-9e2a-a59e4c0be843-catalog-content\") pod \"community-operators-6s74f\" (UID: \"4552723e-db14-4e1b-9e2a-a59e4c0be843\") " pod="openshift-marketplace/community-operators-6s74f" Jan 30 19:53:21 crc kubenswrapper[4712]: I0130 19:53:21.309494 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgbng\" (UniqueName: \"kubernetes.io/projected/4552723e-db14-4e1b-9e2a-a59e4c0be843-kube-api-access-vgbng\") pod \"community-operators-6s74f\" (UID: \"4552723e-db14-4e1b-9e2a-a59e4c0be843\") " pod="openshift-marketplace/community-operators-6s74f" Jan 30 19:53:21 crc kubenswrapper[4712]: I0130 19:53:21.310070 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4552723e-db14-4e1b-9e2a-a59e4c0be843-utilities\") pod \"community-operators-6s74f\" (UID: \"4552723e-db14-4e1b-9e2a-a59e4c0be843\") " pod="openshift-marketplace/community-operators-6s74f" Jan 30 19:53:21 crc kubenswrapper[4712]: I0130 19:53:21.310131 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4552723e-db14-4e1b-9e2a-a59e4c0be843-catalog-content\") pod \"community-operators-6s74f\" (UID: \"4552723e-db14-4e1b-9e2a-a59e4c0be843\") " pod="openshift-marketplace/community-operators-6s74f" Jan 30 19:53:21 crc kubenswrapper[4712]: I0130 19:53:21.342697 4712 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vgbng\" (UniqueName: \"kubernetes.io/projected/4552723e-db14-4e1b-9e2a-a59e4c0be843-kube-api-access-vgbng\") pod \"community-operators-6s74f\" (UID: \"4552723e-db14-4e1b-9e2a-a59e4c0be843\") " pod="openshift-marketplace/community-operators-6s74f" Jan 30 19:53:21 crc kubenswrapper[4712]: I0130 19:53:21.427472 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6s74f" Jan 30 19:53:22 crc kubenswrapper[4712]: I0130 19:53:22.037028 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6s74f"] Jan 30 19:53:22 crc kubenswrapper[4712]: I0130 19:53:22.519332 4712 generic.go:334] "Generic (PLEG): container finished" podID="4552723e-db14-4e1b-9e2a-a59e4c0be843" containerID="72f5f91411b4ba3052192cec86b638f67e3e7a99464f6b3944ae5fd4065a35e5" exitCode=0 Jan 30 19:53:22 crc kubenswrapper[4712]: I0130 19:53:22.519378 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6s74f" event={"ID":"4552723e-db14-4e1b-9e2a-a59e4c0be843","Type":"ContainerDied","Data":"72f5f91411b4ba3052192cec86b638f67e3e7a99464f6b3944ae5fd4065a35e5"} Jan 30 19:53:22 crc kubenswrapper[4712]: I0130 19:53:22.519707 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6s74f" event={"ID":"4552723e-db14-4e1b-9e2a-a59e4c0be843","Type":"ContainerStarted","Data":"1423bebe98c0445049e6c1c2908c2e8fb042665583004006e70fa3d3f4e31cca"} Jan 30 19:53:22 crc kubenswrapper[4712]: I0130 19:53:22.521430 4712 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 19:53:24 crc kubenswrapper[4712]: I0130 19:53:24.093954 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6xsq2"] Jan 30 19:53:24 crc kubenswrapper[4712]: I0130 19:53:24.098220 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6xsq2" Jan 30 19:53:24 crc kubenswrapper[4712]: I0130 19:53:24.106160 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6xsq2"] Jan 30 19:53:24 crc kubenswrapper[4712]: I0130 19:53:24.173578 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62a5b704-c514-4ff7-963b-f263e6ca3ea8-utilities\") pod \"redhat-operators-6xsq2\" (UID: \"62a5b704-c514-4ff7-963b-f263e6ca3ea8\") " pod="openshift-marketplace/redhat-operators-6xsq2" Jan 30 19:53:24 crc kubenswrapper[4712]: I0130 19:53:24.173712 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62a5b704-c514-4ff7-963b-f263e6ca3ea8-catalog-content\") pod \"redhat-operators-6xsq2\" (UID: \"62a5b704-c514-4ff7-963b-f263e6ca3ea8\") " pod="openshift-marketplace/redhat-operators-6xsq2" Jan 30 19:53:24 crc kubenswrapper[4712]: I0130 19:53:24.173869 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9z6tz\" (UniqueName: \"kubernetes.io/projected/62a5b704-c514-4ff7-963b-f263e6ca3ea8-kube-api-access-9z6tz\") pod \"redhat-operators-6xsq2\" (UID: \"62a5b704-c514-4ff7-963b-f263e6ca3ea8\") " pod="openshift-marketplace/redhat-operators-6xsq2" Jan 30 19:53:24 crc kubenswrapper[4712]: I0130 19:53:24.275553 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62a5b704-c514-4ff7-963b-f263e6ca3ea8-utilities\") pod \"redhat-operators-6xsq2\" (UID: \"62a5b704-c514-4ff7-963b-f263e6ca3ea8\") " pod="openshift-marketplace/redhat-operators-6xsq2" Jan 30 19:53:24 crc kubenswrapper[4712]: I0130 19:53:24.275631 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62a5b704-c514-4ff7-963b-f263e6ca3ea8-catalog-content\") pod \"redhat-operators-6xsq2\" (UID: \"62a5b704-c514-4ff7-963b-f263e6ca3ea8\") " pod="openshift-marketplace/redhat-operators-6xsq2" Jan 30 19:53:24 crc kubenswrapper[4712]: I0130 19:53:24.275674 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9z6tz\" (UniqueName: \"kubernetes.io/projected/62a5b704-c514-4ff7-963b-f263e6ca3ea8-kube-api-access-9z6tz\") pod \"redhat-operators-6xsq2\" (UID: \"62a5b704-c514-4ff7-963b-f263e6ca3ea8\") " pod="openshift-marketplace/redhat-operators-6xsq2" Jan 30 19:53:24 crc kubenswrapper[4712]: I0130 19:53:24.276419 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62a5b704-c514-4ff7-963b-f263e6ca3ea8-utilities\") pod \"redhat-operators-6xsq2\" (UID: \"62a5b704-c514-4ff7-963b-f263e6ca3ea8\") " pod="openshift-marketplace/redhat-operators-6xsq2" Jan 30 19:53:24 crc kubenswrapper[4712]: I0130 19:53:24.276623 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62a5b704-c514-4ff7-963b-f263e6ca3ea8-catalog-content\") pod \"redhat-operators-6xsq2\" (UID: \"62a5b704-c514-4ff7-963b-f263e6ca3ea8\") " pod="openshift-marketplace/redhat-operators-6xsq2" Jan 30 19:53:24 crc kubenswrapper[4712]: I0130 19:53:24.308374 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-9z6tz\" (UniqueName: \"kubernetes.io/projected/62a5b704-c514-4ff7-963b-f263e6ca3ea8-kube-api-access-9z6tz\") pod \"redhat-operators-6xsq2\" (UID: \"62a5b704-c514-4ff7-963b-f263e6ca3ea8\") " pod="openshift-marketplace/redhat-operators-6xsq2" Jan 30 19:53:24 crc kubenswrapper[4712]: I0130 19:53:24.423136 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6xsq2" Jan 30 19:53:24 crc kubenswrapper[4712]: I0130 19:53:24.541924 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6s74f" event={"ID":"4552723e-db14-4e1b-9e2a-a59e4c0be843","Type":"ContainerStarted","Data":"8406ad3c0c1d2a7abfd8b646c5d357be9ccf67f4dbae1919263f305b4cb40893"} Jan 30 19:53:24 crc kubenswrapper[4712]: I0130 19:53:24.951470 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6xsq2"] Jan 30 19:53:25 crc kubenswrapper[4712]: I0130 19:53:25.550359 4712 generic.go:334] "Generic (PLEG): container finished" podID="62a5b704-c514-4ff7-963b-f263e6ca3ea8" containerID="c92acac03ea2e1b38442464ed579d696ecf3efbbff1ea3ccbb6450f9318c56bb" exitCode=0 Jan 30 19:53:25 crc kubenswrapper[4712]: I0130 19:53:25.550551 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xsq2" event={"ID":"62a5b704-c514-4ff7-963b-f263e6ca3ea8","Type":"ContainerDied","Data":"c92acac03ea2e1b38442464ed579d696ecf3efbbff1ea3ccbb6450f9318c56bb"} Jan 30 19:53:25 crc kubenswrapper[4712]: I0130 19:53:25.550771 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xsq2" event={"ID":"62a5b704-c514-4ff7-963b-f263e6ca3ea8","Type":"ContainerStarted","Data":"9e2717a365af22918f7d9be8cded49c3d152be569f7281befeab3c5e267733e7"} Jan 30 19:53:25 crc kubenswrapper[4712]: I0130 19:53:25.555044 4712 generic.go:334] "Generic (PLEG): container finished" podID="4552723e-db14-4e1b-9e2a-a59e4c0be843" containerID="8406ad3c0c1d2a7abfd8b646c5d357be9ccf67f4dbae1919263f305b4cb40893" exitCode=0 Jan 30 19:53:25 crc kubenswrapper[4712]: I0130 19:53:25.555098 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6s74f" event={"ID":"4552723e-db14-4e1b-9e2a-a59e4c0be843","Type":"ContainerDied","Data":"8406ad3c0c1d2a7abfd8b646c5d357be9ccf67f4dbae1919263f305b4cb40893"} Jan 30 19:53:26 crc kubenswrapper[4712]: I0130 19:53:26.565519 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6s74f" event={"ID":"4552723e-db14-4e1b-9e2a-a59e4c0be843","Type":"ContainerStarted","Data":"629ceaa8f726bb48f6cba4e260f25358ee96818292c07a2defbd2312661e7b8e"} Jan 30 19:53:26 crc kubenswrapper[4712]: I0130 19:53:26.572694 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xsq2" event={"ID":"62a5b704-c514-4ff7-963b-f263e6ca3ea8","Type":"ContainerStarted","Data":"3c2fa1ac1fcbfcbe8bf42a3aac7a9ad2f9bdb9a095749f0062c413debe8add5a"} Jan 30 19:53:26 crc kubenswrapper[4712]: I0130 19:53:26.598128 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6s74f" podStartSLOduration=2.146613476 podStartE2EDuration="5.598097113s" podCreationTimestamp="2026-01-30 19:53:21 +0000 UTC" firstStartedPulling="2026-01-30 19:53:22.521167428 +0000 UTC m=+10739.428176897" lastFinishedPulling="2026-01-30 19:53:25.972651055 +0000 UTC m=+10742.879660534" 
observedRunningTime="2026-01-30 19:53:26.596619888 +0000 UTC m=+10743.503629367" watchObservedRunningTime="2026-01-30 19:53:26.598097113 +0000 UTC m=+10743.505106632" Jan 30 19:53:26 crc kubenswrapper[4712]: I0130 19:53:26.800566 4712 scope.go:117] "RemoveContainer" containerID="b20dfc84036d077ad3a62b9837e2a9fb62cb43e5a13c9ed9a7ecbb288922bc2f" Jan 30 19:53:26 crc kubenswrapper[4712]: E0130 19:53:26.800870 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:53:31 crc kubenswrapper[4712]: I0130 19:53:31.428653 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6s74f" Jan 30 19:53:31 crc kubenswrapper[4712]: I0130 19:53:31.429234 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6s74f" Jan 30 19:53:32 crc kubenswrapper[4712]: I0130 19:53:32.479543 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-6s74f" podUID="4552723e-db14-4e1b-9e2a-a59e4c0be843" containerName="registry-server" probeResult="failure" output=< Jan 30 19:53:32 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 19:53:32 crc kubenswrapper[4712]: > Jan 30 19:53:32 crc kubenswrapper[4712]: I0130 19:53:32.637301 4712 generic.go:334] "Generic (PLEG): container finished" podID="62a5b704-c514-4ff7-963b-f263e6ca3ea8" containerID="3c2fa1ac1fcbfcbe8bf42a3aac7a9ad2f9bdb9a095749f0062c413debe8add5a" exitCode=0 Jan 30 19:53:32 crc kubenswrapper[4712]: I0130 19:53:32.637345 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xsq2" event={"ID":"62a5b704-c514-4ff7-963b-f263e6ca3ea8","Type":"ContainerDied","Data":"3c2fa1ac1fcbfcbe8bf42a3aac7a9ad2f9bdb9a095749f0062c413debe8add5a"} Jan 30 19:53:33 crc kubenswrapper[4712]: I0130 19:53:33.649269 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xsq2" event={"ID":"62a5b704-c514-4ff7-963b-f263e6ca3ea8","Type":"ContainerStarted","Data":"353b4fb98a60ba6da8529bbe082b1d14ba69186da44522dbb3fd4e99191f90e4"} Jan 30 19:53:33 crc kubenswrapper[4712]: I0130 19:53:33.672921 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6xsq2" podStartSLOduration=2.1475833460000002 podStartE2EDuration="9.672898965s" podCreationTimestamp="2026-01-30 19:53:24 +0000 UTC" firstStartedPulling="2026-01-30 19:53:25.55192422 +0000 UTC m=+10742.458933689" lastFinishedPulling="2026-01-30 19:53:33.077239829 +0000 UTC m=+10749.984249308" observedRunningTime="2026-01-30 19:53:33.671344288 +0000 UTC m=+10750.578353807" watchObservedRunningTime="2026-01-30 19:53:33.672898965 +0000 UTC m=+10750.579908434" Jan 30 19:53:34 crc kubenswrapper[4712]: I0130 19:53:34.423664 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6xsq2" Jan 30 19:53:34 crc kubenswrapper[4712]: I0130 19:53:34.424079 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6xsq2" Jan 30 
19:53:35 crc kubenswrapper[4712]: I0130 19:53:35.474701 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6xsq2" podUID="62a5b704-c514-4ff7-963b-f263e6ca3ea8" containerName="registry-server" probeResult="failure" output=< Jan 30 19:53:35 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 19:53:35 crc kubenswrapper[4712]: > Jan 30 19:53:40 crc kubenswrapper[4712]: I0130 19:53:40.799596 4712 scope.go:117] "RemoveContainer" containerID="b20dfc84036d077ad3a62b9837e2a9fb62cb43e5a13c9ed9a7ecbb288922bc2f" Jan 30 19:53:40 crc kubenswrapper[4712]: E0130 19:53:40.800300 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:53:41 crc kubenswrapper[4712]: I0130 19:53:41.476068 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6s74f" Jan 30 19:53:41 crc kubenswrapper[4712]: I0130 19:53:41.532786 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6s74f" Jan 30 19:53:41 crc kubenswrapper[4712]: I0130 19:53:41.729540 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6s74f"] Jan 30 19:53:42 crc kubenswrapper[4712]: I0130 19:53:42.729571 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6s74f" podUID="4552723e-db14-4e1b-9e2a-a59e4c0be843" containerName="registry-server" containerID="cri-o://629ceaa8f726bb48f6cba4e260f25358ee96818292c07a2defbd2312661e7b8e" gracePeriod=2 Jan 30 19:53:43 crc kubenswrapper[4712]: I0130 19:53:43.765985 4712 generic.go:334] "Generic (PLEG): container finished" podID="4552723e-db14-4e1b-9e2a-a59e4c0be843" containerID="629ceaa8f726bb48f6cba4e260f25358ee96818292c07a2defbd2312661e7b8e" exitCode=0 Jan 30 19:53:43 crc kubenswrapper[4712]: I0130 19:53:43.766060 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6s74f" event={"ID":"4552723e-db14-4e1b-9e2a-a59e4c0be843","Type":"ContainerDied","Data":"629ceaa8f726bb48f6cba4e260f25358ee96818292c07a2defbd2312661e7b8e"} Jan 30 19:53:43 crc kubenswrapper[4712]: I0130 19:53:43.766423 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6s74f" event={"ID":"4552723e-db14-4e1b-9e2a-a59e4c0be843","Type":"ContainerDied","Data":"1423bebe98c0445049e6c1c2908c2e8fb042665583004006e70fa3d3f4e31cca"} Jan 30 19:53:43 crc kubenswrapper[4712]: I0130 19:53:43.766439 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1423bebe98c0445049e6c1c2908c2e8fb042665583004006e70fa3d3f4e31cca" Jan 30 19:53:43 crc kubenswrapper[4712]: I0130 19:53:43.970829 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6s74f" Jan 30 19:53:44 crc kubenswrapper[4712]: I0130 19:53:44.039993 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4552723e-db14-4e1b-9e2a-a59e4c0be843-catalog-content\") pod \"4552723e-db14-4e1b-9e2a-a59e4c0be843\" (UID: \"4552723e-db14-4e1b-9e2a-a59e4c0be843\") " Jan 30 19:53:44 crc kubenswrapper[4712]: I0130 19:53:44.040065 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vgbng\" (UniqueName: \"kubernetes.io/projected/4552723e-db14-4e1b-9e2a-a59e4c0be843-kube-api-access-vgbng\") pod \"4552723e-db14-4e1b-9e2a-a59e4c0be843\" (UID: \"4552723e-db14-4e1b-9e2a-a59e4c0be843\") " Jan 30 19:53:44 crc kubenswrapper[4712]: I0130 19:53:44.040315 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4552723e-db14-4e1b-9e2a-a59e4c0be843-utilities\") pod \"4552723e-db14-4e1b-9e2a-a59e4c0be843\" (UID: \"4552723e-db14-4e1b-9e2a-a59e4c0be843\") " Jan 30 19:53:44 crc kubenswrapper[4712]: I0130 19:53:44.042767 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4552723e-db14-4e1b-9e2a-a59e4c0be843-utilities" (OuterVolumeSpecName: "utilities") pod "4552723e-db14-4e1b-9e2a-a59e4c0be843" (UID: "4552723e-db14-4e1b-9e2a-a59e4c0be843"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 19:53:44 crc kubenswrapper[4712]: I0130 19:53:44.054500 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4552723e-db14-4e1b-9e2a-a59e4c0be843-kube-api-access-vgbng" (OuterVolumeSpecName: "kube-api-access-vgbng") pod "4552723e-db14-4e1b-9e2a-a59e4c0be843" (UID: "4552723e-db14-4e1b-9e2a-a59e4c0be843"). InnerVolumeSpecName "kube-api-access-vgbng". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 19:53:44 crc kubenswrapper[4712]: I0130 19:53:44.134378 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4552723e-db14-4e1b-9e2a-a59e4c0be843-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4552723e-db14-4e1b-9e2a-a59e4c0be843" (UID: "4552723e-db14-4e1b-9e2a-a59e4c0be843"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 19:53:44 crc kubenswrapper[4712]: I0130 19:53:44.142283 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4552723e-db14-4e1b-9e2a-a59e4c0be843-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 19:53:44 crc kubenswrapper[4712]: I0130 19:53:44.142480 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4552723e-db14-4e1b-9e2a-a59e4c0be843-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 19:53:44 crc kubenswrapper[4712]: I0130 19:53:44.142543 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vgbng\" (UniqueName: \"kubernetes.io/projected/4552723e-db14-4e1b-9e2a-a59e4c0be843-kube-api-access-vgbng\") on node \"crc\" DevicePath \"\"" Jan 30 19:53:44 crc kubenswrapper[4712]: I0130 19:53:44.773640 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6s74f" Jan 30 19:53:44 crc kubenswrapper[4712]: I0130 19:53:44.808221 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6s74f"] Jan 30 19:53:44 crc kubenswrapper[4712]: I0130 19:53:44.817660 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6s74f"] Jan 30 19:53:45 crc kubenswrapper[4712]: I0130 19:53:45.472100 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6xsq2" podUID="62a5b704-c514-4ff7-963b-f263e6ca3ea8" containerName="registry-server" probeResult="failure" output=< Jan 30 19:53:45 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 19:53:45 crc kubenswrapper[4712]: > Jan 30 19:53:45 crc kubenswrapper[4712]: I0130 19:53:45.814454 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4552723e-db14-4e1b-9e2a-a59e4c0be843" path="/var/lib/kubelet/pods/4552723e-db14-4e1b-9e2a-a59e4c0be843/volumes" Jan 30 19:53:53 crc kubenswrapper[4712]: I0130 19:53:53.805755 4712 scope.go:117] "RemoveContainer" containerID="b20dfc84036d077ad3a62b9837e2a9fb62cb43e5a13c9ed9a7ecbb288922bc2f" Jan 30 19:53:53 crc kubenswrapper[4712]: E0130 19:53:53.806620 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:53:55 crc kubenswrapper[4712]: I0130 19:53:55.480759 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6xsq2" podUID="62a5b704-c514-4ff7-963b-f263e6ca3ea8" containerName="registry-server" probeResult="failure" output=< Jan 30 19:53:55 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 19:53:55 crc kubenswrapper[4712]: > Jan 30 19:54:04 crc kubenswrapper[4712]: I0130 19:54:04.479627 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6xsq2" Jan 30 19:54:04 crc kubenswrapper[4712]: I0130 19:54:04.536783 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6xsq2" Jan 30 19:54:04 crc kubenswrapper[4712]: I0130 19:54:04.736990 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6xsq2"] Jan 30 19:54:05 crc kubenswrapper[4712]: I0130 19:54:05.985645 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6xsq2" podUID="62a5b704-c514-4ff7-963b-f263e6ca3ea8" containerName="registry-server" containerID="cri-o://353b4fb98a60ba6da8529bbe082b1d14ba69186da44522dbb3fd4e99191f90e4" gracePeriod=2 Jan 30 19:54:06 crc kubenswrapper[4712]: I0130 19:54:06.485636 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6xsq2" Jan 30 19:54:06 crc kubenswrapper[4712]: I0130 19:54:06.546898 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z6tz\" (UniqueName: \"kubernetes.io/projected/62a5b704-c514-4ff7-963b-f263e6ca3ea8-kube-api-access-9z6tz\") pod \"62a5b704-c514-4ff7-963b-f263e6ca3ea8\" (UID: \"62a5b704-c514-4ff7-963b-f263e6ca3ea8\") " Jan 30 19:54:06 crc kubenswrapper[4712]: I0130 19:54:06.547034 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62a5b704-c514-4ff7-963b-f263e6ca3ea8-utilities\") pod \"62a5b704-c514-4ff7-963b-f263e6ca3ea8\" (UID: \"62a5b704-c514-4ff7-963b-f263e6ca3ea8\") " Jan 30 19:54:06 crc kubenswrapper[4712]: I0130 19:54:06.547165 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62a5b704-c514-4ff7-963b-f263e6ca3ea8-catalog-content\") pod \"62a5b704-c514-4ff7-963b-f263e6ca3ea8\" (UID: \"62a5b704-c514-4ff7-963b-f263e6ca3ea8\") " Jan 30 19:54:06 crc kubenswrapper[4712]: I0130 19:54:06.547956 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62a5b704-c514-4ff7-963b-f263e6ca3ea8-utilities" (OuterVolumeSpecName: "utilities") pod "62a5b704-c514-4ff7-963b-f263e6ca3ea8" (UID: "62a5b704-c514-4ff7-963b-f263e6ca3ea8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 19:54:06 crc kubenswrapper[4712]: I0130 19:54:06.558585 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62a5b704-c514-4ff7-963b-f263e6ca3ea8-kube-api-access-9z6tz" (OuterVolumeSpecName: "kube-api-access-9z6tz") pod "62a5b704-c514-4ff7-963b-f263e6ca3ea8" (UID: "62a5b704-c514-4ff7-963b-f263e6ca3ea8"). InnerVolumeSpecName "kube-api-access-9z6tz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 19:54:06 crc kubenswrapper[4712]: I0130 19:54:06.649931 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9z6tz\" (UniqueName: \"kubernetes.io/projected/62a5b704-c514-4ff7-963b-f263e6ca3ea8-kube-api-access-9z6tz\") on node \"crc\" DevicePath \"\"" Jan 30 19:54:06 crc kubenswrapper[4712]: I0130 19:54:06.649958 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62a5b704-c514-4ff7-963b-f263e6ca3ea8-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 19:54:06 crc kubenswrapper[4712]: I0130 19:54:06.703062 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62a5b704-c514-4ff7-963b-f263e6ca3ea8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "62a5b704-c514-4ff7-963b-f263e6ca3ea8" (UID: "62a5b704-c514-4ff7-963b-f263e6ca3ea8"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 19:54:06 crc kubenswrapper[4712]: I0130 19:54:06.751552 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62a5b704-c514-4ff7-963b-f263e6ca3ea8-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 19:54:06 crc kubenswrapper[4712]: I0130 19:54:06.799972 4712 scope.go:117] "RemoveContainer" containerID="b20dfc84036d077ad3a62b9837e2a9fb62cb43e5a13c9ed9a7ecbb288922bc2f" Jan 30 19:54:06 crc kubenswrapper[4712]: E0130 19:54:06.800239 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:54:06 crc kubenswrapper[4712]: I0130 19:54:06.995673 4712 generic.go:334] "Generic (PLEG): container finished" podID="62a5b704-c514-4ff7-963b-f263e6ca3ea8" containerID="353b4fb98a60ba6da8529bbe082b1d14ba69186da44522dbb3fd4e99191f90e4" exitCode=0 Jan 30 19:54:06 crc kubenswrapper[4712]: I0130 19:54:06.995715 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xsq2" event={"ID":"62a5b704-c514-4ff7-963b-f263e6ca3ea8","Type":"ContainerDied","Data":"353b4fb98a60ba6da8529bbe082b1d14ba69186da44522dbb3fd4e99191f90e4"} Jan 30 19:54:06 crc kubenswrapper[4712]: I0130 19:54:06.995772 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6xsq2" Jan 30 19:54:06 crc kubenswrapper[4712]: I0130 19:54:06.995786 4712 scope.go:117] "RemoveContainer" containerID="353b4fb98a60ba6da8529bbe082b1d14ba69186da44522dbb3fd4e99191f90e4" Jan 30 19:54:06 crc kubenswrapper[4712]: I0130 19:54:06.995775 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6xsq2" event={"ID":"62a5b704-c514-4ff7-963b-f263e6ca3ea8","Type":"ContainerDied","Data":"9e2717a365af22918f7d9be8cded49c3d152be569f7281befeab3c5e267733e7"} Jan 30 19:54:07 crc kubenswrapper[4712]: I0130 19:54:07.030134 4712 scope.go:117] "RemoveContainer" containerID="3c2fa1ac1fcbfcbe8bf42a3aac7a9ad2f9bdb9a095749f0062c413debe8add5a" Jan 30 19:54:07 crc kubenswrapper[4712]: I0130 19:54:07.030902 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6xsq2"] Jan 30 19:54:07 crc kubenswrapper[4712]: I0130 19:54:07.039552 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6xsq2"] Jan 30 19:54:07 crc kubenswrapper[4712]: I0130 19:54:07.054636 4712 scope.go:117] "RemoveContainer" containerID="c92acac03ea2e1b38442464ed579d696ecf3efbbff1ea3ccbb6450f9318c56bb" Jan 30 19:54:07 crc kubenswrapper[4712]: I0130 19:54:07.101212 4712 scope.go:117] "RemoveContainer" containerID="353b4fb98a60ba6da8529bbe082b1d14ba69186da44522dbb3fd4e99191f90e4" Jan 30 19:54:07 crc kubenswrapper[4712]: E0130 19:54:07.101618 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"353b4fb98a60ba6da8529bbe082b1d14ba69186da44522dbb3fd4e99191f90e4\": container with ID starting with 353b4fb98a60ba6da8529bbe082b1d14ba69186da44522dbb3fd4e99191f90e4 not found: ID does not exist" 
containerID="353b4fb98a60ba6da8529bbe082b1d14ba69186da44522dbb3fd4e99191f90e4" Jan 30 19:54:07 crc kubenswrapper[4712]: I0130 19:54:07.101681 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"353b4fb98a60ba6da8529bbe082b1d14ba69186da44522dbb3fd4e99191f90e4"} err="failed to get container status \"353b4fb98a60ba6da8529bbe082b1d14ba69186da44522dbb3fd4e99191f90e4\": rpc error: code = NotFound desc = could not find container \"353b4fb98a60ba6da8529bbe082b1d14ba69186da44522dbb3fd4e99191f90e4\": container with ID starting with 353b4fb98a60ba6da8529bbe082b1d14ba69186da44522dbb3fd4e99191f90e4 not found: ID does not exist" Jan 30 19:54:07 crc kubenswrapper[4712]: I0130 19:54:07.101720 4712 scope.go:117] "RemoveContainer" containerID="3c2fa1ac1fcbfcbe8bf42a3aac7a9ad2f9bdb9a095749f0062c413debe8add5a" Jan 30 19:54:07 crc kubenswrapper[4712]: E0130 19:54:07.102077 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c2fa1ac1fcbfcbe8bf42a3aac7a9ad2f9bdb9a095749f0062c413debe8add5a\": container with ID starting with 3c2fa1ac1fcbfcbe8bf42a3aac7a9ad2f9bdb9a095749f0062c413debe8add5a not found: ID does not exist" containerID="3c2fa1ac1fcbfcbe8bf42a3aac7a9ad2f9bdb9a095749f0062c413debe8add5a" Jan 30 19:54:07 crc kubenswrapper[4712]: I0130 19:54:07.102108 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c2fa1ac1fcbfcbe8bf42a3aac7a9ad2f9bdb9a095749f0062c413debe8add5a"} err="failed to get container status \"3c2fa1ac1fcbfcbe8bf42a3aac7a9ad2f9bdb9a095749f0062c413debe8add5a\": rpc error: code = NotFound desc = could not find container \"3c2fa1ac1fcbfcbe8bf42a3aac7a9ad2f9bdb9a095749f0062c413debe8add5a\": container with ID starting with 3c2fa1ac1fcbfcbe8bf42a3aac7a9ad2f9bdb9a095749f0062c413debe8add5a not found: ID does not exist" Jan 30 19:54:07 crc kubenswrapper[4712]: I0130 19:54:07.102130 4712 scope.go:117] "RemoveContainer" containerID="c92acac03ea2e1b38442464ed579d696ecf3efbbff1ea3ccbb6450f9318c56bb" Jan 30 19:54:07 crc kubenswrapper[4712]: E0130 19:54:07.102495 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c92acac03ea2e1b38442464ed579d696ecf3efbbff1ea3ccbb6450f9318c56bb\": container with ID starting with c92acac03ea2e1b38442464ed579d696ecf3efbbff1ea3ccbb6450f9318c56bb not found: ID does not exist" containerID="c92acac03ea2e1b38442464ed579d696ecf3efbbff1ea3ccbb6450f9318c56bb" Jan 30 19:54:07 crc kubenswrapper[4712]: I0130 19:54:07.102513 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c92acac03ea2e1b38442464ed579d696ecf3efbbff1ea3ccbb6450f9318c56bb"} err="failed to get container status \"c92acac03ea2e1b38442464ed579d696ecf3efbbff1ea3ccbb6450f9318c56bb\": rpc error: code = NotFound desc = could not find container \"c92acac03ea2e1b38442464ed579d696ecf3efbbff1ea3ccbb6450f9318c56bb\": container with ID starting with c92acac03ea2e1b38442464ed579d696ecf3efbbff1ea3ccbb6450f9318c56bb not found: ID does not exist" Jan 30 19:54:07 crc kubenswrapper[4712]: I0130 19:54:07.810250 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62a5b704-c514-4ff7-963b-f263e6ca3ea8" path="/var/lib/kubelet/pods/62a5b704-c514-4ff7-963b-f263e6ca3ea8/volumes" Jan 30 19:54:20 crc kubenswrapper[4712]: I0130 19:54:20.801120 4712 scope.go:117] "RemoveContainer" 
containerID="b20dfc84036d077ad3a62b9837e2a9fb62cb43e5a13c9ed9a7ecbb288922bc2f" Jan 30 19:54:20 crc kubenswrapper[4712]: E0130 19:54:20.802170 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:54:31 crc kubenswrapper[4712]: I0130 19:54:31.800068 4712 scope.go:117] "RemoveContainer" containerID="b20dfc84036d077ad3a62b9837e2a9fb62cb43e5a13c9ed9a7ecbb288922bc2f" Jan 30 19:54:31 crc kubenswrapper[4712]: E0130 19:54:31.801389 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:54:43 crc kubenswrapper[4712]: I0130 19:54:43.806219 4712 scope.go:117] "RemoveContainer" containerID="b20dfc84036d077ad3a62b9837e2a9fb62cb43e5a13c9ed9a7ecbb288922bc2f" Jan 30 19:54:43 crc kubenswrapper[4712]: E0130 19:54:43.807191 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:54:55 crc kubenswrapper[4712]: I0130 19:54:55.799924 4712 scope.go:117] "RemoveContainer" containerID="b20dfc84036d077ad3a62b9837e2a9fb62cb43e5a13c9ed9a7ecbb288922bc2f" Jan 30 19:54:55 crc kubenswrapper[4712]: E0130 19:54:55.800509 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:55:08 crc kubenswrapper[4712]: I0130 19:55:08.800300 4712 scope.go:117] "RemoveContainer" containerID="b20dfc84036d077ad3a62b9837e2a9fb62cb43e5a13c9ed9a7ecbb288922bc2f" Jan 30 19:55:08 crc kubenswrapper[4712]: E0130 19:55:08.801348 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:55:21 crc kubenswrapper[4712]: I0130 19:55:21.802238 4712 scope.go:117] "RemoveContainer" containerID="b20dfc84036d077ad3a62b9837e2a9fb62cb43e5a13c9ed9a7ecbb288922bc2f" Jan 30 19:55:21 crc kubenswrapper[4712]: E0130 19:55:21.804324 4712 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:55:32 crc kubenswrapper[4712]: I0130 19:55:32.800664 4712 scope.go:117] "RemoveContainer" containerID="b20dfc84036d077ad3a62b9837e2a9fb62cb43e5a13c9ed9a7ecbb288922bc2f" Jan 30 19:55:32 crc kubenswrapper[4712]: E0130 19:55:32.801334 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:55:47 crc kubenswrapper[4712]: I0130 19:55:47.801004 4712 scope.go:117] "RemoveContainer" containerID="b20dfc84036d077ad3a62b9837e2a9fb62cb43e5a13c9ed9a7ecbb288922bc2f" Jan 30 19:55:47 crc kubenswrapper[4712]: E0130 19:55:47.802605 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:56:00 crc kubenswrapper[4712]: I0130 19:56:00.799671 4712 scope.go:117] "RemoveContainer" containerID="b20dfc84036d077ad3a62b9837e2a9fb62cb43e5a13c9ed9a7ecbb288922bc2f" Jan 30 19:56:00 crc kubenswrapper[4712]: E0130 19:56:00.800740 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 19:56:12 crc kubenswrapper[4712]: I0130 19:56:12.799722 4712 scope.go:117] "RemoveContainer" containerID="b20dfc84036d077ad3a62b9837e2a9fb62cb43e5a13c9ed9a7ecbb288922bc2f" Jan 30 19:56:13 crc kubenswrapper[4712]: I0130 19:56:13.344784 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerStarted","Data":"3663632c4504a46d447b749d0bd94c2be7d4c5fa6615c22f15407718f34a371f"} Jan 30 19:57:42 crc kubenswrapper[4712]: I0130 19:57:42.038673 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x79zp"] Jan 30 19:57:42 crc kubenswrapper[4712]: E0130 19:57:42.039773 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4552723e-db14-4e1b-9e2a-a59e4c0be843" containerName="registry-server" Jan 30 19:57:42 crc kubenswrapper[4712]: I0130 19:57:42.039817 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="4552723e-db14-4e1b-9e2a-a59e4c0be843" containerName="registry-server" Jan 30 19:57:42 crc kubenswrapper[4712]: E0130 19:57:42.039846 4712 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62a5b704-c514-4ff7-963b-f263e6ca3ea8" containerName="registry-server" Jan 30 19:57:42 crc kubenswrapper[4712]: I0130 19:57:42.039854 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="62a5b704-c514-4ff7-963b-f263e6ca3ea8" containerName="registry-server" Jan 30 19:57:42 crc kubenswrapper[4712]: E0130 19:57:42.039877 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4552723e-db14-4e1b-9e2a-a59e4c0be843" containerName="extract-utilities" Jan 30 19:57:42 crc kubenswrapper[4712]: I0130 19:57:42.039887 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="4552723e-db14-4e1b-9e2a-a59e4c0be843" containerName="extract-utilities" Jan 30 19:57:42 crc kubenswrapper[4712]: E0130 19:57:42.039903 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62a5b704-c514-4ff7-963b-f263e6ca3ea8" containerName="extract-utilities" Jan 30 19:57:42 crc kubenswrapper[4712]: I0130 19:57:42.039911 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="62a5b704-c514-4ff7-963b-f263e6ca3ea8" containerName="extract-utilities" Jan 30 19:57:42 crc kubenswrapper[4712]: E0130 19:57:42.039925 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4552723e-db14-4e1b-9e2a-a59e4c0be843" containerName="extract-content" Jan 30 19:57:42 crc kubenswrapper[4712]: I0130 19:57:42.039931 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="4552723e-db14-4e1b-9e2a-a59e4c0be843" containerName="extract-content" Jan 30 19:57:42 crc kubenswrapper[4712]: E0130 19:57:42.039943 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62a5b704-c514-4ff7-963b-f263e6ca3ea8" containerName="extract-content" Jan 30 19:57:42 crc kubenswrapper[4712]: I0130 19:57:42.039952 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="62a5b704-c514-4ff7-963b-f263e6ca3ea8" containerName="extract-content" Jan 30 19:57:42 crc kubenswrapper[4712]: I0130 19:57:42.040181 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="4552723e-db14-4e1b-9e2a-a59e4c0be843" containerName="registry-server" Jan 30 19:57:42 crc kubenswrapper[4712]: I0130 19:57:42.040214 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="62a5b704-c514-4ff7-963b-f263e6ca3ea8" containerName="registry-server" Jan 30 19:57:42 crc kubenswrapper[4712]: I0130 19:57:42.041723 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x79zp" Jan 30 19:57:42 crc kubenswrapper[4712]: I0130 19:57:42.058810 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x79zp"] Jan 30 19:57:42 crc kubenswrapper[4712]: I0130 19:57:42.136789 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/900f98a7-31dd-4553-ad51-d51e529cb17f-catalog-content\") pod \"certified-operators-x79zp\" (UID: \"900f98a7-31dd-4553-ad51-d51e529cb17f\") " pod="openshift-marketplace/certified-operators-x79zp" Jan 30 19:57:42 crc kubenswrapper[4712]: I0130 19:57:42.137040 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp4tr\" (UniqueName: \"kubernetes.io/projected/900f98a7-31dd-4553-ad51-d51e529cb17f-kube-api-access-rp4tr\") pod \"certified-operators-x79zp\" (UID: \"900f98a7-31dd-4553-ad51-d51e529cb17f\") " pod="openshift-marketplace/certified-operators-x79zp" Jan 30 19:57:42 crc kubenswrapper[4712]: I0130 19:57:42.137089 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/900f98a7-31dd-4553-ad51-d51e529cb17f-utilities\") pod \"certified-operators-x79zp\" (UID: \"900f98a7-31dd-4553-ad51-d51e529cb17f\") " pod="openshift-marketplace/certified-operators-x79zp" Jan 30 19:57:42 crc kubenswrapper[4712]: I0130 19:57:42.238526 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/900f98a7-31dd-4553-ad51-d51e529cb17f-catalog-content\") pod \"certified-operators-x79zp\" (UID: \"900f98a7-31dd-4553-ad51-d51e529cb17f\") " pod="openshift-marketplace/certified-operators-x79zp" Jan 30 19:57:42 crc kubenswrapper[4712]: I0130 19:57:42.238624 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rp4tr\" (UniqueName: \"kubernetes.io/projected/900f98a7-31dd-4553-ad51-d51e529cb17f-kube-api-access-rp4tr\") pod \"certified-operators-x79zp\" (UID: \"900f98a7-31dd-4553-ad51-d51e529cb17f\") " pod="openshift-marketplace/certified-operators-x79zp" Jan 30 19:57:42 crc kubenswrapper[4712]: I0130 19:57:42.238646 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/900f98a7-31dd-4553-ad51-d51e529cb17f-utilities\") pod \"certified-operators-x79zp\" (UID: \"900f98a7-31dd-4553-ad51-d51e529cb17f\") " pod="openshift-marketplace/certified-operators-x79zp" Jan 30 19:57:42 crc kubenswrapper[4712]: I0130 19:57:42.239177 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/900f98a7-31dd-4553-ad51-d51e529cb17f-utilities\") pod \"certified-operators-x79zp\" (UID: \"900f98a7-31dd-4553-ad51-d51e529cb17f\") " pod="openshift-marketplace/certified-operators-x79zp" Jan 30 19:57:42 crc kubenswrapper[4712]: I0130 19:57:42.239179 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/900f98a7-31dd-4553-ad51-d51e529cb17f-catalog-content\") pod \"certified-operators-x79zp\" (UID: \"900f98a7-31dd-4553-ad51-d51e529cb17f\") " pod="openshift-marketplace/certified-operators-x79zp" Jan 30 19:57:42 crc kubenswrapper[4712]: I0130 19:57:42.267143 4712 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rp4tr\" (UniqueName: \"kubernetes.io/projected/900f98a7-31dd-4553-ad51-d51e529cb17f-kube-api-access-rp4tr\") pod \"certified-operators-x79zp\" (UID: \"900f98a7-31dd-4553-ad51-d51e529cb17f\") " pod="openshift-marketplace/certified-operators-x79zp" Jan 30 19:57:42 crc kubenswrapper[4712]: I0130 19:57:42.377343 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x79zp" Jan 30 19:57:42 crc kubenswrapper[4712]: I0130 19:57:42.905028 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x79zp"] Jan 30 19:57:43 crc kubenswrapper[4712]: I0130 19:57:43.875067 4712 generic.go:334] "Generic (PLEG): container finished" podID="900f98a7-31dd-4553-ad51-d51e529cb17f" containerID="55375207a3e6f4fd19e91bef910e0be316430e2726500cf1232d93607a0e75af" exitCode=0 Jan 30 19:57:43 crc kubenswrapper[4712]: I0130 19:57:43.875317 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x79zp" event={"ID":"900f98a7-31dd-4553-ad51-d51e529cb17f","Type":"ContainerDied","Data":"55375207a3e6f4fd19e91bef910e0be316430e2726500cf1232d93607a0e75af"} Jan 30 19:57:43 crc kubenswrapper[4712]: I0130 19:57:43.875756 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x79zp" event={"ID":"900f98a7-31dd-4553-ad51-d51e529cb17f","Type":"ContainerStarted","Data":"b5f096ae552dbea39acc8304a4f61c55fb7c03c3d73ac355d63a4eb011b44f9f"} Jan 30 19:57:44 crc kubenswrapper[4712]: I0130 19:57:44.889626 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x79zp" event={"ID":"900f98a7-31dd-4553-ad51-d51e529cb17f","Type":"ContainerStarted","Data":"93e920af1bbe2ad8cfc0993a9ae4bf7a7bf5053bf3c721d7865713ebd82b0607"} Jan 30 19:57:46 crc kubenswrapper[4712]: I0130 19:57:46.914527 4712 generic.go:334] "Generic (PLEG): container finished" podID="900f98a7-31dd-4553-ad51-d51e529cb17f" containerID="93e920af1bbe2ad8cfc0993a9ae4bf7a7bf5053bf3c721d7865713ebd82b0607" exitCode=0 Jan 30 19:57:46 crc kubenswrapper[4712]: I0130 19:57:46.914620 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x79zp" event={"ID":"900f98a7-31dd-4553-ad51-d51e529cb17f","Type":"ContainerDied","Data":"93e920af1bbe2ad8cfc0993a9ae4bf7a7bf5053bf3c721d7865713ebd82b0607"} Jan 30 19:57:47 crc kubenswrapper[4712]: I0130 19:57:47.927001 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x79zp" event={"ID":"900f98a7-31dd-4553-ad51-d51e529cb17f","Type":"ContainerStarted","Data":"d7acb1ffc00ccd885051ed0e5ac4883778722082c4c2d0c249d68313899afc0e"} Jan 30 19:57:47 crc kubenswrapper[4712]: I0130 19:57:47.956953 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x79zp" podStartSLOduration=2.507041593 podStartE2EDuration="5.956930819s" podCreationTimestamp="2026-01-30 19:57:42 +0000 UTC" firstStartedPulling="2026-01-30 19:57:43.882384336 +0000 UTC m=+11000.789393845" lastFinishedPulling="2026-01-30 19:57:47.332273602 +0000 UTC m=+11004.239283071" observedRunningTime="2026-01-30 19:57:47.950849122 +0000 UTC m=+11004.857858611" watchObservedRunningTime="2026-01-30 19:57:47.956930819 +0000 UTC m=+11004.863940298" Jan 30 19:57:52 crc kubenswrapper[4712]: I0130 19:57:52.378015 4712 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x79zp" Jan 30 19:57:52 crc kubenswrapper[4712]: I0130 19:57:52.380041 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-x79zp" Jan 30 19:57:53 crc kubenswrapper[4712]: I0130 19:57:53.440717 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-x79zp" podUID="900f98a7-31dd-4553-ad51-d51e529cb17f" containerName="registry-server" probeResult="failure" output=< Jan 30 19:57:53 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 19:57:53 crc kubenswrapper[4712]: > Jan 30 19:58:02 crc kubenswrapper[4712]: I0130 19:58:02.459537 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x79zp" Jan 30 19:58:02 crc kubenswrapper[4712]: I0130 19:58:02.541203 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-x79zp" Jan 30 19:58:02 crc kubenswrapper[4712]: I0130 19:58:02.709578 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x79zp"] Jan 30 19:58:04 crc kubenswrapper[4712]: I0130 19:58:04.088816 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-x79zp" podUID="900f98a7-31dd-4553-ad51-d51e529cb17f" containerName="registry-server" containerID="cri-o://d7acb1ffc00ccd885051ed0e5ac4883778722082c4c2d0c249d68313899afc0e" gracePeriod=2 Jan 30 19:58:04 crc kubenswrapper[4712]: I0130 19:58:04.733659 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x79zp" Jan 30 19:58:04 crc kubenswrapper[4712]: I0130 19:58:04.867720 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/900f98a7-31dd-4553-ad51-d51e529cb17f-catalog-content\") pod \"900f98a7-31dd-4553-ad51-d51e529cb17f\" (UID: \"900f98a7-31dd-4553-ad51-d51e529cb17f\") " Jan 30 19:58:04 crc kubenswrapper[4712]: I0130 19:58:04.867896 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rp4tr\" (UniqueName: \"kubernetes.io/projected/900f98a7-31dd-4553-ad51-d51e529cb17f-kube-api-access-rp4tr\") pod \"900f98a7-31dd-4553-ad51-d51e529cb17f\" (UID: \"900f98a7-31dd-4553-ad51-d51e529cb17f\") " Jan 30 19:58:04 crc kubenswrapper[4712]: I0130 19:58:04.867986 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/900f98a7-31dd-4553-ad51-d51e529cb17f-utilities\") pod \"900f98a7-31dd-4553-ad51-d51e529cb17f\" (UID: \"900f98a7-31dd-4553-ad51-d51e529cb17f\") " Jan 30 19:58:04 crc kubenswrapper[4712]: I0130 19:58:04.869933 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/900f98a7-31dd-4553-ad51-d51e529cb17f-utilities" (OuterVolumeSpecName: "utilities") pod "900f98a7-31dd-4553-ad51-d51e529cb17f" (UID: "900f98a7-31dd-4553-ad51-d51e529cb17f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 19:58:04 crc kubenswrapper[4712]: I0130 19:58:04.887965 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/900f98a7-31dd-4553-ad51-d51e529cb17f-kube-api-access-rp4tr" (OuterVolumeSpecName: "kube-api-access-rp4tr") pod "900f98a7-31dd-4553-ad51-d51e529cb17f" (UID: "900f98a7-31dd-4553-ad51-d51e529cb17f"). InnerVolumeSpecName "kube-api-access-rp4tr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 19:58:04 crc kubenswrapper[4712]: I0130 19:58:04.922053 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/900f98a7-31dd-4553-ad51-d51e529cb17f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "900f98a7-31dd-4553-ad51-d51e529cb17f" (UID: "900f98a7-31dd-4553-ad51-d51e529cb17f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 19:58:04 crc kubenswrapper[4712]: I0130 19:58:04.970270 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rp4tr\" (UniqueName: \"kubernetes.io/projected/900f98a7-31dd-4553-ad51-d51e529cb17f-kube-api-access-rp4tr\") on node \"crc\" DevicePath \"\"" Jan 30 19:58:04 crc kubenswrapper[4712]: I0130 19:58:04.970300 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/900f98a7-31dd-4553-ad51-d51e529cb17f-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 19:58:04 crc kubenswrapper[4712]: I0130 19:58:04.970309 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/900f98a7-31dd-4553-ad51-d51e529cb17f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 19:58:05 crc kubenswrapper[4712]: I0130 19:58:05.101644 4712 generic.go:334] "Generic (PLEG): container finished" podID="900f98a7-31dd-4553-ad51-d51e529cb17f" containerID="d7acb1ffc00ccd885051ed0e5ac4883778722082c4c2d0c249d68313899afc0e" exitCode=0 Jan 30 19:58:05 crc kubenswrapper[4712]: I0130 19:58:05.101705 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x79zp" Jan 30 19:58:05 crc kubenswrapper[4712]: I0130 19:58:05.101758 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x79zp" event={"ID":"900f98a7-31dd-4553-ad51-d51e529cb17f","Type":"ContainerDied","Data":"d7acb1ffc00ccd885051ed0e5ac4883778722082c4c2d0c249d68313899afc0e"} Jan 30 19:58:05 crc kubenswrapper[4712]: I0130 19:58:05.103126 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x79zp" event={"ID":"900f98a7-31dd-4553-ad51-d51e529cb17f","Type":"ContainerDied","Data":"b5f096ae552dbea39acc8304a4f61c55fb7c03c3d73ac355d63a4eb011b44f9f"} Jan 30 19:58:05 crc kubenswrapper[4712]: I0130 19:58:05.103158 4712 scope.go:117] "RemoveContainer" containerID="d7acb1ffc00ccd885051ed0e5ac4883778722082c4c2d0c249d68313899afc0e" Jan 30 19:58:05 crc kubenswrapper[4712]: I0130 19:58:05.146088 4712 scope.go:117] "RemoveContainer" containerID="93e920af1bbe2ad8cfc0993a9ae4bf7a7bf5053bf3c721d7865713ebd82b0607" Jan 30 19:58:05 crc kubenswrapper[4712]: I0130 19:58:05.152195 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x79zp"] Jan 30 19:58:05 crc kubenswrapper[4712]: I0130 19:58:05.167265 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-x79zp"] Jan 30 19:58:05 crc kubenswrapper[4712]: I0130 19:58:05.205241 4712 scope.go:117] "RemoveContainer" containerID="55375207a3e6f4fd19e91bef910e0be316430e2726500cf1232d93607a0e75af" Jan 30 19:58:05 crc kubenswrapper[4712]: I0130 19:58:05.225801 4712 scope.go:117] "RemoveContainer" containerID="d7acb1ffc00ccd885051ed0e5ac4883778722082c4c2d0c249d68313899afc0e" Jan 30 19:58:05 crc kubenswrapper[4712]: E0130 19:58:05.226478 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7acb1ffc00ccd885051ed0e5ac4883778722082c4c2d0c249d68313899afc0e\": container with ID starting with d7acb1ffc00ccd885051ed0e5ac4883778722082c4c2d0c249d68313899afc0e not found: ID does not exist" containerID="d7acb1ffc00ccd885051ed0e5ac4883778722082c4c2d0c249d68313899afc0e" Jan 30 19:58:05 crc kubenswrapper[4712]: I0130 19:58:05.226547 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7acb1ffc00ccd885051ed0e5ac4883778722082c4c2d0c249d68313899afc0e"} err="failed to get container status \"d7acb1ffc00ccd885051ed0e5ac4883778722082c4c2d0c249d68313899afc0e\": rpc error: code = NotFound desc = could not find container \"d7acb1ffc00ccd885051ed0e5ac4883778722082c4c2d0c249d68313899afc0e\": container with ID starting with d7acb1ffc00ccd885051ed0e5ac4883778722082c4c2d0c249d68313899afc0e not found: ID does not exist" Jan 30 19:58:05 crc kubenswrapper[4712]: I0130 19:58:05.226576 4712 scope.go:117] "RemoveContainer" containerID="93e920af1bbe2ad8cfc0993a9ae4bf7a7bf5053bf3c721d7865713ebd82b0607" Jan 30 19:58:05 crc kubenswrapper[4712]: E0130 19:58:05.227171 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"93e920af1bbe2ad8cfc0993a9ae4bf7a7bf5053bf3c721d7865713ebd82b0607\": container with ID starting with 93e920af1bbe2ad8cfc0993a9ae4bf7a7bf5053bf3c721d7865713ebd82b0607 not found: ID does not exist" containerID="93e920af1bbe2ad8cfc0993a9ae4bf7a7bf5053bf3c721d7865713ebd82b0607" Jan 30 19:58:05 crc kubenswrapper[4712]: I0130 19:58:05.227208 4712 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"93e920af1bbe2ad8cfc0993a9ae4bf7a7bf5053bf3c721d7865713ebd82b0607"} err="failed to get container status \"93e920af1bbe2ad8cfc0993a9ae4bf7a7bf5053bf3c721d7865713ebd82b0607\": rpc error: code = NotFound desc = could not find container \"93e920af1bbe2ad8cfc0993a9ae4bf7a7bf5053bf3c721d7865713ebd82b0607\": container with ID starting with 93e920af1bbe2ad8cfc0993a9ae4bf7a7bf5053bf3c721d7865713ebd82b0607 not found: ID does not exist" Jan 30 19:58:05 crc kubenswrapper[4712]: I0130 19:58:05.227241 4712 scope.go:117] "RemoveContainer" containerID="55375207a3e6f4fd19e91bef910e0be316430e2726500cf1232d93607a0e75af" Jan 30 19:58:05 crc kubenswrapper[4712]: E0130 19:58:05.227485 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55375207a3e6f4fd19e91bef910e0be316430e2726500cf1232d93607a0e75af\": container with ID starting with 55375207a3e6f4fd19e91bef910e0be316430e2726500cf1232d93607a0e75af not found: ID does not exist" containerID="55375207a3e6f4fd19e91bef910e0be316430e2726500cf1232d93607a0e75af" Jan 30 19:58:05 crc kubenswrapper[4712]: I0130 19:58:05.227521 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55375207a3e6f4fd19e91bef910e0be316430e2726500cf1232d93607a0e75af"} err="failed to get container status \"55375207a3e6f4fd19e91bef910e0be316430e2726500cf1232d93607a0e75af\": rpc error: code = NotFound desc = could not find container \"55375207a3e6f4fd19e91bef910e0be316430e2726500cf1232d93607a0e75af\": container with ID starting with 55375207a3e6f4fd19e91bef910e0be316430e2726500cf1232d93607a0e75af not found: ID does not exist" Jan 30 19:58:05 crc kubenswrapper[4712]: I0130 19:58:05.823770 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="900f98a7-31dd-4553-ad51-d51e529cb17f" path="/var/lib/kubelet/pods/900f98a7-31dd-4553-ad51-d51e529cb17f/volumes" Jan 30 19:58:36 crc kubenswrapper[4712]: I0130 19:58:36.272035 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 19:58:36 crc kubenswrapper[4712]: I0130 19:58:36.272752 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 19:59:06 crc kubenswrapper[4712]: I0130 19:59:06.270944 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 19:59:06 crc kubenswrapper[4712]: I0130 19:59:06.271472 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 19:59:36 crc kubenswrapper[4712]: I0130 
Jan 30 19:59:36 crc kubenswrapper[4712]: I0130 19:59:36.271763 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 19:59:36 crc kubenswrapper[4712]: I0130 19:59:36.271937 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7"
Jan 30 19:59:36 crc kubenswrapper[4712]: I0130 19:59:36.273304 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3663632c4504a46d447b749d0bd94c2be7d4c5fa6615c22f15407718f34a371f"} pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 19:59:36 crc kubenswrapper[4712]: I0130 19:59:36.273469 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" containerID="cri-o://3663632c4504a46d447b749d0bd94c2be7d4c5fa6615c22f15407718f34a371f" gracePeriod=600
Jan 30 19:59:37 crc kubenswrapper[4712]: I0130 19:59:37.076719 4712 generic.go:334] "Generic (PLEG): container finished" podID="75ff6334-72a0-4748-bba6-0efb493c8033" containerID="3663632c4504a46d447b749d0bd94c2be7d4c5fa6615c22f15407718f34a371f" exitCode=0
Jan 30 19:59:37 crc kubenswrapper[4712]: I0130 19:59:37.076805 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerDied","Data":"3663632c4504a46d447b749d0bd94c2be7d4c5fa6615c22f15407718f34a371f"}
Jan 30 19:59:37 crc kubenswrapper[4712]: I0130 19:59:37.077141 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerStarted","Data":"f3d81f3d5b996ae8a398e628296472a6a6a7d3a239567e93de923ba9c60d1703"}
Jan 30 19:59:37 crc kubenswrapper[4712]: I0130 19:59:37.077205 4712 scope.go:117] "RemoveContainer" containerID="b20dfc84036d077ad3a62b9837e2a9fb62cb43e5a13c9ed9a7ecbb288922bc2f"
Jan 30 20:00:00 crc kubenswrapper[4712]: I0130 20:00:00.221772 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496720-bw9j7"]
Jan 30 20:00:00 crc kubenswrapper[4712]: E0130 20:00:00.222619 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="900f98a7-31dd-4553-ad51-d51e529cb17f" containerName="extract-utilities"
Jan 30 20:00:00 crc kubenswrapper[4712]: I0130 20:00:00.222634 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="900f98a7-31dd-4553-ad51-d51e529cb17f" containerName="extract-utilities"
Jan 30 20:00:00 crc kubenswrapper[4712]: E0130 20:00:00.222657 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="900f98a7-31dd-4553-ad51-d51e529cb17f" containerName="extract-content"
Jan 30 20:00:00 crc kubenswrapper[4712]: I0130 20:00:00.222663 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="900f98a7-31dd-4553-ad51-d51e529cb17f" containerName="extract-content"
Jan 30 20:00:00 crc kubenswrapper[4712]: E0130 20:00:00.222683 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="900f98a7-31dd-4553-ad51-d51e529cb17f" containerName="registry-server"
Jan 30 20:00:00 crc kubenswrapper[4712]: I0130 20:00:00.222689 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="900f98a7-31dd-4553-ad51-d51e529cb17f" containerName="registry-server"
Jan 30 20:00:00 crc kubenswrapper[4712]: I0130 20:00:00.222869 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="900f98a7-31dd-4553-ad51-d51e529cb17f" containerName="registry-server"
Jan 30 20:00:00 crc kubenswrapper[4712]: I0130 20:00:00.223493 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496720-bw9j7"
Jan 30 20:00:00 crc kubenswrapper[4712]: I0130 20:00:00.225053 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 30 20:00:00 crc kubenswrapper[4712]: I0130 20:00:00.226261 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 30 20:00:00 crc kubenswrapper[4712]: I0130 20:00:00.243123 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496720-bw9j7"]
Jan 30 20:00:00 crc kubenswrapper[4712]: I0130 20:00:00.349136 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa-secret-volume\") pod \"collect-profiles-29496720-bw9j7\" (UID: \"cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496720-bw9j7"
Jan 30 20:00:00 crc kubenswrapper[4712]: I0130 20:00:00.349210 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjbmg\" (UniqueName: \"kubernetes.io/projected/cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa-kube-api-access-gjbmg\") pod \"collect-profiles-29496720-bw9j7\" (UID: \"cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496720-bw9j7"
Jan 30 20:00:00 crc kubenswrapper[4712]: I0130 20:00:00.349238 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa-config-volume\") pod \"collect-profiles-29496720-bw9j7\" (UID: \"cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496720-bw9j7"
Jan 30 20:00:00 crc kubenswrapper[4712]: I0130 20:00:00.451208 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa-secret-volume\") pod \"collect-profiles-29496720-bw9j7\" (UID: \"cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496720-bw9j7"
Jan 30 20:00:00 crc kubenswrapper[4712]: I0130 20:00:00.451511 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjbmg\" (UniqueName: \"kubernetes.io/projected/cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa-kube-api-access-gjbmg\") pod \"collect-profiles-29496720-bw9j7\" (UID: \"cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496720-bw9j7"
"operationExecutor.MountVolume started for volume \"kube-api-access-gjbmg\" (UniqueName: \"kubernetes.io/projected/cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa-kube-api-access-gjbmg\") pod \"collect-profiles-29496720-bw9j7\" (UID: \"cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496720-bw9j7" Jan 30 20:00:00 crc kubenswrapper[4712]: I0130 20:00:00.451608 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa-config-volume\") pod \"collect-profiles-29496720-bw9j7\" (UID: \"cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496720-bw9j7" Jan 30 20:00:00 crc kubenswrapper[4712]: I0130 20:00:00.452714 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa-config-volume\") pod \"collect-profiles-29496720-bw9j7\" (UID: \"cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496720-bw9j7" Jan 30 20:00:00 crc kubenswrapper[4712]: I0130 20:00:00.465917 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa-secret-volume\") pod \"collect-profiles-29496720-bw9j7\" (UID: \"cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496720-bw9j7" Jan 30 20:00:00 crc kubenswrapper[4712]: I0130 20:00:00.469125 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjbmg\" (UniqueName: \"kubernetes.io/projected/cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa-kube-api-access-gjbmg\") pod \"collect-profiles-29496720-bw9j7\" (UID: \"cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496720-bw9j7" Jan 30 20:00:00 crc kubenswrapper[4712]: I0130 20:00:00.563302 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496720-bw9j7" Jan 30 20:00:01 crc kubenswrapper[4712]: W0130 20:00:01.310711 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcb8a1e9e_76cc_49ee_8d9c_d5113d1899fa.slice/crio-ec1d45828bd2b371ce43d2b1b349c54c3fa3c181d0e20bc5aeb27883021720f7 WatchSource:0}: Error finding container ec1d45828bd2b371ce43d2b1b349c54c3fa3c181d0e20bc5aeb27883021720f7: Status 404 returned error can't find the container with id ec1d45828bd2b371ce43d2b1b349c54c3fa3c181d0e20bc5aeb27883021720f7 Jan 30 20:00:01 crc kubenswrapper[4712]: I0130 20:00:01.318813 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496720-bw9j7"] Jan 30 20:00:02 crc kubenswrapper[4712]: I0130 20:00:02.337876 4712 generic.go:334] "Generic (PLEG): container finished" podID="cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa" containerID="5384591e9562d823773fd09b12d2bc09f9e8bea966be1e9ead3b4706dea7505b" exitCode=0 Jan 30 20:00:02 crc kubenswrapper[4712]: I0130 20:00:02.337959 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496720-bw9j7" event={"ID":"cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa","Type":"ContainerDied","Data":"5384591e9562d823773fd09b12d2bc09f9e8bea966be1e9ead3b4706dea7505b"} Jan 30 20:00:02 crc kubenswrapper[4712]: I0130 20:00:02.338323 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496720-bw9j7" event={"ID":"cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa","Type":"ContainerStarted","Data":"ec1d45828bd2b371ce43d2b1b349c54c3fa3c181d0e20bc5aeb27883021720f7"} Jan 30 20:00:03 crc kubenswrapper[4712]: I0130 20:00:03.782070 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496720-bw9j7" Jan 30 20:00:03 crc kubenswrapper[4712]: I0130 20:00:03.920305 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjbmg\" (UniqueName: \"kubernetes.io/projected/cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa-kube-api-access-gjbmg\") pod \"cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa\" (UID: \"cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa\") " Jan 30 20:00:03 crc kubenswrapper[4712]: I0130 20:00:03.920505 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa-secret-volume\") pod \"cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa\" (UID: \"cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa\") " Jan 30 20:00:03 crc kubenswrapper[4712]: I0130 20:00:03.920539 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa-config-volume\") pod \"cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa\" (UID: \"cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa\") " Jan 30 20:00:03 crc kubenswrapper[4712]: I0130 20:00:03.922096 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa-config-volume" (OuterVolumeSpecName: "config-volume") pod "cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa" (UID: "cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 20:00:03 crc kubenswrapper[4712]: I0130 20:00:03.925416 4712 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 20:00:03 crc kubenswrapper[4712]: I0130 20:00:03.932996 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa-kube-api-access-gjbmg" (OuterVolumeSpecName: "kube-api-access-gjbmg") pod "cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa" (UID: "cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa"). InnerVolumeSpecName "kube-api-access-gjbmg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 20:00:03 crc kubenswrapper[4712]: I0130 20:00:03.951917 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa" (UID: "cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 20:00:04 crc kubenswrapper[4712]: I0130 20:00:04.026867 4712 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 20:00:04 crc kubenswrapper[4712]: I0130 20:00:04.026902 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjbmg\" (UniqueName: \"kubernetes.io/projected/cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa-kube-api-access-gjbmg\") on node \"crc\" DevicePath \"\"" Jan 30 20:00:04 crc kubenswrapper[4712]: I0130 20:00:04.358142 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496720-bw9j7" event={"ID":"cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa","Type":"ContainerDied","Data":"ec1d45828bd2b371ce43d2b1b349c54c3fa3c181d0e20bc5aeb27883021720f7"} Jan 30 20:00:04 crc kubenswrapper[4712]: I0130 20:00:04.358474 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ec1d45828bd2b371ce43d2b1b349c54c3fa3c181d0e20bc5aeb27883021720f7" Jan 30 20:00:04 crc kubenswrapper[4712]: I0130 20:00:04.358188 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496720-bw9j7" Jan 30 20:00:04 crc kubenswrapper[4712]: I0130 20:00:04.910163 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496675-szb4z"] Jan 30 20:00:04 crc kubenswrapper[4712]: I0130 20:00:04.917774 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496675-szb4z"] Jan 30 20:00:05 crc kubenswrapper[4712]: I0130 20:00:05.815969 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9b1e620-fced-4dcd-b6eb-ab76e32c0301" path="/var/lib/kubelet/pods/c9b1e620-fced-4dcd-b6eb-ab76e32c0301/volumes" Jan 30 20:00:05 crc kubenswrapper[4712]: I0130 20:00:05.939203 4712 scope.go:117] "RemoveContainer" containerID="8406ad3c0c1d2a7abfd8b646c5d357be9ccf67f4dbae1919263f305b4cb40893" Jan 30 20:00:05 crc kubenswrapper[4712]: I0130 20:00:05.965031 4712 scope.go:117] "RemoveContainer" containerID="72f5f91411b4ba3052192cec86b638f67e3e7a99464f6b3944ae5fd4065a35e5" Jan 30 20:00:06 crc kubenswrapper[4712]: I0130 20:00:06.017227 4712 scope.go:117] "RemoveContainer" containerID="629ceaa8f726bb48f6cba4e260f25358ee96818292c07a2defbd2312661e7b8e" Jan 30 20:00:06 crc kubenswrapper[4712]: I0130 20:00:06.064705 4712 scope.go:117] "RemoveContainer" containerID="869427332ef2363a2c04b24728f9598dcdc0e4710dc5dd6ef1f84432a3497074" Jan 30 20:01:00 crc kubenswrapper[4712]: I0130 20:01:00.180595 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29496721-gss79"] Jan 30 20:01:00 crc kubenswrapper[4712]: E0130 20:01:00.181541 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa" containerName="collect-profiles" Jan 30 20:01:00 crc kubenswrapper[4712]: I0130 20:01:00.181556 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa" containerName="collect-profiles" Jan 30 20:01:00 crc kubenswrapper[4712]: I0130 20:01:00.181754 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb8a1e9e-76cc-49ee-8d9c-d5113d1899fa" containerName="collect-profiles" Jan 30 20:01:00 crc kubenswrapper[4712]: I0130 20:01:00.182379 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29496721-gss79" Jan 30 20:01:00 crc kubenswrapper[4712]: I0130 20:01:00.207535 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29496721-gss79"] Jan 30 20:01:00 crc kubenswrapper[4712]: I0130 20:01:00.319550 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9d07708-613e-4ca3-a143-34a7158f2243-config-data\") pod \"keystone-cron-29496721-gss79\" (UID: \"a9d07708-613e-4ca3-a143-34a7158f2243\") " pod="openstack/keystone-cron-29496721-gss79" Jan 30 20:01:00 crc kubenswrapper[4712]: I0130 20:01:00.319785 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp5fq\" (UniqueName: \"kubernetes.io/projected/a9d07708-613e-4ca3-a143-34a7158f2243-kube-api-access-vp5fq\") pod \"keystone-cron-29496721-gss79\" (UID: \"a9d07708-613e-4ca3-a143-34a7158f2243\") " pod="openstack/keystone-cron-29496721-gss79" Jan 30 20:01:00 crc kubenswrapper[4712]: I0130 20:01:00.319880 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9d07708-613e-4ca3-a143-34a7158f2243-combined-ca-bundle\") pod \"keystone-cron-29496721-gss79\" (UID: \"a9d07708-613e-4ca3-a143-34a7158f2243\") " pod="openstack/keystone-cron-29496721-gss79" Jan 30 20:01:00 crc kubenswrapper[4712]: I0130 20:01:00.319916 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a9d07708-613e-4ca3-a143-34a7158f2243-fernet-keys\") pod \"keystone-cron-29496721-gss79\" (UID: \"a9d07708-613e-4ca3-a143-34a7158f2243\") " pod="openstack/keystone-cron-29496721-gss79" Jan 30 20:01:00 crc kubenswrapper[4712]: I0130 20:01:00.422155 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9d07708-613e-4ca3-a143-34a7158f2243-config-data\") pod \"keystone-cron-29496721-gss79\" (UID: \"a9d07708-613e-4ca3-a143-34a7158f2243\") " pod="openstack/keystone-cron-29496721-gss79" Jan 30 20:01:00 crc kubenswrapper[4712]: I0130 20:01:00.422269 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp5fq\" (UniqueName: \"kubernetes.io/projected/a9d07708-613e-4ca3-a143-34a7158f2243-kube-api-access-vp5fq\") pod \"keystone-cron-29496721-gss79\" (UID: \"a9d07708-613e-4ca3-a143-34a7158f2243\") " pod="openstack/keystone-cron-29496721-gss79" Jan 30 20:01:00 crc kubenswrapper[4712]: I0130 20:01:00.422297 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9d07708-613e-4ca3-a143-34a7158f2243-combined-ca-bundle\") pod \"keystone-cron-29496721-gss79\" (UID: \"a9d07708-613e-4ca3-a143-34a7158f2243\") " pod="openstack/keystone-cron-29496721-gss79" Jan 30 20:01:00 crc kubenswrapper[4712]: I0130 20:01:00.422328 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a9d07708-613e-4ca3-a143-34a7158f2243-fernet-keys\") pod \"keystone-cron-29496721-gss79\" (UID: \"a9d07708-613e-4ca3-a143-34a7158f2243\") " pod="openstack/keystone-cron-29496721-gss79" Jan 30 20:01:00 crc kubenswrapper[4712]: I0130 20:01:00.431382 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a9d07708-613e-4ca3-a143-34a7158f2243-fernet-keys\") pod \"keystone-cron-29496721-gss79\" (UID: \"a9d07708-613e-4ca3-a143-34a7158f2243\") " pod="openstack/keystone-cron-29496721-gss79" Jan 30 20:01:00 crc kubenswrapper[4712]: I0130 20:01:00.436263 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9d07708-613e-4ca3-a143-34a7158f2243-combined-ca-bundle\") pod \"keystone-cron-29496721-gss79\" (UID: \"a9d07708-613e-4ca3-a143-34a7158f2243\") " pod="openstack/keystone-cron-29496721-gss79" Jan 30 20:01:00 crc kubenswrapper[4712]: I0130 20:01:00.437066 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9d07708-613e-4ca3-a143-34a7158f2243-config-data\") pod \"keystone-cron-29496721-gss79\" (UID: \"a9d07708-613e-4ca3-a143-34a7158f2243\") " pod="openstack/keystone-cron-29496721-gss79" Jan 30 20:01:00 crc kubenswrapper[4712]: I0130 20:01:00.457906 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp5fq\" (UniqueName: \"kubernetes.io/projected/a9d07708-613e-4ca3-a143-34a7158f2243-kube-api-access-vp5fq\") pod \"keystone-cron-29496721-gss79\" (UID: \"a9d07708-613e-4ca3-a143-34a7158f2243\") " pod="openstack/keystone-cron-29496721-gss79" Jan 30 20:01:00 crc kubenswrapper[4712]: I0130 20:01:00.502358 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496721-gss79" Jan 30 20:01:01 crc kubenswrapper[4712]: I0130 20:01:01.677837 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29496721-gss79"] Jan 30 20:01:01 crc kubenswrapper[4712]: I0130 20:01:01.977538 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496721-gss79" event={"ID":"a9d07708-613e-4ca3-a143-34a7158f2243","Type":"ContainerStarted","Data":"97bba0a62ff0f05920a17a2e1123abeca094f63a954f07a4e862868e6184423b"} Jan 30 20:01:02 crc kubenswrapper[4712]: I0130 20:01:02.990096 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496721-gss79" event={"ID":"a9d07708-613e-4ca3-a143-34a7158f2243","Type":"ContainerStarted","Data":"e3e63772f6037b712c9ec1136d380e6f97fe3c855f7fee405526654069d40fe4"} Jan 30 20:01:07 crc kubenswrapper[4712]: I0130 20:01:07.034122 4712 generic.go:334] "Generic (PLEG): container finished" podID="a9d07708-613e-4ca3-a143-34a7158f2243" containerID="e3e63772f6037b712c9ec1136d380e6f97fe3c855f7fee405526654069d40fe4" exitCode=0 Jan 30 20:01:07 crc kubenswrapper[4712]: I0130 20:01:07.034205 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496721-gss79" event={"ID":"a9d07708-613e-4ca3-a143-34a7158f2243","Type":"ContainerDied","Data":"e3e63772f6037b712c9ec1136d380e6f97fe3c855f7fee405526654069d40fe4"} Jan 30 20:01:08 crc kubenswrapper[4712]: I0130 20:01:08.596088 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29496721-gss79" Jan 30 20:01:08 crc kubenswrapper[4712]: I0130 20:01:08.723102 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vp5fq\" (UniqueName: \"kubernetes.io/projected/a9d07708-613e-4ca3-a143-34a7158f2243-kube-api-access-vp5fq\") pod \"a9d07708-613e-4ca3-a143-34a7158f2243\" (UID: \"a9d07708-613e-4ca3-a143-34a7158f2243\") " Jan 30 20:01:08 crc kubenswrapper[4712]: I0130 20:01:08.723138 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a9d07708-613e-4ca3-a143-34a7158f2243-fernet-keys\") pod \"a9d07708-613e-4ca3-a143-34a7158f2243\" (UID: \"a9d07708-613e-4ca3-a143-34a7158f2243\") " Jan 30 20:01:08 crc kubenswrapper[4712]: I0130 20:01:08.723197 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9d07708-613e-4ca3-a143-34a7158f2243-config-data\") pod \"a9d07708-613e-4ca3-a143-34a7158f2243\" (UID: \"a9d07708-613e-4ca3-a143-34a7158f2243\") " Jan 30 20:01:08 crc kubenswrapper[4712]: I0130 20:01:08.723283 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9d07708-613e-4ca3-a143-34a7158f2243-combined-ca-bundle\") pod \"a9d07708-613e-4ca3-a143-34a7158f2243\" (UID: \"a9d07708-613e-4ca3-a143-34a7158f2243\") " Jan 30 20:01:08 crc kubenswrapper[4712]: I0130 20:01:08.732113 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9d07708-613e-4ca3-a143-34a7158f2243-kube-api-access-vp5fq" (OuterVolumeSpecName: "kube-api-access-vp5fq") pod "a9d07708-613e-4ca3-a143-34a7158f2243" (UID: "a9d07708-613e-4ca3-a143-34a7158f2243"). InnerVolumeSpecName "kube-api-access-vp5fq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 20:01:08 crc kubenswrapper[4712]: I0130 20:01:08.741504 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9d07708-613e-4ca3-a143-34a7158f2243-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "a9d07708-613e-4ca3-a143-34a7158f2243" (UID: "a9d07708-613e-4ca3-a143-34a7158f2243"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 20:01:08 crc kubenswrapper[4712]: I0130 20:01:08.775688 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9d07708-613e-4ca3-a143-34a7158f2243-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a9d07708-613e-4ca3-a143-34a7158f2243" (UID: "a9d07708-613e-4ca3-a143-34a7158f2243"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 20:01:08 crc kubenswrapper[4712]: I0130 20:01:08.801959 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9d07708-613e-4ca3-a143-34a7158f2243-config-data" (OuterVolumeSpecName: "config-data") pod "a9d07708-613e-4ca3-a143-34a7158f2243" (UID: "a9d07708-613e-4ca3-a143-34a7158f2243"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 20:01:08 crc kubenswrapper[4712]: I0130 20:01:08.825818 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vp5fq\" (UniqueName: \"kubernetes.io/projected/a9d07708-613e-4ca3-a143-34a7158f2243-kube-api-access-vp5fq\") on node \"crc\" DevicePath \"\"" Jan 30 20:01:08 crc kubenswrapper[4712]: I0130 20:01:08.825848 4712 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/a9d07708-613e-4ca3-a143-34a7158f2243-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 30 20:01:08 crc kubenswrapper[4712]: I0130 20:01:08.825857 4712 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9d07708-613e-4ca3-a143-34a7158f2243-config-data\") on node \"crc\" DevicePath \"\"" Jan 30 20:01:08 crc kubenswrapper[4712]: I0130 20:01:08.825865 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9d07708-613e-4ca3-a143-34a7158f2243-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 20:01:09 crc kubenswrapper[4712]: I0130 20:01:09.055488 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29496721-gss79" event={"ID":"a9d07708-613e-4ca3-a143-34a7158f2243","Type":"ContainerDied","Data":"97bba0a62ff0f05920a17a2e1123abeca094f63a954f07a4e862868e6184423b"} Jan 30 20:01:09 crc kubenswrapper[4712]: I0130 20:01:09.055531 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97bba0a62ff0f05920a17a2e1123abeca094f63a954f07a4e862868e6184423b" Jan 30 20:01:09 crc kubenswrapper[4712]: I0130 20:01:09.055586 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29496721-gss79" Jan 30 20:01:36 crc kubenswrapper[4712]: I0130 20:01:36.271585 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 20:01:36 crc kubenswrapper[4712]: I0130 20:01:36.272317 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 20:02:03 crc kubenswrapper[4712]: I0130 20:02:03.134276 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9bxnx"] Jan 30 20:02:03 crc kubenswrapper[4712]: E0130 20:02:03.135494 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9d07708-613e-4ca3-a143-34a7158f2243" containerName="keystone-cron" Jan 30 20:02:03 crc kubenswrapper[4712]: I0130 20:02:03.135516 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9d07708-613e-4ca3-a143-34a7158f2243" containerName="keystone-cron" Jan 30 20:02:03 crc kubenswrapper[4712]: I0130 20:02:03.135760 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9d07708-613e-4ca3-a143-34a7158f2243" containerName="keystone-cron" Jan 30 20:02:03 crc kubenswrapper[4712]: I0130 20:02:03.137505 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9bxnx" Jan 30 20:02:03 crc kubenswrapper[4712]: I0130 20:02:03.167592 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9bxnx"] Jan 30 20:02:03 crc kubenswrapper[4712]: I0130 20:02:03.274582 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhtgv\" (UniqueName: \"kubernetes.io/projected/e0bd1972-47be-4e29-9c06-aab6f3f6a425-kube-api-access-fhtgv\") pod \"redhat-marketplace-9bxnx\" (UID: \"e0bd1972-47be-4e29-9c06-aab6f3f6a425\") " pod="openshift-marketplace/redhat-marketplace-9bxnx" Jan 30 20:02:03 crc kubenswrapper[4712]: I0130 20:02:03.275360 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0bd1972-47be-4e29-9c06-aab6f3f6a425-catalog-content\") pod \"redhat-marketplace-9bxnx\" (UID: \"e0bd1972-47be-4e29-9c06-aab6f3f6a425\") " pod="openshift-marketplace/redhat-marketplace-9bxnx" Jan 30 20:02:03 crc kubenswrapper[4712]: I0130 20:02:03.275571 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0bd1972-47be-4e29-9c06-aab6f3f6a425-utilities\") pod \"redhat-marketplace-9bxnx\" (UID: \"e0bd1972-47be-4e29-9c06-aab6f3f6a425\") " pod="openshift-marketplace/redhat-marketplace-9bxnx" Jan 30 20:02:03 crc kubenswrapper[4712]: I0130 20:02:03.377259 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0bd1972-47be-4e29-9c06-aab6f3f6a425-catalog-content\") pod \"redhat-marketplace-9bxnx\" (UID: \"e0bd1972-47be-4e29-9c06-aab6f3f6a425\") " pod="openshift-marketplace/redhat-marketplace-9bxnx" Jan 30 20:02:03 crc kubenswrapper[4712]: I0130 20:02:03.377396 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0bd1972-47be-4e29-9c06-aab6f3f6a425-utilities\") pod \"redhat-marketplace-9bxnx\" (UID: \"e0bd1972-47be-4e29-9c06-aab6f3f6a425\") " pod="openshift-marketplace/redhat-marketplace-9bxnx" Jan 30 20:02:03 crc kubenswrapper[4712]: I0130 20:02:03.377471 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhtgv\" (UniqueName: \"kubernetes.io/projected/e0bd1972-47be-4e29-9c06-aab6f3f6a425-kube-api-access-fhtgv\") pod \"redhat-marketplace-9bxnx\" (UID: \"e0bd1972-47be-4e29-9c06-aab6f3f6a425\") " pod="openshift-marketplace/redhat-marketplace-9bxnx" Jan 30 20:02:03 crc kubenswrapper[4712]: I0130 20:02:03.377861 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0bd1972-47be-4e29-9c06-aab6f3f6a425-catalog-content\") pod \"redhat-marketplace-9bxnx\" (UID: \"e0bd1972-47be-4e29-9c06-aab6f3f6a425\") " pod="openshift-marketplace/redhat-marketplace-9bxnx" Jan 30 20:02:03 crc kubenswrapper[4712]: I0130 20:02:03.378177 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0bd1972-47be-4e29-9c06-aab6f3f6a425-utilities\") pod \"redhat-marketplace-9bxnx\" (UID: \"e0bd1972-47be-4e29-9c06-aab6f3f6a425\") " pod="openshift-marketplace/redhat-marketplace-9bxnx" Jan 30 20:02:03 crc kubenswrapper[4712]: I0130 20:02:03.401622 4712 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-fhtgv\" (UniqueName: \"kubernetes.io/projected/e0bd1972-47be-4e29-9c06-aab6f3f6a425-kube-api-access-fhtgv\") pod \"redhat-marketplace-9bxnx\" (UID: \"e0bd1972-47be-4e29-9c06-aab6f3f6a425\") " pod="openshift-marketplace/redhat-marketplace-9bxnx" Jan 30 20:02:03 crc kubenswrapper[4712]: I0130 20:02:03.466358 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9bxnx" Jan 30 20:02:04 crc kubenswrapper[4712]: I0130 20:02:04.004539 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9bxnx"] Jan 30 20:02:04 crc kubenswrapper[4712]: W0130 20:02:04.019775 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0bd1972_47be_4e29_9c06_aab6f3f6a425.slice/crio-676f907b87890c2a020954838e787c8240450e9427134899d47d153ec9a73271 WatchSource:0}: Error finding container 676f907b87890c2a020954838e787c8240450e9427134899d47d153ec9a73271: Status 404 returned error can't find the container with id 676f907b87890c2a020954838e787c8240450e9427134899d47d153ec9a73271 Jan 30 20:02:04 crc kubenswrapper[4712]: I0130 20:02:04.655081 4712 generic.go:334] "Generic (PLEG): container finished" podID="e0bd1972-47be-4e29-9c06-aab6f3f6a425" containerID="e0a9f8673b9eb6502f5b11c881fa80948e51ee2f62356cd3754c1071b689b09e" exitCode=0 Jan 30 20:02:04 crc kubenswrapper[4712]: I0130 20:02:04.655112 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9bxnx" event={"ID":"e0bd1972-47be-4e29-9c06-aab6f3f6a425","Type":"ContainerDied","Data":"e0a9f8673b9eb6502f5b11c881fa80948e51ee2f62356cd3754c1071b689b09e"} Jan 30 20:02:04 crc kubenswrapper[4712]: I0130 20:02:04.655362 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9bxnx" event={"ID":"e0bd1972-47be-4e29-9c06-aab6f3f6a425","Type":"ContainerStarted","Data":"676f907b87890c2a020954838e787c8240450e9427134899d47d153ec9a73271"} Jan 30 20:02:04 crc kubenswrapper[4712]: I0130 20:02:04.663056 4712 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 20:02:06 crc kubenswrapper[4712]: I0130 20:02:06.270654 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 20:02:06 crc kubenswrapper[4712]: I0130 20:02:06.271789 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 20:02:06 crc kubenswrapper[4712]: I0130 20:02:06.680015 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9bxnx" event={"ID":"e0bd1972-47be-4e29-9c06-aab6f3f6a425","Type":"ContainerStarted","Data":"532d53e17af5736be39929e3f6424c6abb10ae1df40b766849120be705a220f1"} Jan 30 20:02:09 crc kubenswrapper[4712]: I0130 20:02:09.726174 4712 generic.go:334] "Generic (PLEG): container finished" podID="e0bd1972-47be-4e29-9c06-aab6f3f6a425" 
containerID="532d53e17af5736be39929e3f6424c6abb10ae1df40b766849120be705a220f1" exitCode=0 Jan 30 20:02:09 crc kubenswrapper[4712]: I0130 20:02:09.726255 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9bxnx" event={"ID":"e0bd1972-47be-4e29-9c06-aab6f3f6a425","Type":"ContainerDied","Data":"532d53e17af5736be39929e3f6424c6abb10ae1df40b766849120be705a220f1"} Jan 30 20:02:10 crc kubenswrapper[4712]: I0130 20:02:10.743624 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9bxnx" event={"ID":"e0bd1972-47be-4e29-9c06-aab6f3f6a425","Type":"ContainerStarted","Data":"b942aefa22a763332a50425b7b29b17d7968ed38f6b4a565c993a48e5962b6d5"} Jan 30 20:02:10 crc kubenswrapper[4712]: I0130 20:02:10.768551 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9bxnx" podStartSLOduration=2.150123595 podStartE2EDuration="7.76853223s" podCreationTimestamp="2026-01-30 20:02:03 +0000 UTC" firstStartedPulling="2026-01-30 20:02:04.657329997 +0000 UTC m=+11261.564339476" lastFinishedPulling="2026-01-30 20:02:10.275738602 +0000 UTC m=+11267.182748111" observedRunningTime="2026-01-30 20:02:10.761299156 +0000 UTC m=+11267.668308665" watchObservedRunningTime="2026-01-30 20:02:10.76853223 +0000 UTC m=+11267.675541699" Jan 30 20:02:13 crc kubenswrapper[4712]: I0130 20:02:13.466961 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9bxnx" Jan 30 20:02:13 crc kubenswrapper[4712]: I0130 20:02:13.467241 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9bxnx" Jan 30 20:02:14 crc kubenswrapper[4712]: I0130 20:02:14.593189 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-9bxnx" podUID="e0bd1972-47be-4e29-9c06-aab6f3f6a425" containerName="registry-server" probeResult="failure" output=< Jan 30 20:02:14 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 20:02:14 crc kubenswrapper[4712]: > Jan 30 20:02:23 crc kubenswrapper[4712]: I0130 20:02:23.524782 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9bxnx" Jan 30 20:02:23 crc kubenswrapper[4712]: I0130 20:02:23.609440 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9bxnx" Jan 30 20:02:23 crc kubenswrapper[4712]: I0130 20:02:23.788289 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9bxnx"] Jan 30 20:02:24 crc kubenswrapper[4712]: I0130 20:02:24.929880 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9bxnx" podUID="e0bd1972-47be-4e29-9c06-aab6f3f6a425" containerName="registry-server" containerID="cri-o://b942aefa22a763332a50425b7b29b17d7968ed38f6b4a565c993a48e5962b6d5" gracePeriod=2 Jan 30 20:02:25 crc kubenswrapper[4712]: I0130 20:02:25.525035 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9bxnx" Jan 30 20:02:25 crc kubenswrapper[4712]: I0130 20:02:25.643980 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhtgv\" (UniqueName: \"kubernetes.io/projected/e0bd1972-47be-4e29-9c06-aab6f3f6a425-kube-api-access-fhtgv\") pod \"e0bd1972-47be-4e29-9c06-aab6f3f6a425\" (UID: \"e0bd1972-47be-4e29-9c06-aab6f3f6a425\") " Jan 30 20:02:25 crc kubenswrapper[4712]: I0130 20:02:25.644273 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0bd1972-47be-4e29-9c06-aab6f3f6a425-utilities\") pod \"e0bd1972-47be-4e29-9c06-aab6f3f6a425\" (UID: \"e0bd1972-47be-4e29-9c06-aab6f3f6a425\") " Jan 30 20:02:25 crc kubenswrapper[4712]: I0130 20:02:25.644318 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0bd1972-47be-4e29-9c06-aab6f3f6a425-catalog-content\") pod \"e0bd1972-47be-4e29-9c06-aab6f3f6a425\" (UID: \"e0bd1972-47be-4e29-9c06-aab6f3f6a425\") " Jan 30 20:02:25 crc kubenswrapper[4712]: I0130 20:02:25.645447 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0bd1972-47be-4e29-9c06-aab6f3f6a425-utilities" (OuterVolumeSpecName: "utilities") pod "e0bd1972-47be-4e29-9c06-aab6f3f6a425" (UID: "e0bd1972-47be-4e29-9c06-aab6f3f6a425"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 20:02:25 crc kubenswrapper[4712]: I0130 20:02:25.646678 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e0bd1972-47be-4e29-9c06-aab6f3f6a425-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 20:02:25 crc kubenswrapper[4712]: I0130 20:02:25.649273 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0bd1972-47be-4e29-9c06-aab6f3f6a425-kube-api-access-fhtgv" (OuterVolumeSpecName: "kube-api-access-fhtgv") pod "e0bd1972-47be-4e29-9c06-aab6f3f6a425" (UID: "e0bd1972-47be-4e29-9c06-aab6f3f6a425"). InnerVolumeSpecName "kube-api-access-fhtgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 20:02:25 crc kubenswrapper[4712]: I0130 20:02:25.682573 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0bd1972-47be-4e29-9c06-aab6f3f6a425-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e0bd1972-47be-4e29-9c06-aab6f3f6a425" (UID: "e0bd1972-47be-4e29-9c06-aab6f3f6a425"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 20:02:25 crc kubenswrapper[4712]: I0130 20:02:25.748748 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhtgv\" (UniqueName: \"kubernetes.io/projected/e0bd1972-47be-4e29-9c06-aab6f3f6a425-kube-api-access-fhtgv\") on node \"crc\" DevicePath \"\"" Jan 30 20:02:25 crc kubenswrapper[4712]: I0130 20:02:25.748812 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e0bd1972-47be-4e29-9c06-aab6f3f6a425-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 20:02:25 crc kubenswrapper[4712]: I0130 20:02:25.942565 4712 generic.go:334] "Generic (PLEG): container finished" podID="e0bd1972-47be-4e29-9c06-aab6f3f6a425" containerID="b942aefa22a763332a50425b7b29b17d7968ed38f6b4a565c993a48e5962b6d5" exitCode=0 Jan 30 20:02:25 crc kubenswrapper[4712]: I0130 20:02:25.942615 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9bxnx" event={"ID":"e0bd1972-47be-4e29-9c06-aab6f3f6a425","Type":"ContainerDied","Data":"b942aefa22a763332a50425b7b29b17d7968ed38f6b4a565c993a48e5962b6d5"} Jan 30 20:02:25 crc kubenswrapper[4712]: I0130 20:02:25.942623 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9bxnx" Jan 30 20:02:25 crc kubenswrapper[4712]: I0130 20:02:25.942644 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9bxnx" event={"ID":"e0bd1972-47be-4e29-9c06-aab6f3f6a425","Type":"ContainerDied","Data":"676f907b87890c2a020954838e787c8240450e9427134899d47d153ec9a73271"} Jan 30 20:02:25 crc kubenswrapper[4712]: I0130 20:02:25.942662 4712 scope.go:117] "RemoveContainer" containerID="b942aefa22a763332a50425b7b29b17d7968ed38f6b4a565c993a48e5962b6d5" Jan 30 20:02:25 crc kubenswrapper[4712]: I0130 20:02:25.971477 4712 scope.go:117] "RemoveContainer" containerID="532d53e17af5736be39929e3f6424c6abb10ae1df40b766849120be705a220f1" Jan 30 20:02:25 crc kubenswrapper[4712]: I0130 20:02:25.974836 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9bxnx"] Jan 30 20:02:26 crc kubenswrapper[4712]: I0130 20:02:26.005470 4712 scope.go:117] "RemoveContainer" containerID="e0a9f8673b9eb6502f5b11c881fa80948e51ee2f62356cd3754c1071b689b09e" Jan 30 20:02:26 crc kubenswrapper[4712]: I0130 20:02:26.014864 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9bxnx"] Jan 30 20:02:26 crc kubenswrapper[4712]: I0130 20:02:26.048758 4712 scope.go:117] "RemoveContainer" containerID="b942aefa22a763332a50425b7b29b17d7968ed38f6b4a565c993a48e5962b6d5" Jan 30 20:02:26 crc kubenswrapper[4712]: E0130 20:02:26.049140 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b942aefa22a763332a50425b7b29b17d7968ed38f6b4a565c993a48e5962b6d5\": container with ID starting with b942aefa22a763332a50425b7b29b17d7968ed38f6b4a565c993a48e5962b6d5 not found: ID does not exist" containerID="b942aefa22a763332a50425b7b29b17d7968ed38f6b4a565c993a48e5962b6d5" Jan 30 20:02:26 crc kubenswrapper[4712]: I0130 20:02:26.049176 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b942aefa22a763332a50425b7b29b17d7968ed38f6b4a565c993a48e5962b6d5"} err="failed to get container status 
\"b942aefa22a763332a50425b7b29b17d7968ed38f6b4a565c993a48e5962b6d5\": rpc error: code = NotFound desc = could not find container \"b942aefa22a763332a50425b7b29b17d7968ed38f6b4a565c993a48e5962b6d5\": container with ID starting with b942aefa22a763332a50425b7b29b17d7968ed38f6b4a565c993a48e5962b6d5 not found: ID does not exist" Jan 30 20:02:26 crc kubenswrapper[4712]: I0130 20:02:26.049199 4712 scope.go:117] "RemoveContainer" containerID="532d53e17af5736be39929e3f6424c6abb10ae1df40b766849120be705a220f1" Jan 30 20:02:26 crc kubenswrapper[4712]: E0130 20:02:26.049404 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"532d53e17af5736be39929e3f6424c6abb10ae1df40b766849120be705a220f1\": container with ID starting with 532d53e17af5736be39929e3f6424c6abb10ae1df40b766849120be705a220f1 not found: ID does not exist" containerID="532d53e17af5736be39929e3f6424c6abb10ae1df40b766849120be705a220f1" Jan 30 20:02:26 crc kubenswrapper[4712]: I0130 20:02:26.049426 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"532d53e17af5736be39929e3f6424c6abb10ae1df40b766849120be705a220f1"} err="failed to get container status \"532d53e17af5736be39929e3f6424c6abb10ae1df40b766849120be705a220f1\": rpc error: code = NotFound desc = could not find container \"532d53e17af5736be39929e3f6424c6abb10ae1df40b766849120be705a220f1\": container with ID starting with 532d53e17af5736be39929e3f6424c6abb10ae1df40b766849120be705a220f1 not found: ID does not exist" Jan 30 20:02:26 crc kubenswrapper[4712]: I0130 20:02:26.049438 4712 scope.go:117] "RemoveContainer" containerID="e0a9f8673b9eb6502f5b11c881fa80948e51ee2f62356cd3754c1071b689b09e" Jan 30 20:02:26 crc kubenswrapper[4712]: E0130 20:02:26.049730 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0a9f8673b9eb6502f5b11c881fa80948e51ee2f62356cd3754c1071b689b09e\": container with ID starting with e0a9f8673b9eb6502f5b11c881fa80948e51ee2f62356cd3754c1071b689b09e not found: ID does not exist" containerID="e0a9f8673b9eb6502f5b11c881fa80948e51ee2f62356cd3754c1071b689b09e" Jan 30 20:02:26 crc kubenswrapper[4712]: I0130 20:02:26.049746 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0a9f8673b9eb6502f5b11c881fa80948e51ee2f62356cd3754c1071b689b09e"} err="failed to get container status \"e0a9f8673b9eb6502f5b11c881fa80948e51ee2f62356cd3754c1071b689b09e\": rpc error: code = NotFound desc = could not find container \"e0a9f8673b9eb6502f5b11c881fa80948e51ee2f62356cd3754c1071b689b09e\": container with ID starting with e0a9f8673b9eb6502f5b11c881fa80948e51ee2f62356cd3754c1071b689b09e not found: ID does not exist" Jan 30 20:02:27 crc kubenswrapper[4712]: I0130 20:02:27.816289 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0bd1972-47be-4e29-9c06-aab6f3f6a425" path="/var/lib/kubelet/pods/e0bd1972-47be-4e29-9c06-aab6f3f6a425/volumes" Jan 30 20:02:36 crc kubenswrapper[4712]: I0130 20:02:36.271263 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 20:02:36 crc kubenswrapper[4712]: I0130 20:02:36.271882 4712 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 20:02:36 crc kubenswrapper[4712]: I0130 20:02:36.271947 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" Jan 30 20:02:36 crc kubenswrapper[4712]: I0130 20:02:36.272879 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f3d81f3d5b996ae8a398e628296472a6a6a7d3a239567e93de923ba9c60d1703"} pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 20:02:36 crc kubenswrapper[4712]: I0130 20:02:36.272949 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" containerID="cri-o://f3d81f3d5b996ae8a398e628296472a6a6a7d3a239567e93de923ba9c60d1703" gracePeriod=600 Jan 30 20:02:36 crc kubenswrapper[4712]: E0130 20:02:36.397017 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 20:02:37 crc kubenswrapper[4712]: I0130 20:02:37.064278 4712 generic.go:334] "Generic (PLEG): container finished" podID="75ff6334-72a0-4748-bba6-0efb493c8033" containerID="f3d81f3d5b996ae8a398e628296472a6a6a7d3a239567e93de923ba9c60d1703" exitCode=0 Jan 30 20:02:37 crc kubenswrapper[4712]: I0130 20:02:37.064375 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerDied","Data":"f3d81f3d5b996ae8a398e628296472a6a6a7d3a239567e93de923ba9c60d1703"} Jan 30 20:02:37 crc kubenswrapper[4712]: I0130 20:02:37.064733 4712 scope.go:117] "RemoveContainer" containerID="3663632c4504a46d447b749d0bd94c2be7d4c5fa6615c22f15407718f34a371f" Jan 30 20:02:37 crc kubenswrapper[4712]: I0130 20:02:37.065513 4712 scope.go:117] "RemoveContainer" containerID="f3d81f3d5b996ae8a398e628296472a6a6a7d3a239567e93de923ba9c60d1703" Jan 30 20:02:37 crc kubenswrapper[4712]: E0130 20:02:37.068408 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 20:02:51 crc kubenswrapper[4712]: I0130 20:02:51.800388 4712 scope.go:117] "RemoveContainer" containerID="f3d81f3d5b996ae8a398e628296472a6a6a7d3a239567e93de923ba9c60d1703" Jan 30 20:02:51 crc kubenswrapper[4712]: E0130 20:02:51.801096 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 20:03:06 crc kubenswrapper[4712]: I0130 20:03:06.799759 4712 scope.go:117] "RemoveContainer" containerID="f3d81f3d5b996ae8a398e628296472a6a6a7d3a239567e93de923ba9c60d1703" Jan 30 20:03:06 crc kubenswrapper[4712]: E0130 20:03:06.800603 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 20:03:20 crc kubenswrapper[4712]: I0130 20:03:20.799694 4712 scope.go:117] "RemoveContainer" containerID="f3d81f3d5b996ae8a398e628296472a6a6a7d3a239567e93de923ba9c60d1703" Jan 30 20:03:20 crc kubenswrapper[4712]: E0130 20:03:20.800557 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 20:03:34 crc kubenswrapper[4712]: I0130 20:03:34.800368 4712 scope.go:117] "RemoveContainer" containerID="f3d81f3d5b996ae8a398e628296472a6a6a7d3a239567e93de923ba9c60d1703" Jan 30 20:03:34 crc kubenswrapper[4712]: E0130 20:03:34.801680 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 20:03:47 crc kubenswrapper[4712]: I0130 20:03:47.799493 4712 scope.go:117] "RemoveContainer" containerID="f3d81f3d5b996ae8a398e628296472a6a6a7d3a239567e93de923ba9c60d1703" Jan 30 20:03:47 crc kubenswrapper[4712]: E0130 20:03:47.800545 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 20:03:55 crc kubenswrapper[4712]: I0130 20:03:55.009028 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m86vp"] Jan 30 20:03:55 crc kubenswrapper[4712]: E0130 20:03:55.009952 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0bd1972-47be-4e29-9c06-aab6f3f6a425" containerName="extract-utilities" Jan 30 20:03:55 crc kubenswrapper[4712]: I0130 20:03:55.009969 4712 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="e0bd1972-47be-4e29-9c06-aab6f3f6a425" containerName="extract-utilities" Jan 30 20:03:55 crc kubenswrapper[4712]: E0130 20:03:55.010009 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0bd1972-47be-4e29-9c06-aab6f3f6a425" containerName="registry-server" Jan 30 20:03:55 crc kubenswrapper[4712]: I0130 20:03:55.010018 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0bd1972-47be-4e29-9c06-aab6f3f6a425" containerName="registry-server" Jan 30 20:03:55 crc kubenswrapper[4712]: E0130 20:03:55.010043 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0bd1972-47be-4e29-9c06-aab6f3f6a425" containerName="extract-content" Jan 30 20:03:55 crc kubenswrapper[4712]: I0130 20:03:55.010051 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0bd1972-47be-4e29-9c06-aab6f3f6a425" containerName="extract-content" Jan 30 20:03:55 crc kubenswrapper[4712]: I0130 20:03:55.010268 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0bd1972-47be-4e29-9c06-aab6f3f6a425" containerName="registry-server" Jan 30 20:03:55 crc kubenswrapper[4712]: I0130 20:03:55.011893 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m86vp" Jan 30 20:03:55 crc kubenswrapper[4712]: I0130 20:03:55.030523 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m86vp"] Jan 30 20:03:55 crc kubenswrapper[4712]: I0130 20:03:55.181512 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f9ceb80-07cc-4de8-9d17-4d7465e79973-utilities\") pod \"community-operators-m86vp\" (UID: \"5f9ceb80-07cc-4de8-9d17-4d7465e79973\") " pod="openshift-marketplace/community-operators-m86vp" Jan 30 20:03:55 crc kubenswrapper[4712]: I0130 20:03:55.181588 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f9ceb80-07cc-4de8-9d17-4d7465e79973-catalog-content\") pod \"community-operators-m86vp\" (UID: \"5f9ceb80-07cc-4de8-9d17-4d7465e79973\") " pod="openshift-marketplace/community-operators-m86vp" Jan 30 20:03:55 crc kubenswrapper[4712]: I0130 20:03:55.181639 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzlnj\" (UniqueName: \"kubernetes.io/projected/5f9ceb80-07cc-4de8-9d17-4d7465e79973-kube-api-access-wzlnj\") pod \"community-operators-m86vp\" (UID: \"5f9ceb80-07cc-4de8-9d17-4d7465e79973\") " pod="openshift-marketplace/community-operators-m86vp" Jan 30 20:03:55 crc kubenswrapper[4712]: I0130 20:03:55.283322 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f9ceb80-07cc-4de8-9d17-4d7465e79973-utilities\") pod \"community-operators-m86vp\" (UID: \"5f9ceb80-07cc-4de8-9d17-4d7465e79973\") " pod="openshift-marketplace/community-operators-m86vp" Jan 30 20:03:55 crc kubenswrapper[4712]: I0130 20:03:55.283394 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f9ceb80-07cc-4de8-9d17-4d7465e79973-catalog-content\") pod \"community-operators-m86vp\" (UID: \"5f9ceb80-07cc-4de8-9d17-4d7465e79973\") " pod="openshift-marketplace/community-operators-m86vp" Jan 30 20:03:55 crc kubenswrapper[4712]: I0130 20:03:55.283437 4712 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzlnj\" (UniqueName: \"kubernetes.io/projected/5f9ceb80-07cc-4de8-9d17-4d7465e79973-kube-api-access-wzlnj\") pod \"community-operators-m86vp\" (UID: \"5f9ceb80-07cc-4de8-9d17-4d7465e79973\") " pod="openshift-marketplace/community-operators-m86vp" Jan 30 20:03:55 crc kubenswrapper[4712]: I0130 20:03:55.284251 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f9ceb80-07cc-4de8-9d17-4d7465e79973-utilities\") pod \"community-operators-m86vp\" (UID: \"5f9ceb80-07cc-4de8-9d17-4d7465e79973\") " pod="openshift-marketplace/community-operators-m86vp" Jan 30 20:03:55 crc kubenswrapper[4712]: I0130 20:03:55.284522 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f9ceb80-07cc-4de8-9d17-4d7465e79973-catalog-content\") pod \"community-operators-m86vp\" (UID: \"5f9ceb80-07cc-4de8-9d17-4d7465e79973\") " pod="openshift-marketplace/community-operators-m86vp" Jan 30 20:03:55 crc kubenswrapper[4712]: I0130 20:03:55.320838 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzlnj\" (UniqueName: \"kubernetes.io/projected/5f9ceb80-07cc-4de8-9d17-4d7465e79973-kube-api-access-wzlnj\") pod \"community-operators-m86vp\" (UID: \"5f9ceb80-07cc-4de8-9d17-4d7465e79973\") " pod="openshift-marketplace/community-operators-m86vp" Jan 30 20:03:55 crc kubenswrapper[4712]: I0130 20:03:55.330132 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m86vp" Jan 30 20:03:55 crc kubenswrapper[4712]: I0130 20:03:55.714865 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m86vp"] Jan 30 20:03:55 crc kubenswrapper[4712]: I0130 20:03:55.843992 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m86vp" event={"ID":"5f9ceb80-07cc-4de8-9d17-4d7465e79973","Type":"ContainerStarted","Data":"0803f2e102a3f521b446cde181a5402584d6413947b6e2e47f49b857da7bf515"} Jan 30 20:03:56 crc kubenswrapper[4712]: I0130 20:03:56.856352 4712 generic.go:334] "Generic (PLEG): container finished" podID="5f9ceb80-07cc-4de8-9d17-4d7465e79973" containerID="1be7217356ca51bf9c33b32fc048cd5c6dbb8987bef250fd5f66d5b423e50dd2" exitCode=0 Jan 30 20:03:56 crc kubenswrapper[4712]: I0130 20:03:56.856408 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m86vp" event={"ID":"5f9ceb80-07cc-4de8-9d17-4d7465e79973","Type":"ContainerDied","Data":"1be7217356ca51bf9c33b32fc048cd5c6dbb8987bef250fd5f66d5b423e50dd2"} Jan 30 20:03:58 crc kubenswrapper[4712]: I0130 20:03:58.799816 4712 scope.go:117] "RemoveContainer" containerID="f3d81f3d5b996ae8a398e628296472a6a6a7d3a239567e93de923ba9c60d1703" Jan 30 20:03:58 crc kubenswrapper[4712]: E0130 20:03:58.801377 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 20:03:58 crc kubenswrapper[4712]: I0130 20:03:58.904047 4712 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-m86vp" event={"ID":"5f9ceb80-07cc-4de8-9d17-4d7465e79973","Type":"ContainerStarted","Data":"4b9b3dc33801417465c9e5bf0fde5288783b7fc8bf2811a64b19d49f3d2614ad"} Jan 30 20:03:59 crc kubenswrapper[4712]: I0130 20:03:59.917223 4712 generic.go:334] "Generic (PLEG): container finished" podID="5f9ceb80-07cc-4de8-9d17-4d7465e79973" containerID="4b9b3dc33801417465c9e5bf0fde5288783b7fc8bf2811a64b19d49f3d2614ad" exitCode=0 Jan 30 20:03:59 crc kubenswrapper[4712]: I0130 20:03:59.917286 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m86vp" event={"ID":"5f9ceb80-07cc-4de8-9d17-4d7465e79973","Type":"ContainerDied","Data":"4b9b3dc33801417465c9e5bf0fde5288783b7fc8bf2811a64b19d49f3d2614ad"} Jan 30 20:04:00 crc kubenswrapper[4712]: I0130 20:04:00.933283 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m86vp" event={"ID":"5f9ceb80-07cc-4de8-9d17-4d7465e79973","Type":"ContainerStarted","Data":"43a689e872845b940f8b9d25ef7fb0836c204c78a3e9b4a4f9d11af3b0f02c7d"} Jan 30 20:04:00 crc kubenswrapper[4712]: I0130 20:04:00.973181 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m86vp" podStartSLOduration=3.497169552 podStartE2EDuration="6.973152416s" podCreationTimestamp="2026-01-30 20:03:54 +0000 UTC" firstStartedPulling="2026-01-30 20:03:56.858449396 +0000 UTC m=+11373.765458915" lastFinishedPulling="2026-01-30 20:04:00.33443231 +0000 UTC m=+11377.241441779" observedRunningTime="2026-01-30 20:04:00.957894348 +0000 UTC m=+11377.864903887" watchObservedRunningTime="2026-01-30 20:04:00.973152416 +0000 UTC m=+11377.880161915" Jan 30 20:04:05 crc kubenswrapper[4712]: I0130 20:04:05.330945 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m86vp" Jan 30 20:04:05 crc kubenswrapper[4712]: I0130 20:04:05.332539 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m86vp" Jan 30 20:04:06 crc kubenswrapper[4712]: I0130 20:04:06.376958 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-m86vp" podUID="5f9ceb80-07cc-4de8-9d17-4d7465e79973" containerName="registry-server" probeResult="failure" output=< Jan 30 20:04:06 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 20:04:06 crc kubenswrapper[4712]: > Jan 30 20:04:09 crc kubenswrapper[4712]: I0130 20:04:09.800120 4712 scope.go:117] "RemoveContainer" containerID="f3d81f3d5b996ae8a398e628296472a6a6a7d3a239567e93de923ba9c60d1703" Jan 30 20:04:09 crc kubenswrapper[4712]: E0130 20:04:09.801092 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 20:04:15 crc kubenswrapper[4712]: I0130 20:04:15.390249 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m86vp" Jan 30 20:04:15 crc kubenswrapper[4712]: I0130 20:04:15.459475 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/community-operators-m86vp" Jan 30 20:04:15 crc kubenswrapper[4712]: I0130 20:04:15.641888 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m86vp"] Jan 30 20:04:17 crc kubenswrapper[4712]: I0130 20:04:17.120613 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m86vp" podUID="5f9ceb80-07cc-4de8-9d17-4d7465e79973" containerName="registry-server" containerID="cri-o://43a689e872845b940f8b9d25ef7fb0836c204c78a3e9b4a4f9d11af3b0f02c7d" gracePeriod=2 Jan 30 20:04:17 crc kubenswrapper[4712]: I0130 20:04:17.635581 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m86vp" Jan 30 20:04:17 crc kubenswrapper[4712]: I0130 20:04:17.767234 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f9ceb80-07cc-4de8-9d17-4d7465e79973-catalog-content\") pod \"5f9ceb80-07cc-4de8-9d17-4d7465e79973\" (UID: \"5f9ceb80-07cc-4de8-9d17-4d7465e79973\") " Jan 30 20:04:17 crc kubenswrapper[4712]: I0130 20:04:17.767285 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f9ceb80-07cc-4de8-9d17-4d7465e79973-utilities\") pod \"5f9ceb80-07cc-4de8-9d17-4d7465e79973\" (UID: \"5f9ceb80-07cc-4de8-9d17-4d7465e79973\") " Jan 30 20:04:17 crc kubenswrapper[4712]: I0130 20:04:17.767433 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzlnj\" (UniqueName: \"kubernetes.io/projected/5f9ceb80-07cc-4de8-9d17-4d7465e79973-kube-api-access-wzlnj\") pod \"5f9ceb80-07cc-4de8-9d17-4d7465e79973\" (UID: \"5f9ceb80-07cc-4de8-9d17-4d7465e79973\") " Jan 30 20:04:17 crc kubenswrapper[4712]: I0130 20:04:17.770001 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f9ceb80-07cc-4de8-9d17-4d7465e79973-utilities" (OuterVolumeSpecName: "utilities") pod "5f9ceb80-07cc-4de8-9d17-4d7465e79973" (UID: "5f9ceb80-07cc-4de8-9d17-4d7465e79973"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 20:04:17 crc kubenswrapper[4712]: I0130 20:04:17.773556 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f9ceb80-07cc-4de8-9d17-4d7465e79973-kube-api-access-wzlnj" (OuterVolumeSpecName: "kube-api-access-wzlnj") pod "5f9ceb80-07cc-4de8-9d17-4d7465e79973" (UID: "5f9ceb80-07cc-4de8-9d17-4d7465e79973"). InnerVolumeSpecName "kube-api-access-wzlnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 20:04:17 crc kubenswrapper[4712]: I0130 20:04:17.820683 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f9ceb80-07cc-4de8-9d17-4d7465e79973-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f9ceb80-07cc-4de8-9d17-4d7465e79973" (UID: "5f9ceb80-07cc-4de8-9d17-4d7465e79973"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 20:04:17 crc kubenswrapper[4712]: I0130 20:04:17.871016 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f9ceb80-07cc-4de8-9d17-4d7465e79973-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 20:04:17 crc kubenswrapper[4712]: I0130 20:04:17.872109 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f9ceb80-07cc-4de8-9d17-4d7465e79973-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 20:04:17 crc kubenswrapper[4712]: I0130 20:04:17.877002 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzlnj\" (UniqueName: \"kubernetes.io/projected/5f9ceb80-07cc-4de8-9d17-4d7465e79973-kube-api-access-wzlnj\") on node \"crc\" DevicePath \"\"" Jan 30 20:04:18 crc kubenswrapper[4712]: I0130 20:04:18.136267 4712 generic.go:334] "Generic (PLEG): container finished" podID="5f9ceb80-07cc-4de8-9d17-4d7465e79973" containerID="43a689e872845b940f8b9d25ef7fb0836c204c78a3e9b4a4f9d11af3b0f02c7d" exitCode=0 Jan 30 20:04:18 crc kubenswrapper[4712]: I0130 20:04:18.136334 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m86vp" event={"ID":"5f9ceb80-07cc-4de8-9d17-4d7465e79973","Type":"ContainerDied","Data":"43a689e872845b940f8b9d25ef7fb0836c204c78a3e9b4a4f9d11af3b0f02c7d"} Jan 30 20:04:18 crc kubenswrapper[4712]: I0130 20:04:18.136365 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m86vp" event={"ID":"5f9ceb80-07cc-4de8-9d17-4d7465e79973","Type":"ContainerDied","Data":"0803f2e102a3f521b446cde181a5402584d6413947b6e2e47f49b857da7bf515"} Jan 30 20:04:18 crc kubenswrapper[4712]: I0130 20:04:18.136384 4712 scope.go:117] "RemoveContainer" containerID="43a689e872845b940f8b9d25ef7fb0836c204c78a3e9b4a4f9d11af3b0f02c7d" Jan 30 20:04:18 crc kubenswrapper[4712]: I0130 20:04:18.136481 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m86vp" Jan 30 20:04:18 crc kubenswrapper[4712]: I0130 20:04:18.167564 4712 scope.go:117] "RemoveContainer" containerID="4b9b3dc33801417465c9e5bf0fde5288783b7fc8bf2811a64b19d49f3d2614ad" Jan 30 20:04:18 crc kubenswrapper[4712]: I0130 20:04:18.202583 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m86vp"] Jan 30 20:04:18 crc kubenswrapper[4712]: I0130 20:04:18.211045 4712 scope.go:117] "RemoveContainer" containerID="1be7217356ca51bf9c33b32fc048cd5c6dbb8987bef250fd5f66d5b423e50dd2" Jan 30 20:04:18 crc kubenswrapper[4712]: I0130 20:04:18.212605 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m86vp"] Jan 30 20:04:18 crc kubenswrapper[4712]: I0130 20:04:18.273167 4712 scope.go:117] "RemoveContainer" containerID="43a689e872845b940f8b9d25ef7fb0836c204c78a3e9b4a4f9d11af3b0f02c7d" Jan 30 20:04:18 crc kubenswrapper[4712]: E0130 20:04:18.273815 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43a689e872845b940f8b9d25ef7fb0836c204c78a3e9b4a4f9d11af3b0f02c7d\": container with ID starting with 43a689e872845b940f8b9d25ef7fb0836c204c78a3e9b4a4f9d11af3b0f02c7d not found: ID does not exist" containerID="43a689e872845b940f8b9d25ef7fb0836c204c78a3e9b4a4f9d11af3b0f02c7d" Jan 30 20:04:18 crc kubenswrapper[4712]: I0130 20:04:18.273884 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43a689e872845b940f8b9d25ef7fb0836c204c78a3e9b4a4f9d11af3b0f02c7d"} err="failed to get container status \"43a689e872845b940f8b9d25ef7fb0836c204c78a3e9b4a4f9d11af3b0f02c7d\": rpc error: code = NotFound desc = could not find container \"43a689e872845b940f8b9d25ef7fb0836c204c78a3e9b4a4f9d11af3b0f02c7d\": container with ID starting with 43a689e872845b940f8b9d25ef7fb0836c204c78a3e9b4a4f9d11af3b0f02c7d not found: ID does not exist" Jan 30 20:04:18 crc kubenswrapper[4712]: I0130 20:04:18.273924 4712 scope.go:117] "RemoveContainer" containerID="4b9b3dc33801417465c9e5bf0fde5288783b7fc8bf2811a64b19d49f3d2614ad" Jan 30 20:04:18 crc kubenswrapper[4712]: E0130 20:04:18.274407 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b9b3dc33801417465c9e5bf0fde5288783b7fc8bf2811a64b19d49f3d2614ad\": container with ID starting with 4b9b3dc33801417465c9e5bf0fde5288783b7fc8bf2811a64b19d49f3d2614ad not found: ID does not exist" containerID="4b9b3dc33801417465c9e5bf0fde5288783b7fc8bf2811a64b19d49f3d2614ad" Jan 30 20:04:18 crc kubenswrapper[4712]: I0130 20:04:18.274436 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b9b3dc33801417465c9e5bf0fde5288783b7fc8bf2811a64b19d49f3d2614ad"} err="failed to get container status \"4b9b3dc33801417465c9e5bf0fde5288783b7fc8bf2811a64b19d49f3d2614ad\": rpc error: code = NotFound desc = could not find container \"4b9b3dc33801417465c9e5bf0fde5288783b7fc8bf2811a64b19d49f3d2614ad\": container with ID starting with 4b9b3dc33801417465c9e5bf0fde5288783b7fc8bf2811a64b19d49f3d2614ad not found: ID does not exist" Jan 30 20:04:18 crc kubenswrapper[4712]: I0130 20:04:18.274455 4712 scope.go:117] "RemoveContainer" containerID="1be7217356ca51bf9c33b32fc048cd5c6dbb8987bef250fd5f66d5b423e50dd2" Jan 30 20:04:18 crc kubenswrapper[4712]: E0130 20:04:18.274837 4712 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"1be7217356ca51bf9c33b32fc048cd5c6dbb8987bef250fd5f66d5b423e50dd2\": container with ID starting with 1be7217356ca51bf9c33b32fc048cd5c6dbb8987bef250fd5f66d5b423e50dd2 not found: ID does not exist" containerID="1be7217356ca51bf9c33b32fc048cd5c6dbb8987bef250fd5f66d5b423e50dd2" Jan 30 20:04:18 crc kubenswrapper[4712]: I0130 20:04:18.274860 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1be7217356ca51bf9c33b32fc048cd5c6dbb8987bef250fd5f66d5b423e50dd2"} err="failed to get container status \"1be7217356ca51bf9c33b32fc048cd5c6dbb8987bef250fd5f66d5b423e50dd2\": rpc error: code = NotFound desc = could not find container \"1be7217356ca51bf9c33b32fc048cd5c6dbb8987bef250fd5f66d5b423e50dd2\": container with ID starting with 1be7217356ca51bf9c33b32fc048cd5c6dbb8987bef250fd5f66d5b423e50dd2 not found: ID does not exist" Jan 30 20:04:19 crc kubenswrapper[4712]: I0130 20:04:19.813195 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f9ceb80-07cc-4de8-9d17-4d7465e79973" path="/var/lib/kubelet/pods/5f9ceb80-07cc-4de8-9d17-4d7465e79973/volumes" Jan 30 20:04:24 crc kubenswrapper[4712]: I0130 20:04:24.799573 4712 scope.go:117] "RemoveContainer" containerID="f3d81f3d5b996ae8a398e628296472a6a6a7d3a239567e93de923ba9c60d1703" Jan 30 20:04:24 crc kubenswrapper[4712]: E0130 20:04:24.800388 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 20:04:39 crc kubenswrapper[4712]: I0130 20:04:39.804697 4712 scope.go:117] "RemoveContainer" containerID="f3d81f3d5b996ae8a398e628296472a6a6a7d3a239567e93de923ba9c60d1703" Jan 30 20:04:39 crc kubenswrapper[4712]: E0130 20:04:39.805929 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 20:04:50 crc kubenswrapper[4712]: I0130 20:04:50.110462 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-v6cfp"] Jan 30 20:04:50 crc kubenswrapper[4712]: E0130 20:04:50.111445 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f9ceb80-07cc-4de8-9d17-4d7465e79973" containerName="extract-utilities" Jan 30 20:04:50 crc kubenswrapper[4712]: I0130 20:04:50.111462 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f9ceb80-07cc-4de8-9d17-4d7465e79973" containerName="extract-utilities" Jan 30 20:04:50 crc kubenswrapper[4712]: E0130 20:04:50.111495 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f9ceb80-07cc-4de8-9d17-4d7465e79973" containerName="registry-server" Jan 30 20:04:50 crc kubenswrapper[4712]: I0130 20:04:50.111503 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f9ceb80-07cc-4de8-9d17-4d7465e79973" containerName="registry-server" Jan 30 20:04:50 crc kubenswrapper[4712]: E0130 
20:04:50.111543 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f9ceb80-07cc-4de8-9d17-4d7465e79973" containerName="extract-content" Jan 30 20:04:50 crc kubenswrapper[4712]: I0130 20:04:50.111551 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f9ceb80-07cc-4de8-9d17-4d7465e79973" containerName="extract-content" Jan 30 20:04:50 crc kubenswrapper[4712]: I0130 20:04:50.111775 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f9ceb80-07cc-4de8-9d17-4d7465e79973" containerName="registry-server" Jan 30 20:04:50 crc kubenswrapper[4712]: I0130 20:04:50.113507 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v6cfp" Jan 30 20:04:50 crc kubenswrapper[4712]: I0130 20:04:50.121089 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v6cfp"] Jan 30 20:04:50 crc kubenswrapper[4712]: I0130 20:04:50.217743 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36d7f660-fd8e-48cd-996b-a342cabbbc81-utilities\") pod \"redhat-operators-v6cfp\" (UID: \"36d7f660-fd8e-48cd-996b-a342cabbbc81\") " pod="openshift-marketplace/redhat-operators-v6cfp" Jan 30 20:04:50 crc kubenswrapper[4712]: I0130 20:04:50.217874 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36d7f660-fd8e-48cd-996b-a342cabbbc81-catalog-content\") pod \"redhat-operators-v6cfp\" (UID: \"36d7f660-fd8e-48cd-996b-a342cabbbc81\") " pod="openshift-marketplace/redhat-operators-v6cfp" Jan 30 20:04:50 crc kubenswrapper[4712]: I0130 20:04:50.217901 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7l882\" (UniqueName: \"kubernetes.io/projected/36d7f660-fd8e-48cd-996b-a342cabbbc81-kube-api-access-7l882\") pod \"redhat-operators-v6cfp\" (UID: \"36d7f660-fd8e-48cd-996b-a342cabbbc81\") " pod="openshift-marketplace/redhat-operators-v6cfp" Jan 30 20:04:50 crc kubenswrapper[4712]: I0130 20:04:50.319488 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36d7f660-fd8e-48cd-996b-a342cabbbc81-utilities\") pod \"redhat-operators-v6cfp\" (UID: \"36d7f660-fd8e-48cd-996b-a342cabbbc81\") " pod="openshift-marketplace/redhat-operators-v6cfp" Jan 30 20:04:50 crc kubenswrapper[4712]: I0130 20:04:50.319633 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36d7f660-fd8e-48cd-996b-a342cabbbc81-catalog-content\") pod \"redhat-operators-v6cfp\" (UID: \"36d7f660-fd8e-48cd-996b-a342cabbbc81\") " pod="openshift-marketplace/redhat-operators-v6cfp" Jan 30 20:04:50 crc kubenswrapper[4712]: I0130 20:04:50.319659 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7l882\" (UniqueName: \"kubernetes.io/projected/36d7f660-fd8e-48cd-996b-a342cabbbc81-kube-api-access-7l882\") pod \"redhat-operators-v6cfp\" (UID: \"36d7f660-fd8e-48cd-996b-a342cabbbc81\") " pod="openshift-marketplace/redhat-operators-v6cfp" Jan 30 20:04:50 crc kubenswrapper[4712]: I0130 20:04:50.320042 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36d7f660-fd8e-48cd-996b-a342cabbbc81-utilities\") 
pod \"redhat-operators-v6cfp\" (UID: \"36d7f660-fd8e-48cd-996b-a342cabbbc81\") " pod="openshift-marketplace/redhat-operators-v6cfp" Jan 30 20:04:50 crc kubenswrapper[4712]: I0130 20:04:50.320077 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36d7f660-fd8e-48cd-996b-a342cabbbc81-catalog-content\") pod \"redhat-operators-v6cfp\" (UID: \"36d7f660-fd8e-48cd-996b-a342cabbbc81\") " pod="openshift-marketplace/redhat-operators-v6cfp" Jan 30 20:04:50 crc kubenswrapper[4712]: I0130 20:04:50.337471 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7l882\" (UniqueName: \"kubernetes.io/projected/36d7f660-fd8e-48cd-996b-a342cabbbc81-kube-api-access-7l882\") pod \"redhat-operators-v6cfp\" (UID: \"36d7f660-fd8e-48cd-996b-a342cabbbc81\") " pod="openshift-marketplace/redhat-operators-v6cfp" Jan 30 20:04:50 crc kubenswrapper[4712]: I0130 20:04:50.446681 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v6cfp" Jan 30 20:04:50 crc kubenswrapper[4712]: I0130 20:04:50.800236 4712 scope.go:117] "RemoveContainer" containerID="f3d81f3d5b996ae8a398e628296472a6a6a7d3a239567e93de923ba9c60d1703" Jan 30 20:04:50 crc kubenswrapper[4712]: E0130 20:04:50.800898 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 20:04:50 crc kubenswrapper[4712]: I0130 20:04:50.946437 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-v6cfp"] Jan 30 20:04:51 crc kubenswrapper[4712]: I0130 20:04:51.466598 4712 generic.go:334] "Generic (PLEG): container finished" podID="36d7f660-fd8e-48cd-996b-a342cabbbc81" containerID="68e5f3d989aa0215e236121f929700f937301d4253c2de2a6d0a85dc63229619" exitCode=0 Jan 30 20:04:51 crc kubenswrapper[4712]: I0130 20:04:51.466645 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6cfp" event={"ID":"36d7f660-fd8e-48cd-996b-a342cabbbc81","Type":"ContainerDied","Data":"68e5f3d989aa0215e236121f929700f937301d4253c2de2a6d0a85dc63229619"} Jan 30 20:04:51 crc kubenswrapper[4712]: I0130 20:04:51.466945 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6cfp" event={"ID":"36d7f660-fd8e-48cd-996b-a342cabbbc81","Type":"ContainerStarted","Data":"c91085f304d3737f1b2d64c8bbb468e0ad973fa29dbd49c06cc7548261210cbc"} Jan 30 20:04:52 crc kubenswrapper[4712]: I0130 20:04:52.489564 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6cfp" event={"ID":"36d7f660-fd8e-48cd-996b-a342cabbbc81","Type":"ContainerStarted","Data":"731bda36646dbe61e1f985bd5aed5c11e4c9b03264d939ee603f1059f9e5ed93"} Jan 30 20:04:58 crc kubenswrapper[4712]: I0130 20:04:58.544677 4712 generic.go:334] "Generic (PLEG): container finished" podID="36d7f660-fd8e-48cd-996b-a342cabbbc81" containerID="731bda36646dbe61e1f985bd5aed5c11e4c9b03264d939ee603f1059f9e5ed93" exitCode=0 Jan 30 20:04:58 crc kubenswrapper[4712]: I0130 20:04:58.544756 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-v6cfp" event={"ID":"36d7f660-fd8e-48cd-996b-a342cabbbc81","Type":"ContainerDied","Data":"731bda36646dbe61e1f985bd5aed5c11e4c9b03264d939ee603f1059f9e5ed93"} Jan 30 20:04:59 crc kubenswrapper[4712]: I0130 20:04:59.556116 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6cfp" event={"ID":"36d7f660-fd8e-48cd-996b-a342cabbbc81","Type":"ContainerStarted","Data":"f62d6e2f9254c29744e6538657846ffb4912757a07b63d39144eee096ac280f0"} Jan 30 20:04:59 crc kubenswrapper[4712]: I0130 20:04:59.574706 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-v6cfp" podStartSLOduration=2.055022598 podStartE2EDuration="9.574688699s" podCreationTimestamp="2026-01-30 20:04:50 +0000 UTC" firstStartedPulling="2026-01-30 20:04:51.468642934 +0000 UTC m=+11428.375652403" lastFinishedPulling="2026-01-30 20:04:58.988309035 +0000 UTC m=+11435.895318504" observedRunningTime="2026-01-30 20:04:59.570750894 +0000 UTC m=+11436.477760363" watchObservedRunningTime="2026-01-30 20:04:59.574688699 +0000 UTC m=+11436.481698168" Jan 30 20:05:00 crc kubenswrapper[4712]: I0130 20:05:00.447670 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-v6cfp" Jan 30 20:05:00 crc kubenswrapper[4712]: I0130 20:05:00.447726 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-v6cfp" Jan 30 20:05:01 crc kubenswrapper[4712]: I0130 20:05:01.507400 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v6cfp" podUID="36d7f660-fd8e-48cd-996b-a342cabbbc81" containerName="registry-server" probeResult="failure" output=< Jan 30 20:05:01 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 20:05:01 crc kubenswrapper[4712]: > Jan 30 20:05:02 crc kubenswrapper[4712]: I0130 20:05:02.800644 4712 scope.go:117] "RemoveContainer" containerID="f3d81f3d5b996ae8a398e628296472a6a6a7d3a239567e93de923ba9c60d1703" Jan 30 20:05:02 crc kubenswrapper[4712]: E0130 20:05:02.801700 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 20:05:11 crc kubenswrapper[4712]: I0130 20:05:11.499460 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v6cfp" podUID="36d7f660-fd8e-48cd-996b-a342cabbbc81" containerName="registry-server" probeResult="failure" output=< Jan 30 20:05:11 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 20:05:11 crc kubenswrapper[4712]: > Jan 30 20:05:15 crc kubenswrapper[4712]: I0130 20:05:15.799681 4712 scope.go:117] "RemoveContainer" containerID="f3d81f3d5b996ae8a398e628296472a6a6a7d3a239567e93de923ba9c60d1703" Jan 30 20:05:15 crc kubenswrapper[4712]: E0130 20:05:15.800338 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 20:05:21 crc kubenswrapper[4712]: I0130 20:05:21.496445 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-v6cfp" podUID="36d7f660-fd8e-48cd-996b-a342cabbbc81" containerName="registry-server" probeResult="failure" output=< Jan 30 20:05:21 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 20:05:21 crc kubenswrapper[4712]: > Jan 30 20:05:30 crc kubenswrapper[4712]: I0130 20:05:30.519464 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-v6cfp" Jan 30 20:05:30 crc kubenswrapper[4712]: I0130 20:05:30.578522 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-v6cfp" Jan 30 20:05:30 crc kubenswrapper[4712]: I0130 20:05:30.769233 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v6cfp"] Jan 30 20:05:30 crc kubenswrapper[4712]: I0130 20:05:30.799881 4712 scope.go:117] "RemoveContainer" containerID="f3d81f3d5b996ae8a398e628296472a6a6a7d3a239567e93de923ba9c60d1703" Jan 30 20:05:30 crc kubenswrapper[4712]: E0130 20:05:30.800102 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 20:05:31 crc kubenswrapper[4712]: I0130 20:05:31.892291 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-v6cfp" podUID="36d7f660-fd8e-48cd-996b-a342cabbbc81" containerName="registry-server" containerID="cri-o://f62d6e2f9254c29744e6538657846ffb4912757a07b63d39144eee096ac280f0" gracePeriod=2 Jan 30 20:05:32 crc kubenswrapper[4712]: I0130 20:05:32.637790 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-v6cfp" Jan 30 20:05:32 crc kubenswrapper[4712]: I0130 20:05:32.708570 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36d7f660-fd8e-48cd-996b-a342cabbbc81-utilities\") pod \"36d7f660-fd8e-48cd-996b-a342cabbbc81\" (UID: \"36d7f660-fd8e-48cd-996b-a342cabbbc81\") " Jan 30 20:05:32 crc kubenswrapper[4712]: I0130 20:05:32.708719 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7l882\" (UniqueName: \"kubernetes.io/projected/36d7f660-fd8e-48cd-996b-a342cabbbc81-kube-api-access-7l882\") pod \"36d7f660-fd8e-48cd-996b-a342cabbbc81\" (UID: \"36d7f660-fd8e-48cd-996b-a342cabbbc81\") " Jan 30 20:05:32 crc kubenswrapper[4712]: I0130 20:05:32.708832 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36d7f660-fd8e-48cd-996b-a342cabbbc81-catalog-content\") pod \"36d7f660-fd8e-48cd-996b-a342cabbbc81\" (UID: \"36d7f660-fd8e-48cd-996b-a342cabbbc81\") " Jan 30 20:05:32 crc kubenswrapper[4712]: I0130 20:05:32.709846 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36d7f660-fd8e-48cd-996b-a342cabbbc81-utilities" (OuterVolumeSpecName: "utilities") pod "36d7f660-fd8e-48cd-996b-a342cabbbc81" (UID: "36d7f660-fd8e-48cd-996b-a342cabbbc81"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 20:05:32 crc kubenswrapper[4712]: I0130 20:05:32.746039 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36d7f660-fd8e-48cd-996b-a342cabbbc81-kube-api-access-7l882" (OuterVolumeSpecName: "kube-api-access-7l882") pod "36d7f660-fd8e-48cd-996b-a342cabbbc81" (UID: "36d7f660-fd8e-48cd-996b-a342cabbbc81"). InnerVolumeSpecName "kube-api-access-7l882". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 20:05:32 crc kubenswrapper[4712]: I0130 20:05:32.812403 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36d7f660-fd8e-48cd-996b-a342cabbbc81-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 20:05:32 crc kubenswrapper[4712]: I0130 20:05:32.812443 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7l882\" (UniqueName: \"kubernetes.io/projected/36d7f660-fd8e-48cd-996b-a342cabbbc81-kube-api-access-7l882\") on node \"crc\" DevicePath \"\"" Jan 30 20:05:32 crc kubenswrapper[4712]: I0130 20:05:32.829532 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36d7f660-fd8e-48cd-996b-a342cabbbc81-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "36d7f660-fd8e-48cd-996b-a342cabbbc81" (UID: "36d7f660-fd8e-48cd-996b-a342cabbbc81"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 20:05:32 crc kubenswrapper[4712]: I0130 20:05:32.902341 4712 generic.go:334] "Generic (PLEG): container finished" podID="36d7f660-fd8e-48cd-996b-a342cabbbc81" containerID="f62d6e2f9254c29744e6538657846ffb4912757a07b63d39144eee096ac280f0" exitCode=0 Jan 30 20:05:32 crc kubenswrapper[4712]: I0130 20:05:32.902379 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6cfp" event={"ID":"36d7f660-fd8e-48cd-996b-a342cabbbc81","Type":"ContainerDied","Data":"f62d6e2f9254c29744e6538657846ffb4912757a07b63d39144eee096ac280f0"} Jan 30 20:05:32 crc kubenswrapper[4712]: I0130 20:05:32.902428 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-v6cfp" Jan 30 20:05:32 crc kubenswrapper[4712]: I0130 20:05:32.902452 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-v6cfp" event={"ID":"36d7f660-fd8e-48cd-996b-a342cabbbc81","Type":"ContainerDied","Data":"c91085f304d3737f1b2d64c8bbb468e0ad973fa29dbd49c06cc7548261210cbc"} Jan 30 20:05:32 crc kubenswrapper[4712]: I0130 20:05:32.902470 4712 scope.go:117] "RemoveContainer" containerID="f62d6e2f9254c29744e6538657846ffb4912757a07b63d39144eee096ac280f0" Jan 30 20:05:32 crc kubenswrapper[4712]: I0130 20:05:32.914491 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36d7f660-fd8e-48cd-996b-a342cabbbc81-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 20:05:32 crc kubenswrapper[4712]: I0130 20:05:32.930287 4712 scope.go:117] "RemoveContainer" containerID="731bda36646dbe61e1f985bd5aed5c11e4c9b03264d939ee603f1059f9e5ed93" Jan 30 20:05:32 crc kubenswrapper[4712]: I0130 20:05:32.943147 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-v6cfp"] Jan 30 20:05:32 crc kubenswrapper[4712]: I0130 20:05:32.951820 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-v6cfp"] Jan 30 20:05:32 crc kubenswrapper[4712]: I0130 20:05:32.961173 4712 scope.go:117] "RemoveContainer" containerID="68e5f3d989aa0215e236121f929700f937301d4253c2de2a6d0a85dc63229619" Jan 30 20:05:32 crc kubenswrapper[4712]: I0130 20:05:32.997847 4712 scope.go:117] "RemoveContainer" containerID="f62d6e2f9254c29744e6538657846ffb4912757a07b63d39144eee096ac280f0" Jan 30 20:05:32 crc kubenswrapper[4712]: E0130 20:05:32.998385 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f62d6e2f9254c29744e6538657846ffb4912757a07b63d39144eee096ac280f0\": container with ID starting with f62d6e2f9254c29744e6538657846ffb4912757a07b63d39144eee096ac280f0 not found: ID does not exist" containerID="f62d6e2f9254c29744e6538657846ffb4912757a07b63d39144eee096ac280f0" Jan 30 20:05:32 crc kubenswrapper[4712]: I0130 20:05:32.998422 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f62d6e2f9254c29744e6538657846ffb4912757a07b63d39144eee096ac280f0"} err="failed to get container status \"f62d6e2f9254c29744e6538657846ffb4912757a07b63d39144eee096ac280f0\": rpc error: code = NotFound desc = could not find container \"f62d6e2f9254c29744e6538657846ffb4912757a07b63d39144eee096ac280f0\": container with ID starting with f62d6e2f9254c29744e6538657846ffb4912757a07b63d39144eee096ac280f0 not found: ID does not exist" Jan 30 20:05:32 crc 
kubenswrapper[4712]: I0130 20:05:32.998465 4712 scope.go:117] "RemoveContainer" containerID="731bda36646dbe61e1f985bd5aed5c11e4c9b03264d939ee603f1059f9e5ed93" Jan 30 20:05:32 crc kubenswrapper[4712]: E0130 20:05:32.999152 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"731bda36646dbe61e1f985bd5aed5c11e4c9b03264d939ee603f1059f9e5ed93\": container with ID starting with 731bda36646dbe61e1f985bd5aed5c11e4c9b03264d939ee603f1059f9e5ed93 not found: ID does not exist" containerID="731bda36646dbe61e1f985bd5aed5c11e4c9b03264d939ee603f1059f9e5ed93" Jan 30 20:05:32 crc kubenswrapper[4712]: I0130 20:05:32.999183 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"731bda36646dbe61e1f985bd5aed5c11e4c9b03264d939ee603f1059f9e5ed93"} err="failed to get container status \"731bda36646dbe61e1f985bd5aed5c11e4c9b03264d939ee603f1059f9e5ed93\": rpc error: code = NotFound desc = could not find container \"731bda36646dbe61e1f985bd5aed5c11e4c9b03264d939ee603f1059f9e5ed93\": container with ID starting with 731bda36646dbe61e1f985bd5aed5c11e4c9b03264d939ee603f1059f9e5ed93 not found: ID does not exist" Jan 30 20:05:32 crc kubenswrapper[4712]: I0130 20:05:32.999205 4712 scope.go:117] "RemoveContainer" containerID="68e5f3d989aa0215e236121f929700f937301d4253c2de2a6d0a85dc63229619" Jan 30 20:05:32 crc kubenswrapper[4712]: E0130 20:05:32.999560 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"68e5f3d989aa0215e236121f929700f937301d4253c2de2a6d0a85dc63229619\": container with ID starting with 68e5f3d989aa0215e236121f929700f937301d4253c2de2a6d0a85dc63229619 not found: ID does not exist" containerID="68e5f3d989aa0215e236121f929700f937301d4253c2de2a6d0a85dc63229619" Jan 30 20:05:32 crc kubenswrapper[4712]: I0130 20:05:32.999602 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"68e5f3d989aa0215e236121f929700f937301d4253c2de2a6d0a85dc63229619"} err="failed to get container status \"68e5f3d989aa0215e236121f929700f937301d4253c2de2a6d0a85dc63229619\": rpc error: code = NotFound desc = could not find container \"68e5f3d989aa0215e236121f929700f937301d4253c2de2a6d0a85dc63229619\": container with ID starting with 68e5f3d989aa0215e236121f929700f937301d4253c2de2a6d0a85dc63229619 not found: ID does not exist" Jan 30 20:05:33 crc kubenswrapper[4712]: I0130 20:05:33.815837 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36d7f660-fd8e-48cd-996b-a342cabbbc81" path="/var/lib/kubelet/pods/36d7f660-fd8e-48cd-996b-a342cabbbc81/volumes" Jan 30 20:05:42 crc kubenswrapper[4712]: I0130 20:05:42.799792 4712 scope.go:117] "RemoveContainer" containerID="f3d81f3d5b996ae8a398e628296472a6a6a7d3a239567e93de923ba9c60d1703" Jan 30 20:05:42 crc kubenswrapper[4712]: E0130 20:05:42.800573 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 20:05:54 crc kubenswrapper[4712]: I0130 20:05:54.799698 4712 scope.go:117] "RemoveContainer" containerID="f3d81f3d5b996ae8a398e628296472a6a6a7d3a239567e93de923ba9c60d1703" 
Jan 30 20:05:54 crc kubenswrapper[4712]: E0130 20:05:54.800456 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 20:06:08 crc kubenswrapper[4712]: I0130 20:06:08.800070 4712 scope.go:117] "RemoveContainer" containerID="f3d81f3d5b996ae8a398e628296472a6a6a7d3a239567e93de923ba9c60d1703"
Jan 30 20:06:08 crc kubenswrapper[4712]: E0130 20:06:08.800852 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 20:06:20 crc kubenswrapper[4712]: I0130 20:06:20.801598 4712 scope.go:117] "RemoveContainer" containerID="f3d81f3d5b996ae8a398e628296472a6a6a7d3a239567e93de923ba9c60d1703"
Jan 30 20:06:20 crc kubenswrapper[4712]: E0130 20:06:20.802577 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 20:06:33 crc kubenswrapper[4712]: I0130 20:06:33.813484 4712 scope.go:117] "RemoveContainer" containerID="f3d81f3d5b996ae8a398e628296472a6a6a7d3a239567e93de923ba9c60d1703"
Jan 30 20:06:33 crc kubenswrapper[4712]: E0130 20:06:33.814504 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 20:06:47 crc kubenswrapper[4712]: I0130 20:06:47.802054 4712 scope.go:117] "RemoveContainer" containerID="f3d81f3d5b996ae8a398e628296472a6a6a7d3a239567e93de923ba9c60d1703"
Jan 30 20:06:47 crc kubenswrapper[4712]: E0130 20:06:47.805111 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 20:07:00 crc kubenswrapper[4712]: I0130 20:07:00.799990 4712 scope.go:117] "RemoveContainer" containerID="f3d81f3d5b996ae8a398e628296472a6a6a7d3a239567e93de923ba9c60d1703"
Jan 30 20:07:00 crc kubenswrapper[4712]: E0130 20:07:00.800815 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 20:07:12 crc kubenswrapper[4712]: I0130 20:07:12.800513 4712 scope.go:117] "RemoveContainer" containerID="f3d81f3d5b996ae8a398e628296472a6a6a7d3a239567e93de923ba9c60d1703"
Jan 30 20:07:12 crc kubenswrapper[4712]: E0130 20:07:12.801556 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 20:07:27 crc kubenswrapper[4712]: I0130 20:07:27.799662 4712 scope.go:117] "RemoveContainer" containerID="f3d81f3d5b996ae8a398e628296472a6a6a7d3a239567e93de923ba9c60d1703"
Jan 30 20:07:27 crc kubenswrapper[4712]: E0130 20:07:27.800744 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 20:07:38 crc kubenswrapper[4712]: I0130 20:07:38.800337 4712 scope.go:117] "RemoveContainer" containerID="f3d81f3d5b996ae8a398e628296472a6a6a7d3a239567e93de923ba9c60d1703"
Jan 30 20:07:39 crc kubenswrapper[4712]: I0130 20:07:39.294591 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerStarted","Data":"78b6694846e11b3d8860e9c02e889ee1d58b54f61c58cc67345f12a9a0677642"}
Jan 30 20:10:06 crc kubenswrapper[4712]: I0130 20:10:06.271284 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 20:10:06 crc kubenswrapper[4712]: I0130 20:10:06.271900 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 20:10:36 crc kubenswrapper[4712]: I0130 20:10:36.271979 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 20:10:36 crc kubenswrapper[4712]: I0130 20:10:36.272606 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 20:11:06 crc kubenswrapper[4712]: I0130 20:11:06.271133 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 20:11:06 crc kubenswrapper[4712]: I0130 20:11:06.271923 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 20:11:06 crc kubenswrapper[4712]: I0130 20:11:06.272005 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7"
Jan 30 20:11:06 crc kubenswrapper[4712]: I0130 20:11:06.273135 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"78b6694846e11b3d8860e9c02e889ee1d58b54f61c58cc67345f12a9a0677642"} pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 20:11:06 crc kubenswrapper[4712]: I0130 20:11:06.273237 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" containerID="cri-o://78b6694846e11b3d8860e9c02e889ee1d58b54f61c58cc67345f12a9a0677642" gracePeriod=600
Jan 30 20:11:06 crc kubenswrapper[4712]: I0130 20:11:06.450204 4712 generic.go:334] "Generic (PLEG): container finished" podID="75ff6334-72a0-4748-bba6-0efb493c8033" containerID="78b6694846e11b3d8860e9c02e889ee1d58b54f61c58cc67345f12a9a0677642" exitCode=0
Jan 30 20:11:06 crc kubenswrapper[4712]: I0130 20:11:06.450470 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerDied","Data":"78b6694846e11b3d8860e9c02e889ee1d58b54f61c58cc67345f12a9a0677642"}
Jan 30 20:11:06 crc kubenswrapper[4712]: I0130 20:11:06.450558 4712 scope.go:117] "RemoveContainer" containerID="f3d81f3d5b996ae8a398e628296472a6a6a7d3a239567e93de923ba9c60d1703"
Jan 30 20:11:07 crc kubenswrapper[4712]: I0130 20:11:07.467558 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerStarted","Data":"50851518c5bcd7c0037b746cec7b4666c7f6edb15f8d8f0146daf8663f9a2eff"}
Jan 30 20:12:30 crc kubenswrapper[4712]: I0130 20:12:30.299640 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mpfdc"]
Jan 30 20:12:30 crc kubenswrapper[4712]: E0130 20:12:30.301448 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36d7f660-fd8e-48cd-996b-a342cabbbc81" containerName="extract-content"
Jan 30 20:12:30 crc kubenswrapper[4712]: I0130 20:12:30.301471 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="36d7f660-fd8e-48cd-996b-a342cabbbc81" containerName="extract-content"
Jan 30 20:12:30 crc kubenswrapper[4712]: E0130 20:12:30.301503 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36d7f660-fd8e-48cd-996b-a342cabbbc81" containerName="extract-utilities"
Jan 30 20:12:30 crc kubenswrapper[4712]: I0130 20:12:30.301511 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="36d7f660-fd8e-48cd-996b-a342cabbbc81" containerName="extract-utilities"
Jan 30 20:12:30 crc kubenswrapper[4712]: E0130 20:12:30.301533 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36d7f660-fd8e-48cd-996b-a342cabbbc81" containerName="registry-server"
Jan 30 20:12:30 crc kubenswrapper[4712]: I0130 20:12:30.301543 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="36d7f660-fd8e-48cd-996b-a342cabbbc81" containerName="registry-server"
Jan 30 20:12:30 crc kubenswrapper[4712]: I0130 20:12:30.301983 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="36d7f660-fd8e-48cd-996b-a342cabbbc81" containerName="registry-server"
Jan 30 20:12:30 crc kubenswrapper[4712]: I0130 20:12:30.304471 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mpfdc"
Jan 30 20:12:30 crc kubenswrapper[4712]: I0130 20:12:30.319844 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mpfdc"]
Jan 30 20:12:30 crc kubenswrapper[4712]: I0130 20:12:30.415181 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/745dd161-a511-4cd0-90c9-4cb90949832c-utilities\") pod \"redhat-marketplace-mpfdc\" (UID: \"745dd161-a511-4cd0-90c9-4cb90949832c\") " pod="openshift-marketplace/redhat-marketplace-mpfdc"
Jan 30 20:12:30 crc kubenswrapper[4712]: I0130 20:12:30.415266 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/745dd161-a511-4cd0-90c9-4cb90949832c-catalog-content\") pod \"redhat-marketplace-mpfdc\" (UID: \"745dd161-a511-4cd0-90c9-4cb90949832c\") " pod="openshift-marketplace/redhat-marketplace-mpfdc"
Jan 30 20:12:30 crc kubenswrapper[4712]: I0130 20:12:30.415336 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxhsd\" (UniqueName: \"kubernetes.io/projected/745dd161-a511-4cd0-90c9-4cb90949832c-kube-api-access-qxhsd\") pod \"redhat-marketplace-mpfdc\" (UID: \"745dd161-a511-4cd0-90c9-4cb90949832c\") " pod="openshift-marketplace/redhat-marketplace-mpfdc"
Jan 30 20:12:30 crc kubenswrapper[4712]: I0130 20:12:30.516968 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/745dd161-a511-4cd0-90c9-4cb90949832c-utilities\") pod \"redhat-marketplace-mpfdc\" (UID: \"745dd161-a511-4cd0-90c9-4cb90949832c\") " pod="openshift-marketplace/redhat-marketplace-mpfdc"
Jan 30 20:12:30 crc kubenswrapper[4712]: I0130 20:12:30.517048 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/745dd161-a511-4cd0-90c9-4cb90949832c-catalog-content\") pod \"redhat-marketplace-mpfdc\" (UID: \"745dd161-a511-4cd0-90c9-4cb90949832c\") " pod="openshift-marketplace/redhat-marketplace-mpfdc"
Jan 30 20:12:30 crc kubenswrapper[4712]: I0130 20:12:30.517111 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qxhsd\" (UniqueName: \"kubernetes.io/projected/745dd161-a511-4cd0-90c9-4cb90949832c-kube-api-access-qxhsd\") pod \"redhat-marketplace-mpfdc\" (UID: \"745dd161-a511-4cd0-90c9-4cb90949832c\") " pod="openshift-marketplace/redhat-marketplace-mpfdc"
Jan 30 20:12:30 crc kubenswrapper[4712]: I0130 20:12:30.517886 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/745dd161-a511-4cd0-90c9-4cb90949832c-utilities\") pod \"redhat-marketplace-mpfdc\" (UID: \"745dd161-a511-4cd0-90c9-4cb90949832c\") " pod="openshift-marketplace/redhat-marketplace-mpfdc"
Jan 30 20:12:30 crc kubenswrapper[4712]: I0130 20:12:30.518003 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/745dd161-a511-4cd0-90c9-4cb90949832c-catalog-content\") pod \"redhat-marketplace-mpfdc\" (UID: \"745dd161-a511-4cd0-90c9-4cb90949832c\") " pod="openshift-marketplace/redhat-marketplace-mpfdc"
Jan 30 20:12:30 crc kubenswrapper[4712]: I0130 20:12:30.549268 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qxhsd\" (UniqueName: \"kubernetes.io/projected/745dd161-a511-4cd0-90c9-4cb90949832c-kube-api-access-qxhsd\") pod \"redhat-marketplace-mpfdc\" (UID: \"745dd161-a511-4cd0-90c9-4cb90949832c\") " pod="openshift-marketplace/redhat-marketplace-mpfdc"
Jan 30 20:12:30 crc kubenswrapper[4712]: I0130 20:12:30.626992 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mpfdc"
Jan 30 20:12:31 crc kubenswrapper[4712]: I0130 20:12:31.272291 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mpfdc"]
Jan 30 20:12:31 crc kubenswrapper[4712]: I0130 20:12:31.353465 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mpfdc" event={"ID":"745dd161-a511-4cd0-90c9-4cb90949832c","Type":"ContainerStarted","Data":"3c2c0a00cfe3b62bd5c41c3a062ea6e7de114ef3fe03f8726ec92a892bf7b471"}
Jan 30 20:12:32 crc kubenswrapper[4712]: I0130 20:12:32.363865 4712 generic.go:334] "Generic (PLEG): container finished" podID="745dd161-a511-4cd0-90c9-4cb90949832c" containerID="d3ad2ff3ceef06cb424f07056214ff7005c6afc4590d0409567491fac7f96953" exitCode=0
Jan 30 20:12:32 crc kubenswrapper[4712]: I0130 20:12:32.363919 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mpfdc" event={"ID":"745dd161-a511-4cd0-90c9-4cb90949832c","Type":"ContainerDied","Data":"d3ad2ff3ceef06cb424f07056214ff7005c6afc4590d0409567491fac7f96953"}
Jan 30 20:12:32 crc kubenswrapper[4712]: I0130 20:12:32.366767 4712 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 20:12:33 crc kubenswrapper[4712]: I0130 20:12:33.379526 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mpfdc" event={"ID":"745dd161-a511-4cd0-90c9-4cb90949832c","Type":"ContainerStarted","Data":"f386d061c1b1403f06fb338ba4dacd520af5e009221d65db3a1bd6e1f32ec756"}
Jan 30 20:12:35 crc kubenswrapper[4712]: I0130 20:12:35.441643 4712 generic.go:334] "Generic (PLEG): container finished" podID="745dd161-a511-4cd0-90c9-4cb90949832c" containerID="f386d061c1b1403f06fb338ba4dacd520af5e009221d65db3a1bd6e1f32ec756" exitCode=0
Jan 30 20:12:35 crc kubenswrapper[4712]: I0130 20:12:35.442012 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mpfdc" event={"ID":"745dd161-a511-4cd0-90c9-4cb90949832c","Type":"ContainerDied","Data":"f386d061c1b1403f06fb338ba4dacd520af5e009221d65db3a1bd6e1f32ec756"}
Jan 30 20:12:36 crc kubenswrapper[4712]: I0130 20:12:36.456611 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mpfdc" event={"ID":"745dd161-a511-4cd0-90c9-4cb90949832c","Type":"ContainerStarted","Data":"4172cff8bc221e4491d0e21274cba7d4288de3a3ba6651993bac2100d4ff6e40"}
Jan 30 20:12:36 crc kubenswrapper[4712]: I0130 20:12:36.483007 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mpfdc" podStartSLOduration=2.949714874 podStartE2EDuration="6.482988579s" podCreationTimestamp="2026-01-30 20:12:30 +0000 UTC" firstStartedPulling="2026-01-30 20:12:32.366315012 +0000 UTC m=+11889.273324481" lastFinishedPulling="2026-01-30 20:12:35.899588697 +0000 UTC m=+11892.806598186" observedRunningTime="2026-01-30 20:12:36.477916016 +0000 UTC m=+11893.384925525" watchObservedRunningTime="2026-01-30 20:12:36.482988579 +0000 UTC m=+11893.389998058"
Jan 30 20:12:40 crc kubenswrapper[4712]: I0130 20:12:40.627389 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mpfdc"
Jan 30 20:12:40 crc kubenswrapper[4712]: I0130 20:12:40.628066 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mpfdc"
Jan 30 20:12:41 crc kubenswrapper[4712]: I0130 20:12:41.698727 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-mpfdc" podUID="745dd161-a511-4cd0-90c9-4cb90949832c" containerName="registry-server" probeResult="failure" output=<
Jan 30 20:12:41 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 20:12:41 crc kubenswrapper[4712]: >
Jan 30 20:12:50 crc kubenswrapper[4712]: I0130 20:12:50.730942 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mpfdc"
Jan 30 20:12:50 crc kubenswrapper[4712]: I0130 20:12:50.816936 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mpfdc"
Jan 30 20:12:50 crc kubenswrapper[4712]: I0130 20:12:50.990975 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mpfdc"]
Jan 30 20:12:52 crc kubenswrapper[4712]: I0130 20:12:52.607256 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mpfdc" podUID="745dd161-a511-4cd0-90c9-4cb90949832c" containerName="registry-server" containerID="cri-o://4172cff8bc221e4491d0e21274cba7d4288de3a3ba6651993bac2100d4ff6e40" gracePeriod=2
Jan 30 20:12:53 crc kubenswrapper[4712]: I0130 20:12:53.164652 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mpfdc"
Jan 30 20:12:53 crc kubenswrapper[4712]: I0130 20:12:53.291754 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxhsd\" (UniqueName: \"kubernetes.io/projected/745dd161-a511-4cd0-90c9-4cb90949832c-kube-api-access-qxhsd\") pod \"745dd161-a511-4cd0-90c9-4cb90949832c\" (UID: \"745dd161-a511-4cd0-90c9-4cb90949832c\") "
Jan 30 20:12:53 crc kubenswrapper[4712]: I0130 20:12:53.292068 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/745dd161-a511-4cd0-90c9-4cb90949832c-utilities\") pod \"745dd161-a511-4cd0-90c9-4cb90949832c\" (UID: \"745dd161-a511-4cd0-90c9-4cb90949832c\") "
Jan 30 20:12:53 crc kubenswrapper[4712]: I0130 20:12:53.292128 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/745dd161-a511-4cd0-90c9-4cb90949832c-catalog-content\") pod \"745dd161-a511-4cd0-90c9-4cb90949832c\" (UID: \"745dd161-a511-4cd0-90c9-4cb90949832c\") "
Jan 30 20:12:53 crc kubenswrapper[4712]: I0130 20:12:53.292734 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/745dd161-a511-4cd0-90c9-4cb90949832c-utilities" (OuterVolumeSpecName: "utilities") pod "745dd161-a511-4cd0-90c9-4cb90949832c" (UID: "745dd161-a511-4cd0-90c9-4cb90949832c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 20:12:53 crc kubenswrapper[4712]: I0130 20:12:53.296285 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/745dd161-a511-4cd0-90c9-4cb90949832c-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 20:12:53 crc kubenswrapper[4712]: I0130 20:12:53.303679 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/745dd161-a511-4cd0-90c9-4cb90949832c-kube-api-access-qxhsd" (OuterVolumeSpecName: "kube-api-access-qxhsd") pod "745dd161-a511-4cd0-90c9-4cb90949832c" (UID: "745dd161-a511-4cd0-90c9-4cb90949832c"). InnerVolumeSpecName "kube-api-access-qxhsd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 20:12:53 crc kubenswrapper[4712]: I0130 20:12:53.319372 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/745dd161-a511-4cd0-90c9-4cb90949832c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "745dd161-a511-4cd0-90c9-4cb90949832c" (UID: "745dd161-a511-4cd0-90c9-4cb90949832c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 20:12:53 crc kubenswrapper[4712]: I0130 20:12:53.398337 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/745dd161-a511-4cd0-90c9-4cb90949832c-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 20:12:53 crc kubenswrapper[4712]: I0130 20:12:53.398375 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qxhsd\" (UniqueName: \"kubernetes.io/projected/745dd161-a511-4cd0-90c9-4cb90949832c-kube-api-access-qxhsd\") on node \"crc\" DevicePath \"\""
Jan 30 20:12:53 crc kubenswrapper[4712]: I0130 20:12:53.620123 4712 generic.go:334] "Generic (PLEG): container finished" podID="745dd161-a511-4cd0-90c9-4cb90949832c" containerID="4172cff8bc221e4491d0e21274cba7d4288de3a3ba6651993bac2100d4ff6e40" exitCode=0
Jan 30 20:12:53 crc kubenswrapper[4712]: I0130 20:12:53.620177 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mpfdc" event={"ID":"745dd161-a511-4cd0-90c9-4cb90949832c","Type":"ContainerDied","Data":"4172cff8bc221e4491d0e21274cba7d4288de3a3ba6651993bac2100d4ff6e40"}
Jan 30 20:12:53 crc kubenswrapper[4712]: I0130 20:12:53.620222 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mpfdc"
Jan 30 20:12:53 crc kubenswrapper[4712]: I0130 20:12:53.620255 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mpfdc" event={"ID":"745dd161-a511-4cd0-90c9-4cb90949832c","Type":"ContainerDied","Data":"3c2c0a00cfe3b62bd5c41c3a062ea6e7de114ef3fe03f8726ec92a892bf7b471"}
Jan 30 20:12:53 crc kubenswrapper[4712]: I0130 20:12:53.620287 4712 scope.go:117] "RemoveContainer" containerID="4172cff8bc221e4491d0e21274cba7d4288de3a3ba6651993bac2100d4ff6e40"
Jan 30 20:12:53 crc kubenswrapper[4712]: I0130 20:12:53.668083 4712 scope.go:117] "RemoveContainer" containerID="f386d061c1b1403f06fb338ba4dacd520af5e009221d65db3a1bd6e1f32ec756"
Jan 30 20:12:53 crc kubenswrapper[4712]: I0130 20:12:53.674398 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mpfdc"]
Jan 30 20:12:53 crc kubenswrapper[4712]: I0130 20:12:53.685213 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mpfdc"]
Jan 30 20:12:53 crc kubenswrapper[4712]: I0130 20:12:53.686724 4712 scope.go:117] "RemoveContainer" containerID="d3ad2ff3ceef06cb424f07056214ff7005c6afc4590d0409567491fac7f96953"
Jan 30 20:12:53 crc kubenswrapper[4712]: I0130 20:12:53.788315 4712 scope.go:117] "RemoveContainer" containerID="4172cff8bc221e4491d0e21274cba7d4288de3a3ba6651993bac2100d4ff6e40"
Jan 30 20:12:53 crc kubenswrapper[4712]: E0130 20:12:53.788785 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4172cff8bc221e4491d0e21274cba7d4288de3a3ba6651993bac2100d4ff6e40\": container with ID starting with 4172cff8bc221e4491d0e21274cba7d4288de3a3ba6651993bac2100d4ff6e40 not found: ID does not exist" containerID="4172cff8bc221e4491d0e21274cba7d4288de3a3ba6651993bac2100d4ff6e40"
Jan 30 20:12:53 crc kubenswrapper[4712]: I0130 20:12:53.788875 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4172cff8bc221e4491d0e21274cba7d4288de3a3ba6651993bac2100d4ff6e40"} err="failed to get container status \"4172cff8bc221e4491d0e21274cba7d4288de3a3ba6651993bac2100d4ff6e40\": rpc error: code = NotFound desc = could not find container \"4172cff8bc221e4491d0e21274cba7d4288de3a3ba6651993bac2100d4ff6e40\": container with ID starting with 4172cff8bc221e4491d0e21274cba7d4288de3a3ba6651993bac2100d4ff6e40 not found: ID does not exist"
Jan 30 20:12:53 crc kubenswrapper[4712]: I0130 20:12:53.788905 4712 scope.go:117] "RemoveContainer" containerID="f386d061c1b1403f06fb338ba4dacd520af5e009221d65db3a1bd6e1f32ec756"
Jan 30 20:12:53 crc kubenswrapper[4712]: E0130 20:12:53.789362 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f386d061c1b1403f06fb338ba4dacd520af5e009221d65db3a1bd6e1f32ec756\": container with ID starting with f386d061c1b1403f06fb338ba4dacd520af5e009221d65db3a1bd6e1f32ec756 not found: ID does not exist" containerID="f386d061c1b1403f06fb338ba4dacd520af5e009221d65db3a1bd6e1f32ec756"
Jan 30 20:12:53 crc kubenswrapper[4712]: I0130 20:12:53.789411 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f386d061c1b1403f06fb338ba4dacd520af5e009221d65db3a1bd6e1f32ec756"} err="failed to get container status \"f386d061c1b1403f06fb338ba4dacd520af5e009221d65db3a1bd6e1f32ec756\": rpc error: code = NotFound desc = could not find container \"f386d061c1b1403f06fb338ba4dacd520af5e009221d65db3a1bd6e1f32ec756\": container with ID starting with f386d061c1b1403f06fb338ba4dacd520af5e009221d65db3a1bd6e1f32ec756 not found: ID does not exist"
Jan 30 20:12:53 crc kubenswrapper[4712]: I0130 20:12:53.789434 4712 scope.go:117] "RemoveContainer" containerID="d3ad2ff3ceef06cb424f07056214ff7005c6afc4590d0409567491fac7f96953"
Jan 30 20:12:53 crc kubenswrapper[4712]: E0130 20:12:53.789918 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3ad2ff3ceef06cb424f07056214ff7005c6afc4590d0409567491fac7f96953\": container with ID starting with d3ad2ff3ceef06cb424f07056214ff7005c6afc4590d0409567491fac7f96953 not found: ID does not exist" containerID="d3ad2ff3ceef06cb424f07056214ff7005c6afc4590d0409567491fac7f96953"
Jan 30 20:12:53 crc kubenswrapper[4712]: I0130 20:12:53.789939 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3ad2ff3ceef06cb424f07056214ff7005c6afc4590d0409567491fac7f96953"} err="failed to get container status \"d3ad2ff3ceef06cb424f07056214ff7005c6afc4590d0409567491fac7f96953\": rpc error: code = NotFound desc = could not find container \"d3ad2ff3ceef06cb424f07056214ff7005c6afc4590d0409567491fac7f96953\": container with ID starting with d3ad2ff3ceef06cb424f07056214ff7005c6afc4590d0409567491fac7f96953 not found: ID does not exist"
Jan 30 20:12:53 crc kubenswrapper[4712]: I0130 20:12:53.814769 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="745dd161-a511-4cd0-90c9-4cb90949832c" path="/var/lib/kubelet/pods/745dd161-a511-4cd0-90c9-4cb90949832c/volumes"
Jan 30 20:13:06 crc kubenswrapper[4712]: I0130 20:13:06.271704 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 20:13:06 crc kubenswrapper[4712]: I0130 20:13:06.272360 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 20:13:36 crc kubenswrapper[4712]: I0130 20:13:36.271070 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 20:13:36 crc kubenswrapper[4712]: I0130 20:13:36.271692 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 20:14:06 crc kubenswrapper[4712]: I0130 20:14:06.271545 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 20:14:06 crc kubenswrapper[4712]: I0130 20:14:06.272541 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 20:14:06 crc kubenswrapper[4712]: I0130 20:14:06.272623 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7"
Jan 30 20:14:06 crc kubenswrapper[4712]: I0130 20:14:06.273914 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"50851518c5bcd7c0037b746cec7b4666c7f6edb15f8d8f0146daf8663f9a2eff"} pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 20:14:06 crc kubenswrapper[4712]: I0130 20:14:06.274023 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" containerID="cri-o://50851518c5bcd7c0037b746cec7b4666c7f6edb15f8d8f0146daf8663f9a2eff" gracePeriod=600
Jan 30 20:14:06 crc kubenswrapper[4712]: E0130 20:14:06.405922 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 20:14:07 crc kubenswrapper[4712]: I0130 20:14:07.409410 4712 generic.go:334] "Generic (PLEG): container finished" podID="75ff6334-72a0-4748-bba6-0efb493c8033" containerID="50851518c5bcd7c0037b746cec7b4666c7f6edb15f8d8f0146daf8663f9a2eff" exitCode=0
Jan
30 20:14:07 crc kubenswrapper[4712]: I0130 20:14:07.409461 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerDied","Data":"50851518c5bcd7c0037b746cec7b4666c7f6edb15f8d8f0146daf8663f9a2eff"} Jan 30 20:14:07 crc kubenswrapper[4712]: I0130 20:14:07.409500 4712 scope.go:117] "RemoveContainer" containerID="78b6694846e11b3d8860e9c02e889ee1d58b54f61c58cc67345f12a9a0677642" Jan 30 20:14:07 crc kubenswrapper[4712]: I0130 20:14:07.411193 4712 scope.go:117] "RemoveContainer" containerID="50851518c5bcd7c0037b746cec7b4666c7f6edb15f8d8f0146daf8663f9a2eff" Jan 30 20:14:07 crc kubenswrapper[4712]: E0130 20:14:07.411692 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 20:14:22 crc kubenswrapper[4712]: I0130 20:14:22.800405 4712 scope.go:117] "RemoveContainer" containerID="50851518c5bcd7c0037b746cec7b4666c7f6edb15f8d8f0146daf8663f9a2eff" Jan 30 20:14:22 crc kubenswrapper[4712]: E0130 20:14:22.801639 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 20:14:26 crc kubenswrapper[4712]: I0130 20:14:26.956255 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hj4nx"] Jan 30 20:14:26 crc kubenswrapper[4712]: E0130 20:14:26.957516 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="745dd161-a511-4cd0-90c9-4cb90949832c" containerName="registry-server" Jan 30 20:14:26 crc kubenswrapper[4712]: I0130 20:14:26.957540 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="745dd161-a511-4cd0-90c9-4cb90949832c" containerName="registry-server" Jan 30 20:14:26 crc kubenswrapper[4712]: E0130 20:14:26.957569 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="745dd161-a511-4cd0-90c9-4cb90949832c" containerName="extract-utilities" Jan 30 20:14:26 crc kubenswrapper[4712]: I0130 20:14:26.957578 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="745dd161-a511-4cd0-90c9-4cb90949832c" containerName="extract-utilities" Jan 30 20:14:26 crc kubenswrapper[4712]: E0130 20:14:26.957614 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="745dd161-a511-4cd0-90c9-4cb90949832c" containerName="extract-content" Jan 30 20:14:26 crc kubenswrapper[4712]: I0130 20:14:26.957624 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="745dd161-a511-4cd0-90c9-4cb90949832c" containerName="extract-content" Jan 30 20:14:26 crc kubenswrapper[4712]: I0130 20:14:26.957894 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="745dd161-a511-4cd0-90c9-4cb90949832c" containerName="registry-server" Jan 30 20:14:26 crc kubenswrapper[4712]: I0130 20:14:26.963429 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hj4nx" Jan 30 20:14:26 crc kubenswrapper[4712]: I0130 20:14:26.972388 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hj4nx"] Jan 30 20:14:27 crc kubenswrapper[4712]: I0130 20:14:27.109032 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwpdl\" (UniqueName: \"kubernetes.io/projected/4ac2f47f-c81c-4c97-b8b2-88132d01c8b4-kube-api-access-gwpdl\") pod \"community-operators-hj4nx\" (UID: \"4ac2f47f-c81c-4c97-b8b2-88132d01c8b4\") " pod="openshift-marketplace/community-operators-hj4nx" Jan 30 20:14:27 crc kubenswrapper[4712]: I0130 20:14:27.109131 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ac2f47f-c81c-4c97-b8b2-88132d01c8b4-utilities\") pod \"community-operators-hj4nx\" (UID: \"4ac2f47f-c81c-4c97-b8b2-88132d01c8b4\") " pod="openshift-marketplace/community-operators-hj4nx" Jan 30 20:14:27 crc kubenswrapper[4712]: I0130 20:14:27.109344 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ac2f47f-c81c-4c97-b8b2-88132d01c8b4-catalog-content\") pod \"community-operators-hj4nx\" (UID: \"4ac2f47f-c81c-4c97-b8b2-88132d01c8b4\") " pod="openshift-marketplace/community-operators-hj4nx" Jan 30 20:14:27 crc kubenswrapper[4712]: I0130 20:14:27.211590 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwpdl\" (UniqueName: \"kubernetes.io/projected/4ac2f47f-c81c-4c97-b8b2-88132d01c8b4-kube-api-access-gwpdl\") pod \"community-operators-hj4nx\" (UID: \"4ac2f47f-c81c-4c97-b8b2-88132d01c8b4\") " pod="openshift-marketplace/community-operators-hj4nx" Jan 30 20:14:27 crc kubenswrapper[4712]: I0130 20:14:27.211699 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ac2f47f-c81c-4c97-b8b2-88132d01c8b4-utilities\") pod \"community-operators-hj4nx\" (UID: \"4ac2f47f-c81c-4c97-b8b2-88132d01c8b4\") " pod="openshift-marketplace/community-operators-hj4nx" Jan 30 20:14:27 crc kubenswrapper[4712]: I0130 20:14:27.211814 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ac2f47f-c81c-4c97-b8b2-88132d01c8b4-catalog-content\") pod \"community-operators-hj4nx\" (UID: \"4ac2f47f-c81c-4c97-b8b2-88132d01c8b4\") " pod="openshift-marketplace/community-operators-hj4nx" Jan 30 20:14:27 crc kubenswrapper[4712]: I0130 20:14:27.212294 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ac2f47f-c81c-4c97-b8b2-88132d01c8b4-utilities\") pod \"community-operators-hj4nx\" (UID: \"4ac2f47f-c81c-4c97-b8b2-88132d01c8b4\") " pod="openshift-marketplace/community-operators-hj4nx" Jan 30 20:14:27 crc kubenswrapper[4712]: I0130 20:14:27.212389 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ac2f47f-c81c-4c97-b8b2-88132d01c8b4-catalog-content\") pod \"community-operators-hj4nx\" (UID: \"4ac2f47f-c81c-4c97-b8b2-88132d01c8b4\") " pod="openshift-marketplace/community-operators-hj4nx" Jan 30 20:14:27 crc kubenswrapper[4712]: I0130 20:14:27.234544 4712 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-gwpdl\" (UniqueName: \"kubernetes.io/projected/4ac2f47f-c81c-4c97-b8b2-88132d01c8b4-kube-api-access-gwpdl\") pod \"community-operators-hj4nx\" (UID: \"4ac2f47f-c81c-4c97-b8b2-88132d01c8b4\") " pod="openshift-marketplace/community-operators-hj4nx" Jan 30 20:14:27 crc kubenswrapper[4712]: I0130 20:14:27.284913 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hj4nx" Jan 30 20:14:28 crc kubenswrapper[4712]: I0130 20:14:28.028971 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hj4nx"] Jan 30 20:14:28 crc kubenswrapper[4712]: I0130 20:14:28.699997 4712 generic.go:334] "Generic (PLEG): container finished" podID="4ac2f47f-c81c-4c97-b8b2-88132d01c8b4" containerID="3a3ad71259295c471e633fc634ff10ba55d7927b896d5c41b12a42d4ae10843b" exitCode=0 Jan 30 20:14:28 crc kubenswrapper[4712]: I0130 20:14:28.700261 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hj4nx" event={"ID":"4ac2f47f-c81c-4c97-b8b2-88132d01c8b4","Type":"ContainerDied","Data":"3a3ad71259295c471e633fc634ff10ba55d7927b896d5c41b12a42d4ae10843b"} Jan 30 20:14:28 crc kubenswrapper[4712]: I0130 20:14:28.700987 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hj4nx" event={"ID":"4ac2f47f-c81c-4c97-b8b2-88132d01c8b4","Type":"ContainerStarted","Data":"4716a857201b9e457b4bd901a54355110744ea133dfde5a2139bae03264a0b55"} Jan 30 20:14:29 crc kubenswrapper[4712]: I0130 20:14:29.711482 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hj4nx" event={"ID":"4ac2f47f-c81c-4c97-b8b2-88132d01c8b4","Type":"ContainerStarted","Data":"efefe16036294598393c7d8279a2d3c6edaed74522441c056b3e4e66ca2795fc"} Jan 30 20:14:31 crc kubenswrapper[4712]: I0130 20:14:31.732402 4712 generic.go:334] "Generic (PLEG): container finished" podID="4ac2f47f-c81c-4c97-b8b2-88132d01c8b4" containerID="efefe16036294598393c7d8279a2d3c6edaed74522441c056b3e4e66ca2795fc" exitCode=0 Jan 30 20:14:31 crc kubenswrapper[4712]: I0130 20:14:31.732503 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hj4nx" event={"ID":"4ac2f47f-c81c-4c97-b8b2-88132d01c8b4","Type":"ContainerDied","Data":"efefe16036294598393c7d8279a2d3c6edaed74522441c056b3e4e66ca2795fc"} Jan 30 20:14:32 crc kubenswrapper[4712]: I0130 20:14:32.744818 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hj4nx" event={"ID":"4ac2f47f-c81c-4c97-b8b2-88132d01c8b4","Type":"ContainerStarted","Data":"90238404e00a80184e8eeb8734dc66fe073130039358327e146c363324897bbd"} Jan 30 20:14:32 crc kubenswrapper[4712]: I0130 20:14:32.774185 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hj4nx" podStartSLOduration=3.348959422 podStartE2EDuration="6.77416058s" podCreationTimestamp="2026-01-30 20:14:26 +0000 UTC" firstStartedPulling="2026-01-30 20:14:28.703621114 +0000 UTC m=+12005.610630583" lastFinishedPulling="2026-01-30 20:14:32.128822262 +0000 UTC m=+12009.035831741" observedRunningTime="2026-01-30 20:14:32.767776525 +0000 UTC m=+12009.674785994" watchObservedRunningTime="2026-01-30 20:14:32.77416058 +0000 UTC m=+12009.681170049" Jan 30 20:14:34 crc kubenswrapper[4712]: I0130 20:14:34.799582 4712 scope.go:117] "RemoveContainer" 
containerID="50851518c5bcd7c0037b746cec7b4666c7f6edb15f8d8f0146daf8663f9a2eff" Jan 30 20:14:34 crc kubenswrapper[4712]: E0130 20:14:34.800155 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 20:14:37 crc kubenswrapper[4712]: I0130 20:14:37.285565 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hj4nx" Jan 30 20:14:37 crc kubenswrapper[4712]: I0130 20:14:37.287019 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hj4nx" Jan 30 20:14:38 crc kubenswrapper[4712]: I0130 20:14:38.345883 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-hj4nx" podUID="4ac2f47f-c81c-4c97-b8b2-88132d01c8b4" containerName="registry-server" probeResult="failure" output=< Jan 30 20:14:38 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 20:14:38 crc kubenswrapper[4712]: > Jan 30 20:14:47 crc kubenswrapper[4712]: I0130 20:14:47.332187 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hj4nx" Jan 30 20:14:47 crc kubenswrapper[4712]: I0130 20:14:47.396659 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hj4nx" Jan 30 20:14:47 crc kubenswrapper[4712]: I0130 20:14:47.590248 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hj4nx"] Jan 30 20:14:47 crc kubenswrapper[4712]: I0130 20:14:47.800116 4712 scope.go:117] "RemoveContainer" containerID="50851518c5bcd7c0037b746cec7b4666c7f6edb15f8d8f0146daf8663f9a2eff" Jan 30 20:14:47 crc kubenswrapper[4712]: E0130 20:14:47.800515 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 20:14:48 crc kubenswrapper[4712]: I0130 20:14:48.898863 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hj4nx" podUID="4ac2f47f-c81c-4c97-b8b2-88132d01c8b4" containerName="registry-server" containerID="cri-o://90238404e00a80184e8eeb8734dc66fe073130039358327e146c363324897bbd" gracePeriod=2 Jan 30 20:14:49 crc kubenswrapper[4712]: I0130 20:14:49.638476 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hj4nx" Jan 30 20:14:49 crc kubenswrapper[4712]: I0130 20:14:49.787566 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ac2f47f-c81c-4c97-b8b2-88132d01c8b4-utilities\") pod \"4ac2f47f-c81c-4c97-b8b2-88132d01c8b4\" (UID: \"4ac2f47f-c81c-4c97-b8b2-88132d01c8b4\") " Jan 30 20:14:49 crc kubenswrapper[4712]: I0130 20:14:49.787655 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ac2f47f-c81c-4c97-b8b2-88132d01c8b4-catalog-content\") pod \"4ac2f47f-c81c-4c97-b8b2-88132d01c8b4\" (UID: \"4ac2f47f-c81c-4c97-b8b2-88132d01c8b4\") " Jan 30 20:14:49 crc kubenswrapper[4712]: I0130 20:14:49.787895 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwpdl\" (UniqueName: \"kubernetes.io/projected/4ac2f47f-c81c-4c97-b8b2-88132d01c8b4-kube-api-access-gwpdl\") pod \"4ac2f47f-c81c-4c97-b8b2-88132d01c8b4\" (UID: \"4ac2f47f-c81c-4c97-b8b2-88132d01c8b4\") " Jan 30 20:14:49 crc kubenswrapper[4712]: I0130 20:14:49.788201 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ac2f47f-c81c-4c97-b8b2-88132d01c8b4-utilities" (OuterVolumeSpecName: "utilities") pod "4ac2f47f-c81c-4c97-b8b2-88132d01c8b4" (UID: "4ac2f47f-c81c-4c97-b8b2-88132d01c8b4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 20:14:49 crc kubenswrapper[4712]: I0130 20:14:49.788719 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ac2f47f-c81c-4c97-b8b2-88132d01c8b4-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 20:14:49 crc kubenswrapper[4712]: I0130 20:14:49.813228 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ac2f47f-c81c-4c97-b8b2-88132d01c8b4-kube-api-access-gwpdl" (OuterVolumeSpecName: "kube-api-access-gwpdl") pod "4ac2f47f-c81c-4c97-b8b2-88132d01c8b4" (UID: "4ac2f47f-c81c-4c97-b8b2-88132d01c8b4"). InnerVolumeSpecName "kube-api-access-gwpdl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 20:14:49 crc kubenswrapper[4712]: I0130 20:14:49.839752 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ac2f47f-c81c-4c97-b8b2-88132d01c8b4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ac2f47f-c81c-4c97-b8b2-88132d01c8b4" (UID: "4ac2f47f-c81c-4c97-b8b2-88132d01c8b4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 20:14:49 crc kubenswrapper[4712]: I0130 20:14:49.891044 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ac2f47f-c81c-4c97-b8b2-88132d01c8b4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 20:14:49 crc kubenswrapper[4712]: I0130 20:14:49.891076 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwpdl\" (UniqueName: \"kubernetes.io/projected/4ac2f47f-c81c-4c97-b8b2-88132d01c8b4-kube-api-access-gwpdl\") on node \"crc\" DevicePath \"\"" Jan 30 20:14:49 crc kubenswrapper[4712]: I0130 20:14:49.910591 4712 generic.go:334] "Generic (PLEG): container finished" podID="4ac2f47f-c81c-4c97-b8b2-88132d01c8b4" containerID="90238404e00a80184e8eeb8734dc66fe073130039358327e146c363324897bbd" exitCode=0 Jan 30 20:14:49 crc kubenswrapper[4712]: I0130 20:14:49.910639 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hj4nx" event={"ID":"4ac2f47f-c81c-4c97-b8b2-88132d01c8b4","Type":"ContainerDied","Data":"90238404e00a80184e8eeb8734dc66fe073130039358327e146c363324897bbd"} Jan 30 20:14:49 crc kubenswrapper[4712]: I0130 20:14:49.910683 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hj4nx" Jan 30 20:14:49 crc kubenswrapper[4712]: I0130 20:14:49.910715 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hj4nx" event={"ID":"4ac2f47f-c81c-4c97-b8b2-88132d01c8b4","Type":"ContainerDied","Data":"4716a857201b9e457b4bd901a54355110744ea133dfde5a2139bae03264a0b55"} Jan 30 20:14:49 crc kubenswrapper[4712]: I0130 20:14:49.910735 4712 scope.go:117] "RemoveContainer" containerID="90238404e00a80184e8eeb8734dc66fe073130039358327e146c363324897bbd" Jan 30 20:14:49 crc kubenswrapper[4712]: I0130 20:14:49.947832 4712 scope.go:117] "RemoveContainer" containerID="efefe16036294598393c7d8279a2d3c6edaed74522441c056b3e4e66ca2795fc" Jan 30 20:14:49 crc kubenswrapper[4712]: I0130 20:14:49.955412 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hj4nx"] Jan 30 20:14:49 crc kubenswrapper[4712]: I0130 20:14:49.981577 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hj4nx"] Jan 30 20:14:49 crc kubenswrapper[4712]: I0130 20:14:49.983062 4712 scope.go:117] "RemoveContainer" containerID="3a3ad71259295c471e633fc634ff10ba55d7927b896d5c41b12a42d4ae10843b" Jan 30 20:14:50 crc kubenswrapper[4712]: I0130 20:14:50.025169 4712 scope.go:117] "RemoveContainer" containerID="90238404e00a80184e8eeb8734dc66fe073130039358327e146c363324897bbd" Jan 30 20:14:50 crc kubenswrapper[4712]: E0130 20:14:50.025614 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90238404e00a80184e8eeb8734dc66fe073130039358327e146c363324897bbd\": container with ID starting with 90238404e00a80184e8eeb8734dc66fe073130039358327e146c363324897bbd not found: ID does not exist" containerID="90238404e00a80184e8eeb8734dc66fe073130039358327e146c363324897bbd" Jan 30 20:14:50 crc kubenswrapper[4712]: I0130 20:14:50.025642 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90238404e00a80184e8eeb8734dc66fe073130039358327e146c363324897bbd"} err="failed to get container status 
\"90238404e00a80184e8eeb8734dc66fe073130039358327e146c363324897bbd\": rpc error: code = NotFound desc = could not find container \"90238404e00a80184e8eeb8734dc66fe073130039358327e146c363324897bbd\": container with ID starting with 90238404e00a80184e8eeb8734dc66fe073130039358327e146c363324897bbd not found: ID does not exist" Jan 30 20:14:50 crc kubenswrapper[4712]: I0130 20:14:50.025661 4712 scope.go:117] "RemoveContainer" containerID="efefe16036294598393c7d8279a2d3c6edaed74522441c056b3e4e66ca2795fc" Jan 30 20:14:50 crc kubenswrapper[4712]: E0130 20:14:50.026299 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efefe16036294598393c7d8279a2d3c6edaed74522441c056b3e4e66ca2795fc\": container with ID starting with efefe16036294598393c7d8279a2d3c6edaed74522441c056b3e4e66ca2795fc not found: ID does not exist" containerID="efefe16036294598393c7d8279a2d3c6edaed74522441c056b3e4e66ca2795fc" Jan 30 20:14:50 crc kubenswrapper[4712]: I0130 20:14:50.026560 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efefe16036294598393c7d8279a2d3c6edaed74522441c056b3e4e66ca2795fc"} err="failed to get container status \"efefe16036294598393c7d8279a2d3c6edaed74522441c056b3e4e66ca2795fc\": rpc error: code = NotFound desc = could not find container \"efefe16036294598393c7d8279a2d3c6edaed74522441c056b3e4e66ca2795fc\": container with ID starting with efefe16036294598393c7d8279a2d3c6edaed74522441c056b3e4e66ca2795fc not found: ID does not exist" Jan 30 20:14:50 crc kubenswrapper[4712]: I0130 20:14:50.026574 4712 scope.go:117] "RemoveContainer" containerID="3a3ad71259295c471e633fc634ff10ba55d7927b896d5c41b12a42d4ae10843b" Jan 30 20:14:50 crc kubenswrapper[4712]: E0130 20:14:50.026825 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a3ad71259295c471e633fc634ff10ba55d7927b896d5c41b12a42d4ae10843b\": container with ID starting with 3a3ad71259295c471e633fc634ff10ba55d7927b896d5c41b12a42d4ae10843b not found: ID does not exist" containerID="3a3ad71259295c471e633fc634ff10ba55d7927b896d5c41b12a42d4ae10843b" Jan 30 20:14:50 crc kubenswrapper[4712]: I0130 20:14:50.026848 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a3ad71259295c471e633fc634ff10ba55d7927b896d5c41b12a42d4ae10843b"} err="failed to get container status \"3a3ad71259295c471e633fc634ff10ba55d7927b896d5c41b12a42d4ae10843b\": rpc error: code = NotFound desc = could not find container \"3a3ad71259295c471e633fc634ff10ba55d7927b896d5c41b12a42d4ae10843b\": container with ID starting with 3a3ad71259295c471e633fc634ff10ba55d7927b896d5c41b12a42d4ae10843b not found: ID does not exist" Jan 30 20:14:51 crc kubenswrapper[4712]: I0130 20:14:51.812932 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ac2f47f-c81c-4c97-b8b2-88132d01c8b4" path="/var/lib/kubelet/pods/4ac2f47f-c81c-4c97-b8b2-88132d01c8b4/volumes" Jan 30 20:15:00 crc kubenswrapper[4712]: I0130 20:15:00.194133 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496735-gdp6t"] Jan 30 20:15:00 crc kubenswrapper[4712]: E0130 20:15:00.195189 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ac2f47f-c81c-4c97-b8b2-88132d01c8b4" containerName="extract-utilities" Jan 30 20:15:00 crc kubenswrapper[4712]: I0130 20:15:00.195208 4712 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="4ac2f47f-c81c-4c97-b8b2-88132d01c8b4" containerName="extract-utilities" Jan 30 20:15:00 crc kubenswrapper[4712]: E0130 20:15:00.195265 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ac2f47f-c81c-4c97-b8b2-88132d01c8b4" containerName="registry-server" Jan 30 20:15:00 crc kubenswrapper[4712]: I0130 20:15:00.195275 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ac2f47f-c81c-4c97-b8b2-88132d01c8b4" containerName="registry-server" Jan 30 20:15:00 crc kubenswrapper[4712]: E0130 20:15:00.195294 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ac2f47f-c81c-4c97-b8b2-88132d01c8b4" containerName="extract-content" Jan 30 20:15:00 crc kubenswrapper[4712]: I0130 20:15:00.195302 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ac2f47f-c81c-4c97-b8b2-88132d01c8b4" containerName="extract-content" Jan 30 20:15:00 crc kubenswrapper[4712]: I0130 20:15:00.195515 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ac2f47f-c81c-4c97-b8b2-88132d01c8b4" containerName="registry-server" Jan 30 20:15:00 crc kubenswrapper[4712]: I0130 20:15:00.196357 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496735-gdp6t" Jan 30 20:15:00 crc kubenswrapper[4712]: I0130 20:15:00.217202 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496735-gdp6t"] Jan 30 20:15:00 crc kubenswrapper[4712]: I0130 20:15:00.220189 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 30 20:15:00 crc kubenswrapper[4712]: I0130 20:15:00.220205 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 30 20:15:00 crc kubenswrapper[4712]: I0130 20:15:00.298663 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/248aeced-2112-4755-a722-a88bbbb8d3f7-secret-volume\") pod \"collect-profiles-29496735-gdp6t\" (UID: \"248aeced-2112-4755-a722-a88bbbb8d3f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496735-gdp6t" Jan 30 20:15:00 crc kubenswrapper[4712]: I0130 20:15:00.298743 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mb2z7\" (UniqueName: \"kubernetes.io/projected/248aeced-2112-4755-a722-a88bbbb8d3f7-kube-api-access-mb2z7\") pod \"collect-profiles-29496735-gdp6t\" (UID: \"248aeced-2112-4755-a722-a88bbbb8d3f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496735-gdp6t" Jan 30 20:15:00 crc kubenswrapper[4712]: I0130 20:15:00.298825 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/248aeced-2112-4755-a722-a88bbbb8d3f7-config-volume\") pod \"collect-profiles-29496735-gdp6t\" (UID: \"248aeced-2112-4755-a722-a88bbbb8d3f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496735-gdp6t" Jan 30 20:15:00 crc kubenswrapper[4712]: I0130 20:15:00.400842 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mb2z7\" (UniqueName: \"kubernetes.io/projected/248aeced-2112-4755-a722-a88bbbb8d3f7-kube-api-access-mb2z7\") pod \"collect-profiles-29496735-gdp6t\" (UID: 
\"248aeced-2112-4755-a722-a88bbbb8d3f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496735-gdp6t" Jan 30 20:15:00 crc kubenswrapper[4712]: I0130 20:15:00.400988 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/248aeced-2112-4755-a722-a88bbbb8d3f7-config-volume\") pod \"collect-profiles-29496735-gdp6t\" (UID: \"248aeced-2112-4755-a722-a88bbbb8d3f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496735-gdp6t" Jan 30 20:15:00 crc kubenswrapper[4712]: I0130 20:15:00.401241 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/248aeced-2112-4755-a722-a88bbbb8d3f7-secret-volume\") pod \"collect-profiles-29496735-gdp6t\" (UID: \"248aeced-2112-4755-a722-a88bbbb8d3f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496735-gdp6t" Jan 30 20:15:00 crc kubenswrapper[4712]: I0130 20:15:00.402080 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/248aeced-2112-4755-a722-a88bbbb8d3f7-config-volume\") pod \"collect-profiles-29496735-gdp6t\" (UID: \"248aeced-2112-4755-a722-a88bbbb8d3f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496735-gdp6t" Jan 30 20:15:00 crc kubenswrapper[4712]: I0130 20:15:00.410466 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/248aeced-2112-4755-a722-a88bbbb8d3f7-secret-volume\") pod \"collect-profiles-29496735-gdp6t\" (UID: \"248aeced-2112-4755-a722-a88bbbb8d3f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496735-gdp6t" Jan 30 20:15:00 crc kubenswrapper[4712]: I0130 20:15:00.425683 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mb2z7\" (UniqueName: \"kubernetes.io/projected/248aeced-2112-4755-a722-a88bbbb8d3f7-kube-api-access-mb2z7\") pod \"collect-profiles-29496735-gdp6t\" (UID: \"248aeced-2112-4755-a722-a88bbbb8d3f7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496735-gdp6t" Jan 30 20:15:00 crc kubenswrapper[4712]: I0130 20:15:00.537150 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496735-gdp6t" Jan 30 20:15:00 crc kubenswrapper[4712]: I0130 20:15:00.800107 4712 scope.go:117] "RemoveContainer" containerID="50851518c5bcd7c0037b746cec7b4666c7f6edb15f8d8f0146daf8663f9a2eff" Jan 30 20:15:00 crc kubenswrapper[4712]: E0130 20:15:00.800515 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 20:15:01 crc kubenswrapper[4712]: I0130 20:15:01.106704 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496735-gdp6t"] Jan 30 20:15:02 crc kubenswrapper[4712]: I0130 20:15:02.034447 4712 generic.go:334] "Generic (PLEG): container finished" podID="248aeced-2112-4755-a722-a88bbbb8d3f7" containerID="97818f39215c1a61960cb3228636c0964396612a47cfc078f1444b817d1b1d63" exitCode=0 Jan 30 20:15:02 crc kubenswrapper[4712]: I0130 20:15:02.034557 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496735-gdp6t" event={"ID":"248aeced-2112-4755-a722-a88bbbb8d3f7","Type":"ContainerDied","Data":"97818f39215c1a61960cb3228636c0964396612a47cfc078f1444b817d1b1d63"} Jan 30 20:15:02 crc kubenswrapper[4712]: I0130 20:15:02.034970 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496735-gdp6t" event={"ID":"248aeced-2112-4755-a722-a88bbbb8d3f7","Type":"ContainerStarted","Data":"7f4d6a6208dc6b3bd6736814f43c594b5f36a634b60bb4b8650bf9b60c6b0683"} Jan 30 20:15:03 crc kubenswrapper[4712]: I0130 20:15:03.526072 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496735-gdp6t" Jan 30 20:15:03 crc kubenswrapper[4712]: I0130 20:15:03.673880 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/248aeced-2112-4755-a722-a88bbbb8d3f7-secret-volume\") pod \"248aeced-2112-4755-a722-a88bbbb8d3f7\" (UID: \"248aeced-2112-4755-a722-a88bbbb8d3f7\") " Jan 30 20:15:03 crc kubenswrapper[4712]: I0130 20:15:03.673932 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/248aeced-2112-4755-a722-a88bbbb8d3f7-config-volume\") pod \"248aeced-2112-4755-a722-a88bbbb8d3f7\" (UID: \"248aeced-2112-4755-a722-a88bbbb8d3f7\") " Jan 30 20:15:03 crc kubenswrapper[4712]: I0130 20:15:03.673954 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mb2z7\" (UniqueName: \"kubernetes.io/projected/248aeced-2112-4755-a722-a88bbbb8d3f7-kube-api-access-mb2z7\") pod \"248aeced-2112-4755-a722-a88bbbb8d3f7\" (UID: \"248aeced-2112-4755-a722-a88bbbb8d3f7\") " Jan 30 20:15:03 crc kubenswrapper[4712]: I0130 20:15:03.675830 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/248aeced-2112-4755-a722-a88bbbb8d3f7-config-volume" (OuterVolumeSpecName: "config-volume") pod "248aeced-2112-4755-a722-a88bbbb8d3f7" (UID: "248aeced-2112-4755-a722-a88bbbb8d3f7"). 
InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 20:15:03 crc kubenswrapper[4712]: I0130 20:15:03.679461 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/248aeced-2112-4755-a722-a88bbbb8d3f7-kube-api-access-mb2z7" (OuterVolumeSpecName: "kube-api-access-mb2z7") pod "248aeced-2112-4755-a722-a88bbbb8d3f7" (UID: "248aeced-2112-4755-a722-a88bbbb8d3f7"). InnerVolumeSpecName "kube-api-access-mb2z7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 20:15:03 crc kubenswrapper[4712]: I0130 20:15:03.680239 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/248aeced-2112-4755-a722-a88bbbb8d3f7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "248aeced-2112-4755-a722-a88bbbb8d3f7" (UID: "248aeced-2112-4755-a722-a88bbbb8d3f7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 20:15:03 crc kubenswrapper[4712]: I0130 20:15:03.775970 4712 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/248aeced-2112-4755-a722-a88bbbb8d3f7-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 30 20:15:03 crc kubenswrapper[4712]: I0130 20:15:03.776007 4712 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/248aeced-2112-4755-a722-a88bbbb8d3f7-config-volume\") on node \"crc\" DevicePath \"\"" Jan 30 20:15:03 crc kubenswrapper[4712]: I0130 20:15:03.776017 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mb2z7\" (UniqueName: \"kubernetes.io/projected/248aeced-2112-4755-a722-a88bbbb8d3f7-kube-api-access-mb2z7\") on node \"crc\" DevicePath \"\"" Jan 30 20:15:04 crc kubenswrapper[4712]: I0130 20:15:04.056665 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496735-gdp6t" event={"ID":"248aeced-2112-4755-a722-a88bbbb8d3f7","Type":"ContainerDied","Data":"7f4d6a6208dc6b3bd6736814f43c594b5f36a634b60bb4b8650bf9b60c6b0683"} Jan 30 20:15:04 crc kubenswrapper[4712]: I0130 20:15:04.056704 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f4d6a6208dc6b3bd6736814f43c594b5f36a634b60bb4b8650bf9b60c6b0683" Jan 30 20:15:04 crc kubenswrapper[4712]: I0130 20:15:04.056727 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496735-gdp6t" Jan 30 20:15:04 crc kubenswrapper[4712]: I0130 20:15:04.641367 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496690-gfn25"] Jan 30 20:15:04 crc kubenswrapper[4712]: I0130 20:15:04.649841 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496690-gfn25"] Jan 30 20:15:05 crc kubenswrapper[4712]: I0130 20:15:05.815627 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5f95fe7-59ed-4dbf-abb3-4c2b21cce321" path="/var/lib/kubelet/pods/b5f95fe7-59ed-4dbf-abb3-4c2b21cce321/volumes" Jan 30 20:15:06 crc kubenswrapper[4712]: I0130 20:15:06.588892 4712 scope.go:117] "RemoveContainer" containerID="c279049692c9ab6f762edd197070cec6f41dabe2450251f339b0706a53c0204e" Jan 30 20:15:12 crc kubenswrapper[4712]: I0130 20:15:12.291631 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-h58tc"] Jan 30 20:15:12 crc kubenswrapper[4712]: E0130 20:15:12.293390 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="248aeced-2112-4755-a722-a88bbbb8d3f7" containerName="collect-profiles" Jan 30 20:15:12 crc kubenswrapper[4712]: I0130 20:15:12.293421 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="248aeced-2112-4755-a722-a88bbbb8d3f7" containerName="collect-profiles" Jan 30 20:15:12 crc kubenswrapper[4712]: I0130 20:15:12.293762 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="248aeced-2112-4755-a722-a88bbbb8d3f7" containerName="collect-profiles" Jan 30 20:15:12 crc kubenswrapper[4712]: I0130 20:15:12.305107 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h58tc"] Jan 30 20:15:12 crc kubenswrapper[4712]: I0130 20:15:12.305210 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h58tc" Jan 30 20:15:12 crc kubenswrapper[4712]: I0130 20:15:12.369743 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e833c9a3-7eaf-468e-a6aa-9e98f33b0174-catalog-content\") pod \"redhat-operators-h58tc\" (UID: \"e833c9a3-7eaf-468e-a6aa-9e98f33b0174\") " pod="openshift-marketplace/redhat-operators-h58tc" Jan 30 20:15:12 crc kubenswrapper[4712]: I0130 20:15:12.369869 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e833c9a3-7eaf-468e-a6aa-9e98f33b0174-utilities\") pod \"redhat-operators-h58tc\" (UID: \"e833c9a3-7eaf-468e-a6aa-9e98f33b0174\") " pod="openshift-marketplace/redhat-operators-h58tc" Jan 30 20:15:12 crc kubenswrapper[4712]: I0130 20:15:12.369916 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbsd6\" (UniqueName: \"kubernetes.io/projected/e833c9a3-7eaf-468e-a6aa-9e98f33b0174-kube-api-access-xbsd6\") pod \"redhat-operators-h58tc\" (UID: \"e833c9a3-7eaf-468e-a6aa-9e98f33b0174\") " pod="openshift-marketplace/redhat-operators-h58tc" Jan 30 20:15:12 crc kubenswrapper[4712]: I0130 20:15:12.470513 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e833c9a3-7eaf-468e-a6aa-9e98f33b0174-catalog-content\") pod \"redhat-operators-h58tc\" (UID: \"e833c9a3-7eaf-468e-a6aa-9e98f33b0174\") " pod="openshift-marketplace/redhat-operators-h58tc" Jan 30 20:15:12 crc kubenswrapper[4712]: I0130 20:15:12.470818 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e833c9a3-7eaf-468e-a6aa-9e98f33b0174-utilities\") pod \"redhat-operators-h58tc\" (UID: \"e833c9a3-7eaf-468e-a6aa-9e98f33b0174\") " pod="openshift-marketplace/redhat-operators-h58tc" Jan 30 20:15:12 crc kubenswrapper[4712]: I0130 20:15:12.470940 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xbsd6\" (UniqueName: \"kubernetes.io/projected/e833c9a3-7eaf-468e-a6aa-9e98f33b0174-kube-api-access-xbsd6\") pod \"redhat-operators-h58tc\" (UID: \"e833c9a3-7eaf-468e-a6aa-9e98f33b0174\") " pod="openshift-marketplace/redhat-operators-h58tc" Jan 30 20:15:12 crc kubenswrapper[4712]: I0130 20:15:12.471323 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e833c9a3-7eaf-468e-a6aa-9e98f33b0174-utilities\") pod \"redhat-operators-h58tc\" (UID: \"e833c9a3-7eaf-468e-a6aa-9e98f33b0174\") " pod="openshift-marketplace/redhat-operators-h58tc" Jan 30 20:15:12 crc kubenswrapper[4712]: I0130 20:15:12.471543 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e833c9a3-7eaf-468e-a6aa-9e98f33b0174-catalog-content\") pod \"redhat-operators-h58tc\" (UID: \"e833c9a3-7eaf-468e-a6aa-9e98f33b0174\") " pod="openshift-marketplace/redhat-operators-h58tc" Jan 30 20:15:12 crc kubenswrapper[4712]: I0130 20:15:12.496233 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xbsd6\" (UniqueName: \"kubernetes.io/projected/e833c9a3-7eaf-468e-a6aa-9e98f33b0174-kube-api-access-xbsd6\") pod \"redhat-operators-h58tc\" (UID: 
\"e833c9a3-7eaf-468e-a6aa-9e98f33b0174\") " pod="openshift-marketplace/redhat-operators-h58tc" Jan 30 20:15:12 crc kubenswrapper[4712]: I0130 20:15:12.634932 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h58tc" Jan 30 20:15:12 crc kubenswrapper[4712]: I0130 20:15:12.863566 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4rk2s"] Jan 30 20:15:12 crc kubenswrapper[4712]: I0130 20:15:12.866615 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4rk2s" Jan 30 20:15:12 crc kubenswrapper[4712]: I0130 20:15:12.876662 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n9bp\" (UniqueName: \"kubernetes.io/projected/a60d9261-e6d2-429e-a64f-7a870db9ecb3-kube-api-access-4n9bp\") pod \"certified-operators-4rk2s\" (UID: \"a60d9261-e6d2-429e-a64f-7a870db9ecb3\") " pod="openshift-marketplace/certified-operators-4rk2s" Jan 30 20:15:12 crc kubenswrapper[4712]: I0130 20:15:12.876735 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a60d9261-e6d2-429e-a64f-7a870db9ecb3-utilities\") pod \"certified-operators-4rk2s\" (UID: \"a60d9261-e6d2-429e-a64f-7a870db9ecb3\") " pod="openshift-marketplace/certified-operators-4rk2s" Jan 30 20:15:12 crc kubenswrapper[4712]: I0130 20:15:12.876766 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a60d9261-e6d2-429e-a64f-7a870db9ecb3-catalog-content\") pod \"certified-operators-4rk2s\" (UID: \"a60d9261-e6d2-429e-a64f-7a870db9ecb3\") " pod="openshift-marketplace/certified-operators-4rk2s" Jan 30 20:15:12 crc kubenswrapper[4712]: I0130 20:15:12.883085 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4rk2s"] Jan 30 20:15:12 crc kubenswrapper[4712]: I0130 20:15:12.978419 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4n9bp\" (UniqueName: \"kubernetes.io/projected/a60d9261-e6d2-429e-a64f-7a870db9ecb3-kube-api-access-4n9bp\") pod \"certified-operators-4rk2s\" (UID: \"a60d9261-e6d2-429e-a64f-7a870db9ecb3\") " pod="openshift-marketplace/certified-operators-4rk2s" Jan 30 20:15:12 crc kubenswrapper[4712]: I0130 20:15:12.978708 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a60d9261-e6d2-429e-a64f-7a870db9ecb3-utilities\") pod \"certified-operators-4rk2s\" (UID: \"a60d9261-e6d2-429e-a64f-7a870db9ecb3\") " pod="openshift-marketplace/certified-operators-4rk2s" Jan 30 20:15:12 crc kubenswrapper[4712]: I0130 20:15:12.978738 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a60d9261-e6d2-429e-a64f-7a870db9ecb3-catalog-content\") pod \"certified-operators-4rk2s\" (UID: \"a60d9261-e6d2-429e-a64f-7a870db9ecb3\") " pod="openshift-marketplace/certified-operators-4rk2s" Jan 30 20:15:12 crc kubenswrapper[4712]: I0130 20:15:12.979163 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a60d9261-e6d2-429e-a64f-7a870db9ecb3-catalog-content\") pod \"certified-operators-4rk2s\" (UID: 
\"a60d9261-e6d2-429e-a64f-7a870db9ecb3\") " pod="openshift-marketplace/certified-operators-4rk2s" Jan 30 20:15:12 crc kubenswrapper[4712]: I0130 20:15:12.979367 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a60d9261-e6d2-429e-a64f-7a870db9ecb3-utilities\") pod \"certified-operators-4rk2s\" (UID: \"a60d9261-e6d2-429e-a64f-7a870db9ecb3\") " pod="openshift-marketplace/certified-operators-4rk2s" Jan 30 20:15:12 crc kubenswrapper[4712]: I0130 20:15:12.997697 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4n9bp\" (UniqueName: \"kubernetes.io/projected/a60d9261-e6d2-429e-a64f-7a870db9ecb3-kube-api-access-4n9bp\") pod \"certified-operators-4rk2s\" (UID: \"a60d9261-e6d2-429e-a64f-7a870db9ecb3\") " pod="openshift-marketplace/certified-operators-4rk2s" Jan 30 20:15:13 crc kubenswrapper[4712]: I0130 20:15:13.179225 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h58tc"] Jan 30 20:15:13 crc kubenswrapper[4712]: I0130 20:15:13.202571 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4rk2s" Jan 30 20:15:13 crc kubenswrapper[4712]: I0130 20:15:13.583963 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4rk2s"] Jan 30 20:15:14 crc kubenswrapper[4712]: I0130 20:15:14.155577 4712 generic.go:334] "Generic (PLEG): container finished" podID="e833c9a3-7eaf-468e-a6aa-9e98f33b0174" containerID="fbec4912ab8583eea9b488743c1142b53e7259c66937e65a82572e9769db4e1d" exitCode=0 Jan 30 20:15:14 crc kubenswrapper[4712]: I0130 20:15:14.155627 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h58tc" event={"ID":"e833c9a3-7eaf-468e-a6aa-9e98f33b0174","Type":"ContainerDied","Data":"fbec4912ab8583eea9b488743c1142b53e7259c66937e65a82572e9769db4e1d"} Jan 30 20:15:14 crc kubenswrapper[4712]: I0130 20:15:14.155899 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h58tc" event={"ID":"e833c9a3-7eaf-468e-a6aa-9e98f33b0174","Type":"ContainerStarted","Data":"3b01effcca0bcb79e102eb11778a5cdee63ac01ef24265c0dab67a494bd7ef46"} Jan 30 20:15:14 crc kubenswrapper[4712]: I0130 20:15:14.157287 4712 generic.go:334] "Generic (PLEG): container finished" podID="a60d9261-e6d2-429e-a64f-7a870db9ecb3" containerID="5f1d76011578eba9991bfd64c0fd9eb596ad629ffa9efe0c3c0b2179169dd367" exitCode=0 Jan 30 20:15:14 crc kubenswrapper[4712]: I0130 20:15:14.157342 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4rk2s" event={"ID":"a60d9261-e6d2-429e-a64f-7a870db9ecb3","Type":"ContainerDied","Data":"5f1d76011578eba9991bfd64c0fd9eb596ad629ffa9efe0c3c0b2179169dd367"} Jan 30 20:15:14 crc kubenswrapper[4712]: I0130 20:15:14.157374 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4rk2s" event={"ID":"a60d9261-e6d2-429e-a64f-7a870db9ecb3","Type":"ContainerStarted","Data":"6c1edd3e558c7d4645874b38a17f288bd6b71dc9bb8deb1f6e72e6f8d8de8755"} Jan 30 20:15:14 crc kubenswrapper[4712]: I0130 20:15:14.799909 4712 scope.go:117] "RemoveContainer" containerID="50851518c5bcd7c0037b746cec7b4666c7f6edb15f8d8f0146daf8663f9a2eff" Jan 30 20:15:14 crc kubenswrapper[4712]: E0130 20:15:14.800254 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 20:15:15 crc kubenswrapper[4712]: I0130 20:15:15.181564 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h58tc" event={"ID":"e833c9a3-7eaf-468e-a6aa-9e98f33b0174","Type":"ContainerStarted","Data":"99d139cd3c74afee2e6dc60318e0e4bb328964fec3c2f75f1f4553bff3724280"} Jan 30 20:15:15 crc kubenswrapper[4712]: I0130 20:15:15.186627 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4rk2s" event={"ID":"a60d9261-e6d2-429e-a64f-7a870db9ecb3","Type":"ContainerStarted","Data":"20eec642d456880ad0331668724dcbc47d94cc3916c013e73357afe678936d5c"} Jan 30 20:15:18 crc kubenswrapper[4712]: I0130 20:15:18.217467 4712 generic.go:334] "Generic (PLEG): container finished" podID="a60d9261-e6d2-429e-a64f-7a870db9ecb3" containerID="20eec642d456880ad0331668724dcbc47d94cc3916c013e73357afe678936d5c" exitCode=0 Jan 30 20:15:18 crc kubenswrapper[4712]: I0130 20:15:18.217582 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4rk2s" event={"ID":"a60d9261-e6d2-429e-a64f-7a870db9ecb3","Type":"ContainerDied","Data":"20eec642d456880ad0331668724dcbc47d94cc3916c013e73357afe678936d5c"} Jan 30 20:15:19 crc kubenswrapper[4712]: I0130 20:15:19.236248 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4rk2s" event={"ID":"a60d9261-e6d2-429e-a64f-7a870db9ecb3","Type":"ContainerStarted","Data":"9f413f9468faa23d01cb8b6416472425e07e14ee3f9722721a74179e29dc7093"} Jan 30 20:15:19 crc kubenswrapper[4712]: I0130 20:15:19.273098 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4rk2s" podStartSLOduration=2.6897692429999998 podStartE2EDuration="7.273075209s" podCreationTimestamp="2026-01-30 20:15:12 +0000 UTC" firstStartedPulling="2026-01-30 20:15:14.158905827 +0000 UTC m=+12051.065915296" lastFinishedPulling="2026-01-30 20:15:18.742211783 +0000 UTC m=+12055.649221262" observedRunningTime="2026-01-30 20:15:19.268032547 +0000 UTC m=+12056.175042056" watchObservedRunningTime="2026-01-30 20:15:19.273075209 +0000 UTC m=+12056.180084718" Jan 30 20:15:21 crc kubenswrapper[4712]: I0130 20:15:21.262327 4712 generic.go:334] "Generic (PLEG): container finished" podID="e833c9a3-7eaf-468e-a6aa-9e98f33b0174" containerID="99d139cd3c74afee2e6dc60318e0e4bb328964fec3c2f75f1f4553bff3724280" exitCode=0 Jan 30 20:15:21 crc kubenswrapper[4712]: I0130 20:15:21.262426 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h58tc" event={"ID":"e833c9a3-7eaf-468e-a6aa-9e98f33b0174","Type":"ContainerDied","Data":"99d139cd3c74afee2e6dc60318e0e4bb328964fec3c2f75f1f4553bff3724280"} Jan 30 20:15:22 crc kubenswrapper[4712]: I0130 20:15:22.273876 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h58tc" event={"ID":"e833c9a3-7eaf-468e-a6aa-9e98f33b0174","Type":"ContainerStarted","Data":"11a6f6e6c56d7b95d33ab3b0b2fc6d2aa76462942059fa66a7b88557e118505e"} Jan 30 20:15:22 crc kubenswrapper[4712]: I0130 20:15:22.337214 4712 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-marketplace/redhat-operators-h58tc" podStartSLOduration=2.821367768 podStartE2EDuration="10.337191826s" podCreationTimestamp="2026-01-30 20:15:12 +0000 UTC" firstStartedPulling="2026-01-30 20:15:14.157916414 +0000 UTC m=+12051.064925883" lastFinishedPulling="2026-01-30 20:15:21.673740432 +0000 UTC m=+12058.580749941" observedRunningTime="2026-01-30 20:15:22.321317063 +0000 UTC m=+12059.228326522" watchObservedRunningTime="2026-01-30 20:15:22.337191826 +0000 UTC m=+12059.244201315" Jan 30 20:15:22 crc kubenswrapper[4712]: I0130 20:15:22.636518 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-h58tc" Jan 30 20:15:22 crc kubenswrapper[4712]: I0130 20:15:22.636667 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-h58tc" Jan 30 20:15:23 crc kubenswrapper[4712]: I0130 20:15:23.203081 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4rk2s" Jan 30 20:15:23 crc kubenswrapper[4712]: I0130 20:15:23.203125 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4rk2s" Jan 30 20:15:24 crc kubenswrapper[4712]: I0130 20:15:24.248147 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-4rk2s" podUID="a60d9261-e6d2-429e-a64f-7a870db9ecb3" containerName="registry-server" probeResult="failure" output=< Jan 30 20:15:24 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 20:15:24 crc kubenswrapper[4712]: > Jan 30 20:15:24 crc kubenswrapper[4712]: I0130 20:15:24.284119 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h58tc" podUID="e833c9a3-7eaf-468e-a6aa-9e98f33b0174" containerName="registry-server" probeResult="failure" output=< Jan 30 20:15:24 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 20:15:24 crc kubenswrapper[4712]: > Jan 30 20:15:26 crc kubenswrapper[4712]: I0130 20:15:26.799457 4712 scope.go:117] "RemoveContainer" containerID="50851518c5bcd7c0037b746cec7b4666c7f6edb15f8d8f0146daf8663f9a2eff" Jan 30 20:15:26 crc kubenswrapper[4712]: E0130 20:15:26.800217 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 20:15:33 crc kubenswrapper[4712]: I0130 20:15:33.268763 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4rk2s" Jan 30 20:15:33 crc kubenswrapper[4712]: I0130 20:15:33.348524 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4rk2s" Jan 30 20:15:33 crc kubenswrapper[4712]: I0130 20:15:33.507613 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4rk2s"] Jan 30 20:15:33 crc kubenswrapper[4712]: I0130 20:15:33.698457 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h58tc" podUID="e833c9a3-7eaf-468e-a6aa-9e98f33b0174" 
containerName="registry-server" probeResult="failure" output=< Jan 30 20:15:33 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 20:15:33 crc kubenswrapper[4712]: > Jan 30 20:15:34 crc kubenswrapper[4712]: I0130 20:15:34.426912 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4rk2s" podUID="a60d9261-e6d2-429e-a64f-7a870db9ecb3" containerName="registry-server" containerID="cri-o://9f413f9468faa23d01cb8b6416472425e07e14ee3f9722721a74179e29dc7093" gracePeriod=2 Jan 30 20:15:35 crc kubenswrapper[4712]: I0130 20:15:35.056065 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-4rk2s" Jan 30 20:15:35 crc kubenswrapper[4712]: I0130 20:15:35.171219 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a60d9261-e6d2-429e-a64f-7a870db9ecb3-utilities" (OuterVolumeSpecName: "utilities") pod "a60d9261-e6d2-429e-a64f-7a870db9ecb3" (UID: "a60d9261-e6d2-429e-a64f-7a870db9ecb3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 20:15:35 crc kubenswrapper[4712]: I0130 20:15:35.171267 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a60d9261-e6d2-429e-a64f-7a870db9ecb3-utilities\") pod \"a60d9261-e6d2-429e-a64f-7a870db9ecb3\" (UID: \"a60d9261-e6d2-429e-a64f-7a870db9ecb3\") " Jan 30 20:15:35 crc kubenswrapper[4712]: I0130 20:15:35.171399 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4n9bp\" (UniqueName: \"kubernetes.io/projected/a60d9261-e6d2-429e-a64f-7a870db9ecb3-kube-api-access-4n9bp\") pod \"a60d9261-e6d2-429e-a64f-7a870db9ecb3\" (UID: \"a60d9261-e6d2-429e-a64f-7a870db9ecb3\") " Jan 30 20:15:35 crc kubenswrapper[4712]: I0130 20:15:35.171439 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a60d9261-e6d2-429e-a64f-7a870db9ecb3-catalog-content\") pod \"a60d9261-e6d2-429e-a64f-7a870db9ecb3\" (UID: \"a60d9261-e6d2-429e-a64f-7a870db9ecb3\") " Jan 30 20:15:35 crc kubenswrapper[4712]: I0130 20:15:35.172703 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a60d9261-e6d2-429e-a64f-7a870db9ecb3-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 20:15:35 crc kubenswrapper[4712]: I0130 20:15:35.176635 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a60d9261-e6d2-429e-a64f-7a870db9ecb3-kube-api-access-4n9bp" (OuterVolumeSpecName: "kube-api-access-4n9bp") pod "a60d9261-e6d2-429e-a64f-7a870db9ecb3" (UID: "a60d9261-e6d2-429e-a64f-7a870db9ecb3"). InnerVolumeSpecName "kube-api-access-4n9bp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 20:15:35 crc kubenswrapper[4712]: I0130 20:15:35.231900 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a60d9261-e6d2-429e-a64f-7a870db9ecb3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a60d9261-e6d2-429e-a64f-7a870db9ecb3" (UID: "a60d9261-e6d2-429e-a64f-7a870db9ecb3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 20:15:35 crc kubenswrapper[4712]: I0130 20:15:35.274585 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4n9bp\" (UniqueName: \"kubernetes.io/projected/a60d9261-e6d2-429e-a64f-7a870db9ecb3-kube-api-access-4n9bp\") on node \"crc\" DevicePath \"\"" Jan 30 20:15:35 crc kubenswrapper[4712]: I0130 20:15:35.274631 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a60d9261-e6d2-429e-a64f-7a870db9ecb3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 20:15:35 crc kubenswrapper[4712]: I0130 20:15:35.439882 4712 generic.go:334] "Generic (PLEG): container finished" podID="a60d9261-e6d2-429e-a64f-7a870db9ecb3" containerID="9f413f9468faa23d01cb8b6416472425e07e14ee3f9722721a74179e29dc7093" exitCode=0 Jan 30 20:15:35 crc kubenswrapper[4712]: I0130 20:15:35.439922 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4rk2s" event={"ID":"a60d9261-e6d2-429e-a64f-7a870db9ecb3","Type":"ContainerDied","Data":"9f413f9468faa23d01cb8b6416472425e07e14ee3f9722721a74179e29dc7093"} Jan 30 20:15:35 crc kubenswrapper[4712]: I0130 20:15:35.439949 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4rk2s" event={"ID":"a60d9261-e6d2-429e-a64f-7a870db9ecb3","Type":"ContainerDied","Data":"6c1edd3e558c7d4645874b38a17f288bd6b71dc9bb8deb1f6e72e6f8d8de8755"} Jan 30 20:15:35 crc kubenswrapper[4712]: I0130 20:15:35.439967 4712 scope.go:117] "RemoveContainer" containerID="9f413f9468faa23d01cb8b6416472425e07e14ee3f9722721a74179e29dc7093" Jan 30 20:15:35 crc kubenswrapper[4712]: I0130 20:15:35.440005 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4rk2s" Jan 30 20:15:35 crc kubenswrapper[4712]: I0130 20:15:35.472523 4712 scope.go:117] "RemoveContainer" containerID="20eec642d456880ad0331668724dcbc47d94cc3916c013e73357afe678936d5c" Jan 30 20:15:35 crc kubenswrapper[4712]: I0130 20:15:35.492372 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4rk2s"] Jan 30 20:15:35 crc kubenswrapper[4712]: I0130 20:15:35.506104 4712 scope.go:117] "RemoveContainer" containerID="5f1d76011578eba9991bfd64c0fd9eb596ad629ffa9efe0c3c0b2179169dd367" Jan 30 20:15:35 crc kubenswrapper[4712]: I0130 20:15:35.513512 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4rk2s"] Jan 30 20:15:35 crc kubenswrapper[4712]: I0130 20:15:35.560151 4712 scope.go:117] "RemoveContainer" containerID="9f413f9468faa23d01cb8b6416472425e07e14ee3f9722721a74179e29dc7093" Jan 30 20:15:35 crc kubenswrapper[4712]: E0130 20:15:35.560554 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f413f9468faa23d01cb8b6416472425e07e14ee3f9722721a74179e29dc7093\": container with ID starting with 9f413f9468faa23d01cb8b6416472425e07e14ee3f9722721a74179e29dc7093 not found: ID does not exist" containerID="9f413f9468faa23d01cb8b6416472425e07e14ee3f9722721a74179e29dc7093" Jan 30 20:15:35 crc kubenswrapper[4712]: I0130 20:15:35.560594 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f413f9468faa23d01cb8b6416472425e07e14ee3f9722721a74179e29dc7093"} err="failed to get container status \"9f413f9468faa23d01cb8b6416472425e07e14ee3f9722721a74179e29dc7093\": rpc error: code = NotFound desc = could not find container \"9f413f9468faa23d01cb8b6416472425e07e14ee3f9722721a74179e29dc7093\": container with ID starting with 9f413f9468faa23d01cb8b6416472425e07e14ee3f9722721a74179e29dc7093 not found: ID does not exist" Jan 30 20:15:35 crc kubenswrapper[4712]: I0130 20:15:35.560622 4712 scope.go:117] "RemoveContainer" containerID="20eec642d456880ad0331668724dcbc47d94cc3916c013e73357afe678936d5c" Jan 30 20:15:35 crc kubenswrapper[4712]: E0130 20:15:35.560896 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"20eec642d456880ad0331668724dcbc47d94cc3916c013e73357afe678936d5c\": container with ID starting with 20eec642d456880ad0331668724dcbc47d94cc3916c013e73357afe678936d5c not found: ID does not exist" containerID="20eec642d456880ad0331668724dcbc47d94cc3916c013e73357afe678936d5c" Jan 30 20:15:35 crc kubenswrapper[4712]: I0130 20:15:35.562215 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"20eec642d456880ad0331668724dcbc47d94cc3916c013e73357afe678936d5c"} err="failed to get container status \"20eec642d456880ad0331668724dcbc47d94cc3916c013e73357afe678936d5c\": rpc error: code = NotFound desc = could not find container \"20eec642d456880ad0331668724dcbc47d94cc3916c013e73357afe678936d5c\": container with ID starting with 20eec642d456880ad0331668724dcbc47d94cc3916c013e73357afe678936d5c not found: ID does not exist" Jan 30 20:15:35 crc kubenswrapper[4712]: I0130 20:15:35.562235 4712 scope.go:117] "RemoveContainer" containerID="5f1d76011578eba9991bfd64c0fd9eb596ad629ffa9efe0c3c0b2179169dd367" Jan 30 20:15:35 crc kubenswrapper[4712]: E0130 20:15:35.562493 4712 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"5f1d76011578eba9991bfd64c0fd9eb596ad629ffa9efe0c3c0b2179169dd367\": container with ID starting with 5f1d76011578eba9991bfd64c0fd9eb596ad629ffa9efe0c3c0b2179169dd367 not found: ID does not exist" containerID="5f1d76011578eba9991bfd64c0fd9eb596ad629ffa9efe0c3c0b2179169dd367" Jan 30 20:15:35 crc kubenswrapper[4712]: I0130 20:15:35.562513 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f1d76011578eba9991bfd64c0fd9eb596ad629ffa9efe0c3c0b2179169dd367"} err="failed to get container status \"5f1d76011578eba9991bfd64c0fd9eb596ad629ffa9efe0c3c0b2179169dd367\": rpc error: code = NotFound desc = could not find container \"5f1d76011578eba9991bfd64c0fd9eb596ad629ffa9efe0c3c0b2179169dd367\": container with ID starting with 5f1d76011578eba9991bfd64c0fd9eb596ad629ffa9efe0c3c0b2179169dd367 not found: ID does not exist" Jan 30 20:15:35 crc kubenswrapper[4712]: I0130 20:15:35.814772 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a60d9261-e6d2-429e-a64f-7a870db9ecb3" path="/var/lib/kubelet/pods/a60d9261-e6d2-429e-a64f-7a870db9ecb3/volumes" Jan 30 20:15:41 crc kubenswrapper[4712]: I0130 20:15:41.801333 4712 scope.go:117] "RemoveContainer" containerID="50851518c5bcd7c0037b746cec7b4666c7f6edb15f8d8f0146daf8663f9a2eff" Jan 30 20:15:41 crc kubenswrapper[4712]: E0130 20:15:41.802591 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 20:15:43 crc kubenswrapper[4712]: I0130 20:15:43.707142 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h58tc" podUID="e833c9a3-7eaf-468e-a6aa-9e98f33b0174" containerName="registry-server" probeResult="failure" output=< Jan 30 20:15:43 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 20:15:43 crc kubenswrapper[4712]: > Jan 30 20:15:53 crc kubenswrapper[4712]: I0130 20:15:53.687620 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h58tc" podUID="e833c9a3-7eaf-468e-a6aa-9e98f33b0174" containerName="registry-server" probeResult="failure" output=< Jan 30 20:15:53 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 20:15:53 crc kubenswrapper[4712]: > Jan 30 20:15:55 crc kubenswrapper[4712]: I0130 20:15:55.799391 4712 scope.go:117] "RemoveContainer" containerID="50851518c5bcd7c0037b746cec7b4666c7f6edb15f8d8f0146daf8663f9a2eff" Jan 30 20:15:55 crc kubenswrapper[4712]: E0130 20:15:55.799713 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 20:16:02 crc kubenswrapper[4712]: I0130 20:16:02.682720 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-h58tc" 
Jan 30 20:16:02 crc kubenswrapper[4712]: I0130 20:16:02.740390 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-h58tc" Jan 30 20:16:02 crc kubenswrapper[4712]: I0130 20:16:02.919393 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h58tc"] Jan 30 20:16:03 crc kubenswrapper[4712]: I0130 20:16:03.732691 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-h58tc" podUID="e833c9a3-7eaf-468e-a6aa-9e98f33b0174" containerName="registry-server" containerID="cri-o://11a6f6e6c56d7b95d33ab3b0b2fc6d2aa76462942059fa66a7b88557e118505e" gracePeriod=2 Jan 30 20:16:04 crc kubenswrapper[4712]: I0130 20:16:04.427526 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h58tc" Jan 30 20:16:04 crc kubenswrapper[4712]: I0130 20:16:04.525152 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbsd6\" (UniqueName: \"kubernetes.io/projected/e833c9a3-7eaf-468e-a6aa-9e98f33b0174-kube-api-access-xbsd6\") pod \"e833c9a3-7eaf-468e-a6aa-9e98f33b0174\" (UID: \"e833c9a3-7eaf-468e-a6aa-9e98f33b0174\") " Jan 30 20:16:04 crc kubenswrapper[4712]: I0130 20:16:04.525393 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e833c9a3-7eaf-468e-a6aa-9e98f33b0174-catalog-content\") pod \"e833c9a3-7eaf-468e-a6aa-9e98f33b0174\" (UID: \"e833c9a3-7eaf-468e-a6aa-9e98f33b0174\") " Jan 30 20:16:04 crc kubenswrapper[4712]: I0130 20:16:04.525639 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e833c9a3-7eaf-468e-a6aa-9e98f33b0174-utilities\") pod \"e833c9a3-7eaf-468e-a6aa-9e98f33b0174\" (UID: \"e833c9a3-7eaf-468e-a6aa-9e98f33b0174\") " Jan 30 20:16:04 crc kubenswrapper[4712]: I0130 20:16:04.527868 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e833c9a3-7eaf-468e-a6aa-9e98f33b0174-utilities" (OuterVolumeSpecName: "utilities") pod "e833c9a3-7eaf-468e-a6aa-9e98f33b0174" (UID: "e833c9a3-7eaf-468e-a6aa-9e98f33b0174"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 20:16:04 crc kubenswrapper[4712]: I0130 20:16:04.542442 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e833c9a3-7eaf-468e-a6aa-9e98f33b0174-kube-api-access-xbsd6" (OuterVolumeSpecName: "kube-api-access-xbsd6") pod "e833c9a3-7eaf-468e-a6aa-9e98f33b0174" (UID: "e833c9a3-7eaf-468e-a6aa-9e98f33b0174"). InnerVolumeSpecName "kube-api-access-xbsd6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 20:16:04 crc kubenswrapper[4712]: I0130 20:16:04.630926 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbsd6\" (UniqueName: \"kubernetes.io/projected/e833c9a3-7eaf-468e-a6aa-9e98f33b0174-kube-api-access-xbsd6\") on node \"crc\" DevicePath \"\"" Jan 30 20:16:04 crc kubenswrapper[4712]: I0130 20:16:04.630985 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e833c9a3-7eaf-468e-a6aa-9e98f33b0174-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 20:16:04 crc kubenswrapper[4712]: I0130 20:16:04.635327 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e833c9a3-7eaf-468e-a6aa-9e98f33b0174-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e833c9a3-7eaf-468e-a6aa-9e98f33b0174" (UID: "e833c9a3-7eaf-468e-a6aa-9e98f33b0174"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 20:16:04 crc kubenswrapper[4712]: I0130 20:16:04.732736 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e833c9a3-7eaf-468e-a6aa-9e98f33b0174-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 20:16:04 crc kubenswrapper[4712]: I0130 20:16:04.751726 4712 generic.go:334] "Generic (PLEG): container finished" podID="e833c9a3-7eaf-468e-a6aa-9e98f33b0174" containerID="11a6f6e6c56d7b95d33ab3b0b2fc6d2aa76462942059fa66a7b88557e118505e" exitCode=0 Jan 30 20:16:04 crc kubenswrapper[4712]: I0130 20:16:04.751766 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h58tc" event={"ID":"e833c9a3-7eaf-468e-a6aa-9e98f33b0174","Type":"ContainerDied","Data":"11a6f6e6c56d7b95d33ab3b0b2fc6d2aa76462942059fa66a7b88557e118505e"} Jan 30 20:16:04 crc kubenswrapper[4712]: I0130 20:16:04.751810 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h58tc" event={"ID":"e833c9a3-7eaf-468e-a6aa-9e98f33b0174","Type":"ContainerDied","Data":"3b01effcca0bcb79e102eb11778a5cdee63ac01ef24265c0dab67a494bd7ef46"} Jan 30 20:16:04 crc kubenswrapper[4712]: I0130 20:16:04.751813 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h58tc" Jan 30 20:16:04 crc kubenswrapper[4712]: I0130 20:16:04.751827 4712 scope.go:117] "RemoveContainer" containerID="11a6f6e6c56d7b95d33ab3b0b2fc6d2aa76462942059fa66a7b88557e118505e" Jan 30 20:16:04 crc kubenswrapper[4712]: I0130 20:16:04.808846 4712 scope.go:117] "RemoveContainer" containerID="99d139cd3c74afee2e6dc60318e0e4bb328964fec3c2f75f1f4553bff3724280" Jan 30 20:16:04 crc kubenswrapper[4712]: I0130 20:16:04.832646 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h58tc"] Jan 30 20:16:04 crc kubenswrapper[4712]: I0130 20:16:04.839211 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-h58tc"] Jan 30 20:16:04 crc kubenswrapper[4712]: I0130 20:16:04.857552 4712 scope.go:117] "RemoveContainer" containerID="fbec4912ab8583eea9b488743c1142b53e7259c66937e65a82572e9769db4e1d" Jan 30 20:16:04 crc kubenswrapper[4712]: I0130 20:16:04.879065 4712 scope.go:117] "RemoveContainer" containerID="11a6f6e6c56d7b95d33ab3b0b2fc6d2aa76462942059fa66a7b88557e118505e" Jan 30 20:16:04 crc kubenswrapper[4712]: E0130 20:16:04.879376 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11a6f6e6c56d7b95d33ab3b0b2fc6d2aa76462942059fa66a7b88557e118505e\": container with ID starting with 11a6f6e6c56d7b95d33ab3b0b2fc6d2aa76462942059fa66a7b88557e118505e not found: ID does not exist" containerID="11a6f6e6c56d7b95d33ab3b0b2fc6d2aa76462942059fa66a7b88557e118505e" Jan 30 20:16:04 crc kubenswrapper[4712]: I0130 20:16:04.879424 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11a6f6e6c56d7b95d33ab3b0b2fc6d2aa76462942059fa66a7b88557e118505e"} err="failed to get container status \"11a6f6e6c56d7b95d33ab3b0b2fc6d2aa76462942059fa66a7b88557e118505e\": rpc error: code = NotFound desc = could not find container \"11a6f6e6c56d7b95d33ab3b0b2fc6d2aa76462942059fa66a7b88557e118505e\": container with ID starting with 11a6f6e6c56d7b95d33ab3b0b2fc6d2aa76462942059fa66a7b88557e118505e not found: ID does not exist" Jan 30 20:16:04 crc kubenswrapper[4712]: I0130 20:16:04.879464 4712 scope.go:117] "RemoveContainer" containerID="99d139cd3c74afee2e6dc60318e0e4bb328964fec3c2f75f1f4553bff3724280" Jan 30 20:16:04 crc kubenswrapper[4712]: E0130 20:16:04.880014 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99d139cd3c74afee2e6dc60318e0e4bb328964fec3c2f75f1f4553bff3724280\": container with ID starting with 99d139cd3c74afee2e6dc60318e0e4bb328964fec3c2f75f1f4553bff3724280 not found: ID does not exist" containerID="99d139cd3c74afee2e6dc60318e0e4bb328964fec3c2f75f1f4553bff3724280" Jan 30 20:16:04 crc kubenswrapper[4712]: I0130 20:16:04.880045 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99d139cd3c74afee2e6dc60318e0e4bb328964fec3c2f75f1f4553bff3724280"} err="failed to get container status \"99d139cd3c74afee2e6dc60318e0e4bb328964fec3c2f75f1f4553bff3724280\": rpc error: code = NotFound desc = could not find container \"99d139cd3c74afee2e6dc60318e0e4bb328964fec3c2f75f1f4553bff3724280\": container with ID starting with 99d139cd3c74afee2e6dc60318e0e4bb328964fec3c2f75f1f4553bff3724280 not found: ID does not exist" Jan 30 20:16:04 crc kubenswrapper[4712]: I0130 20:16:04.880066 4712 scope.go:117] "RemoveContainer" 
containerID="fbec4912ab8583eea9b488743c1142b53e7259c66937e65a82572e9769db4e1d" Jan 30 20:16:04 crc kubenswrapper[4712]: E0130 20:16:04.880312 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbec4912ab8583eea9b488743c1142b53e7259c66937e65a82572e9769db4e1d\": container with ID starting with fbec4912ab8583eea9b488743c1142b53e7259c66937e65a82572e9769db4e1d not found: ID does not exist" containerID="fbec4912ab8583eea9b488743c1142b53e7259c66937e65a82572e9769db4e1d" Jan 30 20:16:04 crc kubenswrapper[4712]: I0130 20:16:04.880332 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbec4912ab8583eea9b488743c1142b53e7259c66937e65a82572e9769db4e1d"} err="failed to get container status \"fbec4912ab8583eea9b488743c1142b53e7259c66937e65a82572e9769db4e1d\": rpc error: code = NotFound desc = could not find container \"fbec4912ab8583eea9b488743c1142b53e7259c66937e65a82572e9769db4e1d\": container with ID starting with fbec4912ab8583eea9b488743c1142b53e7259c66937e65a82572e9769db4e1d not found: ID does not exist" Jan 30 20:16:05 crc kubenswrapper[4712]: I0130 20:16:05.811735 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e833c9a3-7eaf-468e-a6aa-9e98f33b0174" path="/var/lib/kubelet/pods/e833c9a3-7eaf-468e-a6aa-9e98f33b0174/volumes" Jan 30 20:16:07 crc kubenswrapper[4712]: I0130 20:16:07.800174 4712 scope.go:117] "RemoveContainer" containerID="50851518c5bcd7c0037b746cec7b4666c7f6edb15f8d8f0146daf8663f9a2eff" Jan 30 20:16:07 crc kubenswrapper[4712]: E0130 20:16:07.800583 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 20:16:21 crc kubenswrapper[4712]: I0130 20:16:21.800247 4712 scope.go:117] "RemoveContainer" containerID="50851518c5bcd7c0037b746cec7b4666c7f6edb15f8d8f0146daf8663f9a2eff" Jan 30 20:16:21 crc kubenswrapper[4712]: E0130 20:16:21.801355 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 20:16:36 crc kubenswrapper[4712]: I0130 20:16:36.800039 4712 scope.go:117] "RemoveContainer" containerID="50851518c5bcd7c0037b746cec7b4666c7f6edb15f8d8f0146daf8663f9a2eff" Jan 30 20:16:36 crc kubenswrapper[4712]: E0130 20:16:36.801425 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 20:16:50 crc kubenswrapper[4712]: I0130 20:16:50.800572 4712 scope.go:117] "RemoveContainer" 
containerID="50851518c5bcd7c0037b746cec7b4666c7f6edb15f8d8f0146daf8663f9a2eff" Jan 30 20:16:50 crc kubenswrapper[4712]: E0130 20:16:50.802082 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 20:17:01 crc kubenswrapper[4712]: I0130 20:17:01.802938 4712 scope.go:117] "RemoveContainer" containerID="50851518c5bcd7c0037b746cec7b4666c7f6edb15f8d8f0146daf8663f9a2eff" Jan 30 20:17:01 crc kubenswrapper[4712]: E0130 20:17:01.803682 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 20:17:16 crc kubenswrapper[4712]: I0130 20:17:16.800542 4712 scope.go:117] "RemoveContainer" containerID="50851518c5bcd7c0037b746cec7b4666c7f6edb15f8d8f0146daf8663f9a2eff" Jan 30 20:17:16 crc kubenswrapper[4712]: E0130 20:17:16.802495 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 20:17:30 crc kubenswrapper[4712]: I0130 20:17:30.800278 4712 scope.go:117] "RemoveContainer" containerID="50851518c5bcd7c0037b746cec7b4666c7f6edb15f8d8f0146daf8663f9a2eff" Jan 30 20:17:30 crc kubenswrapper[4712]: E0130 20:17:30.800912 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 20:17:44 crc kubenswrapper[4712]: I0130 20:17:44.800448 4712 scope.go:117] "RemoveContainer" containerID="50851518c5bcd7c0037b746cec7b4666c7f6edb15f8d8f0146daf8663f9a2eff" Jan 30 20:17:44 crc kubenswrapper[4712]: E0130 20:17:44.801500 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 20:17:58 crc kubenswrapper[4712]: I0130 20:17:58.800563 4712 scope.go:117] "RemoveContainer" containerID="50851518c5bcd7c0037b746cec7b4666c7f6edb15f8d8f0146daf8663f9a2eff" Jan 30 20:17:58 crc kubenswrapper[4712]: E0130 20:17:58.801485 4712 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 20:18:11 crc kubenswrapper[4712]: I0130 20:18:11.800296 4712 scope.go:117] "RemoveContainer" containerID="50851518c5bcd7c0037b746cec7b4666c7f6edb15f8d8f0146daf8663f9a2eff" Jan 30 20:18:11 crc kubenswrapper[4712]: E0130 20:18:11.801329 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 20:18:26 crc kubenswrapper[4712]: I0130 20:18:26.800102 4712 scope.go:117] "RemoveContainer" containerID="50851518c5bcd7c0037b746cec7b4666c7f6edb15f8d8f0146daf8663f9a2eff" Jan 30 20:18:26 crc kubenswrapper[4712]: E0130 20:18:26.801066 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 20:18:39 crc kubenswrapper[4712]: I0130 20:18:39.800907 4712 scope.go:117] "RemoveContainer" containerID="50851518c5bcd7c0037b746cec7b4666c7f6edb15f8d8f0146daf8663f9a2eff" Jan 30 20:18:39 crc kubenswrapper[4712]: E0130 20:18:39.802027 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 20:18:50 crc kubenswrapper[4712]: I0130 20:18:50.799770 4712 scope.go:117] "RemoveContainer" containerID="50851518c5bcd7c0037b746cec7b4666c7f6edb15f8d8f0146daf8663f9a2eff" Jan 30 20:18:50 crc kubenswrapper[4712]: E0130 20:18:50.800602 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 20:18:52 crc kubenswrapper[4712]: I0130 20:18:52.851588 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-68974494f7-4p2dn"] Jan 30 20:18:52 crc kubenswrapper[4712]: E0130 20:18:52.852332 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a60d9261-e6d2-429e-a64f-7a870db9ecb3" containerName="registry-server" Jan 30 20:18:52 crc kubenswrapper[4712]: I0130 20:18:52.852348 4712 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a60d9261-e6d2-429e-a64f-7a870db9ecb3" containerName="registry-server" Jan 30 20:18:52 crc kubenswrapper[4712]: E0130 20:18:52.852373 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e833c9a3-7eaf-468e-a6aa-9e98f33b0174" containerName="extract-content" Jan 30 20:18:52 crc kubenswrapper[4712]: I0130 20:18:52.852381 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="e833c9a3-7eaf-468e-a6aa-9e98f33b0174" containerName="extract-content" Jan 30 20:18:52 crc kubenswrapper[4712]: E0130 20:18:52.852400 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e833c9a3-7eaf-468e-a6aa-9e98f33b0174" containerName="registry-server" Jan 30 20:18:52 crc kubenswrapper[4712]: I0130 20:18:52.852408 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="e833c9a3-7eaf-468e-a6aa-9e98f33b0174" containerName="registry-server" Jan 30 20:18:52 crc kubenswrapper[4712]: E0130 20:18:52.852426 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a60d9261-e6d2-429e-a64f-7a870db9ecb3" containerName="extract-content" Jan 30 20:18:52 crc kubenswrapper[4712]: I0130 20:18:52.852435 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="a60d9261-e6d2-429e-a64f-7a870db9ecb3" containerName="extract-content" Jan 30 20:18:52 crc kubenswrapper[4712]: E0130 20:18:52.852445 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e833c9a3-7eaf-468e-a6aa-9e98f33b0174" containerName="extract-utilities" Jan 30 20:18:52 crc kubenswrapper[4712]: I0130 20:18:52.852453 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="e833c9a3-7eaf-468e-a6aa-9e98f33b0174" containerName="extract-utilities" Jan 30 20:18:52 crc kubenswrapper[4712]: E0130 20:18:52.852486 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a60d9261-e6d2-429e-a64f-7a870db9ecb3" containerName="extract-utilities" Jan 30 20:18:52 crc kubenswrapper[4712]: I0130 20:18:52.852494 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="a60d9261-e6d2-429e-a64f-7a870db9ecb3" containerName="extract-utilities" Jan 30 20:18:52 crc kubenswrapper[4712]: I0130 20:18:52.852716 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="a60d9261-e6d2-429e-a64f-7a870db9ecb3" containerName="registry-server" Jan 30 20:18:52 crc kubenswrapper[4712]: I0130 20:18:52.852747 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="e833c9a3-7eaf-468e-a6aa-9e98f33b0174" containerName="registry-server" Jan 30 20:18:52 crc kubenswrapper[4712]: I0130 20:18:52.855560 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-68974494f7-4p2dn" Jan 30 20:18:52 crc kubenswrapper[4712]: I0130 20:18:52.863498 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e1c8729-f223-4ed3-832d-b0848f7a401d-internal-tls-certs\") pod \"neutron-68974494f7-4p2dn\" (UID: \"0e1c8729-f223-4ed3-832d-b0848f7a401d\") " pod="openstack/neutron-68974494f7-4p2dn" Jan 30 20:18:52 crc kubenswrapper[4712]: I0130 20:18:52.863811 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0e1c8729-f223-4ed3-832d-b0848f7a401d-httpd-config\") pod \"neutron-68974494f7-4p2dn\" (UID: \"0e1c8729-f223-4ed3-832d-b0848f7a401d\") " pod="openstack/neutron-68974494f7-4p2dn" Jan 30 20:18:52 crc kubenswrapper[4712]: I0130 20:18:52.863852 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0e1c8729-f223-4ed3-832d-b0848f7a401d-config\") pod \"neutron-68974494f7-4p2dn\" (UID: \"0e1c8729-f223-4ed3-832d-b0848f7a401d\") " pod="openstack/neutron-68974494f7-4p2dn" Jan 30 20:18:52 crc kubenswrapper[4712]: I0130 20:18:52.863951 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e1c8729-f223-4ed3-832d-b0848f7a401d-public-tls-certs\") pod \"neutron-68974494f7-4p2dn\" (UID: \"0e1c8729-f223-4ed3-832d-b0848f7a401d\") " pod="openstack/neutron-68974494f7-4p2dn" Jan 30 20:18:52 crc kubenswrapper[4712]: I0130 20:18:52.864048 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e1c8729-f223-4ed3-832d-b0848f7a401d-combined-ca-bundle\") pod \"neutron-68974494f7-4p2dn\" (UID: \"0e1c8729-f223-4ed3-832d-b0848f7a401d\") " pod="openstack/neutron-68974494f7-4p2dn" Jan 30 20:18:52 crc kubenswrapper[4712]: I0130 20:18:52.864226 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e1c8729-f223-4ed3-832d-b0848f7a401d-ovndb-tls-certs\") pod \"neutron-68974494f7-4p2dn\" (UID: \"0e1c8729-f223-4ed3-832d-b0848f7a401d\") " pod="openstack/neutron-68974494f7-4p2dn" Jan 30 20:18:52 crc kubenswrapper[4712]: I0130 20:18:52.864323 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw4hr\" (UniqueName: \"kubernetes.io/projected/0e1c8729-f223-4ed3-832d-b0848f7a401d-kube-api-access-dw4hr\") pod \"neutron-68974494f7-4p2dn\" (UID: \"0e1c8729-f223-4ed3-832d-b0848f7a401d\") " pod="openstack/neutron-68974494f7-4p2dn" Jan 30 20:18:52 crc kubenswrapper[4712]: I0130 20:18:52.918849 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-68974494f7-4p2dn"] Jan 30 20:18:52 crc kubenswrapper[4712]: I0130 20:18:52.968143 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e1c8729-f223-4ed3-832d-b0848f7a401d-ovndb-tls-certs\") pod \"neutron-68974494f7-4p2dn\" (UID: \"0e1c8729-f223-4ed3-832d-b0848f7a401d\") " pod="openstack/neutron-68974494f7-4p2dn" Jan 30 20:18:52 crc kubenswrapper[4712]: I0130 20:18:52.968217 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-dw4hr\" (UniqueName: \"kubernetes.io/projected/0e1c8729-f223-4ed3-832d-b0848f7a401d-kube-api-access-dw4hr\") pod \"neutron-68974494f7-4p2dn\" (UID: \"0e1c8729-f223-4ed3-832d-b0848f7a401d\") " pod="openstack/neutron-68974494f7-4p2dn" Jan 30 20:18:52 crc kubenswrapper[4712]: I0130 20:18:52.968277 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e1c8729-f223-4ed3-832d-b0848f7a401d-internal-tls-certs\") pod \"neutron-68974494f7-4p2dn\" (UID: \"0e1c8729-f223-4ed3-832d-b0848f7a401d\") " pod="openstack/neutron-68974494f7-4p2dn" Jan 30 20:18:52 crc kubenswrapper[4712]: I0130 20:18:52.968314 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0e1c8729-f223-4ed3-832d-b0848f7a401d-httpd-config\") pod \"neutron-68974494f7-4p2dn\" (UID: \"0e1c8729-f223-4ed3-832d-b0848f7a401d\") " pod="openstack/neutron-68974494f7-4p2dn" Jan 30 20:18:52 crc kubenswrapper[4712]: I0130 20:18:52.968342 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0e1c8729-f223-4ed3-832d-b0848f7a401d-config\") pod \"neutron-68974494f7-4p2dn\" (UID: \"0e1c8729-f223-4ed3-832d-b0848f7a401d\") " pod="openstack/neutron-68974494f7-4p2dn" Jan 30 20:18:52 crc kubenswrapper[4712]: I0130 20:18:52.968393 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e1c8729-f223-4ed3-832d-b0848f7a401d-public-tls-certs\") pod \"neutron-68974494f7-4p2dn\" (UID: \"0e1c8729-f223-4ed3-832d-b0848f7a401d\") " pod="openstack/neutron-68974494f7-4p2dn" Jan 30 20:18:52 crc kubenswrapper[4712]: I0130 20:18:52.968430 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e1c8729-f223-4ed3-832d-b0848f7a401d-combined-ca-bundle\") pod \"neutron-68974494f7-4p2dn\" (UID: \"0e1c8729-f223-4ed3-832d-b0848f7a401d\") " pod="openstack/neutron-68974494f7-4p2dn" Jan 30 20:18:52 crc kubenswrapper[4712]: I0130 20:18:52.978646 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0e1c8729-f223-4ed3-832d-b0848f7a401d-combined-ca-bundle\") pod \"neutron-68974494f7-4p2dn\" (UID: \"0e1c8729-f223-4ed3-832d-b0848f7a401d\") " pod="openstack/neutron-68974494f7-4p2dn" Jan 30 20:18:52 crc kubenswrapper[4712]: I0130 20:18:52.978986 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0e1c8729-f223-4ed3-832d-b0848f7a401d-config\") pod \"neutron-68974494f7-4p2dn\" (UID: \"0e1c8729-f223-4ed3-832d-b0848f7a401d\") " pod="openstack/neutron-68974494f7-4p2dn" Jan 30 20:18:52 crc kubenswrapper[4712]: I0130 20:18:52.979622 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0e1c8729-f223-4ed3-832d-b0848f7a401d-httpd-config\") pod \"neutron-68974494f7-4p2dn\" (UID: \"0e1c8729-f223-4ed3-832d-b0848f7a401d\") " pod="openstack/neutron-68974494f7-4p2dn" Jan 30 20:18:52 crc kubenswrapper[4712]: I0130 20:18:52.986583 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw4hr\" (UniqueName: \"kubernetes.io/projected/0e1c8729-f223-4ed3-832d-b0848f7a401d-kube-api-access-dw4hr\") pod \"neutron-68974494f7-4p2dn\" (UID: 
\"0e1c8729-f223-4ed3-832d-b0848f7a401d\") " pod="openstack/neutron-68974494f7-4p2dn" Jan 30 20:18:52 crc kubenswrapper[4712]: I0130 20:18:52.989243 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e1c8729-f223-4ed3-832d-b0848f7a401d-ovndb-tls-certs\") pod \"neutron-68974494f7-4p2dn\" (UID: \"0e1c8729-f223-4ed3-832d-b0848f7a401d\") " pod="openstack/neutron-68974494f7-4p2dn" Jan 30 20:18:52 crc kubenswrapper[4712]: I0130 20:18:52.989306 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e1c8729-f223-4ed3-832d-b0848f7a401d-internal-tls-certs\") pod \"neutron-68974494f7-4p2dn\" (UID: \"0e1c8729-f223-4ed3-832d-b0848f7a401d\") " pod="openstack/neutron-68974494f7-4p2dn" Jan 30 20:18:53 crc kubenswrapper[4712]: I0130 20:18:53.000749 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0e1c8729-f223-4ed3-832d-b0848f7a401d-public-tls-certs\") pod \"neutron-68974494f7-4p2dn\" (UID: \"0e1c8729-f223-4ed3-832d-b0848f7a401d\") " pod="openstack/neutron-68974494f7-4p2dn" Jan 30 20:18:53 crc kubenswrapper[4712]: I0130 20:18:53.175133 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-68974494f7-4p2dn" Jan 30 20:18:54 crc kubenswrapper[4712]: I0130 20:18:54.131683 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-68974494f7-4p2dn"] Jan 30 20:18:54 crc kubenswrapper[4712]: W0130 20:18:54.148613 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e1c8729_f223_4ed3_832d_b0848f7a401d.slice/crio-09359065f658a4a5fdf1ec1642ab08f4be477bd0b54e7cc5fefc01bcd5853ee8 WatchSource:0}: Error finding container 09359065f658a4a5fdf1ec1642ab08f4be477bd0b54e7cc5fefc01bcd5853ee8: Status 404 returned error can't find the container with id 09359065f658a4a5fdf1ec1642ab08f4be477bd0b54e7cc5fefc01bcd5853ee8 Jan 30 20:18:54 crc kubenswrapper[4712]: I0130 20:18:54.642635 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68974494f7-4p2dn" event={"ID":"0e1c8729-f223-4ed3-832d-b0848f7a401d","Type":"ContainerStarted","Data":"0694bd772a697018c854c6b30cf34b226baf59bb3f473dde774bc95f0d07daad"} Jan 30 20:18:54 crc kubenswrapper[4712]: I0130 20:18:54.642987 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-68974494f7-4p2dn" Jan 30 20:18:54 crc kubenswrapper[4712]: I0130 20:18:54.642999 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68974494f7-4p2dn" event={"ID":"0e1c8729-f223-4ed3-832d-b0848f7a401d","Type":"ContainerStarted","Data":"0382a43440c54ee86c658ab51666f57dd90d86ba94e111575aa893d1d4739946"} Jan 30 20:18:54 crc kubenswrapper[4712]: I0130 20:18:54.643008 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-68974494f7-4p2dn" event={"ID":"0e1c8729-f223-4ed3-832d-b0848f7a401d","Type":"ContainerStarted","Data":"09359065f658a4a5fdf1ec1642ab08f4be477bd0b54e7cc5fefc01bcd5853ee8"} Jan 30 20:18:54 crc kubenswrapper[4712]: I0130 20:18:54.667502 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-68974494f7-4p2dn" podStartSLOduration=2.667485029 podStartE2EDuration="2.667485029s" podCreationTimestamp="2026-01-30 20:18:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 20:18:54.661791582 +0000 UTC m=+12271.568801061" watchObservedRunningTime="2026-01-30 20:18:54.667485029 +0000 UTC m=+12271.574494498" Jan 30 20:19:05 crc kubenswrapper[4712]: I0130 20:19:05.800222 4712 scope.go:117] "RemoveContainer" containerID="50851518c5bcd7c0037b746cec7b4666c7f6edb15f8d8f0146daf8663f9a2eff" Jan 30 20:19:05 crc kubenswrapper[4712]: E0130 20:19:05.800876 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" Jan 30 20:19:17 crc kubenswrapper[4712]: I0130 20:19:17.800920 4712 scope.go:117] "RemoveContainer" containerID="50851518c5bcd7c0037b746cec7b4666c7f6edb15f8d8f0146daf8663f9a2eff" Jan 30 20:19:18 crc kubenswrapper[4712]: I0130 20:19:18.891193 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerStarted","Data":"3a53d063c3fa0b150219f6b9873fe223b8e25867e44a92b5caea68f5711f9622"} Jan 30 20:19:23 crc kubenswrapper[4712]: I0130 20:19:23.203429 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-68974494f7-4p2dn" Jan 30 20:19:23 crc kubenswrapper[4712]: I0130 20:19:23.334166 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-699f8d5569-8nzql"] Jan 30 20:19:23 crc kubenswrapper[4712]: I0130 20:19:23.334906 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-699f8d5569-8nzql" podUID="0f499430-9aa9-4145-a241-1d02ee2b2d72" containerName="neutron-httpd" containerID="cri-o://8ea516fd61a78dbd402b121b04ed7cb098ef6bb1f8d1128f32572e10c2bd59be" gracePeriod=30 Jan 30 20:19:23 crc kubenswrapper[4712]: I0130 20:19:23.334461 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-699f8d5569-8nzql" podUID="0f499430-9aa9-4145-a241-1d02ee2b2d72" containerName="neutron-api" containerID="cri-o://d0b99af23d3ed8e3fa063c1c21bdf1ac906fe9e03e2e22f7654685605cd8d065" gracePeriod=30 Jan 30 20:19:23 crc kubenswrapper[4712]: I0130 20:19:23.939081 4712 generic.go:334] "Generic (PLEG): container finished" podID="0f499430-9aa9-4145-a241-1d02ee2b2d72" containerID="8ea516fd61a78dbd402b121b04ed7cb098ef6bb1f8d1128f32572e10c2bd59be" exitCode=0 Jan 30 20:19:23 crc kubenswrapper[4712]: I0130 20:19:23.939125 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-699f8d5569-8nzql" event={"ID":"0f499430-9aa9-4145-a241-1d02ee2b2d72","Type":"ContainerDied","Data":"8ea516fd61a78dbd402b121b04ed7cb098ef6bb1f8d1128f32572e10c2bd59be"} Jan 30 20:19:24 crc kubenswrapper[4712]: I0130 20:19:24.732032 4712 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-699f8d5569-8nzql" podUID="0f499430-9aa9-4145-a241-1d02ee2b2d72" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.1.36:9696/\": dial tcp 10.217.1.36:9696: connect: connection refused" Jan 30 20:19:25 crc kubenswrapper[4712]: I0130 20:19:25.972364 4712 generic.go:334] "Generic (PLEG): container finished" podID="0f499430-9aa9-4145-a241-1d02ee2b2d72" 
containerID="d0b99af23d3ed8e3fa063c1c21bdf1ac906fe9e03e2e22f7654685605cd8d065" exitCode=0 Jan 30 20:19:25 crc kubenswrapper[4712]: I0130 20:19:25.972406 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-699f8d5569-8nzql" event={"ID":"0f499430-9aa9-4145-a241-1d02ee2b2d72","Type":"ContainerDied","Data":"d0b99af23d3ed8e3fa063c1c21bdf1ac906fe9e03e2e22f7654685605cd8d065"} Jan 30 20:19:25 crc kubenswrapper[4712]: I0130 20:19:25.972431 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-699f8d5569-8nzql" event={"ID":"0f499430-9aa9-4145-a241-1d02ee2b2d72","Type":"ContainerDied","Data":"2921c67b532ff412653802560f04f1eb0e2bf379cf080fe7e6baa06ade1145e6"} Jan 30 20:19:25 crc kubenswrapper[4712]: I0130 20:19:25.972442 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2921c67b532ff412653802560f04f1eb0e2bf379cf080fe7e6baa06ade1145e6" Jan 30 20:19:26 crc kubenswrapper[4712]: I0130 20:19:26.035991 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-699f8d5569-8nzql" Jan 30 20:19:26 crc kubenswrapper[4712]: I0130 20:19:26.116676 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0f499430-9aa9-4145-a241-1d02ee2b2d72-httpd-config\") pod \"0f499430-9aa9-4145-a241-1d02ee2b2d72\" (UID: \"0f499430-9aa9-4145-a241-1d02ee2b2d72\") " Jan 30 20:19:26 crc kubenswrapper[4712]: I0130 20:19:26.116781 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f499430-9aa9-4145-a241-1d02ee2b2d72-public-tls-certs\") pod \"0f499430-9aa9-4145-a241-1d02ee2b2d72\" (UID: \"0f499430-9aa9-4145-a241-1d02ee2b2d72\") " Jan 30 20:19:26 crc kubenswrapper[4712]: I0130 20:19:26.116861 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0f499430-9aa9-4145-a241-1d02ee2b2d72-config\") pod \"0f499430-9aa9-4145-a241-1d02ee2b2d72\" (UID: \"0f499430-9aa9-4145-a241-1d02ee2b2d72\") " Jan 30 20:19:26 crc kubenswrapper[4712]: I0130 20:19:26.116932 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f499430-9aa9-4145-a241-1d02ee2b2d72-ovndb-tls-certs\") pod \"0f499430-9aa9-4145-a241-1d02ee2b2d72\" (UID: \"0f499430-9aa9-4145-a241-1d02ee2b2d72\") " Jan 30 20:19:26 crc kubenswrapper[4712]: I0130 20:19:26.117005 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqhz9\" (UniqueName: \"kubernetes.io/projected/0f499430-9aa9-4145-a241-1d02ee2b2d72-kube-api-access-mqhz9\") pod \"0f499430-9aa9-4145-a241-1d02ee2b2d72\" (UID: \"0f499430-9aa9-4145-a241-1d02ee2b2d72\") " Jan 30 20:19:26 crc kubenswrapper[4712]: I0130 20:19:26.117026 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f499430-9aa9-4145-a241-1d02ee2b2d72-combined-ca-bundle\") pod \"0f499430-9aa9-4145-a241-1d02ee2b2d72\" (UID: \"0f499430-9aa9-4145-a241-1d02ee2b2d72\") " Jan 30 20:19:26 crc kubenswrapper[4712]: I0130 20:19:26.117086 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f499430-9aa9-4145-a241-1d02ee2b2d72-internal-tls-certs\") pod \"0f499430-9aa9-4145-a241-1d02ee2b2d72\" (UID: 
\"0f499430-9aa9-4145-a241-1d02ee2b2d72\") " Jan 30 20:19:26 crc kubenswrapper[4712]: I0130 20:19:26.135215 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f499430-9aa9-4145-a241-1d02ee2b2d72-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "0f499430-9aa9-4145-a241-1d02ee2b2d72" (UID: "0f499430-9aa9-4145-a241-1d02ee2b2d72"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 20:19:26 crc kubenswrapper[4712]: I0130 20:19:26.136051 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f499430-9aa9-4145-a241-1d02ee2b2d72-kube-api-access-mqhz9" (OuterVolumeSpecName: "kube-api-access-mqhz9") pod "0f499430-9aa9-4145-a241-1d02ee2b2d72" (UID: "0f499430-9aa9-4145-a241-1d02ee2b2d72"). InnerVolumeSpecName "kube-api-access-mqhz9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 20:19:26 crc kubenswrapper[4712]: I0130 20:19:26.173969 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f499430-9aa9-4145-a241-1d02ee2b2d72-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0f499430-9aa9-4145-a241-1d02ee2b2d72" (UID: "0f499430-9aa9-4145-a241-1d02ee2b2d72"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 20:19:26 crc kubenswrapper[4712]: I0130 20:19:26.181269 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f499430-9aa9-4145-a241-1d02ee2b2d72-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0f499430-9aa9-4145-a241-1d02ee2b2d72" (UID: "0f499430-9aa9-4145-a241-1d02ee2b2d72"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 20:19:26 crc kubenswrapper[4712]: I0130 20:19:26.188566 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f499430-9aa9-4145-a241-1d02ee2b2d72-config" (OuterVolumeSpecName: "config") pod "0f499430-9aa9-4145-a241-1d02ee2b2d72" (UID: "0f499430-9aa9-4145-a241-1d02ee2b2d72"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 20:19:26 crc kubenswrapper[4712]: I0130 20:19:26.198586 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f499430-9aa9-4145-a241-1d02ee2b2d72-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "0f499430-9aa9-4145-a241-1d02ee2b2d72" (UID: "0f499430-9aa9-4145-a241-1d02ee2b2d72"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 20:19:26 crc kubenswrapper[4712]: I0130 20:19:26.201425 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f499430-9aa9-4145-a241-1d02ee2b2d72-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0f499430-9aa9-4145-a241-1d02ee2b2d72" (UID: "0f499430-9aa9-4145-a241-1d02ee2b2d72"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 20:19:26 crc kubenswrapper[4712]: I0130 20:19:26.218804 4712 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0f499430-9aa9-4145-a241-1d02ee2b2d72-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 30 20:19:26 crc kubenswrapper[4712]: I0130 20:19:26.218830 4712 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f499430-9aa9-4145-a241-1d02ee2b2d72-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 20:19:26 crc kubenswrapper[4712]: I0130 20:19:26.218842 4712 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0f499430-9aa9-4145-a241-1d02ee2b2d72-config\") on node \"crc\" DevicePath \"\"" Jan 30 20:19:26 crc kubenswrapper[4712]: I0130 20:19:26.218852 4712 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f499430-9aa9-4145-a241-1d02ee2b2d72-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 20:19:26 crc kubenswrapper[4712]: I0130 20:19:26.218860 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqhz9\" (UniqueName: \"kubernetes.io/projected/0f499430-9aa9-4145-a241-1d02ee2b2d72-kube-api-access-mqhz9\") on node \"crc\" DevicePath \"\"" Jan 30 20:19:26 crc kubenswrapper[4712]: I0130 20:19:26.218868 4712 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0f499430-9aa9-4145-a241-1d02ee2b2d72-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 30 20:19:26 crc kubenswrapper[4712]: I0130 20:19:26.218875 4712 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0f499430-9aa9-4145-a241-1d02ee2b2d72-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 30 20:19:26 crc kubenswrapper[4712]: I0130 20:19:26.981645 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-699f8d5569-8nzql" Jan 30 20:19:27 crc kubenswrapper[4712]: I0130 20:19:27.027048 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-699f8d5569-8nzql"] Jan 30 20:19:27 crc kubenswrapper[4712]: I0130 20:19:27.037952 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-699f8d5569-8nzql"] Jan 30 20:19:27 crc kubenswrapper[4712]: I0130 20:19:27.818542 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f499430-9aa9-4145-a241-1d02ee2b2d72" path="/var/lib/kubelet/pods/0f499430-9aa9-4145-a241-1d02ee2b2d72/volumes" Jan 30 20:20:06 crc kubenswrapper[4712]: I0130 20:20:06.871149 4712 scope.go:117] "RemoveContainer" containerID="d0b99af23d3ed8e3fa063c1c21bdf1ac906fe9e03e2e22f7654685605cd8d065" Jan 30 20:20:06 crc kubenswrapper[4712]: I0130 20:20:06.914503 4712 scope.go:117] "RemoveContainer" containerID="8ea516fd61a78dbd402b121b04ed7cb098ef6bb1f8d1128f32572e10c2bd59be" Jan 30 20:20:42 crc kubenswrapper[4712]: I0130 20:20:42.428340 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kcw26/must-gather-tgzqk"] Jan 30 20:20:42 crc kubenswrapper[4712]: E0130 20:20:42.435097 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f499430-9aa9-4145-a241-1d02ee2b2d72" containerName="neutron-api" Jan 30 20:20:42 crc kubenswrapper[4712]: I0130 20:20:42.435185 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f499430-9aa9-4145-a241-1d02ee2b2d72" containerName="neutron-api" Jan 30 20:20:42 crc kubenswrapper[4712]: E0130 20:20:42.435263 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f499430-9aa9-4145-a241-1d02ee2b2d72" containerName="neutron-httpd" Jan 30 20:20:42 crc kubenswrapper[4712]: I0130 20:20:42.435322 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f499430-9aa9-4145-a241-1d02ee2b2d72" containerName="neutron-httpd" Jan 30 20:20:42 crc kubenswrapper[4712]: I0130 20:20:42.435605 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f499430-9aa9-4145-a241-1d02ee2b2d72" containerName="neutron-httpd" Jan 30 20:20:42 crc kubenswrapper[4712]: I0130 20:20:42.435691 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f499430-9aa9-4145-a241-1d02ee2b2d72" containerName="neutron-api" Jan 30 20:20:42 crc kubenswrapper[4712]: I0130 20:20:42.441835 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kcw26/must-gather-tgzqk" Jan 30 20:20:42 crc kubenswrapper[4712]: I0130 20:20:42.449717 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-kcw26"/"default-dockercfg-blvgf" Jan 30 20:20:42 crc kubenswrapper[4712]: I0130 20:20:42.449977 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-kcw26"/"openshift-service-ca.crt" Jan 30 20:20:42 crc kubenswrapper[4712]: I0130 20:20:42.450108 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-kcw26"/"kube-root-ca.crt" Jan 30 20:20:42 crc kubenswrapper[4712]: I0130 20:20:42.464133 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kcw26/must-gather-tgzqk"] Jan 30 20:20:42 crc kubenswrapper[4712]: I0130 20:20:42.568001 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/222ccb2d-5a6d-4378-a07a-996aed6ec5a8-must-gather-output\") pod \"must-gather-tgzqk\" (UID: \"222ccb2d-5a6d-4378-a07a-996aed6ec5a8\") " pod="openshift-must-gather-kcw26/must-gather-tgzqk" Jan 30 20:20:42 crc kubenswrapper[4712]: I0130 20:20:42.568059 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2rmg\" (UniqueName: \"kubernetes.io/projected/222ccb2d-5a6d-4378-a07a-996aed6ec5a8-kube-api-access-d2rmg\") pod \"must-gather-tgzqk\" (UID: \"222ccb2d-5a6d-4378-a07a-996aed6ec5a8\") " pod="openshift-must-gather-kcw26/must-gather-tgzqk" Jan 30 20:20:42 crc kubenswrapper[4712]: I0130 20:20:42.670396 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/222ccb2d-5a6d-4378-a07a-996aed6ec5a8-must-gather-output\") pod \"must-gather-tgzqk\" (UID: \"222ccb2d-5a6d-4378-a07a-996aed6ec5a8\") " pod="openshift-must-gather-kcw26/must-gather-tgzqk" Jan 30 20:20:42 crc kubenswrapper[4712]: I0130 20:20:42.670441 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2rmg\" (UniqueName: \"kubernetes.io/projected/222ccb2d-5a6d-4378-a07a-996aed6ec5a8-kube-api-access-d2rmg\") pod \"must-gather-tgzqk\" (UID: \"222ccb2d-5a6d-4378-a07a-996aed6ec5a8\") " pod="openshift-must-gather-kcw26/must-gather-tgzqk" Jan 30 20:20:42 crc kubenswrapper[4712]: I0130 20:20:42.671703 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/222ccb2d-5a6d-4378-a07a-996aed6ec5a8-must-gather-output\") pod \"must-gather-tgzqk\" (UID: \"222ccb2d-5a6d-4378-a07a-996aed6ec5a8\") " pod="openshift-must-gather-kcw26/must-gather-tgzqk" Jan 30 20:20:42 crc kubenswrapper[4712]: I0130 20:20:42.696119 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2rmg\" (UniqueName: \"kubernetes.io/projected/222ccb2d-5a6d-4378-a07a-996aed6ec5a8-kube-api-access-d2rmg\") pod \"must-gather-tgzqk\" (UID: \"222ccb2d-5a6d-4378-a07a-996aed6ec5a8\") " pod="openshift-must-gather-kcw26/must-gather-tgzqk" Jan 30 20:20:42 crc kubenswrapper[4712]: I0130 20:20:42.763736 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kcw26/must-gather-tgzqk" Jan 30 20:20:43 crc kubenswrapper[4712]: I0130 20:20:43.288005 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-kcw26/must-gather-tgzqk"] Jan 30 20:20:43 crc kubenswrapper[4712]: I0130 20:20:43.319214 4712 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 30 20:20:43 crc kubenswrapper[4712]: I0130 20:20:43.757893 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kcw26/must-gather-tgzqk" event={"ID":"222ccb2d-5a6d-4378-a07a-996aed6ec5a8","Type":"ContainerStarted","Data":"ead0d12c1ffac230ee2724f9043552ac872d24f46b20c1c6598582fa292fe512"} Jan 30 20:20:50 crc kubenswrapper[4712]: I0130 20:20:50.835885 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kcw26/must-gather-tgzqk" event={"ID":"222ccb2d-5a6d-4378-a07a-996aed6ec5a8","Type":"ContainerStarted","Data":"bc735a24fec9129d6a15e5c02cd0e6fea7aeb70a4a9003cfce8a07e5d41d9986"} Jan 30 20:20:50 crc kubenswrapper[4712]: I0130 20:20:50.836416 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kcw26/must-gather-tgzqk" event={"ID":"222ccb2d-5a6d-4378-a07a-996aed6ec5a8","Type":"ContainerStarted","Data":"b012278c0e412214fc24b739970485e6a7d2670a7dae92b04f20dfc58ff8d016"} Jan 30 20:20:50 crc kubenswrapper[4712]: I0130 20:20:50.852924 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-kcw26/must-gather-tgzqk" podStartSLOduration=2.7055020709999997 podStartE2EDuration="8.852907377s" podCreationTimestamp="2026-01-30 20:20:42 +0000 UTC" firstStartedPulling="2026-01-30 20:20:43.317244611 +0000 UTC m=+12380.224254080" lastFinishedPulling="2026-01-30 20:20:49.464649907 +0000 UTC m=+12386.371659386" observedRunningTime="2026-01-30 20:20:50.850994412 +0000 UTC m=+12387.758003881" watchObservedRunningTime="2026-01-30 20:20:50.852907377 +0000 UTC m=+12387.759916846" Jan 30 20:20:56 crc kubenswrapper[4712]: I0130 20:20:56.392481 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kcw26/crc-debug-gddmm"] Jan 30 20:20:56 crc kubenswrapper[4712]: I0130 20:20:56.393960 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kcw26/crc-debug-gddmm" Jan 30 20:20:56 crc kubenswrapper[4712]: I0130 20:20:56.449005 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/14e04faf-17c7-4826-aaab-a7a105e5fafb-host\") pod \"crc-debug-gddmm\" (UID: \"14e04faf-17c7-4826-aaab-a7a105e5fafb\") " pod="openshift-must-gather-kcw26/crc-debug-gddmm" Jan 30 20:20:56 crc kubenswrapper[4712]: I0130 20:20:56.449275 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbqw7\" (UniqueName: \"kubernetes.io/projected/14e04faf-17c7-4826-aaab-a7a105e5fafb-kube-api-access-tbqw7\") pod \"crc-debug-gddmm\" (UID: \"14e04faf-17c7-4826-aaab-a7a105e5fafb\") " pod="openshift-must-gather-kcw26/crc-debug-gddmm" Jan 30 20:20:56 crc kubenswrapper[4712]: I0130 20:20:56.550946 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/14e04faf-17c7-4826-aaab-a7a105e5fafb-host\") pod \"crc-debug-gddmm\" (UID: \"14e04faf-17c7-4826-aaab-a7a105e5fafb\") " pod="openshift-must-gather-kcw26/crc-debug-gddmm" Jan 30 20:20:56 crc kubenswrapper[4712]: I0130 20:20:56.551017 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbqw7\" (UniqueName: \"kubernetes.io/projected/14e04faf-17c7-4826-aaab-a7a105e5fafb-kube-api-access-tbqw7\") pod \"crc-debug-gddmm\" (UID: \"14e04faf-17c7-4826-aaab-a7a105e5fafb\") " pod="openshift-must-gather-kcw26/crc-debug-gddmm" Jan 30 20:20:56 crc kubenswrapper[4712]: I0130 20:20:56.551100 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/14e04faf-17c7-4826-aaab-a7a105e5fafb-host\") pod \"crc-debug-gddmm\" (UID: \"14e04faf-17c7-4826-aaab-a7a105e5fafb\") " pod="openshift-must-gather-kcw26/crc-debug-gddmm" Jan 30 20:20:56 crc kubenswrapper[4712]: I0130 20:20:56.570740 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbqw7\" (UniqueName: \"kubernetes.io/projected/14e04faf-17c7-4826-aaab-a7a105e5fafb-kube-api-access-tbqw7\") pod \"crc-debug-gddmm\" (UID: \"14e04faf-17c7-4826-aaab-a7a105e5fafb\") " pod="openshift-must-gather-kcw26/crc-debug-gddmm" Jan 30 20:20:56 crc kubenswrapper[4712]: I0130 20:20:56.718010 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kcw26/crc-debug-gddmm" Jan 30 20:20:56 crc kubenswrapper[4712]: W0130 20:20:56.768873 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14e04faf_17c7_4826_aaab_a7a105e5fafb.slice/crio-3b154fd12a681524b2f0c570814f7df217121be0fe9d0150e2755a41949ce3fc WatchSource:0}: Error finding container 3b154fd12a681524b2f0c570814f7df217121be0fe9d0150e2755a41949ce3fc: Status 404 returned error can't find the container with id 3b154fd12a681524b2f0c570814f7df217121be0fe9d0150e2755a41949ce3fc Jan 30 20:20:56 crc kubenswrapper[4712]: I0130 20:20:56.894044 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kcw26/crc-debug-gddmm" event={"ID":"14e04faf-17c7-4826-aaab-a7a105e5fafb","Type":"ContainerStarted","Data":"3b154fd12a681524b2f0c570814f7df217121be0fe9d0150e2755a41949ce3fc"} Jan 30 20:21:13 crc kubenswrapper[4712]: I0130 20:21:13.041471 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kcw26/crc-debug-gddmm" event={"ID":"14e04faf-17c7-4826-aaab-a7a105e5fafb","Type":"ContainerStarted","Data":"39ffc6d82568303bc13877099f7dfefcb94199edce61bac05bc8bba2acfff1a5"} Jan 30 20:21:13 crc kubenswrapper[4712]: I0130 20:21:13.067048 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-kcw26/crc-debug-gddmm" podStartSLOduration=1.711736406 podStartE2EDuration="17.067033172s" podCreationTimestamp="2026-01-30 20:20:56 +0000 UTC" firstStartedPulling="2026-01-30 20:20:56.770776367 +0000 UTC m=+12393.677785836" lastFinishedPulling="2026-01-30 20:21:12.126073133 +0000 UTC m=+12409.033082602" observedRunningTime="2026-01-30 20:21:13.063813654 +0000 UTC m=+12409.970823123" watchObservedRunningTime="2026-01-30 20:21:13.067033172 +0000 UTC m=+12409.974042641" Jan 30 20:21:36 crc kubenswrapper[4712]: I0130 20:21:36.271156 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 20:21:36 crc kubenswrapper[4712]: I0130 20:21:36.272523 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 20:22:06 crc kubenswrapper[4712]: I0130 20:22:06.271366 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 20:22:06 crc kubenswrapper[4712]: I0130 20:22:06.271920 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 20:22:09 crc kubenswrapper[4712]: I0130 20:22:09.531034 4712 generic.go:334] "Generic (PLEG): container finished" podID="14e04faf-17c7-4826-aaab-a7a105e5fafb" 
containerID="39ffc6d82568303bc13877099f7dfefcb94199edce61bac05bc8bba2acfff1a5" exitCode=0 Jan 30 20:22:09 crc kubenswrapper[4712]: I0130 20:22:09.531104 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kcw26/crc-debug-gddmm" event={"ID":"14e04faf-17c7-4826-aaab-a7a105e5fafb","Type":"ContainerDied","Data":"39ffc6d82568303bc13877099f7dfefcb94199edce61bac05bc8bba2acfff1a5"} Jan 30 20:22:10 crc kubenswrapper[4712]: I0130 20:22:10.663059 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kcw26/crc-debug-gddmm" Jan 30 20:22:10 crc kubenswrapper[4712]: I0130 20:22:10.710575 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kcw26/crc-debug-gddmm"] Jan 30 20:22:10 crc kubenswrapper[4712]: I0130 20:22:10.722389 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kcw26/crc-debug-gddmm"] Jan 30 20:22:10 crc kubenswrapper[4712]: I0130 20:22:10.772000 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/14e04faf-17c7-4826-aaab-a7a105e5fafb-host\") pod \"14e04faf-17c7-4826-aaab-a7a105e5fafb\" (UID: \"14e04faf-17c7-4826-aaab-a7a105e5fafb\") " Jan 30 20:22:10 crc kubenswrapper[4712]: I0130 20:22:10.772056 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbqw7\" (UniqueName: \"kubernetes.io/projected/14e04faf-17c7-4826-aaab-a7a105e5fafb-kube-api-access-tbqw7\") pod \"14e04faf-17c7-4826-aaab-a7a105e5fafb\" (UID: \"14e04faf-17c7-4826-aaab-a7a105e5fafb\") " Jan 30 20:22:10 crc kubenswrapper[4712]: I0130 20:22:10.772139 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14e04faf-17c7-4826-aaab-a7a105e5fafb-host" (OuterVolumeSpecName: "host") pod "14e04faf-17c7-4826-aaab-a7a105e5fafb" (UID: "14e04faf-17c7-4826-aaab-a7a105e5fafb"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 20:22:10 crc kubenswrapper[4712]: I0130 20:22:10.772742 4712 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/14e04faf-17c7-4826-aaab-a7a105e5fafb-host\") on node \"crc\" DevicePath \"\"" Jan 30 20:22:10 crc kubenswrapper[4712]: I0130 20:22:10.789429 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14e04faf-17c7-4826-aaab-a7a105e5fafb-kube-api-access-tbqw7" (OuterVolumeSpecName: "kube-api-access-tbqw7") pod "14e04faf-17c7-4826-aaab-a7a105e5fafb" (UID: "14e04faf-17c7-4826-aaab-a7a105e5fafb"). InnerVolumeSpecName "kube-api-access-tbqw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 20:22:10 crc kubenswrapper[4712]: I0130 20:22:10.875601 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbqw7\" (UniqueName: \"kubernetes.io/projected/14e04faf-17c7-4826-aaab-a7a105e5fafb-kube-api-access-tbqw7\") on node \"crc\" DevicePath \"\"" Jan 30 20:22:11 crc kubenswrapper[4712]: I0130 20:22:11.554883 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b154fd12a681524b2f0c570814f7df217121be0fe9d0150e2755a41949ce3fc" Jan 30 20:22:11 crc kubenswrapper[4712]: I0130 20:22:11.554986 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kcw26/crc-debug-gddmm" Jan 30 20:22:11 crc kubenswrapper[4712]: I0130 20:22:11.811485 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14e04faf-17c7-4826-aaab-a7a105e5fafb" path="/var/lib/kubelet/pods/14e04faf-17c7-4826-aaab-a7a105e5fafb/volumes" Jan 30 20:22:11 crc kubenswrapper[4712]: I0130 20:22:11.911024 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kcw26/crc-debug-kstzt"] Jan 30 20:22:11 crc kubenswrapper[4712]: E0130 20:22:11.911392 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14e04faf-17c7-4826-aaab-a7a105e5fafb" containerName="container-00" Jan 30 20:22:11 crc kubenswrapper[4712]: I0130 20:22:11.911410 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="14e04faf-17c7-4826-aaab-a7a105e5fafb" containerName="container-00" Jan 30 20:22:11 crc kubenswrapper[4712]: I0130 20:22:11.911603 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="14e04faf-17c7-4826-aaab-a7a105e5fafb" containerName="container-00" Jan 30 20:22:11 crc kubenswrapper[4712]: I0130 20:22:11.912211 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kcw26/crc-debug-kstzt" Jan 30 20:22:12 crc kubenswrapper[4712]: I0130 20:22:12.006109 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9b396221-c734-4e1d-9016-58d21578a0ae-host\") pod \"crc-debug-kstzt\" (UID: \"9b396221-c734-4e1d-9016-58d21578a0ae\") " pod="openshift-must-gather-kcw26/crc-debug-kstzt" Jan 30 20:22:12 crc kubenswrapper[4712]: I0130 20:22:12.006585 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2fqd\" (UniqueName: \"kubernetes.io/projected/9b396221-c734-4e1d-9016-58d21578a0ae-kube-api-access-n2fqd\") pod \"crc-debug-kstzt\" (UID: \"9b396221-c734-4e1d-9016-58d21578a0ae\") " pod="openshift-must-gather-kcw26/crc-debug-kstzt" Jan 30 20:22:12 crc kubenswrapper[4712]: I0130 20:22:12.109050 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2fqd\" (UniqueName: \"kubernetes.io/projected/9b396221-c734-4e1d-9016-58d21578a0ae-kube-api-access-n2fqd\") pod \"crc-debug-kstzt\" (UID: \"9b396221-c734-4e1d-9016-58d21578a0ae\") " pod="openshift-must-gather-kcw26/crc-debug-kstzt" Jan 30 20:22:12 crc kubenswrapper[4712]: I0130 20:22:12.109405 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9b396221-c734-4e1d-9016-58d21578a0ae-host\") pod \"crc-debug-kstzt\" (UID: \"9b396221-c734-4e1d-9016-58d21578a0ae\") " pod="openshift-must-gather-kcw26/crc-debug-kstzt" Jan 30 20:22:12 crc kubenswrapper[4712]: I0130 20:22:12.109540 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9b396221-c734-4e1d-9016-58d21578a0ae-host\") pod \"crc-debug-kstzt\" (UID: \"9b396221-c734-4e1d-9016-58d21578a0ae\") " pod="openshift-must-gather-kcw26/crc-debug-kstzt" Jan 30 20:22:12 crc kubenswrapper[4712]: I0130 20:22:12.142722 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2fqd\" (UniqueName: \"kubernetes.io/projected/9b396221-c734-4e1d-9016-58d21578a0ae-kube-api-access-n2fqd\") pod \"crc-debug-kstzt\" (UID: \"9b396221-c734-4e1d-9016-58d21578a0ae\") " 
pod="openshift-must-gather-kcw26/crc-debug-kstzt" Jan 30 20:22:12 crc kubenswrapper[4712]: I0130 20:22:12.242291 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kcw26/crc-debug-kstzt" Jan 30 20:22:12 crc kubenswrapper[4712]: I0130 20:22:12.570743 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kcw26/crc-debug-kstzt" event={"ID":"9b396221-c734-4e1d-9016-58d21578a0ae","Type":"ContainerStarted","Data":"235d0d779afaf68ae0c0f7629142b32701ddd2b3741089acf5d702235273313c"} Jan 30 20:22:12 crc kubenswrapper[4712]: I0130 20:22:12.571351 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kcw26/crc-debug-kstzt" event={"ID":"9b396221-c734-4e1d-9016-58d21578a0ae","Type":"ContainerStarted","Data":"ff4ad13cdad641548f5f54aa5c69defa8f74869cb984928984848329ae4cbe68"} Jan 30 20:22:12 crc kubenswrapper[4712]: I0130 20:22:12.609398 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-kcw26/crc-debug-kstzt" podStartSLOduration=1.6093666039999999 podStartE2EDuration="1.609366604s" podCreationTimestamp="2026-01-30 20:22:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 20:22:12.593855129 +0000 UTC m=+12469.500864628" watchObservedRunningTime="2026-01-30 20:22:12.609366604 +0000 UTC m=+12469.516376113" Jan 30 20:22:13 crc kubenswrapper[4712]: I0130 20:22:13.579125 4712 generic.go:334] "Generic (PLEG): container finished" podID="9b396221-c734-4e1d-9016-58d21578a0ae" containerID="235d0d779afaf68ae0c0f7629142b32701ddd2b3741089acf5d702235273313c" exitCode=0 Jan 30 20:22:13 crc kubenswrapper[4712]: I0130 20:22:13.579172 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kcw26/crc-debug-kstzt" event={"ID":"9b396221-c734-4e1d-9016-58d21578a0ae","Type":"ContainerDied","Data":"235d0d779afaf68ae0c0f7629142b32701ddd2b3741089acf5d702235273313c"} Jan 30 20:22:14 crc kubenswrapper[4712]: I0130 20:22:14.677165 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kcw26/crc-debug-kstzt" Jan 30 20:22:14 crc kubenswrapper[4712]: I0130 20:22:14.852580 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2fqd\" (UniqueName: \"kubernetes.io/projected/9b396221-c734-4e1d-9016-58d21578a0ae-kube-api-access-n2fqd\") pod \"9b396221-c734-4e1d-9016-58d21578a0ae\" (UID: \"9b396221-c734-4e1d-9016-58d21578a0ae\") " Jan 30 20:22:14 crc kubenswrapper[4712]: I0130 20:22:14.853242 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9b396221-c734-4e1d-9016-58d21578a0ae-host\") pod \"9b396221-c734-4e1d-9016-58d21578a0ae\" (UID: \"9b396221-c734-4e1d-9016-58d21578a0ae\") " Jan 30 20:22:14 crc kubenswrapper[4712]: I0130 20:22:14.853310 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b396221-c734-4e1d-9016-58d21578a0ae-host" (OuterVolumeSpecName: "host") pod "9b396221-c734-4e1d-9016-58d21578a0ae" (UID: "9b396221-c734-4e1d-9016-58d21578a0ae"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 20:22:14 crc kubenswrapper[4712]: I0130 20:22:14.854054 4712 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9b396221-c734-4e1d-9016-58d21578a0ae-host\") on node \"crc\" DevicePath \"\"" Jan 30 20:22:14 crc kubenswrapper[4712]: I0130 20:22:14.858676 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b396221-c734-4e1d-9016-58d21578a0ae-kube-api-access-n2fqd" (OuterVolumeSpecName: "kube-api-access-n2fqd") pod "9b396221-c734-4e1d-9016-58d21578a0ae" (UID: "9b396221-c734-4e1d-9016-58d21578a0ae"). InnerVolumeSpecName "kube-api-access-n2fqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 20:22:14 crc kubenswrapper[4712]: I0130 20:22:14.958026 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2fqd\" (UniqueName: \"kubernetes.io/projected/9b396221-c734-4e1d-9016-58d21578a0ae-kube-api-access-n2fqd\") on node \"crc\" DevicePath \"\"" Jan 30 20:22:14 crc kubenswrapper[4712]: I0130 20:22:14.976190 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kcw26/crc-debug-kstzt"] Jan 30 20:22:14 crc kubenswrapper[4712]: I0130 20:22:14.986215 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kcw26/crc-debug-kstzt"] Jan 30 20:22:15 crc kubenswrapper[4712]: I0130 20:22:15.601553 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff4ad13cdad641548f5f54aa5c69defa8f74869cb984928984848329ae4cbe68" Jan 30 20:22:15 crc kubenswrapper[4712]: I0130 20:22:15.601690 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kcw26/crc-debug-kstzt" Jan 30 20:22:15 crc kubenswrapper[4712]: I0130 20:22:15.815799 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b396221-c734-4e1d-9016-58d21578a0ae" path="/var/lib/kubelet/pods/9b396221-c734-4e1d-9016-58d21578a0ae/volumes" Jan 30 20:22:16 crc kubenswrapper[4712]: I0130 20:22:16.226208 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-kcw26/crc-debug-c64hz"] Jan 30 20:22:16 crc kubenswrapper[4712]: E0130 20:22:16.227293 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b396221-c734-4e1d-9016-58d21578a0ae" containerName="container-00" Jan 30 20:22:16 crc kubenswrapper[4712]: I0130 20:22:16.227400 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b396221-c734-4e1d-9016-58d21578a0ae" containerName="container-00" Jan 30 20:22:16 crc kubenswrapper[4712]: I0130 20:22:16.227766 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b396221-c734-4e1d-9016-58d21578a0ae" containerName="container-00" Jan 30 20:22:16 crc kubenswrapper[4712]: I0130 20:22:16.228639 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kcw26/crc-debug-c64hz" Jan 30 20:22:16 crc kubenswrapper[4712]: I0130 20:22:16.384400 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/55f9750b-a249-4cc7-bbeb-7283b44035ce-host\") pod \"crc-debug-c64hz\" (UID: \"55f9750b-a249-4cc7-bbeb-7283b44035ce\") " pod="openshift-must-gather-kcw26/crc-debug-c64hz" Jan 30 20:22:16 crc kubenswrapper[4712]: I0130 20:22:16.384490 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dk6q\" (UniqueName: \"kubernetes.io/projected/55f9750b-a249-4cc7-bbeb-7283b44035ce-kube-api-access-8dk6q\") pod \"crc-debug-c64hz\" (UID: \"55f9750b-a249-4cc7-bbeb-7283b44035ce\") " pod="openshift-must-gather-kcw26/crc-debug-c64hz" Jan 30 20:22:16 crc kubenswrapper[4712]: I0130 20:22:16.486489 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/55f9750b-a249-4cc7-bbeb-7283b44035ce-host\") pod \"crc-debug-c64hz\" (UID: \"55f9750b-a249-4cc7-bbeb-7283b44035ce\") " pod="openshift-must-gather-kcw26/crc-debug-c64hz" Jan 30 20:22:16 crc kubenswrapper[4712]: I0130 20:22:16.486679 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/55f9750b-a249-4cc7-bbeb-7283b44035ce-host\") pod \"crc-debug-c64hz\" (UID: \"55f9750b-a249-4cc7-bbeb-7283b44035ce\") " pod="openshift-must-gather-kcw26/crc-debug-c64hz" Jan 30 20:22:16 crc kubenswrapper[4712]: I0130 20:22:16.487281 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8dk6q\" (UniqueName: \"kubernetes.io/projected/55f9750b-a249-4cc7-bbeb-7283b44035ce-kube-api-access-8dk6q\") pod \"crc-debug-c64hz\" (UID: \"55f9750b-a249-4cc7-bbeb-7283b44035ce\") " pod="openshift-must-gather-kcw26/crc-debug-c64hz" Jan 30 20:22:16 crc kubenswrapper[4712]: I0130 20:22:16.523230 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dk6q\" (UniqueName: \"kubernetes.io/projected/55f9750b-a249-4cc7-bbeb-7283b44035ce-kube-api-access-8dk6q\") pod \"crc-debug-c64hz\" (UID: \"55f9750b-a249-4cc7-bbeb-7283b44035ce\") " pod="openshift-must-gather-kcw26/crc-debug-c64hz" Jan 30 20:22:16 crc kubenswrapper[4712]: I0130 20:22:16.546694 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-kcw26/crc-debug-c64hz" Jan 30 20:22:16 crc kubenswrapper[4712]: W0130 20:22:16.646739 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod55f9750b_a249_4cc7_bbeb_7283b44035ce.slice/crio-230d7682c175956edbfb562741302f1d6ed33be6fb9c0c873be799ff80b07715 WatchSource:0}: Error finding container 230d7682c175956edbfb562741302f1d6ed33be6fb9c0c873be799ff80b07715: Status 404 returned error can't find the container with id 230d7682c175956edbfb562741302f1d6ed33be6fb9c0c873be799ff80b07715 Jan 30 20:22:17 crc kubenswrapper[4712]: I0130 20:22:17.645827 4712 generic.go:334] "Generic (PLEG): container finished" podID="55f9750b-a249-4cc7-bbeb-7283b44035ce" containerID="7a83b101133482cc21bee82635b23146f04c04becece347160389bef6bee4f6b" exitCode=0 Jan 30 20:22:17 crc kubenswrapper[4712]: I0130 20:22:17.645927 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kcw26/crc-debug-c64hz" event={"ID":"55f9750b-a249-4cc7-bbeb-7283b44035ce","Type":"ContainerDied","Data":"7a83b101133482cc21bee82635b23146f04c04becece347160389bef6bee4f6b"} Jan 30 20:22:17 crc kubenswrapper[4712]: I0130 20:22:17.646220 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kcw26/crc-debug-c64hz" event={"ID":"55f9750b-a249-4cc7-bbeb-7283b44035ce","Type":"ContainerStarted","Data":"230d7682c175956edbfb562741302f1d6ed33be6fb9c0c873be799ff80b07715"} Jan 30 20:22:17 crc kubenswrapper[4712]: I0130 20:22:17.714063 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kcw26/crc-debug-c64hz"] Jan 30 20:22:17 crc kubenswrapper[4712]: I0130 20:22:17.721096 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kcw26/crc-debug-c64hz"] Jan 30 20:22:18 crc kubenswrapper[4712]: I0130 20:22:18.744146 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kcw26/crc-debug-c64hz" Jan 30 20:22:18 crc kubenswrapper[4712]: I0130 20:22:18.836669 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dk6q\" (UniqueName: \"kubernetes.io/projected/55f9750b-a249-4cc7-bbeb-7283b44035ce-kube-api-access-8dk6q\") pod \"55f9750b-a249-4cc7-bbeb-7283b44035ce\" (UID: \"55f9750b-a249-4cc7-bbeb-7283b44035ce\") " Jan 30 20:22:18 crc kubenswrapper[4712]: I0130 20:22:18.842291 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55f9750b-a249-4cc7-bbeb-7283b44035ce-kube-api-access-8dk6q" (OuterVolumeSpecName: "kube-api-access-8dk6q") pod "55f9750b-a249-4cc7-bbeb-7283b44035ce" (UID: "55f9750b-a249-4cc7-bbeb-7283b44035ce"). InnerVolumeSpecName "kube-api-access-8dk6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 20:22:18 crc kubenswrapper[4712]: I0130 20:22:18.938992 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/55f9750b-a249-4cc7-bbeb-7283b44035ce-host\") pod \"55f9750b-a249-4cc7-bbeb-7283b44035ce\" (UID: \"55f9750b-a249-4cc7-bbeb-7283b44035ce\") " Jan 30 20:22:18 crc kubenswrapper[4712]: I0130 20:22:18.939037 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/55f9750b-a249-4cc7-bbeb-7283b44035ce-host" (OuterVolumeSpecName: "host") pod "55f9750b-a249-4cc7-bbeb-7283b44035ce" (UID: "55f9750b-a249-4cc7-bbeb-7283b44035ce"). 
InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 20:22:18 crc kubenswrapper[4712]: I0130 20:22:18.939602 4712 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/55f9750b-a249-4cc7-bbeb-7283b44035ce-host\") on node \"crc\" DevicePath \"\"" Jan 30 20:22:18 crc kubenswrapper[4712]: I0130 20:22:18.939616 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8dk6q\" (UniqueName: \"kubernetes.io/projected/55f9750b-a249-4cc7-bbeb-7283b44035ce-kube-api-access-8dk6q\") on node \"crc\" DevicePath \"\"" Jan 30 20:22:19 crc kubenswrapper[4712]: I0130 20:22:19.668185 4712 scope.go:117] "RemoveContainer" containerID="7a83b101133482cc21bee82635b23146f04c04becece347160389bef6bee4f6b" Jan 30 20:22:19 crc kubenswrapper[4712]: I0130 20:22:19.668237 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kcw26/crc-debug-c64hz" Jan 30 20:22:19 crc kubenswrapper[4712]: I0130 20:22:19.809968 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55f9750b-a249-4cc7-bbeb-7283b44035ce" path="/var/lib/kubelet/pods/55f9750b-a249-4cc7-bbeb-7283b44035ce/volumes" Jan 30 20:22:36 crc kubenswrapper[4712]: I0130 20:22:36.270665 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 20:22:36 crc kubenswrapper[4712]: I0130 20:22:36.271161 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 20:22:36 crc kubenswrapper[4712]: I0130 20:22:36.271201 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" Jan 30 20:22:36 crc kubenswrapper[4712]: I0130 20:22:36.272489 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3a53d063c3fa0b150219f6b9873fe223b8e25867e44a92b5caea68f5711f9622"} pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 30 20:22:36 crc kubenswrapper[4712]: I0130 20:22:36.272548 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" containerID="cri-o://3a53d063c3fa0b150219f6b9873fe223b8e25867e44a92b5caea68f5711f9622" gracePeriod=600 Jan 30 20:22:36 crc kubenswrapper[4712]: I0130 20:22:36.843684 4712 generic.go:334] "Generic (PLEG): container finished" podID="75ff6334-72a0-4748-bba6-0efb493c8033" containerID="3a53d063c3fa0b150219f6b9873fe223b8e25867e44a92b5caea68f5711f9622" exitCode=0 Jan 30 20:22:36 crc kubenswrapper[4712]: I0130 20:22:36.843984 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" 
event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerDied","Data":"3a53d063c3fa0b150219f6b9873fe223b8e25867e44a92b5caea68f5711f9622"} Jan 30 20:22:36 crc kubenswrapper[4712]: I0130 20:22:36.844016 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerStarted","Data":"956cd74bad97560e54c2d21d82978adb69c71800f569351658a2fa947794e569"} Jan 30 20:22:36 crc kubenswrapper[4712]: I0130 20:22:36.844032 4712 scope.go:117] "RemoveContainer" containerID="50851518c5bcd7c0037b746cec7b4666c7f6edb15f8d8f0146daf8663f9a2eff" Jan 30 20:22:37 crc kubenswrapper[4712]: I0130 20:22:37.426696 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-84958ddfbd-52vdv_777b9322-044d-4461-9d82-9854438205fc/barbican-api/0.log" Jan 30 20:22:37 crc kubenswrapper[4712]: I0130 20:22:37.505133 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-84958ddfbd-52vdv_777b9322-044d-4461-9d82-9854438205fc/barbican-api-log/0.log" Jan 30 20:22:37 crc kubenswrapper[4712]: I0130 20:22:37.646099 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-57c9fd48b-mnwmt_49bb97a8-9dba-4ebf-9196-812577411892/barbican-keystone-listener/0.log" Jan 30 20:22:37 crc kubenswrapper[4712]: I0130 20:22:37.805435 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-57c9fd48b-mnwmt_49bb97a8-9dba-4ebf-9196-812577411892/barbican-keystone-listener-log/0.log" Jan 30 20:22:37 crc kubenswrapper[4712]: I0130 20:22:37.920346 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-75b8cdc675-hwkng_7441ba42-3158-40d9-9a91-467fef6769cd/barbican-worker/0.log" Jan 30 20:22:37 crc kubenswrapper[4712]: I0130 20:22:37.942047 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-75b8cdc675-hwkng_7441ba42-3158-40d9-9a91-467fef6769cd/barbican-worker-log/0.log" Jan 30 20:22:38 crc kubenswrapper[4712]: I0130 20:22:38.082177 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-cwhqs_03922579-00da-4ea3-ba7e-efeb5062632f/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 20:22:38 crc kubenswrapper[4712]: I0130 20:22:38.282444 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d28763e8-26ec-4ba2-b944-1c84c2b81bf0/ceilometer-central-agent/0.log" Jan 30 20:22:38 crc kubenswrapper[4712]: I0130 20:22:38.411731 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d28763e8-26ec-4ba2-b944-1c84c2b81bf0/ceilometer-notification-agent/0.log" Jan 30 20:22:38 crc kubenswrapper[4712]: I0130 20:22:38.490744 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d28763e8-26ec-4ba2-b944-1c84c2b81bf0/proxy-httpd/0.log" Jan 30 20:22:38 crc kubenswrapper[4712]: I0130 20:22:38.585502 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_d28763e8-26ec-4ba2-b944-1c84c2b81bf0/sg-core/0.log" Jan 30 20:22:38 crc kubenswrapper[4712]: I0130 20:22:38.786257 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_adaaf313-4d60-4bbb-b4a9-8e0faddc265f/cinder-api-log/0.log" Jan 30 20:22:38 crc kubenswrapper[4712]: I0130 20:22:38.854254 4712 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_cinder-api-0_adaaf313-4d60-4bbb-b4a9-8e0faddc265f/cinder-api/0.log" Jan 30 20:22:38 crc kubenswrapper[4712]: I0130 20:22:38.974184 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_6e0d9187-34f3-4d93-a189-264ff4cc933d/cinder-scheduler/1.log" Jan 30 20:22:39 crc kubenswrapper[4712]: I0130 20:22:39.127551 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_6e0d9187-34f3-4d93-a189-264ff4cc933d/cinder-scheduler/0.log" Jan 30 20:22:39 crc kubenswrapper[4712]: I0130 20:22:39.278776 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_6e0d9187-34f3-4d93-a189-264ff4cc933d/probe/0.log" Jan 30 20:22:39 crc kubenswrapper[4712]: I0130 20:22:39.326825 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-ldwj4_f19f0b0d-9323-44d3-9098-0b0e462f4015/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 20:22:39 crc kubenswrapper[4712]: I0130 20:22:39.427531 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-f8mdz_900e21ae-3c90-4e70-90e5-fbe81a902929/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 20:22:39 crc kubenswrapper[4712]: I0130 20:22:39.559578 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-66498674f5-zng48_63fe393d-be88-472a-8f77-0c395d5fdf6b/init/0.log" Jan 30 20:22:39 crc kubenswrapper[4712]: I0130 20:22:39.794656 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-66498674f5-zng48_63fe393d-be88-472a-8f77-0c395d5fdf6b/init/0.log" Jan 30 20:22:39 crc kubenswrapper[4712]: I0130 20:22:39.923019 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-kvxpz_6628bf15-f827-4b97-a95e-7ad66750f5db/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 20:22:40 crc kubenswrapper[4712]: I0130 20:22:40.063425 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-66498674f5-zng48_63fe393d-be88-472a-8f77-0c395d5fdf6b/dnsmasq-dns/0.log" Jan 30 20:22:40 crc kubenswrapper[4712]: I0130 20:22:40.164258 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_91919356-125c-4caa-8504-a0ead9ce783e/glance-httpd/0.log" Jan 30 20:22:40 crc kubenswrapper[4712]: I0130 20:22:40.242922 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_91919356-125c-4caa-8504-a0ead9ce783e/glance-log/0.log" Jan 30 20:22:40 crc kubenswrapper[4712]: I0130 20:22:40.441056 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_5c0adde1-5eac-4634-8df8-ff23f73da79b/glance-log/0.log" Jan 30 20:22:40 crc kubenswrapper[4712]: I0130 20:22:40.454355 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_5c0adde1-5eac-4634-8df8-ff23f73da79b/glance-httpd/0.log" Jan 30 20:22:41 crc kubenswrapper[4712]: I0130 20:22:41.058556 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-68c577d787-bljqj_b9473151-e9e1-4388-8134-fb8fd45d0257/heat-engine/0.log" Jan 30 20:22:41 crc kubenswrapper[4712]: I0130 20:22:41.620624 4712 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_horizon-64655dbc44-pvj2c_6a28b495-ecf0-409e-9558-ee794a46dbd1/horizon/4.log" Jan 30 20:22:41 crc kubenswrapper[4712]: I0130 20:22:41.846168 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-64655dbc44-pvj2c_6a28b495-ecf0-409e-9558-ee794a46dbd1/horizon/3.log" Jan 30 20:22:42 crc kubenswrapper[4712]: I0130 20:22:42.201234 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-qsnth_1273af18-d0dd-4c8e-a454-097a3f00110d/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 20:22:42 crc kubenswrapper[4712]: I0130 20:22:42.606097 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-k7x2h_818160cb-c862-4860-8549-af66d60827c1/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 20:22:42 crc kubenswrapper[4712]: I0130 20:22:42.706992 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-679854b776-gmq67_6c3a1401-04c4-419c-98dc-23ca889b391a/heat-api/0.log" Jan 30 20:22:42 crc kubenswrapper[4712]: I0130 20:22:42.905430 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-64f88d7685-rpkd8_e18788f5-d1c7-435c-a619-784ddb7bdb56/heat-cfnapi/0.log" Jan 30 20:22:42 crc kubenswrapper[4712]: I0130 20:22:42.959746 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29496601-zbw5k_b95a6570-ac24-45a6-92c0-41f38a9d71da/keystone-cron/0.log" Jan 30 20:22:43 crc kubenswrapper[4712]: I0130 20:22:43.317895 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29496661-zrl9j_7d73a275-c758-43d4-903a-fa746707b66c/keystone-cron/0.log" Jan 30 20:22:43 crc kubenswrapper[4712]: I0130 20:22:43.456383 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29496721-gss79_a9d07708-613e-4ca3-a143-34a7158f2243/keystone-cron/0.log" Jan 30 20:22:43 crc kubenswrapper[4712]: I0130 20:22:43.592964 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_19b27a49-3b3b-434e-b8c7-133e4e120569/kube-state-metrics/0.log" Jan 30 20:22:43 crc kubenswrapper[4712]: I0130 20:22:43.866899 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-5w7jh_3ab30e70-a942-41a5-ba9f-abd8da406691/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 20:22:44 crc kubenswrapper[4712]: I0130 20:22:44.223367 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-64655dbc44-pvj2c_6a28b495-ecf0-409e-9558-ee794a46dbd1/horizon-log/0.log" Jan 30 20:22:44 crc kubenswrapper[4712]: I0130 20:22:44.387406 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-68974494f7-4p2dn_0e1c8729-f223-4ed3-832d-b0848f7a401d/neutron-httpd/0.log" Jan 30 20:22:44 crc kubenswrapper[4712]: I0130 20:22:44.481909 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-68974494f7-4p2dn_0e1c8729-f223-4ed3-832d-b0848f7a401d/neutron-api/0.log" Jan 30 20:22:44 crc kubenswrapper[4712]: I0130 20:22:44.659299 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-425g6_65da0015-8187-4b28-8d22-d5b12a920288/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 20:22:44 crc kubenswrapper[4712]: I0130 20:22:44.677473 4712 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_keystone-7f4784f4d6-zvlhq_49aa464e-03ee-4970-bbf8-552e07904ea0/keystone-api/0.log" Jan 30 20:22:45 crc kubenswrapper[4712]: I0130 20:22:45.423090 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_76a0f5cf-d830-475d-bded-4975230ef33a/nova-cell0-conductor-conductor/0.log" Jan 30 20:22:45 crc kubenswrapper[4712]: I0130 20:22:45.698007 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_5ea9e5a8-2988-4fa3-b436-ef58de9d2fa6/nova-cell1-conductor-conductor/0.log" Jan 30 20:22:46 crc kubenswrapper[4712]: I0130 20:22:46.244035 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_6b7f0bd2-aace-43a5-9214-75d73cd3fbe1/nova-cell1-novncproxy-novncproxy/0.log" Jan 30 20:22:46 crc kubenswrapper[4712]: I0130 20:22:46.290230 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-bx7xf_f6ddcc20-4459-4b3a-8539-8fda3da2c415/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 20:22:46 crc kubenswrapper[4712]: I0130 20:22:46.603117 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_c9df1a77-0933-4439-9ee1-a3f4414eca71/nova-metadata-log/0.log" Jan 30 20:22:46 crc kubenswrapper[4712]: I0130 20:22:46.896938 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_fdc1ab7c-d592-4e45-8bbc-1ecc967bad26/nova-api-log/0.log" Jan 30 20:22:47 crc kubenswrapper[4712]: I0130 20:22:47.458563 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e0e4667e-8702-43ae-b7b7-1aa930f9a3c3/mysql-bootstrap/0.log" Jan 30 20:22:47 crc kubenswrapper[4712]: I0130 20:22:47.689869 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e0e4667e-8702-43ae-b7b7-1aa930f9a3c3/mysql-bootstrap/0.log" Jan 30 20:22:47 crc kubenswrapper[4712]: I0130 20:22:47.860571 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_503aad53-052a-4eab-b8b9-ceb01fda3dc7/nova-scheduler-scheduler/0.log" Jan 30 20:22:47 crc kubenswrapper[4712]: I0130 20:22:47.906145 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e0e4667e-8702-43ae-b7b7-1aa930f9a3c3/galera/1.log" Jan 30 20:22:48 crc kubenswrapper[4712]: I0130 20:22:48.173752 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_e0e4667e-8702-43ae-b7b7-1aa930f9a3c3/galera/0.log" Jan 30 20:22:48 crc kubenswrapper[4712]: I0130 20:22:48.460684 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a12f0a95-1db0-4dd9-993c-1413c0fa10b0/mysql-bootstrap/0.log" Jan 30 20:22:48 crc kubenswrapper[4712]: I0130 20:22:48.666955 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_fdc1ab7c-d592-4e45-8bbc-1ecc967bad26/nova-api-api/0.log" Jan 30 20:22:48 crc kubenswrapper[4712]: I0130 20:22:48.676022 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a12f0a95-1db0-4dd9-993c-1413c0fa10b0/mysql-bootstrap/0.log" Jan 30 20:22:48 crc kubenswrapper[4712]: I0130 20:22:48.842031 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_a12f0a95-1db0-4dd9-993c-1413c0fa10b0/galera/1.log" Jan 30 20:22:48 crc kubenswrapper[4712]: I0130 20:22:48.947923 4712 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_a12f0a95-1db0-4dd9-993c-1413c0fa10b0/galera/0.log" Jan 30 20:22:49 crc kubenswrapper[4712]: I0130 20:22:49.180507 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_ca2a20bb-6a1a-4d8e-8f87-6478ac901d09/openstackclient/0.log" Jan 30 20:22:49 crc kubenswrapper[4712]: I0130 20:22:49.282927 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-kjztv_4c718c29-458b-43e8-979f-f636b17928e1/openstack-network-exporter/0.log" Jan 30 20:22:49 crc kubenswrapper[4712]: I0130 20:22:49.468961 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-qfgk4_36067e45-f8de-4952-9372-564e0e9d850e/ovsdb-server-init/0.log" Jan 30 20:22:49 crc kubenswrapper[4712]: I0130 20:22:49.841722 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-qfgk4_36067e45-f8de-4952-9372-564e0e9d850e/ovsdb-server-init/0.log" Jan 30 20:22:49 crc kubenswrapper[4712]: I0130 20:22:49.855823 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-qfgk4_36067e45-f8de-4952-9372-564e0e9d850e/ovsdb-server/0.log" Jan 30 20:22:49 crc kubenswrapper[4712]: I0130 20:22:49.857759 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-qfgk4_36067e45-f8de-4952-9372-564e0e9d850e/ovs-vswitchd/0.log" Jan 30 20:22:50 crc kubenswrapper[4712]: I0130 20:22:50.137345 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-sr5tj_ce49eaf1-5cf3-4399-b2c9-c253df2440bd/ovn-controller/0.log" Jan 30 20:22:50 crc kubenswrapper[4712]: I0130 20:22:50.367491 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-dv9r8_aecfda8c-69d9-4b35-8c62-ff6112a3631e/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 20:22:50 crc kubenswrapper[4712]: I0130 20:22:50.507618 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1b017036-bac3-47fb-b6dc-97a3b85af99d/openstack-network-exporter/0.log" Jan 30 20:22:50 crc kubenswrapper[4712]: I0130 20:22:50.585400 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_1b017036-bac3-47fb-b6dc-97a3b85af99d/ovn-northd/0.log" Jan 30 20:22:50 crc kubenswrapper[4712]: I0130 20:22:50.793535 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_6820a928-0d59-463e-8d88-aef9b2242388/openstack-network-exporter/0.log" Jan 30 20:22:50 crc kubenswrapper[4712]: I0130 20:22:50.844903 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_6820a928-0d59-463e-8d88-aef9b2242388/ovsdbserver-nb/0.log" Jan 30 20:22:51 crc kubenswrapper[4712]: I0130 20:22:51.078510 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_220f56ca-28d1-4856-98cc-e420bd3cce95/openstack-network-exporter/0.log" Jan 30 20:22:51 crc kubenswrapper[4712]: I0130 20:22:51.112445 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_220f56ca-28d1-4856-98cc-e420bd3cce95/ovsdbserver-sb/0.log" Jan 30 20:22:51 crc kubenswrapper[4712]: I0130 20:22:51.768513 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f7ee8a13-933e-462b-956a-0dae66b09f01/setup-container/0.log" Jan 30 20:22:51 crc kubenswrapper[4712]: I0130 20:22:51.916759 4712 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_placement-6ddfd55656-dc4w7_c8347a30-317c-4035-abc4-b03700578363/placement-api/0.log" Jan 30 20:22:51 crc kubenswrapper[4712]: I0130 20:22:51.973699 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f7ee8a13-933e-462b-956a-0dae66b09f01/setup-container/0.log" Jan 30 20:22:52 crc kubenswrapper[4712]: I0130 20:22:52.014654 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-6ddfd55656-dc4w7_c8347a30-317c-4035-abc4-b03700578363/placement-log/0.log" Jan 30 20:22:52 crc kubenswrapper[4712]: I0130 20:22:52.168903 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_f7ee8a13-933e-462b-956a-0dae66b09f01/rabbitmq/0.log" Jan 30 20:22:52 crc kubenswrapper[4712]: I0130 20:22:52.232105 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3dfaa353-4f23-4dab-a7c5-6156924b9350/setup-container/0.log" Jan 30 20:22:52 crc kubenswrapper[4712]: I0130 20:22:52.498431 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3dfaa353-4f23-4dab-a7c5-6156924b9350/setup-container/0.log" Jan 30 20:22:52 crc kubenswrapper[4712]: I0130 20:22:52.609765 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3dfaa353-4f23-4dab-a7c5-6156924b9350/rabbitmq/0.log" Jan 30 20:22:52 crc kubenswrapper[4712]: I0130 20:22:52.830235 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-pwptv_943f21b7-5a6e-4b8d-9bcc-a4fb6fde2ce6/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 20:22:52 crc kubenswrapper[4712]: I0130 20:22:52.940550 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_c9df1a77-0933-4439-9ee1-a3f4414eca71/nova-metadata-metadata/0.log" Jan 30 20:22:53 crc kubenswrapper[4712]: I0130 20:22:53.123532 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-x2znn_fd818085-3429-43ff-bb05-2aaf3d48dd7b/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 20:22:53 crc kubenswrapper[4712]: I0130 20:22:53.151241 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-xtwdf_651b3d64-8c79-4079-ad2c-6a55ce87cd36/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 20:22:53 crc kubenswrapper[4712]: I0130 20:22:53.441236 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-8dq8h_91e0c680-dd16-41a4-9a12-59cf6d36151c/ssh-known-hosts-edpm-deployment/0.log" Jan 30 20:22:53 crc kubenswrapper[4712]: I0130 20:22:53.481686 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-7q5g5_0c49e3a4-cabe-47df-aa07-12276d5aa590/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 20:22:53 crc kubenswrapper[4712]: I0130 20:22:53.831548 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7f9b7fd987-g2xkh_0cad21e9-9d68-4f77-820b-0c1641e81e72/proxy-server/0.log" Jan 30 20:22:53 crc kubenswrapper[4712]: I0130 20:22:53.835691 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-9fw4k_b6cda925-aa9c-401f-90bb-158535201367/swift-ring-rebalance/0.log" Jan 30 20:22:54 crc kubenswrapper[4712]: I0130 20:22:54.045644 4712 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_b46c7f41-9ce5-4625-98d5-74bafa8bd0de/account-auditor/0.log" Jan 30 20:22:54 crc kubenswrapper[4712]: I0130 20:22:54.150733 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b46c7f41-9ce5-4625-98d5-74bafa8bd0de/account-reaper/0.log" Jan 30 20:22:54 crc kubenswrapper[4712]: I0130 20:22:54.372039 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b46c7f41-9ce5-4625-98d5-74bafa8bd0de/container-auditor/0.log" Jan 30 20:22:54 crc kubenswrapper[4712]: I0130 20:22:54.392630 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b46c7f41-9ce5-4625-98d5-74bafa8bd0de/account-server/0.log" Jan 30 20:22:54 crc kubenswrapper[4712]: I0130 20:22:54.411756 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b46c7f41-9ce5-4625-98d5-74bafa8bd0de/account-replicator/0.log" Jan 30 20:22:54 crc kubenswrapper[4712]: I0130 20:22:54.416507 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-7f9b7fd987-g2xkh_0cad21e9-9d68-4f77-820b-0c1641e81e72/proxy-httpd/0.log" Jan 30 20:22:54 crc kubenswrapper[4712]: I0130 20:22:54.674593 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b46c7f41-9ce5-4625-98d5-74bafa8bd0de/container-replicator/0.log" Jan 30 20:22:54 crc kubenswrapper[4712]: I0130 20:22:54.678006 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b46c7f41-9ce5-4625-98d5-74bafa8bd0de/container-server/0.log" Jan 30 20:22:54 crc kubenswrapper[4712]: I0130 20:22:54.738784 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b46c7f41-9ce5-4625-98d5-74bafa8bd0de/object-auditor/0.log" Jan 30 20:22:54 crc kubenswrapper[4712]: I0130 20:22:54.754010 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b46c7f41-9ce5-4625-98d5-74bafa8bd0de/container-updater/0.log" Jan 30 20:22:54 crc kubenswrapper[4712]: I0130 20:22:54.960517 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b46c7f41-9ce5-4625-98d5-74bafa8bd0de/object-expirer/0.log" Jan 30 20:22:55 crc kubenswrapper[4712]: I0130 20:22:55.009871 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b46c7f41-9ce5-4625-98d5-74bafa8bd0de/object-server/0.log" Jan 30 20:22:55 crc kubenswrapper[4712]: I0130 20:22:55.057337 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b46c7f41-9ce5-4625-98d5-74bafa8bd0de/object-updater/0.log" Jan 30 20:22:55 crc kubenswrapper[4712]: I0130 20:22:55.090447 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b46c7f41-9ce5-4625-98d5-74bafa8bd0de/object-replicator/0.log" Jan 30 20:22:55 crc kubenswrapper[4712]: I0130 20:22:55.245903 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b46c7f41-9ce5-4625-98d5-74bafa8bd0de/rsync/0.log" Jan 30 20:22:55 crc kubenswrapper[4712]: I0130 20:22:55.308523 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_b46c7f41-9ce5-4625-98d5-74bafa8bd0de/swift-recon-cron/0.log" Jan 30 20:22:55 crc kubenswrapper[4712]: I0130 20:22:55.530594 4712 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-sp7hv_96e36eb4-2d2a-4803-a882-ff770ce96ffc/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 20:22:55 crc kubenswrapper[4712]: I0130 20:22:55.941777 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-jpzf8_96e8f776-9933-4f80-91dd-fefa02de47ec/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 30 20:22:56 crc kubenswrapper[4712]: I0130 20:22:56.126384 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest-s01-single-thread-testing_e00d35e2-6792-49c6-b55d-7d7ef6c7611e/tempest-tests-tempest-tests-runner/0.log" Jan 30 20:22:57 crc kubenswrapper[4712]: I0130 20:22:57.152775 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest-s00-multi-thread-testing_eb9570ef-5465-43b3-8747-1d546402c98a/tempest-tests-tempest-tests-runner/0.log" Jan 30 20:23:12 crc kubenswrapper[4712]: I0130 20:23:12.349097 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_9fecd346-f2cb-45fa-be64-6be579acaf56/memcached/0.log" Jan 30 20:23:31 crc kubenswrapper[4712]: I0130 20:23:31.874208 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b726fq6n_d6f30a7d-fc2e-4274-a5d9-8ff44755d83d/util/0.log" Jan 30 20:23:32 crc kubenswrapper[4712]: I0130 20:23:32.078153 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b726fq6n_d6f30a7d-fc2e-4274-a5d9-8ff44755d83d/util/0.log" Jan 30 20:23:32 crc kubenswrapper[4712]: I0130 20:23:32.107181 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b726fq6n_d6f30a7d-fc2e-4274-a5d9-8ff44755d83d/pull/0.log" Jan 30 20:23:32 crc kubenswrapper[4712]: I0130 20:23:32.150672 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b726fq6n_d6f30a7d-fc2e-4274-a5d9-8ff44755d83d/pull/0.log" Jan 30 20:23:32 crc kubenswrapper[4712]: I0130 20:23:32.357141 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b726fq6n_d6f30a7d-fc2e-4274-a5d9-8ff44755d83d/extract/0.log" Jan 30 20:23:32 crc kubenswrapper[4712]: I0130 20:23:32.446605 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b726fq6n_d6f30a7d-fc2e-4274-a5d9-8ff44755d83d/pull/0.log" Jan 30 20:23:32 crc kubenswrapper[4712]: I0130 20:23:32.449457 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_488b31f3850666f759755213b2d3367735e8b7118e0fd5a1c8e4c15b726fq6n_d6f30a7d-fc2e-4274-a5d9-8ff44755d83d/util/0.log" Jan 30 20:23:32 crc kubenswrapper[4712]: I0130 20:23:32.671608 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-tfxdt_2bc54d51-4f21-479f-a89e-1c60a757433f/manager/0.log" Jan 30 20:23:32 crc kubenswrapper[4712]: I0130 20:23:32.775380 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-xfmvz_e1a1d497-2276-4248-9bca-1c7038430933/manager/0.log" Jan 30 20:23:33 crc 
kubenswrapper[4712]: I0130 20:23:33.090359 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-2h4zg_aa03f8a3-9bea-4b56-92ce-27d1fe53840a/manager/0.log" Jan 30 20:23:33 crc kubenswrapper[4712]: I0130 20:23:33.095219 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-jkjdt_cc62b7c7-5521-41df-bf10-d9cc287fbf7f/manager/0.log" Jan 30 20:23:33 crc kubenswrapper[4712]: I0130 20:23:33.305994 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-lqxpc_6e263552-c0f6-4f24-879f-79895cdbc953/manager/0.log" Jan 30 20:23:33 crc kubenswrapper[4712]: I0130 20:23:33.391378 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-xbk9b_5ccbb7b6-e489-4676-8faa-8a0306776a54/manager/0.log" Jan 30 20:23:33 crc kubenswrapper[4712]: I0130 20:23:33.621354 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-z9d9r_3bfc9890-11b6-4fcf-9458-08dce816b4b9/manager/0.log" Jan 30 20:23:33 crc kubenswrapper[4712]: I0130 20:23:33.829118 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-lwlhf_7b99459b-9311-4260-be34-3de859c1e0b0/manager/0.log" Jan 30 20:23:33 crc kubenswrapper[4712]: I0130 20:23:33.952677 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-l62x6_d3b1d20e-d20c-40f9-9c2b-314aee2fe51e/manager/0.log" Jan 30 20:23:34 crc kubenswrapper[4712]: I0130 20:23:34.116079 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-2n8cf_957cefd9-5116-40c3-aaf4-67ba58319ca1/manager/0.log" Jan 30 20:23:34 crc kubenswrapper[4712]: I0130 20:23:34.170580 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-wp89m_c8354464-6e92-4961-833a-414efe43db13/manager/0.log" Jan 30 20:23:34 crc kubenswrapper[4712]: I0130 20:23:34.356892 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-7pr55_b3222b74-686d-4b44-b521-33fb24c0b403/manager/0.log" Jan 30 20:23:34 crc kubenswrapper[4712]: I0130 20:23:34.455229 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-kj9k8_1abbe42a-dbb1-4ec5-8318-451adc608b2b/manager/0.log" Jan 30 20:23:34 crc kubenswrapper[4712]: I0130 20:23:34.598464 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-jjb4n_70ad565b-dc4e-4f67-863a-fd29c88ad39d/manager/0.log" Jan 30 20:23:34 crc kubenswrapper[4712]: I0130 20:23:34.669924 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4drtdz2_d4821c16-36e6-43c6-91f1-5fdf29b5b88a/manager/0.log" Jan 30 20:23:35 crc kubenswrapper[4712]: I0130 20:23:35.067746 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-5884d87984-t6bbn_16cf8838-73f4-4b47-a0a5-0258974c49db/operator/0.log" Jan 30 20:23:35 crc kubenswrapper[4712]: 
I0130 20:23:35.222988 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-x5k4p_8610a2e0-98ae-41e2-80a0-c66d693024a0/registry-server/1.log" Jan 30 20:23:35 crc kubenswrapper[4712]: I0130 20:23:35.496845 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-x5k4p_8610a2e0-98ae-41e2-80a0-c66d693024a0/registry-server/0.log" Jan 30 20:23:35 crc kubenswrapper[4712]: I0130 20:23:35.661075 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-smj59_19489158-a72e-4e6d-981a-879b596fb9b8/manager/0.log" Jan 30 20:23:35 crc kubenswrapper[4712]: I0130 20:23:35.995064 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-4l4j7_adbd0e89-e0e3-46eb-b2c5-4482cc71deae/manager/0.log" Jan 30 20:23:36 crc kubenswrapper[4712]: I0130 20:23:36.115954 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-659668d854-w9hqw_15028a9a-8618-4d65-89ff-d8b06f63821f/manager/0.log" Jan 30 20:23:36 crc kubenswrapper[4712]: I0130 20:23:36.127509 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-7xzbw_3602a87a-8a49-427b-baf0-a534b10e2d5b/operator/0.log" Jan 30 20:23:36 crc kubenswrapper[4712]: I0130 20:23:36.380746 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-rfmgz_6c041737-6e32-468d-aba7-469207eab526/manager/0.log" Jan 30 20:23:36 crc kubenswrapper[4712]: I0130 20:23:36.441338 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-2x2xt_d37f95a0-af87-4727-83a4-aa6334b0759e/manager/0.log" Jan 30 20:23:36 crc kubenswrapper[4712]: I0130 20:23:36.642837 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-78v95_a1f37d35-d806-4c98-bdc5-85163d1b180c/manager/0.log" Jan 30 20:23:36 crc kubenswrapper[4712]: I0130 20:23:36.697968 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-f4h96_f0e6edc2-9ad5-44a9-8737-78cfd077f9b1/manager/0.log" Jan 30 20:23:57 crc kubenswrapper[4712]: I0130 20:23:57.683960 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-gfwsl_7d104d8e-f081-42a2-997e-4b27951d3e2c/control-plane-machine-set-operator/0.log" Jan 30 20:23:57 crc kubenswrapper[4712]: I0130 20:23:57.861344 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5xwgj_0b4d1852-9507-412e-842e-d9dbd886e79d/kube-rbac-proxy/0.log" Jan 30 20:23:57 crc kubenswrapper[4712]: I0130 20:23:57.915190 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-5xwgj_0b4d1852-9507-412e-842e-d9dbd886e79d/machine-api-operator/0.log" Jan 30 20:24:12 crc kubenswrapper[4712]: I0130 20:24:12.928427 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-z55v5_e2596ab3-5e15-4f02-b27f-36787aa5ebd8/cert-manager-controller/0.log" Jan 30 20:24:13 crc kubenswrapper[4712]: I0130 20:24:13.113812 4712 log.go:25] "Finished 
parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-9887h_52a11f64-b007-48ea-943a-0dc87304b75d/cert-manager-cainjector/0.log" Jan 30 20:24:13 crc kubenswrapper[4712]: I0130 20:24:13.174159 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-2xxnh_b8cf7519-5513-43e8-98bb-b81e8d7c65e3/cert-manager-webhook/0.log" Jan 30 20:24:28 crc kubenswrapper[4712]: I0130 20:24:28.100663 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-scx5w_6fff5133-d95e-4817-b21a-0163f1a96240/nmstate-console-plugin/0.log" Jan 30 20:24:28 crc kubenswrapper[4712]: I0130 20:24:28.356348 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-b97c2_e2f3dc74-f154-42cb-83fa-1aa631aac288/nmstate-handler/0.log" Jan 30 20:24:28 crc kubenswrapper[4712]: I0130 20:24:28.396012 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-b5cxm_bb0a5cdb-d0e2-446f-b242-d63cfa7fb783/kube-rbac-proxy/0.log" Jan 30 20:24:28 crc kubenswrapper[4712]: I0130 20:24:28.581955 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-b5cxm_bb0a5cdb-d0e2-446f-b242-d63cfa7fb783/nmstate-metrics/0.log" Jan 30 20:24:28 crc kubenswrapper[4712]: I0130 20:24:28.617862 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-ngc72_043c21c8-23c1-4c11-b636-b5f34f6aa30b/nmstate-operator/0.log" Jan 30 20:24:28 crc kubenswrapper[4712]: I0130 20:24:28.770029 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-wg6ft_32b6f6bb-fadc-43d5-9046-f2ee1a93d325/nmstate-webhook/0.log" Jan 30 20:24:36 crc kubenswrapper[4712]: I0130 20:24:36.272179 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 20:24:36 crc kubenswrapper[4712]: I0130 20:24:36.272830 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 20:24:59 crc kubenswrapper[4712]: I0130 20:24:59.961210 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-kr8vp_923ca268-753b-4b59-8c12-9517f5708f65/kube-rbac-proxy/0.log" Jan 30 20:25:00 crc kubenswrapper[4712]: I0130 20:25:00.047248 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-kr8vp_923ca268-753b-4b59-8c12-9517f5708f65/controller/0.log" Jan 30 20:25:00 crc kubenswrapper[4712]: I0130 20:25:00.256862 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9bpz_7d1e2433-a99b-4b29-8f58-e21a7745d1d9/cp-frr-files/0.log" Jan 30 20:25:00 crc kubenswrapper[4712]: I0130 20:25:00.415301 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9bpz_7d1e2433-a99b-4b29-8f58-e21a7745d1d9/cp-reloader/0.log" Jan 30 20:25:00 crc kubenswrapper[4712]: I0130 20:25:00.435160 4712 log.go:25] "Finished 
parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9bpz_7d1e2433-a99b-4b29-8f58-e21a7745d1d9/cp-frr-files/0.log" Jan 30 20:25:00 crc kubenswrapper[4712]: I0130 20:25:00.435289 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9bpz_7d1e2433-a99b-4b29-8f58-e21a7745d1d9/cp-metrics/0.log" Jan 30 20:25:00 crc kubenswrapper[4712]: I0130 20:25:00.495272 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9bpz_7d1e2433-a99b-4b29-8f58-e21a7745d1d9/cp-reloader/0.log" Jan 30 20:25:00 crc kubenswrapper[4712]: I0130 20:25:00.706614 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9bpz_7d1e2433-a99b-4b29-8f58-e21a7745d1d9/cp-frr-files/0.log" Jan 30 20:25:00 crc kubenswrapper[4712]: I0130 20:25:00.753660 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9bpz_7d1e2433-a99b-4b29-8f58-e21a7745d1d9/cp-metrics/0.log" Jan 30 20:25:00 crc kubenswrapper[4712]: I0130 20:25:00.762219 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9bpz_7d1e2433-a99b-4b29-8f58-e21a7745d1d9/cp-metrics/0.log" Jan 30 20:25:00 crc kubenswrapper[4712]: I0130 20:25:00.777789 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9bpz_7d1e2433-a99b-4b29-8f58-e21a7745d1d9/cp-reloader/0.log" Jan 30 20:25:00 crc kubenswrapper[4712]: I0130 20:25:00.973225 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9bpz_7d1e2433-a99b-4b29-8f58-e21a7745d1d9/cp-frr-files/0.log" Jan 30 20:25:01 crc kubenswrapper[4712]: I0130 20:25:01.024556 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9bpz_7d1e2433-a99b-4b29-8f58-e21a7745d1d9/cp-metrics/0.log" Jan 30 20:25:01 crc kubenswrapper[4712]: I0130 20:25:01.027637 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9bpz_7d1e2433-a99b-4b29-8f58-e21a7745d1d9/controller/1.log" Jan 30 20:25:01 crc kubenswrapper[4712]: I0130 20:25:01.031207 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9bpz_7d1e2433-a99b-4b29-8f58-e21a7745d1d9/cp-reloader/0.log" Jan 30 20:25:01 crc kubenswrapper[4712]: I0130 20:25:01.150900 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9bpz_7d1e2433-a99b-4b29-8f58-e21a7745d1d9/controller/0.log" Jan 30 20:25:01 crc kubenswrapper[4712]: I0130 20:25:01.233326 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9bpz_7d1e2433-a99b-4b29-8f58-e21a7745d1d9/frr-metrics/0.log" Jan 30 20:25:01 crc kubenswrapper[4712]: I0130 20:25:01.518743 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9bpz_7d1e2433-a99b-4b29-8f58-e21a7745d1d9/kube-rbac-proxy/0.log" Jan 30 20:25:01 crc kubenswrapper[4712]: I0130 20:25:01.598806 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9bpz_7d1e2433-a99b-4b29-8f58-e21a7745d1d9/kube-rbac-proxy-frr/0.log" Jan 30 20:25:01 crc kubenswrapper[4712]: I0130 20:25:01.758509 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9bpz_7d1e2433-a99b-4b29-8f58-e21a7745d1d9/reloader/0.log" Jan 30 20:25:02 crc kubenswrapper[4712]: I0130 20:25:02.039279 4712 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-vkxrq_055ca335-cbe6-4ef8-af90-fb2d995a3187/frr-k8s-webhook-server/0.log" Jan 30 20:25:02 crc kubenswrapper[4712]: I0130 20:25:02.359277 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-d574845cc-9l79n_5ad57c84-b9da-4613-92e6-0bfe23a14d69/manager/0.log" Jan 30 20:25:02 crc kubenswrapper[4712]: I0130 20:25:02.760668 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-58dccfbb96-pxb54_5fe7be15-f524-46c1-ba58-e2d8ccd001c0/webhook-server/1.log" Jan 30 20:25:02 crc kubenswrapper[4712]: I0130 20:25:02.778150 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-58dccfbb96-pxb54_5fe7be15-f524-46c1-ba58-e2d8ccd001c0/webhook-server/0.log" Jan 30 20:25:02 crc kubenswrapper[4712]: I0130 20:25:02.829224 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9bpz_7d1e2433-a99b-4b29-8f58-e21a7745d1d9/frr/1.log" Jan 30 20:25:03 crc kubenswrapper[4712]: I0130 20:25:03.094854 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-j9bpz_7d1e2433-a99b-4b29-8f58-e21a7745d1d9/frr/0.log" Jan 30 20:25:03 crc kubenswrapper[4712]: I0130 20:25:03.166235 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-gmjr9_f5e77c2d-c85b-44c7-ae02-074b491daf83/kube-rbac-proxy/0.log" Jan 30 20:25:03 crc kubenswrapper[4712]: I0130 20:25:03.658190 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-gmjr9_f5e77c2d-c85b-44c7-ae02-074b491daf83/speaker/0.log" Jan 30 20:25:04 crc kubenswrapper[4712]: I0130 20:25:04.952232 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-qtvhx"] Jan 30 20:25:04 crc kubenswrapper[4712]: E0130 20:25:04.952832 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55f9750b-a249-4cc7-bbeb-7283b44035ce" containerName="container-00" Jan 30 20:25:04 crc kubenswrapper[4712]: I0130 20:25:04.952845 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="55f9750b-a249-4cc7-bbeb-7283b44035ce" containerName="container-00" Jan 30 20:25:04 crc kubenswrapper[4712]: I0130 20:25:04.953035 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="55f9750b-a249-4cc7-bbeb-7283b44035ce" containerName="container-00" Jan 30 20:25:04 crc kubenswrapper[4712]: I0130 20:25:04.960714 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qtvhx" Jan 30 20:25:04 crc kubenswrapper[4712]: I0130 20:25:04.966816 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qtvhx"] Jan 30 20:25:05 crc kubenswrapper[4712]: I0130 20:25:05.041581 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/581d967d-4fa4-4451-9328-3e27d63f5a7e-utilities\") pod \"community-operators-qtvhx\" (UID: \"581d967d-4fa4-4451-9328-3e27d63f5a7e\") " pod="openshift-marketplace/community-operators-qtvhx" Jan 30 20:25:05 crc kubenswrapper[4712]: I0130 20:25:05.041672 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/581d967d-4fa4-4451-9328-3e27d63f5a7e-catalog-content\") pod \"community-operators-qtvhx\" (UID: \"581d967d-4fa4-4451-9328-3e27d63f5a7e\") " pod="openshift-marketplace/community-operators-qtvhx" Jan 30 20:25:05 crc kubenswrapper[4712]: I0130 20:25:05.041700 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjjtn\" (UniqueName: \"kubernetes.io/projected/581d967d-4fa4-4451-9328-3e27d63f5a7e-kube-api-access-xjjtn\") pod \"community-operators-qtvhx\" (UID: \"581d967d-4fa4-4451-9328-3e27d63f5a7e\") " pod="openshift-marketplace/community-operators-qtvhx" Jan 30 20:25:05 crc kubenswrapper[4712]: I0130 20:25:05.143629 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xjjtn\" (UniqueName: \"kubernetes.io/projected/581d967d-4fa4-4451-9328-3e27d63f5a7e-kube-api-access-xjjtn\") pod \"community-operators-qtvhx\" (UID: \"581d967d-4fa4-4451-9328-3e27d63f5a7e\") " pod="openshift-marketplace/community-operators-qtvhx" Jan 30 20:25:05 crc kubenswrapper[4712]: I0130 20:25:05.143771 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/581d967d-4fa4-4451-9328-3e27d63f5a7e-utilities\") pod \"community-operators-qtvhx\" (UID: \"581d967d-4fa4-4451-9328-3e27d63f5a7e\") " pod="openshift-marketplace/community-operators-qtvhx" Jan 30 20:25:05 crc kubenswrapper[4712]: I0130 20:25:05.143856 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/581d967d-4fa4-4451-9328-3e27d63f5a7e-catalog-content\") pod \"community-operators-qtvhx\" (UID: \"581d967d-4fa4-4451-9328-3e27d63f5a7e\") " pod="openshift-marketplace/community-operators-qtvhx" Jan 30 20:25:05 crc kubenswrapper[4712]: I0130 20:25:05.144265 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/581d967d-4fa4-4451-9328-3e27d63f5a7e-catalog-content\") pod \"community-operators-qtvhx\" (UID: \"581d967d-4fa4-4451-9328-3e27d63f5a7e\") " pod="openshift-marketplace/community-operators-qtvhx" Jan 30 20:25:05 crc kubenswrapper[4712]: I0130 20:25:05.144432 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/581d967d-4fa4-4451-9328-3e27d63f5a7e-utilities\") pod \"community-operators-qtvhx\" (UID: \"581d967d-4fa4-4451-9328-3e27d63f5a7e\") " pod="openshift-marketplace/community-operators-qtvhx" Jan 30 20:25:05 crc kubenswrapper[4712]: I0130 20:25:05.168594 4712 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-xjjtn\" (UniqueName: \"kubernetes.io/projected/581d967d-4fa4-4451-9328-3e27d63f5a7e-kube-api-access-xjjtn\") pod \"community-operators-qtvhx\" (UID: \"581d967d-4fa4-4451-9328-3e27d63f5a7e\") " pod="openshift-marketplace/community-operators-qtvhx" Jan 30 20:25:05 crc kubenswrapper[4712]: I0130 20:25:05.277654 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qtvhx" Jan 30 20:25:06 crc kubenswrapper[4712]: I0130 20:25:06.271009 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 30 20:25:06 crc kubenswrapper[4712]: I0130 20:25:06.271570 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 30 20:25:06 crc kubenswrapper[4712]: I0130 20:25:06.388352 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-qtvhx"] Jan 30 20:25:07 crc kubenswrapper[4712]: I0130 20:25:07.378842 4712 generic.go:334] "Generic (PLEG): container finished" podID="581d967d-4fa4-4451-9328-3e27d63f5a7e" containerID="763217327b0f554d8a280a53b08e30bb434757e86ca771ec9998f1d3b2547c0a" exitCode=0 Jan 30 20:25:07 crc kubenswrapper[4712]: I0130 20:25:07.378898 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qtvhx" event={"ID":"581d967d-4fa4-4451-9328-3e27d63f5a7e","Type":"ContainerDied","Data":"763217327b0f554d8a280a53b08e30bb434757e86ca771ec9998f1d3b2547c0a"} Jan 30 20:25:07 crc kubenswrapper[4712]: I0130 20:25:07.379169 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qtvhx" event={"ID":"581d967d-4fa4-4451-9328-3e27d63f5a7e","Type":"ContainerStarted","Data":"e68d2b572bddb32bf3a9d94cb3d045358a2abe6d0eaf67be2db4af88fed956f6"} Jan 30 20:25:08 crc kubenswrapper[4712]: I0130 20:25:08.391695 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qtvhx" event={"ID":"581d967d-4fa4-4451-9328-3e27d63f5a7e","Type":"ContainerStarted","Data":"147f937ac108f4b1e4922cf753a6e713c2c97fe09062b383ec7fb57e635e4502"} Jan 30 20:25:10 crc kubenswrapper[4712]: I0130 20:25:10.411172 4712 generic.go:334] "Generic (PLEG): container finished" podID="581d967d-4fa4-4451-9328-3e27d63f5a7e" containerID="147f937ac108f4b1e4922cf753a6e713c2c97fe09062b383ec7fb57e635e4502" exitCode=0 Jan 30 20:25:10 crc kubenswrapper[4712]: I0130 20:25:10.411306 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qtvhx" event={"ID":"581d967d-4fa4-4451-9328-3e27d63f5a7e","Type":"ContainerDied","Data":"147f937ac108f4b1e4922cf753a6e713c2c97fe09062b383ec7fb57e635e4502"} Jan 30 20:25:11 crc kubenswrapper[4712]: I0130 20:25:11.422399 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qtvhx" event={"ID":"581d967d-4fa4-4451-9328-3e27d63f5a7e","Type":"ContainerStarted","Data":"326072b6b827bc17070b1df3fde936cfcba26491925cc1fafc8faa2619fe31e5"} Jan 30 
20:25:11 crc kubenswrapper[4712]: I0130 20:25:11.504444 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-qtvhx" podStartSLOduration=3.8279112189999998 podStartE2EDuration="7.504386369s" podCreationTimestamp="2026-01-30 20:25:04 +0000 UTC" firstStartedPulling="2026-01-30 20:25:07.38107596 +0000 UTC m=+12644.288085429" lastFinishedPulling="2026-01-30 20:25:11.05755111 +0000 UTC m=+12647.964560579" observedRunningTime="2026-01-30 20:25:11.445744154 +0000 UTC m=+12648.352753623" watchObservedRunningTime="2026-01-30 20:25:11.504386369 +0000 UTC m=+12648.411395838" Jan 30 20:25:15 crc kubenswrapper[4712]: I0130 20:25:15.278639 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-qtvhx" Jan 30 20:25:15 crc kubenswrapper[4712]: I0130 20:25:15.281242 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-qtvhx" Jan 30 20:25:16 crc kubenswrapper[4712]: I0130 20:25:16.327932 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-qtvhx" podUID="581d967d-4fa4-4451-9328-3e27d63f5a7e" containerName="registry-server" probeResult="failure" output=< Jan 30 20:25:16 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 20:25:16 crc kubenswrapper[4712]: > Jan 30 20:25:18 crc kubenswrapper[4712]: I0130 20:25:18.461263 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc424zh_648c4614-a929-4395-b743-253cde42a583/util/0.log" Jan 30 20:25:18 crc kubenswrapper[4712]: I0130 20:25:18.745126 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc424zh_648c4614-a929-4395-b743-253cde42a583/pull/0.log" Jan 30 20:25:18 crc kubenswrapper[4712]: I0130 20:25:18.745155 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc424zh_648c4614-a929-4395-b743-253cde42a583/pull/0.log" Jan 30 20:25:18 crc kubenswrapper[4712]: I0130 20:25:18.788312 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc424zh_648c4614-a929-4395-b743-253cde42a583/util/0.log" Jan 30 20:25:18 crc kubenswrapper[4712]: I0130 20:25:18.973085 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc424zh_648c4614-a929-4395-b743-253cde42a583/util/0.log" Jan 30 20:25:18 crc kubenswrapper[4712]: I0130 20:25:18.973440 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc424zh_648c4614-a929-4395-b743-253cde42a583/extract/0.log" Jan 30 20:25:19 crc kubenswrapper[4712]: I0130 20:25:19.062911 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dc424zh_648c4614-a929-4395-b743-253cde42a583/pull/0.log" Jan 30 20:25:19 crc kubenswrapper[4712]: I0130 20:25:19.148081 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xf25g_55fa88c7-5d3f-4787-ae79-b4237a68e191/util/0.log" Jan 30 20:25:19 crc 
kubenswrapper[4712]: I0130 20:25:19.371630 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xf25g_55fa88c7-5d3f-4787-ae79-b4237a68e191/pull/0.log" Jan 30 20:25:19 crc kubenswrapper[4712]: I0130 20:25:19.408132 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xf25g_55fa88c7-5d3f-4787-ae79-b4237a68e191/pull/0.log" Jan 30 20:25:19 crc kubenswrapper[4712]: I0130 20:25:19.419974 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xf25g_55fa88c7-5d3f-4787-ae79-b4237a68e191/util/0.log" Jan 30 20:25:19 crc kubenswrapper[4712]: I0130 20:25:19.549947 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xf25g_55fa88c7-5d3f-4787-ae79-b4237a68e191/util/0.log" Jan 30 20:25:19 crc kubenswrapper[4712]: I0130 20:25:19.612222 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xf25g_55fa88c7-5d3f-4787-ae79-b4237a68e191/pull/0.log" Jan 30 20:25:19 crc kubenswrapper[4712]: I0130 20:25:19.624158 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xf25g_55fa88c7-5d3f-4787-ae79-b4237a68e191/extract/0.log" Jan 30 20:25:19 crc kubenswrapper[4712]: I0130 20:25:19.784618 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bs7pg_eaba725b-6442-4a5b-adc9-16047823dc86/extract-utilities/0.log" Jan 30 20:25:19 crc kubenswrapper[4712]: I0130 20:25:19.958125 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bs7pg_eaba725b-6442-4a5b-adc9-16047823dc86/extract-content/0.log" Jan 30 20:25:19 crc kubenswrapper[4712]: I0130 20:25:19.989671 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bs7pg_eaba725b-6442-4a5b-adc9-16047823dc86/extract-content/0.log" Jan 30 20:25:20 crc kubenswrapper[4712]: I0130 20:25:20.036564 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bs7pg_eaba725b-6442-4a5b-adc9-16047823dc86/extract-utilities/0.log" Jan 30 20:25:20 crc kubenswrapper[4712]: I0130 20:25:20.191043 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bs7pg_eaba725b-6442-4a5b-adc9-16047823dc86/extract-content/0.log" Jan 30 20:25:20 crc kubenswrapper[4712]: I0130 20:25:20.192365 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bs7pg_eaba725b-6442-4a5b-adc9-16047823dc86/extract-utilities/0.log" Jan 30 20:25:20 crc kubenswrapper[4712]: I0130 20:25:20.489104 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fp9sk_240ba5c6-eb36-4da8-913a-f2b61d13293b/extract-utilities/0.log" Jan 30 20:25:20 crc kubenswrapper[4712]: I0130 20:25:20.833270 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fp9sk_240ba5c6-eb36-4da8-913a-f2b61d13293b/extract-content/0.log" Jan 30 20:25:20 crc kubenswrapper[4712]: I0130 20:25:20.920101 4712 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-fp9sk_240ba5c6-eb36-4da8-913a-f2b61d13293b/extract-content/0.log" Jan 30 20:25:20 crc kubenswrapper[4712]: I0130 20:25:20.940571 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fp9sk_240ba5c6-eb36-4da8-913a-f2b61d13293b/extract-utilities/0.log" Jan 30 20:25:21 crc kubenswrapper[4712]: I0130 20:25:21.181003 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fp9sk_240ba5c6-eb36-4da8-913a-f2b61d13293b/extract-content/0.log" Jan 30 20:25:21 crc kubenswrapper[4712]: I0130 20:25:21.369956 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fp9sk_240ba5c6-eb36-4da8-913a-f2b61d13293b/extract-utilities/0.log" Jan 30 20:25:21 crc kubenswrapper[4712]: I0130 20:25:21.685721 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-bs7pg_eaba725b-6442-4a5b-adc9-16047823dc86/registry-server/0.log" Jan 30 20:25:21 crc kubenswrapper[4712]: I0130 20:25:21.760397 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qtvhx_581d967d-4fa4-4451-9328-3e27d63f5a7e/extract-utilities/0.log" Jan 30 20:25:21 crc kubenswrapper[4712]: I0130 20:25:21.984223 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qtvhx_581d967d-4fa4-4451-9328-3e27d63f5a7e/extract-utilities/0.log" Jan 30 20:25:22 crc kubenswrapper[4712]: I0130 20:25:22.086903 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qtvhx_581d967d-4fa4-4451-9328-3e27d63f5a7e/extract-content/0.log" Jan 30 20:25:22 crc kubenswrapper[4712]: I0130 20:25:22.097846 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qtvhx_581d967d-4fa4-4451-9328-3e27d63f5a7e/extract-content/0.log" Jan 30 20:25:22 crc kubenswrapper[4712]: I0130 20:25:22.392153 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qtvhx_581d967d-4fa4-4451-9328-3e27d63f5a7e/extract-utilities/0.log" Jan 30 20:25:22 crc kubenswrapper[4712]: I0130 20:25:22.412310 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qtvhx_581d967d-4fa4-4451-9328-3e27d63f5a7e/registry-server/0.log" Jan 30 20:25:22 crc kubenswrapper[4712]: I0130 20:25:22.483461 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-qtvhx_581d967d-4fa4-4451-9328-3e27d63f5a7e/extract-content/0.log" Jan 30 20:25:22 crc kubenswrapper[4712]: I0130 20:25:22.717996 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-k4mgv_f757484a-48c2-4b6e-9a6b-1e01fe951ae5/marketplace-operator/1.log" Jan 30 20:25:22 crc kubenswrapper[4712]: I0130 20:25:22.838817 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-k4mgv_f757484a-48c2-4b6e-9a6b-1e01fe951ae5/marketplace-operator/0.log" Jan 30 20:25:22 crc kubenswrapper[4712]: I0130 20:25:22.960423 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dnfsb_7fe1585c-9bff-482c-a2b9-ccbb10a11300/extract-utilities/0.log" Jan 30 20:25:23 crc kubenswrapper[4712]: I0130 20:25:23.242903 4712 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dnfsb_7fe1585c-9bff-482c-a2b9-ccbb10a11300/extract-utilities/0.log" Jan 30 20:25:23 crc kubenswrapper[4712]: I0130 20:25:23.371331 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-fp9sk_240ba5c6-eb36-4da8-913a-f2b61d13293b/registry-server/0.log" Jan 30 20:25:23 crc kubenswrapper[4712]: I0130 20:25:23.394055 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dnfsb_7fe1585c-9bff-482c-a2b9-ccbb10a11300/extract-content/0.log" Jan 30 20:25:23 crc kubenswrapper[4712]: I0130 20:25:23.395832 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dnfsb_7fe1585c-9bff-482c-a2b9-ccbb10a11300/extract-content/0.log" Jan 30 20:25:23 crc kubenswrapper[4712]: I0130 20:25:23.655890 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dnfsb_7fe1585c-9bff-482c-a2b9-ccbb10a11300/extract-content/0.log" Jan 30 20:25:23 crc kubenswrapper[4712]: I0130 20:25:23.680613 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dnfsb_7fe1585c-9bff-482c-a2b9-ccbb10a11300/extract-utilities/0.log" Jan 30 20:25:23 crc kubenswrapper[4712]: I0130 20:25:23.725598 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zg4sq_36edfc17-99ca-4e05-bf92-d60315860caf/extract-utilities/0.log" Jan 30 20:25:23 crc kubenswrapper[4712]: I0130 20:25:23.986593 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zg4sq_36edfc17-99ca-4e05-bf92-d60315860caf/extract-utilities/0.log" Jan 30 20:25:24 crc kubenswrapper[4712]: I0130 20:25:24.037488 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zg4sq_36edfc17-99ca-4e05-bf92-d60315860caf/extract-content/0.log" Jan 30 20:25:24 crc kubenswrapper[4712]: I0130 20:25:24.053468 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-dnfsb_7fe1585c-9bff-482c-a2b9-ccbb10a11300/registry-server/0.log" Jan 30 20:25:24 crc kubenswrapper[4712]: I0130 20:25:24.111783 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zg4sq_36edfc17-99ca-4e05-bf92-d60315860caf/extract-content/0.log" Jan 30 20:25:24 crc kubenswrapper[4712]: I0130 20:25:24.355643 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zg4sq_36edfc17-99ca-4e05-bf92-d60315860caf/extract-content/0.log" Jan 30 20:25:24 crc kubenswrapper[4712]: I0130 20:25:24.380788 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zg4sq_36edfc17-99ca-4e05-bf92-d60315860caf/extract-utilities/0.log" Jan 30 20:25:24 crc kubenswrapper[4712]: I0130 20:25:24.431641 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zg4sq_36edfc17-99ca-4e05-bf92-d60315860caf/registry-server/1.log" Jan 30 20:25:25 crc kubenswrapper[4712]: I0130 20:25:25.055833 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-zg4sq_36edfc17-99ca-4e05-bf92-d60315860caf/registry-server/2.log" Jan 30 20:25:25 crc kubenswrapper[4712]: I0130 20:25:25.329282 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-qtvhx" Jan 30 20:25:25 crc kubenswrapper[4712]: I0130 20:25:25.383616 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-qtvhx" Jan 30 20:25:25 crc kubenswrapper[4712]: I0130 20:25:25.567812 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qtvhx"] Jan 30 20:25:26 crc kubenswrapper[4712]: I0130 20:25:26.535097 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-qtvhx" podUID="581d967d-4fa4-4451-9328-3e27d63f5a7e" containerName="registry-server" containerID="cri-o://326072b6b827bc17070b1df3fde936cfcba26491925cc1fafc8faa2619fe31e5" gracePeriod=2 Jan 30 20:25:27 crc kubenswrapper[4712]: I0130 20:25:27.020768 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-qtvhx" Jan 30 20:25:27 crc kubenswrapper[4712]: I0130 20:25:27.124103 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xjjtn\" (UniqueName: \"kubernetes.io/projected/581d967d-4fa4-4451-9328-3e27d63f5a7e-kube-api-access-xjjtn\") pod \"581d967d-4fa4-4451-9328-3e27d63f5a7e\" (UID: \"581d967d-4fa4-4451-9328-3e27d63f5a7e\") " Jan 30 20:25:27 crc kubenswrapper[4712]: I0130 20:25:27.124386 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/581d967d-4fa4-4451-9328-3e27d63f5a7e-catalog-content\") pod \"581d967d-4fa4-4451-9328-3e27d63f5a7e\" (UID: \"581d967d-4fa4-4451-9328-3e27d63f5a7e\") " Jan 30 20:25:27 crc kubenswrapper[4712]: I0130 20:25:27.124448 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/581d967d-4fa4-4451-9328-3e27d63f5a7e-utilities\") pod \"581d967d-4fa4-4451-9328-3e27d63f5a7e\" (UID: \"581d967d-4fa4-4451-9328-3e27d63f5a7e\") " Jan 30 20:25:27 crc kubenswrapper[4712]: I0130 20:25:27.125029 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/581d967d-4fa4-4451-9328-3e27d63f5a7e-utilities" (OuterVolumeSpecName: "utilities") pod "581d967d-4fa4-4451-9328-3e27d63f5a7e" (UID: "581d967d-4fa4-4451-9328-3e27d63f5a7e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 20:25:27 crc kubenswrapper[4712]: I0130 20:25:27.151851 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/581d967d-4fa4-4451-9328-3e27d63f5a7e-kube-api-access-xjjtn" (OuterVolumeSpecName: "kube-api-access-xjjtn") pod "581d967d-4fa4-4451-9328-3e27d63f5a7e" (UID: "581d967d-4fa4-4451-9328-3e27d63f5a7e"). InnerVolumeSpecName "kube-api-access-xjjtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 20:25:27 crc kubenswrapper[4712]: I0130 20:25:27.169436 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/581d967d-4fa4-4451-9328-3e27d63f5a7e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "581d967d-4fa4-4451-9328-3e27d63f5a7e" (UID: "581d967d-4fa4-4451-9328-3e27d63f5a7e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 20:25:27 crc kubenswrapper[4712]: I0130 20:25:27.226620 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xjjtn\" (UniqueName: \"kubernetes.io/projected/581d967d-4fa4-4451-9328-3e27d63f5a7e-kube-api-access-xjjtn\") on node \"crc\" DevicePath \"\"" Jan 30 20:25:27 crc kubenswrapper[4712]: I0130 20:25:27.226670 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/581d967d-4fa4-4451-9328-3e27d63f5a7e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 20:25:27 crc kubenswrapper[4712]: I0130 20:25:27.226686 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/581d967d-4fa4-4451-9328-3e27d63f5a7e-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 20:25:27 crc kubenswrapper[4712]: I0130 20:25:27.548150 4712 generic.go:334] "Generic (PLEG): container finished" podID="581d967d-4fa4-4451-9328-3e27d63f5a7e" containerID="326072b6b827bc17070b1df3fde936cfcba26491925cc1fafc8faa2619fe31e5" exitCode=0 Jan 30 20:25:27 crc kubenswrapper[4712]: I0130 20:25:27.548220 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qtvhx" event={"ID":"581d967d-4fa4-4451-9328-3e27d63f5a7e","Type":"ContainerDied","Data":"326072b6b827bc17070b1df3fde936cfcba26491925cc1fafc8faa2619fe31e5"} Jan 30 20:25:27 crc kubenswrapper[4712]: I0130 20:25:27.548263 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-qtvhx" event={"ID":"581d967d-4fa4-4451-9328-3e27d63f5a7e","Type":"ContainerDied","Data":"e68d2b572bddb32bf3a9d94cb3d045358a2abe6d0eaf67be2db4af88fed956f6"} Jan 30 20:25:27 crc kubenswrapper[4712]: I0130 20:25:27.548297 4712 scope.go:117] "RemoveContainer" containerID="326072b6b827bc17070b1df3fde936cfcba26491925cc1fafc8faa2619fe31e5" Jan 30 20:25:27 crc kubenswrapper[4712]: I0130 20:25:27.548339 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-qtvhx"
Jan 30 20:25:27 crc kubenswrapper[4712]: I0130 20:25:27.594896 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-qtvhx"]
Jan 30 20:25:27 crc kubenswrapper[4712]: I0130 20:25:27.595978 4712 scope.go:117] "RemoveContainer" containerID="147f937ac108f4b1e4922cf753a6e713c2c97fe09062b383ec7fb57e635e4502"
Jan 30 20:25:27 crc kubenswrapper[4712]: I0130 20:25:27.604923 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-qtvhx"]
Jan 30 20:25:27 crc kubenswrapper[4712]: I0130 20:25:27.628357 4712 scope.go:117] "RemoveContainer" containerID="763217327b0f554d8a280a53b08e30bb434757e86ca771ec9998f1d3b2547c0a"
Jan 30 20:25:27 crc kubenswrapper[4712]: I0130 20:25:27.667286 4712 scope.go:117] "RemoveContainer" containerID="326072b6b827bc17070b1df3fde936cfcba26491925cc1fafc8faa2619fe31e5"
Jan 30 20:25:27 crc kubenswrapper[4712]: E0130 20:25:27.667633 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"326072b6b827bc17070b1df3fde936cfcba26491925cc1fafc8faa2619fe31e5\": container with ID starting with 326072b6b827bc17070b1df3fde936cfcba26491925cc1fafc8faa2619fe31e5 not found: ID does not exist" containerID="326072b6b827bc17070b1df3fde936cfcba26491925cc1fafc8faa2619fe31e5"
Jan 30 20:25:27 crc kubenswrapper[4712]: I0130 20:25:27.667684 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"326072b6b827bc17070b1df3fde936cfcba26491925cc1fafc8faa2619fe31e5"} err="failed to get container status \"326072b6b827bc17070b1df3fde936cfcba26491925cc1fafc8faa2619fe31e5\": rpc error: code = NotFound desc = could not find container \"326072b6b827bc17070b1df3fde936cfcba26491925cc1fafc8faa2619fe31e5\": container with ID starting with 326072b6b827bc17070b1df3fde936cfcba26491925cc1fafc8faa2619fe31e5 not found: ID does not exist"
Jan 30 20:25:27 crc kubenswrapper[4712]: I0130 20:25:27.667706 4712 scope.go:117] "RemoveContainer" containerID="147f937ac108f4b1e4922cf753a6e713c2c97fe09062b383ec7fb57e635e4502"
Jan 30 20:25:27 crc kubenswrapper[4712]: E0130 20:25:27.668255 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"147f937ac108f4b1e4922cf753a6e713c2c97fe09062b383ec7fb57e635e4502\": container with ID starting with 147f937ac108f4b1e4922cf753a6e713c2c97fe09062b383ec7fb57e635e4502 not found: ID does not exist" containerID="147f937ac108f4b1e4922cf753a6e713c2c97fe09062b383ec7fb57e635e4502"
Jan 30 20:25:27 crc kubenswrapper[4712]: I0130 20:25:27.668320 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"147f937ac108f4b1e4922cf753a6e713c2c97fe09062b383ec7fb57e635e4502"} err="failed to get container status \"147f937ac108f4b1e4922cf753a6e713c2c97fe09062b383ec7fb57e635e4502\": rpc error: code = NotFound desc = could not find container \"147f937ac108f4b1e4922cf753a6e713c2c97fe09062b383ec7fb57e635e4502\": container with ID starting with 147f937ac108f4b1e4922cf753a6e713c2c97fe09062b383ec7fb57e635e4502 not found: ID does not exist"
Jan 30 20:25:27 crc kubenswrapper[4712]: I0130 20:25:27.668352 4712 scope.go:117] "RemoveContainer" containerID="763217327b0f554d8a280a53b08e30bb434757e86ca771ec9998f1d3b2547c0a"
Jan 30 20:25:27 crc kubenswrapper[4712]: E0130 20:25:27.668701 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"763217327b0f554d8a280a53b08e30bb434757e86ca771ec9998f1d3b2547c0a\": container with ID starting with 763217327b0f554d8a280a53b08e30bb434757e86ca771ec9998f1d3b2547c0a not found: ID does not exist" containerID="763217327b0f554d8a280a53b08e30bb434757e86ca771ec9998f1d3b2547c0a"
Jan 30 20:25:27 crc kubenswrapper[4712]: I0130 20:25:27.668722 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"763217327b0f554d8a280a53b08e30bb434757e86ca771ec9998f1d3b2547c0a"} err="failed to get container status \"763217327b0f554d8a280a53b08e30bb434757e86ca771ec9998f1d3b2547c0a\": rpc error: code = NotFound desc = could not find container \"763217327b0f554d8a280a53b08e30bb434757e86ca771ec9998f1d3b2547c0a\": container with ID starting with 763217327b0f554d8a280a53b08e30bb434757e86ca771ec9998f1d3b2547c0a not found: ID does not exist"
Jan 30 20:25:27 crc kubenswrapper[4712]: I0130 20:25:27.814484 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="581d967d-4fa4-4451-9328-3e27d63f5a7e" path="/var/lib/kubelet/pods/581d967d-4fa4-4451-9328-3e27d63f5a7e/volumes"
Jan 30 20:25:36 crc kubenswrapper[4712]: I0130 20:25:36.271587 4712 patch_prober.go:28] interesting pod/machine-config-daemon-dwnd7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 30 20:25:36 crc kubenswrapper[4712]: I0130 20:25:36.272231 4712 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 30 20:25:36 crc kubenswrapper[4712]: I0130 20:25:36.272276 4712 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7"
Jan 30 20:25:36 crc kubenswrapper[4712]: I0130 20:25:36.273219 4712 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"956cd74bad97560e54c2d21d82978adb69c71800f569351658a2fa947794e569"} pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 30 20:25:36 crc kubenswrapper[4712]: I0130 20:25:36.273276 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033" containerName="machine-config-daemon" containerID="cri-o://956cd74bad97560e54c2d21d82978adb69c71800f569351658a2fa947794e569" gracePeriod=600
Jan 30 20:25:36 crc kubenswrapper[4712]: E0130 20:25:36.393934 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 20:25:36 crc kubenswrapper[4712]: I0130 20:25:36.651573 4712 generic.go:334] "Generic (PLEG): container finished" podID="75ff6334-72a0-4748-bba6-0efb493c8033" containerID="956cd74bad97560e54c2d21d82978adb69c71800f569351658a2fa947794e569" exitCode=0
Jan 30 20:25:36 crc kubenswrapper[4712]: I0130 20:25:36.651621 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerDied","Data":"956cd74bad97560e54c2d21d82978adb69c71800f569351658a2fa947794e569"}
Jan 30 20:25:36 crc kubenswrapper[4712]: I0130 20:25:36.651970 4712 scope.go:117] "RemoveContainer" containerID="3a53d063c3fa0b150219f6b9873fe223b8e25867e44a92b5caea68f5711f9622"
Jan 30 20:25:36 crc kubenswrapper[4712]: I0130 20:25:36.652705 4712 scope.go:117] "RemoveContainer" containerID="956cd74bad97560e54c2d21d82978adb69c71800f569351658a2fa947794e569"
Jan 30 20:25:36 crc kubenswrapper[4712]: E0130 20:25:36.653032 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 20:25:49 crc kubenswrapper[4712]: I0130 20:25:49.799599 4712 scope.go:117] "RemoveContainer" containerID="956cd74bad97560e54c2d21d82978adb69c71800f569351658a2fa947794e569"
Jan 30 20:25:49 crc kubenswrapper[4712]: E0130 20:25:49.800388 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 20:26:01 crc kubenswrapper[4712]: I0130 20:26:01.799156 4712 scope.go:117] "RemoveContainer" containerID="956cd74bad97560e54c2d21d82978adb69c71800f569351658a2fa947794e569"
Jan 30 20:26:01 crc kubenswrapper[4712]: E0130 20:26:01.799975 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 20:26:14 crc kubenswrapper[4712]: I0130 20:26:14.799725 4712 scope.go:117] "RemoveContainer" containerID="956cd74bad97560e54c2d21d82978adb69c71800f569351658a2fa947794e569"
Jan 30 20:26:14 crc kubenswrapper[4712]: E0130 20:26:14.800658 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 20:26:25 crc kubenswrapper[4712]: I0130 20:26:25.802224 4712 scope.go:117] "RemoveContainer" containerID="956cd74bad97560e54c2d21d82978adb69c71800f569351658a2fa947794e569"
Jan 30 20:26:25 crc kubenswrapper[4712]: E0130 20:26:25.803167 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 20:26:36 crc kubenswrapper[4712]: I0130 20:26:36.411948 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5d6sj"]
Jan 30 20:26:36 crc kubenswrapper[4712]: E0130 20:26:36.412729 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="581d967d-4fa4-4451-9328-3e27d63f5a7e" containerName="extract-content"
Jan 30 20:26:36 crc kubenswrapper[4712]: I0130 20:26:36.412740 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="581d967d-4fa4-4451-9328-3e27d63f5a7e" containerName="extract-content"
Jan 30 20:26:36 crc kubenswrapper[4712]: E0130 20:26:36.412784 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="581d967d-4fa4-4451-9328-3e27d63f5a7e" containerName="registry-server"
Jan 30 20:26:36 crc kubenswrapper[4712]: I0130 20:26:36.412790 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="581d967d-4fa4-4451-9328-3e27d63f5a7e" containerName="registry-server"
Jan 30 20:26:36 crc kubenswrapper[4712]: E0130 20:26:36.412806 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="581d967d-4fa4-4451-9328-3e27d63f5a7e" containerName="extract-utilities"
Jan 30 20:26:36 crc kubenswrapper[4712]: I0130 20:26:36.412988 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="581d967d-4fa4-4451-9328-3e27d63f5a7e" containerName="extract-utilities"
Jan 30 20:26:36 crc kubenswrapper[4712]: I0130 20:26:36.413191 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="581d967d-4fa4-4451-9328-3e27d63f5a7e" containerName="registry-server"
Jan 30 20:26:36 crc kubenswrapper[4712]: I0130 20:26:36.414436 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5d6sj"
Jan 30 20:26:36 crc kubenswrapper[4712]: I0130 20:26:36.436481 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5d6sj"]
Jan 30 20:26:36 crc kubenswrapper[4712]: I0130 20:26:36.487613 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3-catalog-content\") pod \"redhat-operators-5d6sj\" (UID: \"f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3\") " pod="openshift-marketplace/redhat-operators-5d6sj"
Jan 30 20:26:36 crc kubenswrapper[4712]: I0130 20:26:36.487713 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3-utilities\") pod \"redhat-operators-5d6sj\" (UID: \"f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3\") " pod="openshift-marketplace/redhat-operators-5d6sj"
Jan 30 20:26:36 crc kubenswrapper[4712]: I0130 20:26:36.487745 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkm88\" (UniqueName: \"kubernetes.io/projected/f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3-kube-api-access-jkm88\") pod \"redhat-operators-5d6sj\" (UID: \"f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3\") " pod="openshift-marketplace/redhat-operators-5d6sj"
Jan 30 20:26:36 crc kubenswrapper[4712]: I0130 20:26:36.589176 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3-utilities\") pod \"redhat-operators-5d6sj\" (UID: \"f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3\") " pod="openshift-marketplace/redhat-operators-5d6sj"
Jan 30 20:26:36 crc kubenswrapper[4712]: I0130 20:26:36.589241 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkm88\" (UniqueName: \"kubernetes.io/projected/f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3-kube-api-access-jkm88\") pod \"redhat-operators-5d6sj\" (UID: \"f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3\") " pod="openshift-marketplace/redhat-operators-5d6sj"
Jan 30 20:26:36 crc kubenswrapper[4712]: I0130 20:26:36.589378 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3-catalog-content\") pod \"redhat-operators-5d6sj\" (UID: \"f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3\") " pod="openshift-marketplace/redhat-operators-5d6sj"
Jan 30 20:26:36 crc kubenswrapper[4712]: I0130 20:26:36.589657 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3-utilities\") pod \"redhat-operators-5d6sj\" (UID: \"f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3\") " pod="openshift-marketplace/redhat-operators-5d6sj"
Jan 30 20:26:36 crc kubenswrapper[4712]: I0130 20:26:36.589899 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3-catalog-content\") pod \"redhat-operators-5d6sj\" (UID: \"f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3\") " pod="openshift-marketplace/redhat-operators-5d6sj"
Jan 30 20:26:36 crc kubenswrapper[4712]: I0130 20:26:36.617631 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jkm88\" (UniqueName: \"kubernetes.io/projected/f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3-kube-api-access-jkm88\") pod \"redhat-operators-5d6sj\" (UID: \"f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3\") " pod="openshift-marketplace/redhat-operators-5d6sj"
Jan 30 20:26:36 crc kubenswrapper[4712]: I0130 20:26:36.730771 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5d6sj"
Jan 30 20:26:37 crc kubenswrapper[4712]: I0130 20:26:37.281831 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5d6sj"]
Jan 30 20:26:37 crc kubenswrapper[4712]: I0130 20:26:37.324528 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5d6sj" event={"ID":"f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3","Type":"ContainerStarted","Data":"95fddbff115e42288fd41ca823ec85de4a102368ad505ef212848259379ddda7"}
Jan 30 20:26:38 crc kubenswrapper[4712]: I0130 20:26:38.338204 4712 generic.go:334] "Generic (PLEG): container finished" podID="f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3" containerID="66f46c7e9d5dc1ff18b78eb18f33c8845dc41dca8392ae158de4ec5a60e48848" exitCode=0
Jan 30 20:26:38 crc kubenswrapper[4712]: I0130 20:26:38.338568 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5d6sj" event={"ID":"f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3","Type":"ContainerDied","Data":"66f46c7e9d5dc1ff18b78eb18f33c8845dc41dca8392ae158de4ec5a60e48848"}
Jan 30 20:26:38 crc kubenswrapper[4712]: I0130 20:26:38.344581 4712 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 30 20:26:38 crc kubenswrapper[4712]: I0130 20:26:38.799936 4712 scope.go:117] "RemoveContainer" containerID="956cd74bad97560e54c2d21d82978adb69c71800f569351658a2fa947794e569"
Jan 30 20:26:38 crc kubenswrapper[4712]: E0130 20:26:38.800499 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 20:26:40 crc kubenswrapper[4712]: I0130 20:26:40.361571 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5d6sj" event={"ID":"f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3","Type":"ContainerStarted","Data":"a3295e9d623dcd3b817ea569d596ffb0590f65944f61926ce9a8b362cebf513a"}
Jan 30 20:26:45 crc kubenswrapper[4712]: I0130 20:26:45.406377 4712 generic.go:334] "Generic (PLEG): container finished" podID="f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3" containerID="a3295e9d623dcd3b817ea569d596ffb0590f65944f61926ce9a8b362cebf513a" exitCode=0
Jan 30 20:26:45 crc kubenswrapper[4712]: I0130 20:26:45.406436 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5d6sj" event={"ID":"f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3","Type":"ContainerDied","Data":"a3295e9d623dcd3b817ea569d596ffb0590f65944f61926ce9a8b362cebf513a"}
Jan 30 20:26:46 crc kubenswrapper[4712]: I0130 20:26:46.457168 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5d6sj" event={"ID":"f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3","Type":"ContainerStarted","Data":"b916229719f7705d118c0c4a2ae57566e9fa292f06acd21c671e678ec8ba9e0f"}
Jan 30 20:26:46 crc kubenswrapper[4712]: I0130 20:26:46.485131 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5d6sj" podStartSLOduration=2.997345305 podStartE2EDuration="10.485111627s" podCreationTimestamp="2026-01-30 20:26:36 +0000 UTC" firstStartedPulling="2026-01-30 20:26:38.339997337 +0000 UTC m=+12735.247006816" lastFinishedPulling="2026-01-30 20:26:45.827763669 +0000 UTC m=+12742.734773138" observedRunningTime="2026-01-30 20:26:46.483237711 +0000 UTC m=+12743.390247180" watchObservedRunningTime="2026-01-30 20:26:46.485111627 +0000 UTC m=+12743.392121106"
Jan 30 20:26:46 crc kubenswrapper[4712]: I0130 20:26:46.731659 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5d6sj"
Jan 30 20:26:46 crc kubenswrapper[4712]: I0130 20:26:46.731696 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5d6sj"
Jan 30 20:26:47 crc kubenswrapper[4712]: I0130 20:26:47.782358 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5d6sj" podUID="f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3" containerName="registry-server" probeResult="failure" output=<
Jan 30 20:26:47 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 20:26:47 crc kubenswrapper[4712]: >
Jan 30 20:26:50 crc kubenswrapper[4712]: I0130 20:26:50.799321 4712 scope.go:117] "RemoveContainer" containerID="956cd74bad97560e54c2d21d82978adb69c71800f569351658a2fa947794e569"
Jan 30 20:26:50 crc kubenswrapper[4712]: E0130 20:26:50.800119 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 20:26:57 crc kubenswrapper[4712]: I0130 20:26:57.808345 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5d6sj" podUID="f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3" containerName="registry-server" probeResult="failure" output=<
Jan 30 20:26:57 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 20:26:57 crc kubenswrapper[4712]: >
Jan 30 20:27:03 crc kubenswrapper[4712]: I0130 20:27:03.812499 4712 scope.go:117] "RemoveContainer" containerID="956cd74bad97560e54c2d21d82978adb69c71800f569351658a2fa947794e569"
Jan 30 20:27:03 crc kubenswrapper[4712]: E0130 20:27:03.813766 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 20:27:07 crc kubenswrapper[4712]: I0130 20:27:07.802525 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5d6sj" podUID="f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3" containerName="registry-server" probeResult="failure" output=<
Jan 30 20:27:07 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 20:27:07 crc kubenswrapper[4712]: >
Jan 30 20:27:17 crc kubenswrapper[4712]: I0130 20:27:17.794255 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5d6sj" podUID="f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3" containerName="registry-server" probeResult="failure" output=<
Jan 30 20:27:17 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s
Jan 30 20:27:17 crc kubenswrapper[4712]: >
Jan 30 20:27:17 crc kubenswrapper[4712]: I0130 20:27:17.802569 4712 scope.go:117] "RemoveContainer" containerID="956cd74bad97560e54c2d21d82978adb69c71800f569351658a2fa947794e569"
Jan 30 20:27:17 crc kubenswrapper[4712]: E0130 20:27:17.802973 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 20:27:26 crc kubenswrapper[4712]: I0130 20:27:26.829893 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5d6sj"
Jan 30 20:27:26 crc kubenswrapper[4712]: I0130 20:27:26.901436 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5d6sj"
Jan 30 20:27:27 crc kubenswrapper[4712]: I0130 20:27:27.074929 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5d6sj"]
Jan 30 20:27:28 crc kubenswrapper[4712]: I0130 20:27:28.092909 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5d6sj" podUID="f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3" containerName="registry-server" containerID="cri-o://b916229719f7705d118c0c4a2ae57566e9fa292f06acd21c671e678ec8ba9e0f" gracePeriod=2
Jan 30 20:27:29 crc kubenswrapper[4712]: I0130 20:27:29.022005 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5d6sj"
Jan 30 20:27:29 crc kubenswrapper[4712]: I0130 20:27:29.104371 4712 generic.go:334] "Generic (PLEG): container finished" podID="f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3" containerID="b916229719f7705d118c0c4a2ae57566e9fa292f06acd21c671e678ec8ba9e0f" exitCode=0
Jan 30 20:27:29 crc kubenswrapper[4712]: I0130 20:27:29.104415 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5d6sj" event={"ID":"f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3","Type":"ContainerDied","Data":"b916229719f7705d118c0c4a2ae57566e9fa292f06acd21c671e678ec8ba9e0f"}
Jan 30 20:27:29 crc kubenswrapper[4712]: I0130 20:27:29.104426 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5d6sj"
Jan 30 20:27:29 crc kubenswrapper[4712]: I0130 20:27:29.104449 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5d6sj" event={"ID":"f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3","Type":"ContainerDied","Data":"95fddbff115e42288fd41ca823ec85de4a102368ad505ef212848259379ddda7"}
Jan 30 20:27:29 crc kubenswrapper[4712]: I0130 20:27:29.104486 4712 scope.go:117] "RemoveContainer" containerID="b916229719f7705d118c0c4a2ae57566e9fa292f06acd21c671e678ec8ba9e0f"
Jan 30 20:27:29 crc kubenswrapper[4712]: I0130 20:27:29.107545 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3-catalog-content\") pod \"f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3\" (UID: \"f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3\") "
Jan 30 20:27:29 crc kubenswrapper[4712]: I0130 20:27:29.107835 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkm88\" (UniqueName: \"kubernetes.io/projected/f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3-kube-api-access-jkm88\") pod \"f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3\" (UID: \"f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3\") "
Jan 30 20:27:29 crc kubenswrapper[4712]: I0130 20:27:29.107946 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3-utilities\") pod \"f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3\" (UID: \"f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3\") "
Jan 30 20:27:29 crc kubenswrapper[4712]: I0130 20:27:29.108825 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3-utilities" (OuterVolumeSpecName: "utilities") pod "f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3" (UID: "f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 20:27:29 crc kubenswrapper[4712]: I0130 20:27:29.126974 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3-kube-api-access-jkm88" (OuterVolumeSpecName: "kube-api-access-jkm88") pod "f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3" (UID: "f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3"). InnerVolumeSpecName "kube-api-access-jkm88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 20:27:29 crc kubenswrapper[4712]: I0130 20:27:29.133077 4712 scope.go:117] "RemoveContainer" containerID="a3295e9d623dcd3b817ea569d596ffb0590f65944f61926ce9a8b362cebf513a"
Jan 30 20:27:29 crc kubenswrapper[4712]: I0130 20:27:29.178012 4712 scope.go:117] "RemoveContainer" containerID="66f46c7e9d5dc1ff18b78eb18f33c8845dc41dca8392ae158de4ec5a60e48848"
Jan 30 20:27:29 crc kubenswrapper[4712]: I0130 20:27:29.210369 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3" (UID: "f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 20:27:29 crc kubenswrapper[4712]: I0130 20:27:29.211439 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3-utilities\") on node \"crc\" DevicePath \"\""
Jan 30 20:27:29 crc kubenswrapper[4712]: I0130 20:27:29.211468 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 30 20:27:29 crc kubenswrapper[4712]: I0130 20:27:29.211480 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkm88\" (UniqueName: \"kubernetes.io/projected/f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3-kube-api-access-jkm88\") on node \"crc\" DevicePath \"\""
Jan 30 20:27:29 crc kubenswrapper[4712]: I0130 20:27:29.216057 4712 scope.go:117] "RemoveContainer" containerID="b916229719f7705d118c0c4a2ae57566e9fa292f06acd21c671e678ec8ba9e0f"
Jan 30 20:27:29 crc kubenswrapper[4712]: E0130 20:27:29.219388 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b916229719f7705d118c0c4a2ae57566e9fa292f06acd21c671e678ec8ba9e0f\": container with ID starting with b916229719f7705d118c0c4a2ae57566e9fa292f06acd21c671e678ec8ba9e0f not found: ID does not exist" containerID="b916229719f7705d118c0c4a2ae57566e9fa292f06acd21c671e678ec8ba9e0f"
Jan 30 20:27:29 crc kubenswrapper[4712]: I0130 20:27:29.220183 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b916229719f7705d118c0c4a2ae57566e9fa292f06acd21c671e678ec8ba9e0f"} err="failed to get container status \"b916229719f7705d118c0c4a2ae57566e9fa292f06acd21c671e678ec8ba9e0f\": rpc error: code = NotFound desc = could not find container \"b916229719f7705d118c0c4a2ae57566e9fa292f06acd21c671e678ec8ba9e0f\": container with ID starting with b916229719f7705d118c0c4a2ae57566e9fa292f06acd21c671e678ec8ba9e0f not found: ID does not exist"
Jan 30 20:27:29 crc kubenswrapper[4712]: I0130 20:27:29.220237 4712 scope.go:117] "RemoveContainer" containerID="a3295e9d623dcd3b817ea569d596ffb0590f65944f61926ce9a8b362cebf513a"
Jan 30 20:27:29 crc kubenswrapper[4712]: E0130 20:27:29.220775 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3295e9d623dcd3b817ea569d596ffb0590f65944f61926ce9a8b362cebf513a\": container with ID starting with a3295e9d623dcd3b817ea569d596ffb0590f65944f61926ce9a8b362cebf513a not found: ID does not exist" containerID="a3295e9d623dcd3b817ea569d596ffb0590f65944f61926ce9a8b362cebf513a"
Jan 30 20:27:29 crc kubenswrapper[4712]: I0130 20:27:29.220815 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3295e9d623dcd3b817ea569d596ffb0590f65944f61926ce9a8b362cebf513a"} err="failed to get container status \"a3295e9d623dcd3b817ea569d596ffb0590f65944f61926ce9a8b362cebf513a\": rpc error: code = NotFound desc = could not find container \"a3295e9d623dcd3b817ea569d596ffb0590f65944f61926ce9a8b362cebf513a\": container with ID starting with a3295e9d623dcd3b817ea569d596ffb0590f65944f61926ce9a8b362cebf513a not found: ID does not exist"
Jan 30 20:27:29 crc kubenswrapper[4712]: I0130 20:27:29.220836 4712 scope.go:117] "RemoveContainer" containerID="66f46c7e9d5dc1ff18b78eb18f33c8845dc41dca8392ae158de4ec5a60e48848"
Jan 30 20:27:29 crc kubenswrapper[4712]: E0130 20:27:29.221076 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"66f46c7e9d5dc1ff18b78eb18f33c8845dc41dca8392ae158de4ec5a60e48848\": container with ID starting with 66f46c7e9d5dc1ff18b78eb18f33c8845dc41dca8392ae158de4ec5a60e48848 not found: ID does not exist" containerID="66f46c7e9d5dc1ff18b78eb18f33c8845dc41dca8392ae158de4ec5a60e48848"
Jan 30 20:27:29 crc kubenswrapper[4712]: I0130 20:27:29.221095 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"66f46c7e9d5dc1ff18b78eb18f33c8845dc41dca8392ae158de4ec5a60e48848"} err="failed to get container status \"66f46c7e9d5dc1ff18b78eb18f33c8845dc41dca8392ae158de4ec5a60e48848\": rpc error: code = NotFound desc = could not find container \"66f46c7e9d5dc1ff18b78eb18f33c8845dc41dca8392ae158de4ec5a60e48848\": container with ID starting with 66f46c7e9d5dc1ff18b78eb18f33c8845dc41dca8392ae158de4ec5a60e48848 not found: ID does not exist"
Jan 30 20:27:29 crc kubenswrapper[4712]: I0130 20:27:29.459275 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5d6sj"]
Jan 30 20:27:29 crc kubenswrapper[4712]: I0130 20:27:29.475596 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5d6sj"]
Jan 30 20:27:29 crc kubenswrapper[4712]: I0130 20:27:29.811253 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3" path="/var/lib/kubelet/pods/f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3/volumes"
Jan 30 20:27:32 crc kubenswrapper[4712]: I0130 20:27:32.801863 4712 scope.go:117] "RemoveContainer" containerID="956cd74bad97560e54c2d21d82978adb69c71800f569351658a2fa947794e569"
Jan 30 20:27:32 crc kubenswrapper[4712]: E0130 20:27:32.802720 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 20:27:45 crc kubenswrapper[4712]: I0130 20:27:45.799400 4712 scope.go:117] "RemoveContainer" containerID="956cd74bad97560e54c2d21d82978adb69c71800f569351658a2fa947794e569"
Jan 30 20:27:45 crc kubenswrapper[4712]: E0130 20:27:45.800206 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 20:27:57 crc kubenswrapper[4712]: I0130 20:27:57.799577 4712 scope.go:117] "RemoveContainer" containerID="956cd74bad97560e54c2d21d82978adb69c71800f569351658a2fa947794e569"
Jan 30 20:27:57 crc kubenswrapper[4712]: E0130 20:27:57.801372 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 20:28:07 crc kubenswrapper[4712]: I0130 20:28:07.240347 4712 scope.go:117] "RemoveContainer" containerID="39ffc6d82568303bc13877099f7dfefcb94199edce61bac05bc8bba2acfff1a5"
Jan 30 20:28:08 crc kubenswrapper[4712]: I0130 20:28:08.567067 4712 generic.go:334] "Generic (PLEG): container finished" podID="222ccb2d-5a6d-4378-a07a-996aed6ec5a8" containerID="b012278c0e412214fc24b739970485e6a7d2670a7dae92b04f20dfc58ff8d016" exitCode=0
Jan 30 20:28:08 crc kubenswrapper[4712]: I0130 20:28:08.567264 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-kcw26/must-gather-tgzqk" event={"ID":"222ccb2d-5a6d-4378-a07a-996aed6ec5a8","Type":"ContainerDied","Data":"b012278c0e412214fc24b739970485e6a7d2670a7dae92b04f20dfc58ff8d016"}
Jan 30 20:28:08 crc kubenswrapper[4712]: I0130 20:28:08.569232 4712 scope.go:117] "RemoveContainer" containerID="b012278c0e412214fc24b739970485e6a7d2670a7dae92b04f20dfc58ff8d016"
Jan 30 20:28:09 crc kubenswrapper[4712]: I0130 20:28:09.031095 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kcw26_must-gather-tgzqk_222ccb2d-5a6d-4378-a07a-996aed6ec5a8/gather/0.log"
Jan 30 20:28:12 crc kubenswrapper[4712]: I0130 20:28:12.799843 4712 scope.go:117] "RemoveContainer" containerID="956cd74bad97560e54c2d21d82978adb69c71800f569351658a2fa947794e569"
Jan 30 20:28:12 crc kubenswrapper[4712]: E0130 20:28:12.800576 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 20:28:22 crc kubenswrapper[4712]: I0130 20:28:22.394234 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-kcw26/must-gather-tgzqk"]
Jan 30 20:28:22 crc kubenswrapper[4712]: I0130 20:28:22.396540 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-kcw26/must-gather-tgzqk" podUID="222ccb2d-5a6d-4378-a07a-996aed6ec5a8" containerName="copy" containerID="cri-o://bc735a24fec9129d6a15e5c02cd0e6fea7aeb70a4a9003cfce8a07e5d41d9986" gracePeriod=2
Jan 30 20:28:22 crc kubenswrapper[4712]: I0130 20:28:22.407505 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-kcw26/must-gather-tgzqk"]
Jan 30 20:28:22 crc kubenswrapper[4712]: I0130 20:28:22.868483 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kcw26_must-gather-tgzqk_222ccb2d-5a6d-4378-a07a-996aed6ec5a8/copy/0.log"
Jan 30 20:28:22 crc kubenswrapper[4712]: I0130 20:28:22.870166 4712 generic.go:334] "Generic (PLEG): container finished" podID="222ccb2d-5a6d-4378-a07a-996aed6ec5a8" containerID="bc735a24fec9129d6a15e5c02cd0e6fea7aeb70a4a9003cfce8a07e5d41d9986" exitCode=143
Jan 30 20:28:23 crc kubenswrapper[4712]: I0130 20:28:23.298020 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kcw26_must-gather-tgzqk_222ccb2d-5a6d-4378-a07a-996aed6ec5a8/copy/0.log"
Jan 30 20:28:23 crc kubenswrapper[4712]: I0130 20:28:23.298724 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kcw26/must-gather-tgzqk"
Jan 30 20:28:23 crc kubenswrapper[4712]: I0130 20:28:23.467660 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/222ccb2d-5a6d-4378-a07a-996aed6ec5a8-must-gather-output\") pod \"222ccb2d-5a6d-4378-a07a-996aed6ec5a8\" (UID: \"222ccb2d-5a6d-4378-a07a-996aed6ec5a8\") "
Jan 30 20:28:23 crc kubenswrapper[4712]: I0130 20:28:23.467888 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2rmg\" (UniqueName: \"kubernetes.io/projected/222ccb2d-5a6d-4378-a07a-996aed6ec5a8-kube-api-access-d2rmg\") pod \"222ccb2d-5a6d-4378-a07a-996aed6ec5a8\" (UID: \"222ccb2d-5a6d-4378-a07a-996aed6ec5a8\") "
Jan 30 20:28:23 crc kubenswrapper[4712]: I0130 20:28:23.476104 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/222ccb2d-5a6d-4378-a07a-996aed6ec5a8-kube-api-access-d2rmg" (OuterVolumeSpecName: "kube-api-access-d2rmg") pod "222ccb2d-5a6d-4378-a07a-996aed6ec5a8" (UID: "222ccb2d-5a6d-4378-a07a-996aed6ec5a8"). InnerVolumeSpecName "kube-api-access-d2rmg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 20:28:23 crc kubenswrapper[4712]: I0130 20:28:23.570075 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2rmg\" (UniqueName: \"kubernetes.io/projected/222ccb2d-5a6d-4378-a07a-996aed6ec5a8-kube-api-access-d2rmg\") on node \"crc\" DevicePath \"\""
Jan 30 20:28:23 crc kubenswrapper[4712]: I0130 20:28:23.602659 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/222ccb2d-5a6d-4378-a07a-996aed6ec5a8-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "222ccb2d-5a6d-4378-a07a-996aed6ec5a8" (UID: "222ccb2d-5a6d-4378-a07a-996aed6ec5a8"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 30 20:28:23 crc kubenswrapper[4712]: I0130 20:28:23.671946 4712 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/222ccb2d-5a6d-4378-a07a-996aed6ec5a8-must-gather-output\") on node \"crc\" DevicePath \"\""
Jan 30 20:28:23 crc kubenswrapper[4712]: I0130 20:28:23.846040 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="222ccb2d-5a6d-4378-a07a-996aed6ec5a8" path="/var/lib/kubelet/pods/222ccb2d-5a6d-4378-a07a-996aed6ec5a8/volumes"
Jan 30 20:28:23 crc kubenswrapper[4712]: I0130 20:28:23.887109 4712 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-kcw26_must-gather-tgzqk_222ccb2d-5a6d-4378-a07a-996aed6ec5a8/copy/0.log"
Jan 30 20:28:23 crc kubenswrapper[4712]: I0130 20:28:23.888358 4712 scope.go:117] "RemoveContainer" containerID="bc735a24fec9129d6a15e5c02cd0e6fea7aeb70a4a9003cfce8a07e5d41d9986"
Jan 30 20:28:23 crc kubenswrapper[4712]: I0130 20:28:23.888714 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-kcw26/must-gather-tgzqk"
Jan 30 20:28:23 crc kubenswrapper[4712]: I0130 20:28:23.940970 4712 scope.go:117] "RemoveContainer" containerID="b012278c0e412214fc24b739970485e6a7d2670a7dae92b04f20dfc58ff8d016"
Jan 30 20:28:26 crc kubenswrapper[4712]: I0130 20:28:26.800915 4712 scope.go:117] "RemoveContainer" containerID="956cd74bad97560e54c2d21d82978adb69c71800f569351658a2fa947794e569"
Jan 30 20:28:26 crc kubenswrapper[4712]: E0130 20:28:26.802066 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 20:28:38 crc kubenswrapper[4712]: I0130 20:28:38.800018 4712 scope.go:117] "RemoveContainer" containerID="956cd74bad97560e54c2d21d82978adb69c71800f569351658a2fa947794e569"
Jan 30 20:28:38 crc kubenswrapper[4712]: E0130 20:28:38.800675 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 20:28:51 crc kubenswrapper[4712]: I0130 20:28:51.800143 4712 scope.go:117] "RemoveContainer" containerID="956cd74bad97560e54c2d21d82978adb69c71800f569351658a2fa947794e569"
Jan 30 20:28:51 crc kubenswrapper[4712]: E0130 20:28:51.802413 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 20:29:03 crc kubenswrapper[4712]: I0130 20:29:03.812098 4712 scope.go:117] "RemoveContainer" containerID="956cd74bad97560e54c2d21d82978adb69c71800f569351658a2fa947794e569"
Jan 30 20:29:03 crc kubenswrapper[4712]: E0130 20:29:03.812734 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 20:29:07 crc kubenswrapper[4712]: I0130 20:29:07.340541 4712 scope.go:117] "RemoveContainer" containerID="235d0d779afaf68ae0c0f7629142b32701ddd2b3741089acf5d702235273313c"
Jan 30 20:29:14 crc kubenswrapper[4712]: I0130 20:29:14.814771 4712 scope.go:117] "RemoveContainer" containerID="956cd74bad97560e54c2d21d82978adb69c71800f569351658a2fa947794e569"
Jan 30 20:29:14 crc kubenswrapper[4712]: E0130 20:29:14.815617 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 20:29:26 crc kubenswrapper[4712]: I0130 20:29:26.801623 4712 scope.go:117] "RemoveContainer" containerID="956cd74bad97560e54c2d21d82978adb69c71800f569351658a2fa947794e569"
Jan 30 20:29:26 crc kubenswrapper[4712]: E0130 20:29:26.802652 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 20:29:39 crc kubenswrapper[4712]: I0130 20:29:39.800065 4712 scope.go:117] "RemoveContainer" containerID="956cd74bad97560e54c2d21d82978adb69c71800f569351658a2fa947794e569"
Jan 30 20:29:39 crc kubenswrapper[4712]: E0130 20:29:39.800863 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 20:29:52 crc kubenswrapper[4712]: I0130 20:29:52.800407 4712 scope.go:117] "RemoveContainer" containerID="956cd74bad97560e54c2d21d82978adb69c71800f569351658a2fa947794e569"
Jan 30 20:29:52 crc kubenswrapper[4712]: E0130 20:29:52.801332 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 20:30:00 crc kubenswrapper[4712]: I0130 20:30:00.333432 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496750-jrsm8"]
Jan 30 20:30:00 crc kubenswrapper[4712]: E0130 20:30:00.339891 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3" containerName="extract-utilities"
Jan 30 20:30:00 crc kubenswrapper[4712]: I0130 20:30:00.339928 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3" containerName="extract-utilities"
Jan 30 20:30:00 crc kubenswrapper[4712]: E0130 20:30:00.339946 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="222ccb2d-5a6d-4378-a07a-996aed6ec5a8" containerName="copy"
Jan 30 20:30:00 crc kubenswrapper[4712]: I0130 20:30:00.339955 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="222ccb2d-5a6d-4378-a07a-996aed6ec5a8" containerName="copy"
Jan 30 20:30:00 crc kubenswrapper[4712]: E0130 20:30:00.339975 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="222ccb2d-5a6d-4378-a07a-996aed6ec5a8" containerName="gather"
Jan 30 20:30:00 crc kubenswrapper[4712]: I0130 20:30:00.339983 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="222ccb2d-5a6d-4378-a07a-996aed6ec5a8" containerName="gather"
Jan 30 20:30:00 crc kubenswrapper[4712]: E0130 20:30:00.339995 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3" containerName="extract-content"
Jan 30 20:30:00 crc kubenswrapper[4712]: I0130 20:30:00.340003 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3" containerName="extract-content"
Jan 30 20:30:00 crc kubenswrapper[4712]: E0130 20:30:00.340031 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3" containerName="registry-server"
Jan 30 20:30:00 crc kubenswrapper[4712]: I0130 20:30:00.340039 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3" containerName="registry-server"
Jan 30 20:30:00 crc kubenswrapper[4712]: I0130 20:30:00.342455 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="222ccb2d-5a6d-4378-a07a-996aed6ec5a8" containerName="copy"
Jan 30 20:30:00 crc kubenswrapper[4712]: I0130 20:30:00.342497 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3e988fb-bf2a-41e1-8e9d-2b11a5f387c3" containerName="registry-server"
Jan 30 20:30:00 crc kubenswrapper[4712]: I0130 20:30:00.342516 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="222ccb2d-5a6d-4378-a07a-996aed6ec5a8" containerName="gather"
Jan 30 20:30:00 crc kubenswrapper[4712]: I0130 20:30:00.346918 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496750-jrsm8"
Jan 30 20:30:00 crc kubenswrapper[4712]: I0130 20:30:00.359779 4712 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 30 20:30:00 crc kubenswrapper[4712]: I0130 20:30:00.365357 4712 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 30 20:30:00 crc kubenswrapper[4712]: I0130 20:30:00.480568 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496750-jrsm8"]
Jan 30 20:30:00 crc kubenswrapper[4712]: I0130 20:30:00.493995 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/424c2db3-eb26-4e09-aeb1-de6d60228b74-secret-volume\") pod \"collect-profiles-29496750-jrsm8\" (UID: \"424c2db3-eb26-4e09-aeb1-de6d60228b74\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496750-jrsm8"
Jan 30 20:30:00 crc kubenswrapper[4712]: I0130 20:30:00.494071 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-df7sv\" (UniqueName: \"kubernetes.io/projected/424c2db3-eb26-4e09-aeb1-de6d60228b74-kube-api-access-df7sv\") pod \"collect-profiles-29496750-jrsm8\" (UID: \"424c2db3-eb26-4e09-aeb1-de6d60228b74\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496750-jrsm8"
Jan 30 20:30:00 crc kubenswrapper[4712]: I0130 20:30:00.494700 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/424c2db3-eb26-4e09-aeb1-de6d60228b74-config-volume\") pod \"collect-profiles-29496750-jrsm8\" (UID: \"424c2db3-eb26-4e09-aeb1-de6d60228b74\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496750-jrsm8"
Jan 30 20:30:00 crc kubenswrapper[4712]: I0130 20:30:00.597149 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/424c2db3-eb26-4e09-aeb1-de6d60228b74-secret-volume\") pod \"collect-profiles-29496750-jrsm8\" (UID: \"424c2db3-eb26-4e09-aeb1-de6d60228b74\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496750-jrsm8"
Jan 30 20:30:00 crc kubenswrapper[4712]: I0130 20:30:00.597235 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-df7sv\" (UniqueName: \"kubernetes.io/projected/424c2db3-eb26-4e09-aeb1-de6d60228b74-kube-api-access-df7sv\") pod \"collect-profiles-29496750-jrsm8\" (UID: \"424c2db3-eb26-4e09-aeb1-de6d60228b74\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496750-jrsm8"
Jan 30 20:30:00 crc kubenswrapper[4712]: I0130 20:30:00.597433 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/424c2db3-eb26-4e09-aeb1-de6d60228b74-config-volume\") pod \"collect-profiles-29496750-jrsm8\" (UID: \"424c2db3-eb26-4e09-aeb1-de6d60228b74\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496750-jrsm8"
Jan 30 20:30:00 crc kubenswrapper[4712]: I0130 20:30:00.604431 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/424c2db3-eb26-4e09-aeb1-de6d60228b74-config-volume\") pod \"collect-profiles-29496750-jrsm8\" (UID: \"424c2db3-eb26-4e09-aeb1-de6d60228b74\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496750-jrsm8"
Jan 30 20:30:00 crc kubenswrapper[4712]: I0130 20:30:00.618424 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/424c2db3-eb26-4e09-aeb1-de6d60228b74-secret-volume\") pod \"collect-profiles-29496750-jrsm8\" (UID: \"424c2db3-eb26-4e09-aeb1-de6d60228b74\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496750-jrsm8"
Jan 30 20:30:00 crc kubenswrapper[4712]: I0130 20:30:00.624867 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-df7sv\" (UniqueName: \"kubernetes.io/projected/424c2db3-eb26-4e09-aeb1-de6d60228b74-kube-api-access-df7sv\") pod \"collect-profiles-29496750-jrsm8\" (UID: \"424c2db3-eb26-4e09-aeb1-de6d60228b74\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29496750-jrsm8"
Jan 30 20:30:00 crc kubenswrapper[4712]: I0130 20:30:00.678756 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496750-jrsm8"
Jan 30 20:30:01 crc kubenswrapper[4712]: I0130 20:30:01.460686 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496750-jrsm8"]
Jan 30 20:30:01 crc kubenswrapper[4712]: W0130 20:30:01.479715 4712 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod424c2db3_eb26_4e09_aeb1_de6d60228b74.slice/crio-8dba6dba940e7df5dcd34b6539cb0c292c24fad7202ac1455e470de30674da85 WatchSource:0}: Error finding container 8dba6dba940e7df5dcd34b6539cb0c292c24fad7202ac1455e470de30674da85: Status 404 returned error can't find the container with id 8dba6dba940e7df5dcd34b6539cb0c292c24fad7202ac1455e470de30674da85
Jan 30 20:30:01 crc kubenswrapper[4712]: I0130 20:30:01.917767 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496750-jrsm8" event={"ID":"424c2db3-eb26-4e09-aeb1-de6d60228b74","Type":"ContainerStarted","Data":"957b4011a11cfdda8f5e0ed1c6b4e2397b7bef9178d4370b5e8aec013fedef07"}
Jan 30 20:30:01 crc kubenswrapper[4712]: I0130 20:30:01.918212 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496750-jrsm8" event={"ID":"424c2db3-eb26-4e09-aeb1-de6d60228b74","Type":"ContainerStarted","Data":"8dba6dba940e7df5dcd34b6539cb0c292c24fad7202ac1455e470de30674da85"}
Jan 30 20:30:01 crc kubenswrapper[4712]: I0130 20:30:01.945210 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29496750-jrsm8" podStartSLOduration=1.943975913 podStartE2EDuration="1.943975913s" podCreationTimestamp="2026-01-30 20:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-30 20:30:01.935383086 +0000 UTC m=+12938.842392565" watchObservedRunningTime="2026-01-30 20:30:01.943975913 +0000 UTC m=+12938.850985412"
Jan 30 20:30:02 crc kubenswrapper[4712]: I0130 20:30:02.929067 4712 generic.go:334] "Generic (PLEG): container finished" podID="424c2db3-eb26-4e09-aeb1-de6d60228b74" containerID="957b4011a11cfdda8f5e0ed1c6b4e2397b7bef9178d4370b5e8aec013fedef07" exitCode=0
Jan 30 20:30:02 crc kubenswrapper[4712]: I0130 20:30:02.929161 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496750-jrsm8" event={"ID":"424c2db3-eb26-4e09-aeb1-de6d60228b74","Type":"ContainerDied","Data":"957b4011a11cfdda8f5e0ed1c6b4e2397b7bef9178d4370b5e8aec013fedef07"}
Jan 30 20:30:03 crc kubenswrapper[4712]: I0130 20:30:03.821643 4712 scope.go:117] "RemoveContainer" containerID="956cd74bad97560e54c2d21d82978adb69c71800f569351658a2fa947794e569"
Jan 30 20:30:03 crc kubenswrapper[4712]: E0130 20:30:03.822122 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 20:30:04 crc kubenswrapper[4712]: I0130 20:30:04.559291 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496750-jrsm8"
Jan 30 20:30:04 crc kubenswrapper[4712]: I0130 20:30:04.602892 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-df7sv\" (UniqueName: \"kubernetes.io/projected/424c2db3-eb26-4e09-aeb1-de6d60228b74-kube-api-access-df7sv\") pod \"424c2db3-eb26-4e09-aeb1-de6d60228b74\" (UID: \"424c2db3-eb26-4e09-aeb1-de6d60228b74\") "
Jan 30 20:30:04 crc kubenswrapper[4712]: I0130 20:30:04.603877 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/424c2db3-eb26-4e09-aeb1-de6d60228b74-config-volume" (OuterVolumeSpecName: "config-volume") pod "424c2db3-eb26-4e09-aeb1-de6d60228b74" (UID: "424c2db3-eb26-4e09-aeb1-de6d60228b74"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 30 20:30:04 crc kubenswrapper[4712]: I0130 20:30:04.602939 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/424c2db3-eb26-4e09-aeb1-de6d60228b74-config-volume\") pod \"424c2db3-eb26-4e09-aeb1-de6d60228b74\" (UID: \"424c2db3-eb26-4e09-aeb1-de6d60228b74\") "
Jan 30 20:30:04 crc kubenswrapper[4712]: I0130 20:30:04.604149 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/424c2db3-eb26-4e09-aeb1-de6d60228b74-secret-volume\") pod \"424c2db3-eb26-4e09-aeb1-de6d60228b74\" (UID: \"424c2db3-eb26-4e09-aeb1-de6d60228b74\") "
Jan 30 20:30:04 crc kubenswrapper[4712]: I0130 20:30:04.605209 4712 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/424c2db3-eb26-4e09-aeb1-de6d60228b74-config-volume\") on node \"crc\" DevicePath \"\""
Jan 30 20:30:04 crc kubenswrapper[4712]: I0130 20:30:04.618961 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/424c2db3-eb26-4e09-aeb1-de6d60228b74-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "424c2db3-eb26-4e09-aeb1-de6d60228b74" (UID: "424c2db3-eb26-4e09-aeb1-de6d60228b74"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 30 20:30:04 crc kubenswrapper[4712]: I0130 20:30:04.619841 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/424c2db3-eb26-4e09-aeb1-de6d60228b74-kube-api-access-df7sv" (OuterVolumeSpecName: "kube-api-access-df7sv") pod "424c2db3-eb26-4e09-aeb1-de6d60228b74" (UID: "424c2db3-eb26-4e09-aeb1-de6d60228b74"). InnerVolumeSpecName "kube-api-access-df7sv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 30 20:30:04 crc kubenswrapper[4712]: I0130 20:30:04.706691 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-df7sv\" (UniqueName: \"kubernetes.io/projected/424c2db3-eb26-4e09-aeb1-de6d60228b74-kube-api-access-df7sv\") on node \"crc\" DevicePath \"\""
Jan 30 20:30:04 crc kubenswrapper[4712]: I0130 20:30:04.706724 4712 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/424c2db3-eb26-4e09-aeb1-de6d60228b74-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 30 20:30:04 crc kubenswrapper[4712]: I0130 20:30:04.950632 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29496750-jrsm8" event={"ID":"424c2db3-eb26-4e09-aeb1-de6d60228b74","Type":"ContainerDied","Data":"8dba6dba940e7df5dcd34b6539cb0c292c24fad7202ac1455e470de30674da85"}
Jan 30 20:30:04 crc kubenswrapper[4712]: I0130 20:30:04.951489 4712 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8dba6dba940e7df5dcd34b6539cb0c292c24fad7202ac1455e470de30674da85"
Jan 30 20:30:04 crc kubenswrapper[4712]: I0130 20:30:04.950696 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29496750-jrsm8"
Jan 30 20:30:05 crc kubenswrapper[4712]: I0130 20:30:05.662476 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496705-hnzzt"]
Jan 30 20:30:05 crc kubenswrapper[4712]: I0130 20:30:05.671823 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29496705-hnzzt"]
Jan 30 20:30:05 crc kubenswrapper[4712]: I0130 20:30:05.820461 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20037386-6f8b-4998-ba1d-25a993410f6b" path="/var/lib/kubelet/pods/20037386-6f8b-4998-ba1d-25a993410f6b/volumes"
Jan 30 20:30:07 crc kubenswrapper[4712]: I0130 20:30:07.525405 4712 scope.go:117] "RemoveContainer" containerID="a692591e809d643e500bb66a726b0c6e98ff6ae17ceeff131e822a521a1009bb"
Jan 30 20:30:16 crc kubenswrapper[4712]: I0130 20:30:16.799770 4712 scope.go:117] "RemoveContainer" containerID="956cd74bad97560e54c2d21d82978adb69c71800f569351658a2fa947794e569"
Jan 30 20:30:16 crc kubenswrapper[4712]: E0130 20:30:16.800545 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 20:30:28 crc kubenswrapper[4712]: I0130 20:30:28.799749 4712 scope.go:117] "RemoveContainer" containerID="956cd74bad97560e54c2d21d82978adb69c71800f569351658a2fa947794e569"
Jan 30 20:30:28 crc kubenswrapper[4712]: E0130 20:30:28.802792 4712 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-dwnd7_openshift-machine-config-operator(75ff6334-72a0-4748-bba6-0efb493c8033)\"" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" podUID="75ff6334-72a0-4748-bba6-0efb493c8033"
Jan 30 20:30:43 crc kubenswrapper[4712]: I0130 20:30:43.808900 4712 scope.go:117] "RemoveContainer" containerID="956cd74bad97560e54c2d21d82978adb69c71800f569351658a2fa947794e569"
Jan 30 20:30:44 crc kubenswrapper[4712]: I0130 20:30:44.371690 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-dwnd7" event={"ID":"75ff6334-72a0-4748-bba6-0efb493c8033","Type":"ContainerStarted","Data":"6251e5ecf58b8577892227be667a1d6eed98deb9992813653030cdaa266a4917"}
Jan 30 20:30:52 crc kubenswrapper[4712]: I0130 20:30:52.857456 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-dsq9l"]
Jan 30 20:30:52 crc kubenswrapper[4712]: E0130 20:30:52.858521 4712 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="424c2db3-eb26-4e09-aeb1-de6d60228b74" containerName="collect-profiles"
Jan 30 20:30:52 crc kubenswrapper[4712]: I0130 20:30:52.858538 4712 state_mem.go:107] "Deleted CPUSet assignment" podUID="424c2db3-eb26-4e09-aeb1-de6d60228b74" containerName="collect-profiles"
Jan 30 20:30:52 crc kubenswrapper[4712]: I0130 20:30:52.858760 4712 memory_manager.go:354] "RemoveStaleState removing state" podUID="424c2db3-eb26-4e09-aeb1-de6d60228b74" containerName="collect-profiles"
Jan 30 20:30:52 crc kubenswrapper[4712]: I0130 20:30:52.860485 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dsq9l"
Jan 30 20:30:52 crc kubenswrapper[4712]: I0130 20:30:52.876051 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dsq9l"]
Jan 30 20:30:52 crc kubenswrapper[4712]: I0130 20:30:52.978856 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqzs9\" (UniqueName: \"kubernetes.io/projected/b09a0986-54de-4b9f-9e77-1f12e769f1a6-kube-api-access-gqzs9\") pod \"certified-operators-dsq9l\" (UID: \"b09a0986-54de-4b9f-9e77-1f12e769f1a6\") " pod="openshift-marketplace/certified-operators-dsq9l"
Jan 30 20:30:52 crc kubenswrapper[4712]: I0130 20:30:52.979029 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b09a0986-54de-4b9f-9e77-1f12e769f1a6-catalog-content\") pod \"certified-operators-dsq9l\" (UID: \"b09a0986-54de-4b9f-9e77-1f12e769f1a6\") " pod="openshift-marketplace/certified-operators-dsq9l"
Jan 30 20:30:52 crc kubenswrapper[4712]: I0130 20:30:52.979064 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b09a0986-54de-4b9f-9e77-1f12e769f1a6-utilities\") pod \"certified-operators-dsq9l\" (UID: \"b09a0986-54de-4b9f-9e77-1f12e769f1a6\") " pod="openshift-marketplace/certified-operators-dsq9l"
Jan 30 20:30:53 crc kubenswrapper[4712]: I0130 20:30:53.080364 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b09a0986-54de-4b9f-9e77-1f12e769f1a6-catalog-content\") pod \"certified-operators-dsq9l\" (UID: \"b09a0986-54de-4b9f-9e77-1f12e769f1a6\") " pod="openshift-marketplace/certified-operators-dsq9l"
Jan 30 20:30:53 crc kubenswrapper[4712]: I0130 20:30:53.080411 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b09a0986-54de-4b9f-9e77-1f12e769f1a6-utilities\") pod \"certified-operators-dsq9l\" (UID: \"b09a0986-54de-4b9f-9e77-1f12e769f1a6\") " pod="openshift-marketplace/certified-operators-dsq9l"
Jan 30 20:30:53 crc kubenswrapper[4712]: I0130 20:30:53.080468 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqzs9\" (UniqueName: \"kubernetes.io/projected/b09a0986-54de-4b9f-9e77-1f12e769f1a6-kube-api-access-gqzs9\") pod \"certified-operators-dsq9l\" (UID: \"b09a0986-54de-4b9f-9e77-1f12e769f1a6\") " pod="openshift-marketplace/certified-operators-dsq9l"
Jan 30 20:30:53 crc kubenswrapper[4712]: I0130 20:30:53.088871 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b09a0986-54de-4b9f-9e77-1f12e769f1a6-catalog-content\") pod \"certified-operators-dsq9l\" (UID: \"b09a0986-54de-4b9f-9e77-1f12e769f1a6\") " pod="openshift-marketplace/certified-operators-dsq9l"
Jan 30 20:30:53 crc kubenswrapper[4712]: I0130 20:30:53.090205 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b09a0986-54de-4b9f-9e77-1f12e769f1a6-utilities\") pod \"certified-operators-dsq9l\" (UID: \"b09a0986-54de-4b9f-9e77-1f12e769f1a6\") " pod="openshift-marketplace/certified-operators-dsq9l"
Jan 30 20:30:53 crc kubenswrapper[4712]: I0130 20:30:53.114262 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gqzs9\" (UniqueName: \"kubernetes.io/projected/b09a0986-54de-4b9f-9e77-1f12e769f1a6-kube-api-access-gqzs9\") pod \"certified-operators-dsq9l\" (UID: \"b09a0986-54de-4b9f-9e77-1f12e769f1a6\") " pod="openshift-marketplace/certified-operators-dsq9l"
Jan 30 20:30:53 crc kubenswrapper[4712]: I0130 20:30:53.236107 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dsq9l" Jan 30 20:30:54 crc kubenswrapper[4712]: I0130 20:30:54.178703 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-dsq9l"] Jan 30 20:30:54 crc kubenswrapper[4712]: I0130 20:30:54.468070 4712 generic.go:334] "Generic (PLEG): container finished" podID="b09a0986-54de-4b9f-9e77-1f12e769f1a6" containerID="a47df34c775190f688cc7f1c16eed4d93b413318846b04037b6be50e161b07db" exitCode=0 Jan 30 20:30:54 crc kubenswrapper[4712]: I0130 20:30:54.468117 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dsq9l" event={"ID":"b09a0986-54de-4b9f-9e77-1f12e769f1a6","Type":"ContainerDied","Data":"a47df34c775190f688cc7f1c16eed4d93b413318846b04037b6be50e161b07db"} Jan 30 20:30:54 crc kubenswrapper[4712]: I0130 20:30:54.468555 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dsq9l" event={"ID":"b09a0986-54de-4b9f-9e77-1f12e769f1a6","Type":"ContainerStarted","Data":"e300b574cf952a8782391b00b50c83ca8cb4eea39880d040d3d578dab9d9b43c"} Jan 30 20:30:55 crc kubenswrapper[4712]: I0130 20:30:55.478676 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dsq9l" event={"ID":"b09a0986-54de-4b9f-9e77-1f12e769f1a6","Type":"ContainerStarted","Data":"976d9f863008fc43afa59a6e3415726e01563ebf9a918221474ceb86ac390b07"} Jan 30 20:30:55 crc kubenswrapper[4712]: I0130 20:30:55.850240 4712 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p7c6p"] Jan 30 20:30:55 crc kubenswrapper[4712]: I0130 20:30:55.861173 4712 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p7c6p" Jan 30 20:30:55 crc kubenswrapper[4712]: I0130 20:30:55.900898 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p7c6p"] Jan 30 20:30:56 crc kubenswrapper[4712]: I0130 20:30:56.039094 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66860bbd-60f5-4ee8-9993-f50afced0c11-utilities\") pod \"redhat-marketplace-p7c6p\" (UID: \"66860bbd-60f5-4ee8-9993-f50afced0c11\") " pod="openshift-marketplace/redhat-marketplace-p7c6p" Jan 30 20:30:56 crc kubenswrapper[4712]: I0130 20:30:56.039672 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66860bbd-60f5-4ee8-9993-f50afced0c11-catalog-content\") pod \"redhat-marketplace-p7c6p\" (UID: \"66860bbd-60f5-4ee8-9993-f50afced0c11\") " pod="openshift-marketplace/redhat-marketplace-p7c6p" Jan 30 20:30:56 crc kubenswrapper[4712]: I0130 20:30:56.040015 4712 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h72hn\" (UniqueName: \"kubernetes.io/projected/66860bbd-60f5-4ee8-9993-f50afced0c11-kube-api-access-h72hn\") pod \"redhat-marketplace-p7c6p\" (UID: \"66860bbd-60f5-4ee8-9993-f50afced0c11\") " pod="openshift-marketplace/redhat-marketplace-p7c6p" Jan 30 20:30:56 crc kubenswrapper[4712]: I0130 20:30:56.141766 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66860bbd-60f5-4ee8-9993-f50afced0c11-catalog-content\") pod \"redhat-marketplace-p7c6p\" (UID: 
\"66860bbd-60f5-4ee8-9993-f50afced0c11\") " pod="openshift-marketplace/redhat-marketplace-p7c6p" Jan 30 20:30:56 crc kubenswrapper[4712]: I0130 20:30:56.141843 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h72hn\" (UniqueName: \"kubernetes.io/projected/66860bbd-60f5-4ee8-9993-f50afced0c11-kube-api-access-h72hn\") pod \"redhat-marketplace-p7c6p\" (UID: \"66860bbd-60f5-4ee8-9993-f50afced0c11\") " pod="openshift-marketplace/redhat-marketplace-p7c6p" Jan 30 20:30:56 crc kubenswrapper[4712]: I0130 20:30:56.141879 4712 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66860bbd-60f5-4ee8-9993-f50afced0c11-utilities\") pod \"redhat-marketplace-p7c6p\" (UID: \"66860bbd-60f5-4ee8-9993-f50afced0c11\") " pod="openshift-marketplace/redhat-marketplace-p7c6p" Jan 30 20:30:56 crc kubenswrapper[4712]: I0130 20:30:56.142409 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66860bbd-60f5-4ee8-9993-f50afced0c11-utilities\") pod \"redhat-marketplace-p7c6p\" (UID: \"66860bbd-60f5-4ee8-9993-f50afced0c11\") " pod="openshift-marketplace/redhat-marketplace-p7c6p" Jan 30 20:30:56 crc kubenswrapper[4712]: I0130 20:30:56.142414 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66860bbd-60f5-4ee8-9993-f50afced0c11-catalog-content\") pod \"redhat-marketplace-p7c6p\" (UID: \"66860bbd-60f5-4ee8-9993-f50afced0c11\") " pod="openshift-marketplace/redhat-marketplace-p7c6p" Jan 30 20:30:56 crc kubenswrapper[4712]: I0130 20:30:56.167398 4712 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h72hn\" (UniqueName: \"kubernetes.io/projected/66860bbd-60f5-4ee8-9993-f50afced0c11-kube-api-access-h72hn\") pod \"redhat-marketplace-p7c6p\" (UID: \"66860bbd-60f5-4ee8-9993-f50afced0c11\") " pod="openshift-marketplace/redhat-marketplace-p7c6p" Jan 30 20:30:56 crc kubenswrapper[4712]: I0130 20:30:56.186660 4712 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p7c6p" Jan 30 20:30:56 crc kubenswrapper[4712]: I0130 20:30:56.740433 4712 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p7c6p"] Jan 30 20:30:57 crc kubenswrapper[4712]: I0130 20:30:57.511547 4712 generic.go:334] "Generic (PLEG): container finished" podID="66860bbd-60f5-4ee8-9993-f50afced0c11" containerID="24716c4dbaecd2bdbe6abf7b2ebbadc39bd470dafea251ed6a4e7ce61dce2cd8" exitCode=0 Jan 30 20:30:57 crc kubenswrapper[4712]: I0130 20:30:57.511672 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p7c6p" event={"ID":"66860bbd-60f5-4ee8-9993-f50afced0c11","Type":"ContainerDied","Data":"24716c4dbaecd2bdbe6abf7b2ebbadc39bd470dafea251ed6a4e7ce61dce2cd8"} Jan 30 20:30:57 crc kubenswrapper[4712]: I0130 20:30:57.511879 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p7c6p" event={"ID":"66860bbd-60f5-4ee8-9993-f50afced0c11","Type":"ContainerStarted","Data":"347962b3eca0e187fc12475a4c71025c30d34866a3543b6be25b7e24fd264576"} Jan 30 20:30:57 crc kubenswrapper[4712]: I0130 20:30:57.514140 4712 generic.go:334] "Generic (PLEG): container finished" podID="b09a0986-54de-4b9f-9e77-1f12e769f1a6" containerID="976d9f863008fc43afa59a6e3415726e01563ebf9a918221474ceb86ac390b07" exitCode=0 Jan 30 20:30:57 crc kubenswrapper[4712]: I0130 20:30:57.514164 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dsq9l" event={"ID":"b09a0986-54de-4b9f-9e77-1f12e769f1a6","Type":"ContainerDied","Data":"976d9f863008fc43afa59a6e3415726e01563ebf9a918221474ceb86ac390b07"} Jan 30 20:30:58 crc kubenswrapper[4712]: I0130 20:30:58.524543 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dsq9l" event={"ID":"b09a0986-54de-4b9f-9e77-1f12e769f1a6","Type":"ContainerStarted","Data":"bfd8835ca4066f5ffad3b8b024ae58d01fe7945f589be1d2be5e09a14700af68"} Jan 30 20:30:58 crc kubenswrapper[4712]: I0130 20:30:58.630129 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-dsq9l" podStartSLOduration=3.089031572 podStartE2EDuration="6.54126188s" podCreationTimestamp="2026-01-30 20:30:52 +0000 UTC" firstStartedPulling="2026-01-30 20:30:54.469858342 +0000 UTC m=+12991.376867811" lastFinishedPulling="2026-01-30 20:30:57.92208863 +0000 UTC m=+12994.829098119" observedRunningTime="2026-01-30 20:30:58.5391636 +0000 UTC m=+12995.446173079" watchObservedRunningTime="2026-01-30 20:30:58.54126188 +0000 UTC m=+12995.448271359" Jan 30 20:30:59 crc kubenswrapper[4712]: I0130 20:30:59.535769 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p7c6p" event={"ID":"66860bbd-60f5-4ee8-9993-f50afced0c11","Type":"ContainerStarted","Data":"6002efdd2e7d758e4de87823c94b7f87639ed44e8d0da39836119abff6e336f4"} Jan 30 20:31:00 crc kubenswrapper[4712]: I0130 20:31:00.546029 4712 generic.go:334] "Generic (PLEG): container finished" podID="66860bbd-60f5-4ee8-9993-f50afced0c11" containerID="6002efdd2e7d758e4de87823c94b7f87639ed44e8d0da39836119abff6e336f4" exitCode=0 Jan 30 20:31:00 crc kubenswrapper[4712]: I0130 20:31:00.546304 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p7c6p" 
event={"ID":"66860bbd-60f5-4ee8-9993-f50afced0c11","Type":"ContainerDied","Data":"6002efdd2e7d758e4de87823c94b7f87639ed44e8d0da39836119abff6e336f4"} Jan 30 20:31:01 crc kubenswrapper[4712]: I0130 20:31:01.557709 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p7c6p" event={"ID":"66860bbd-60f5-4ee8-9993-f50afced0c11","Type":"ContainerStarted","Data":"34c657f969e49f69bbadfa5c34913a9e48a7a772322e16d56417196828ee356e"} Jan 30 20:31:01 crc kubenswrapper[4712]: I0130 20:31:01.591621 4712 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p7c6p" podStartSLOduration=3.120185792 podStartE2EDuration="6.591593802s" podCreationTimestamp="2026-01-30 20:30:55 +0000 UTC" firstStartedPulling="2026-01-30 20:30:57.519296783 +0000 UTC m=+12994.426306252" lastFinishedPulling="2026-01-30 20:31:00.990704793 +0000 UTC m=+12997.897714262" observedRunningTime="2026-01-30 20:31:01.580623303 +0000 UTC m=+12998.487632772" watchObservedRunningTime="2026-01-30 20:31:01.591593802 +0000 UTC m=+12998.498603271" Jan 30 20:31:03 crc kubenswrapper[4712]: I0130 20:31:03.236567 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-dsq9l" Jan 30 20:31:03 crc kubenswrapper[4712]: I0130 20:31:03.237249 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-dsq9l" Jan 30 20:31:04 crc kubenswrapper[4712]: I0130 20:31:04.292747 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-dsq9l" podUID="b09a0986-54de-4b9f-9e77-1f12e769f1a6" containerName="registry-server" probeResult="failure" output=< Jan 30 20:31:04 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 20:31:04 crc kubenswrapper[4712]: > Jan 30 20:31:06 crc kubenswrapper[4712]: I0130 20:31:06.187205 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p7c6p" Jan 30 20:31:06 crc kubenswrapper[4712]: I0130 20:31:06.188741 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p7c6p" Jan 30 20:31:07 crc kubenswrapper[4712]: I0130 20:31:07.266352 4712 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-p7c6p" podUID="66860bbd-60f5-4ee8-9993-f50afced0c11" containerName="registry-server" probeResult="failure" output=< Jan 30 20:31:07 crc kubenswrapper[4712]: timeout: failed to connect service ":50051" within 1s Jan 30 20:31:07 crc kubenswrapper[4712]: > Jan 30 20:31:13 crc kubenswrapper[4712]: I0130 20:31:13.304417 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-dsq9l" Jan 30 20:31:13 crc kubenswrapper[4712]: I0130 20:31:13.388234 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-dsq9l" Jan 30 20:31:13 crc kubenswrapper[4712]: I0130 20:31:13.546936 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dsq9l"] Jan 30 20:31:14 crc kubenswrapper[4712]: I0130 20:31:14.695776 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-dsq9l" podUID="b09a0986-54de-4b9f-9e77-1f12e769f1a6" containerName="registry-server" 
containerID="cri-o://bfd8835ca4066f5ffad3b8b024ae58d01fe7945f589be1d2be5e09a14700af68" gracePeriod=2 Jan 30 20:31:15 crc kubenswrapper[4712]: I0130 20:31:15.339655 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-dsq9l" Jan 30 20:31:15 crc kubenswrapper[4712]: I0130 20:31:15.386374 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqzs9\" (UniqueName: \"kubernetes.io/projected/b09a0986-54de-4b9f-9e77-1f12e769f1a6-kube-api-access-gqzs9\") pod \"b09a0986-54de-4b9f-9e77-1f12e769f1a6\" (UID: \"b09a0986-54de-4b9f-9e77-1f12e769f1a6\") " Jan 30 20:31:15 crc kubenswrapper[4712]: I0130 20:31:15.386499 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b09a0986-54de-4b9f-9e77-1f12e769f1a6-catalog-content\") pod \"b09a0986-54de-4b9f-9e77-1f12e769f1a6\" (UID: \"b09a0986-54de-4b9f-9e77-1f12e769f1a6\") " Jan 30 20:31:15 crc kubenswrapper[4712]: I0130 20:31:15.386786 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b09a0986-54de-4b9f-9e77-1f12e769f1a6-utilities\") pod \"b09a0986-54de-4b9f-9e77-1f12e769f1a6\" (UID: \"b09a0986-54de-4b9f-9e77-1f12e769f1a6\") " Jan 30 20:31:15 crc kubenswrapper[4712]: I0130 20:31:15.390122 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b09a0986-54de-4b9f-9e77-1f12e769f1a6-utilities" (OuterVolumeSpecName: "utilities") pod "b09a0986-54de-4b9f-9e77-1f12e769f1a6" (UID: "b09a0986-54de-4b9f-9e77-1f12e769f1a6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 20:31:15 crc kubenswrapper[4712]: I0130 20:31:15.399661 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b09a0986-54de-4b9f-9e77-1f12e769f1a6-kube-api-access-gqzs9" (OuterVolumeSpecName: "kube-api-access-gqzs9") pod "b09a0986-54de-4b9f-9e77-1f12e769f1a6" (UID: "b09a0986-54de-4b9f-9e77-1f12e769f1a6"). InnerVolumeSpecName "kube-api-access-gqzs9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 20:31:15 crc kubenswrapper[4712]: I0130 20:31:15.449519 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b09a0986-54de-4b9f-9e77-1f12e769f1a6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b09a0986-54de-4b9f-9e77-1f12e769f1a6" (UID: "b09a0986-54de-4b9f-9e77-1f12e769f1a6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 20:31:15 crc kubenswrapper[4712]: I0130 20:31:15.490831 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b09a0986-54de-4b9f-9e77-1f12e769f1a6-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 20:31:15 crc kubenswrapper[4712]: I0130 20:31:15.490859 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqzs9\" (UniqueName: \"kubernetes.io/projected/b09a0986-54de-4b9f-9e77-1f12e769f1a6-kube-api-access-gqzs9\") on node \"crc\" DevicePath \"\"" Jan 30 20:31:15 crc kubenswrapper[4712]: I0130 20:31:15.490891 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b09a0986-54de-4b9f-9e77-1f12e769f1a6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 20:31:15 crc kubenswrapper[4712]: I0130 20:31:15.711225 4712 generic.go:334] "Generic (PLEG): container finished" podID="b09a0986-54de-4b9f-9e77-1f12e769f1a6" containerID="bfd8835ca4066f5ffad3b8b024ae58d01fe7945f589be1d2be5e09a14700af68" exitCode=0 Jan 30 20:31:15 crc kubenswrapper[4712]: I0130 20:31:15.711291 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dsq9l" event={"ID":"b09a0986-54de-4b9f-9e77-1f12e769f1a6","Type":"ContainerDied","Data":"bfd8835ca4066f5ffad3b8b024ae58d01fe7945f589be1d2be5e09a14700af68"} Jan 30 20:31:15 crc kubenswrapper[4712]: I0130 20:31:15.711333 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-dsq9l" event={"ID":"b09a0986-54de-4b9f-9e77-1f12e769f1a6","Type":"ContainerDied","Data":"e300b574cf952a8782391b00b50c83ca8cb4eea39880d040d3d578dab9d9b43c"} Jan 30 20:31:15 crc kubenswrapper[4712]: I0130 20:31:15.711412 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-dsq9l" Jan 30 20:31:15 crc kubenswrapper[4712]: I0130 20:31:15.711931 4712 scope.go:117] "RemoveContainer" containerID="bfd8835ca4066f5ffad3b8b024ae58d01fe7945f589be1d2be5e09a14700af68" Jan 30 20:31:15 crc kubenswrapper[4712]: I0130 20:31:15.780374 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-dsq9l"] Jan 30 20:31:15 crc kubenswrapper[4712]: I0130 20:31:15.783840 4712 scope.go:117] "RemoveContainer" containerID="976d9f863008fc43afa59a6e3415726e01563ebf9a918221474ceb86ac390b07" Jan 30 20:31:15 crc kubenswrapper[4712]: I0130 20:31:15.792439 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-dsq9l"] Jan 30 20:31:15 crc kubenswrapper[4712]: I0130 20:31:15.814694 4712 scope.go:117] "RemoveContainer" containerID="a47df34c775190f688cc7f1c16eed4d93b413318846b04037b6be50e161b07db" Jan 30 20:31:15 crc kubenswrapper[4712]: I0130 20:31:15.816198 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b09a0986-54de-4b9f-9e77-1f12e769f1a6" path="/var/lib/kubelet/pods/b09a0986-54de-4b9f-9e77-1f12e769f1a6/volumes" Jan 30 20:31:15 crc kubenswrapper[4712]: I0130 20:31:15.863538 4712 scope.go:117] "RemoveContainer" containerID="bfd8835ca4066f5ffad3b8b024ae58d01fe7945f589be1d2be5e09a14700af68" Jan 30 20:31:15 crc kubenswrapper[4712]: E0130 20:31:15.874157 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bfd8835ca4066f5ffad3b8b024ae58d01fe7945f589be1d2be5e09a14700af68\": container with ID starting with bfd8835ca4066f5ffad3b8b024ae58d01fe7945f589be1d2be5e09a14700af68 not found: ID does not exist" containerID="bfd8835ca4066f5ffad3b8b024ae58d01fe7945f589be1d2be5e09a14700af68" Jan 30 20:31:15 crc kubenswrapper[4712]: I0130 20:31:15.874266 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bfd8835ca4066f5ffad3b8b024ae58d01fe7945f589be1d2be5e09a14700af68"} err="failed to get container status \"bfd8835ca4066f5ffad3b8b024ae58d01fe7945f589be1d2be5e09a14700af68\": rpc error: code = NotFound desc = could not find container \"bfd8835ca4066f5ffad3b8b024ae58d01fe7945f589be1d2be5e09a14700af68\": container with ID starting with bfd8835ca4066f5ffad3b8b024ae58d01fe7945f589be1d2be5e09a14700af68 not found: ID does not exist" Jan 30 20:31:15 crc kubenswrapper[4712]: I0130 20:31:15.874306 4712 scope.go:117] "RemoveContainer" containerID="976d9f863008fc43afa59a6e3415726e01563ebf9a918221474ceb86ac390b07" Jan 30 20:31:15 crc kubenswrapper[4712]: E0130 20:31:15.874876 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"976d9f863008fc43afa59a6e3415726e01563ebf9a918221474ceb86ac390b07\": container with ID starting with 976d9f863008fc43afa59a6e3415726e01563ebf9a918221474ceb86ac390b07 not found: ID does not exist" containerID="976d9f863008fc43afa59a6e3415726e01563ebf9a918221474ceb86ac390b07" Jan 30 20:31:15 crc kubenswrapper[4712]: I0130 20:31:15.874919 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"976d9f863008fc43afa59a6e3415726e01563ebf9a918221474ceb86ac390b07"} err="failed to get container status \"976d9f863008fc43afa59a6e3415726e01563ebf9a918221474ceb86ac390b07\": rpc error: code = NotFound desc = could not find container 
\"976d9f863008fc43afa59a6e3415726e01563ebf9a918221474ceb86ac390b07\": container with ID starting with 976d9f863008fc43afa59a6e3415726e01563ebf9a918221474ceb86ac390b07 not found: ID does not exist" Jan 30 20:31:15 crc kubenswrapper[4712]: I0130 20:31:15.874947 4712 scope.go:117] "RemoveContainer" containerID="a47df34c775190f688cc7f1c16eed4d93b413318846b04037b6be50e161b07db" Jan 30 20:31:15 crc kubenswrapper[4712]: E0130 20:31:15.875338 4712 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a47df34c775190f688cc7f1c16eed4d93b413318846b04037b6be50e161b07db\": container with ID starting with a47df34c775190f688cc7f1c16eed4d93b413318846b04037b6be50e161b07db not found: ID does not exist" containerID="a47df34c775190f688cc7f1c16eed4d93b413318846b04037b6be50e161b07db" Jan 30 20:31:15 crc kubenswrapper[4712]: I0130 20:31:15.875377 4712 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a47df34c775190f688cc7f1c16eed4d93b413318846b04037b6be50e161b07db"} err="failed to get container status \"a47df34c775190f688cc7f1c16eed4d93b413318846b04037b6be50e161b07db\": rpc error: code = NotFound desc = could not find container \"a47df34c775190f688cc7f1c16eed4d93b413318846b04037b6be50e161b07db\": container with ID starting with a47df34c775190f688cc7f1c16eed4d93b413318846b04037b6be50e161b07db not found: ID does not exist" Jan 30 20:31:16 crc kubenswrapper[4712]: I0130 20:31:16.261548 4712 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p7c6p" Jan 30 20:31:16 crc kubenswrapper[4712]: I0130 20:31:16.332351 4712 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p7c6p" Jan 30 20:31:18 crc kubenswrapper[4712]: I0130 20:31:18.552578 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p7c6p"] Jan 30 20:31:18 crc kubenswrapper[4712]: I0130 20:31:18.553168 4712 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-p7c6p" podUID="66860bbd-60f5-4ee8-9993-f50afced0c11" containerName="registry-server" containerID="cri-o://34c657f969e49f69bbadfa5c34913a9e48a7a772322e16d56417196828ee356e" gracePeriod=2 Jan 30 20:31:18 crc kubenswrapper[4712]: I0130 20:31:18.756505 4712 generic.go:334] "Generic (PLEG): container finished" podID="66860bbd-60f5-4ee8-9993-f50afced0c11" containerID="34c657f969e49f69bbadfa5c34913a9e48a7a772322e16d56417196828ee356e" exitCode=0 Jan 30 20:31:18 crc kubenswrapper[4712]: I0130 20:31:18.756558 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p7c6p" event={"ID":"66860bbd-60f5-4ee8-9993-f50afced0c11","Type":"ContainerDied","Data":"34c657f969e49f69bbadfa5c34913a9e48a7a772322e16d56417196828ee356e"} Jan 30 20:31:19 crc kubenswrapper[4712]: I0130 20:31:19.157375 4712 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p7c6p" Jan 30 20:31:19 crc kubenswrapper[4712]: I0130 20:31:19.273538 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66860bbd-60f5-4ee8-9993-f50afced0c11-catalog-content\") pod \"66860bbd-60f5-4ee8-9993-f50afced0c11\" (UID: \"66860bbd-60f5-4ee8-9993-f50afced0c11\") " Jan 30 20:31:19 crc kubenswrapper[4712]: I0130 20:31:19.273701 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h72hn\" (UniqueName: \"kubernetes.io/projected/66860bbd-60f5-4ee8-9993-f50afced0c11-kube-api-access-h72hn\") pod \"66860bbd-60f5-4ee8-9993-f50afced0c11\" (UID: \"66860bbd-60f5-4ee8-9993-f50afced0c11\") " Jan 30 20:31:19 crc kubenswrapper[4712]: I0130 20:31:19.273772 4712 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66860bbd-60f5-4ee8-9993-f50afced0c11-utilities\") pod \"66860bbd-60f5-4ee8-9993-f50afced0c11\" (UID: \"66860bbd-60f5-4ee8-9993-f50afced0c11\") " Jan 30 20:31:19 crc kubenswrapper[4712]: I0130 20:31:19.275255 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66860bbd-60f5-4ee8-9993-f50afced0c11-utilities" (OuterVolumeSpecName: "utilities") pod "66860bbd-60f5-4ee8-9993-f50afced0c11" (UID: "66860bbd-60f5-4ee8-9993-f50afced0c11"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 20:31:19 crc kubenswrapper[4712]: I0130 20:31:19.286089 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66860bbd-60f5-4ee8-9993-f50afced0c11-kube-api-access-h72hn" (OuterVolumeSpecName: "kube-api-access-h72hn") pod "66860bbd-60f5-4ee8-9993-f50afced0c11" (UID: "66860bbd-60f5-4ee8-9993-f50afced0c11"). InnerVolumeSpecName "kube-api-access-h72hn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 20:31:19 crc kubenswrapper[4712]: I0130 20:31:19.327341 4712 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66860bbd-60f5-4ee8-9993-f50afced0c11-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "66860bbd-60f5-4ee8-9993-f50afced0c11" (UID: "66860bbd-60f5-4ee8-9993-f50afced0c11"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 30 20:31:19 crc kubenswrapper[4712]: I0130 20:31:19.375631 4712 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h72hn\" (UniqueName: \"kubernetes.io/projected/66860bbd-60f5-4ee8-9993-f50afced0c11-kube-api-access-h72hn\") on node \"crc\" DevicePath \"\"" Jan 30 20:31:19 crc kubenswrapper[4712]: I0130 20:31:19.375676 4712 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/66860bbd-60f5-4ee8-9993-f50afced0c11-utilities\") on node \"crc\" DevicePath \"\"" Jan 30 20:31:19 crc kubenswrapper[4712]: I0130 20:31:19.375685 4712 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/66860bbd-60f5-4ee8-9993-f50afced0c11-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 30 20:31:19 crc kubenswrapper[4712]: I0130 20:31:19.777387 4712 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p7c6p" event={"ID":"66860bbd-60f5-4ee8-9993-f50afced0c11","Type":"ContainerDied","Data":"347962b3eca0e187fc12475a4c71025c30d34866a3543b6be25b7e24fd264576"} Jan 30 20:31:19 crc kubenswrapper[4712]: I0130 20:31:19.777742 4712 scope.go:117] "RemoveContainer" containerID="34c657f969e49f69bbadfa5c34913a9e48a7a772322e16d56417196828ee356e" Jan 30 20:31:19 crc kubenswrapper[4712]: I0130 20:31:19.777445 4712 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p7c6p" Jan 30 20:31:19 crc kubenswrapper[4712]: I0130 20:31:19.802850 4712 scope.go:117] "RemoveContainer" containerID="6002efdd2e7d758e4de87823c94b7f87639ed44e8d0da39836119abff6e336f4" Jan 30 20:31:19 crc kubenswrapper[4712]: I0130 20:31:19.849648 4712 scope.go:117] "RemoveContainer" containerID="24716c4dbaecd2bdbe6abf7b2ebbadc39bd470dafea251ed6a4e7ce61dce2cd8" Jan 30 20:31:19 crc kubenswrapper[4712]: I0130 20:31:19.860979 4712 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p7c6p"] Jan 30 20:31:19 crc kubenswrapper[4712]: I0130 20:31:19.874499 4712 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p7c6p"] Jan 30 20:31:21 crc kubenswrapper[4712]: I0130 20:31:21.808479 4712 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66860bbd-60f5-4ee8-9993-f50afced0c11" path="/var/lib/kubelet/pods/66860bbd-60f5-4ee8-9993-f50afced0c11/volumes" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515137212575024455 0ustar coreroot  Om77'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015137212576017373 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015137160522016507 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015137160522015457 5ustar corecore